Literature at Lightspeed: A Community of Writers on the World Wide Web And its Relationship to the Print Publishing Industry

Ira Nayman Graduate Programme in Communications McGill University, Montreal

A thesis submitted to the faculty of Graduate Studies and Research In partial fulfillment of the requirements of the degree of PhD

May 1, 2001

© Copyright 2001 by Ira Nayman

Abstract

The World Wide Web offers individual writers new possibilities for producing and distributing fiction. This dissertation begins with an in-depth look at who these writers are, what they are doing and the advantages and disadvantages the Web has compared to traditional publishing venues. The common goals of these writers and the various ties which bind them suggests that a community of writers has developed on the Web. No community exists in isolation, however, so the dissertation also looks at some of the other forces at work in society which may have an effect on this community. Transnational entertainment conglomerates, for instance, are attempting to change the underlying structure of the technology in order to reap potentially great profits from it; their efforts may result in diminishing the ability of individual writers to use the Web to effectively distribute their work. Governments, to use another example, can affect the way individuals use the medium by enacting laws restricting certain categories of content online or developing copyright laws which favour the interests of large entertainment producing corporations. Finally, writers publishing on the Web may have a disintermediating effect on traditional print publishing, with ramifications for, among others, publishing houses and bookstores. What emerges in this dissertation is a portrait of the complex web of relationships between individual and institutional stakeholders in this developing technology.

Résumé

L’Internet offre aux écrivains de nouvelles possibilités de réalisation et de distribution de la littérature contemporaine. Cette dissertation commence par examiner en profondeur l’identité de ces écrivains, ce que font ces écrivains et les avantages et les désavantages du Web en comparaison avec les méthodes d’édition traditionnelles. Étant donné les réseaux qui se produisent et l’existence d’objectifs communs parmi les écrivains, on s’aperçoit qu’une véritable communauté d’auteurs s’est développée sur le Web. Ceci dit, aucune communauté n’existe indépendamment d’autres facteurs sociaux qui l’influencent. Les sociétés de divertissement multinationales, par exemple, essaient de changer la structure fondatrice de ces technologies pour en tirer des profits potentiels qui seraient extraordinaires; leurs efforts pourraient effectivement diminuer la capacité des auteurs indépendants à se servir du Web pour distribuer leurs œuvres de manière efficace. Les gouvernements, pour citer un autre exemple, peuvent transformer les pratiques des utilisateurs du Web en imposant des lois contraignantes. Ces lois peuvent délimiter le contenu en ligne tout comme elles peuvent favoriser les intérêts des grandes sociétés de production. Pour terminer, les auteurs qui se publient sur le Web pourraient avoir un effet de désintermédiation sur l’édition traditionnelle. Cet effet aurait des répercussions importantes pour, entre autres, les maisons d’édition et les librairies. Cette dissertation dresse le portrait d’un réseau de liens complexes entre les auteurs indépendants et les développeurs institutionnels qui se servent de cette nouvelle technologie toujours en évolution.

Contents

Abstract page i

Chapter 1: Introduction page 1
  Theories of Technology page 4
  The Story So Far… page 20
  The Story To Come… page 31

Chapter 2: Fiction Writers on the World Wide Web page 41
  Introduction page 41
  Survey Methodology page 43

  Ezines page 61
  Individual Writers page 81
  Hypertext Fiction on the Web page 172
  Conclusion page 211

Chapter 3: The Economics of Information on the Web page 216
  Introduction page 216
  Corporate Conglomeration in the Information and Entertainment Industries page 220
  What Is Information Worth? page 229
  The Gift Economy and Generalized Exchange of Public Goods page 232
  Micropayments page 241
  Branding page 246
  Traditional Forms of Income page 252
  Old Media for New page 272
  The Attention Economy page 290

Chapter 4: What Governments Can (And Cannot) Do page 296
  Introduction page 296
  The Stick: Government Control Through Censorship page 300
  Problems with Government Regulation 1: Jurisdictional Disputes page 318
  Copyright page 329
  Problems with Government Regulation 2: Limited Instruments page 353
  The Carrot: State Support page 367

Chapter 5: Other Stakeholders in Publishing and New Media page 373
  Publishers page 375
  Production and Design page 384
  Retail Booksellers page 389
  Portals as Gatekeepers page 402
  Critics and Other Filters page 407
  Readers page 414

Chapter Six: page 418
  The Story So Far… page 418
  Media Theories Revisited page 422
  Resolving Conflicts page 431
  Bias? page 440
  Recommendations About Individual User Autonomy page 443

  Radio in the United States: A Cautionary Tale page 447

Notes page 460

Appendix A: Surveys page 465

Sources Cited page 472

Reaching back to McLuhan, if it were true that the medium is indeed the message then the potential benefits of the web for society would be almost certainly attainable. But if we look at the medium itself as a symbolic construction created by society or parts of it, then we can start to appreciate how fragile the potential of the web really is. The web is not a technologically determined panacea. It is a bundle of technologies that will be socially constructed to fit whatever niche society needs it to. The real question is, what forces are determining how the web is constructed? (McQuivey, 1997, 3)

...technology is a social construction. Its design, organization, and use reflect the values and priorities of the people who control it in all its phases, from design to end use. After the design has been implemented, the system organized, and the infrastructures put in place, the technology then becomes deterministic, imposing the values and biases built into it. (Menzies, 1996, 27)

[H]ave you seen the movie Henry Fool? Just saw it last night. It’s about a fellow who can’t get print magazines to publish his poetry because they consider it too pornographic, too crude, but he has it published on the Internet, and he becomes a huge celebrity. Just thought you’d find it of interest. (Wu, 1998, unpaginated)

Chapter 1: Introduction

In the beginning was the word. Initially, the word was spoken. In the Bible, the word of God breathes life into the universe. In its oral form, the word was (and remains) a living thing, an ever-changing repository of tribal wisdom handed down from generation to generation.

Print changed everything. The word was no longer elusive, but fixed. In such a form, it no longer relied on the memory of people for its existence; it now had a form independent of those who created it. Fixed between two covers, the word, which had once belonged to the entire tribe, could now be ascribed to a single author. In its new form on the page, the word began to take on a life of its own: now, the people who had created it would look to the word for wisdom. Books could be consulted long after their authors had died, and the information in them would always be the same. New types of knowledge could now be built on solid foundations which could not exist when the word was in its evanescent oral form.

Digital technologies give the word a new form. Or do they? The differences between the spoken word and the written word are obvious: one is based on sounds made through exhalations of air, the other on the imprint of abstract characters on paper. The difference between analogue and digital forms of the word is less obvious. You may be reading this dissertation in print form, or you may be reading it on a computer. In both cases, the same words (approximately 150,000 of them), in the same sequences, grow into the same sentences. The sentences follow one another to create exactly the same paragraphs, paragraphs become chapters, and the chapters follow, one upon the other, to create the whole. The word is the same.

Yet, it is different. Print is portable; you can read a book anywhere you can carry it.
The advent of electronic books notwithstanding, digital media are bulky and difficult to transport, requiring the reader to go to them. Print pages are easy on the eyes; computer screens are difficult for many people to read from for long periods of time. On the other hand, it’s not always easy to find a specific passage in a print book, while a simple word search can usually take a reader to a specific place in a digital document almost instantaneously. Print requires a lot of effort to copy and send to others; digital text is trivially easy to copy and send to others. In fact, while the printed word is fixed, the digital word is constantly changing. Our experience of the word in these two forms is substantially different.

But the differences do not end there. An entire industry has developed over the centuries whose goal is to deliver the printed word from its creator to its reader. This includes printing presses, publishing houses, bookstores and vast distribution networks. The electronic word, by way of contrast, takes a completely different route from creator to reader. This route includes Internet Service Providers, telephone or cable companies, and computer hardware and software manufacturers. The social structures which have developed to disseminate the word in its print and digital forms are substantially different. This difference is one of the primary subjects of this dissertation.

The word’s most enduring form, one which cuts across all media, is the story. In oral cultures, the story was both a way of communicating the history of the tribe and of teaching each new generation the consequences of disobeying its moral imperatives. (An argument can be made that the modern equivalent of the oral story, the urban myth, serves this second purpose as well.)
Print enhances story by ensuring that it does not change from generation to generation, creating the possibility of a dialogue between the present and representations of the past (a dialogue most often engaged in by literary scholars, although we all engage in this process when we read a book which was published before we were born). With the rise of print, story undergoes an amoeba-like bifurcation, splitting into the categories of “fiction” and “non-fiction.” Non-fiction stories are based on real people and events, making the claim to be based on verifiable facts, a claim we mostly accept. This dissertation is a non-fiction story. Fiction stories, by way of contrast, are populated by characters who never lived, having adventures which are the stuff of imagination. Oral stories do not make such a distinction: real and made-up characters interact with scant regard to historical accuracy; embellishments added from generation to generation change the story until it bears little resemblance to its original version. With oral stories (as with fictional print stories), the test of their value is not verifiability, but the “truth” they speak to the human heart.

I am most interested in the production of fictional stories. Casually surfing the World Wide Web, one can sometimes come across a piece of fiction. If one looks for fictional stories specifically, however, they are easy enough to find: hundreds upon hundreds of them. This observation has led me to suspect that the introduction of networked digital technologies, particularly the Web, is changing the nature of publishing.

This dissertation will explore the phenomenon of fiction writers who place their stories on the World Wide Web. To do so, I will attempt to answer a series of questions about the subject. Who are the people putting their fiction online? What do they hope to accomplish with digital communication media that they could not accomplish with traditional publishing media?
What are the features of the World Wide Web which either help them further their goals or keep them from achieving them? What other groups have an interest in this technology? How will those other groups’ actions in furthering their own goals affect the efforts of individuals putting their fiction online?

To answer these questions, we need a general idea of how technology interacts with human desire, how technology shapes and is shaped by human goals. This theoretical understanding can then be applied to the specific situation under consideration. So, to begin, we must ask ourselves: what is the relationship between technology and society?

Theories of Technology

Thesis: Technological Determinism

There is a school of thought which claims that technological innovations change the way individuals see themselves and each other, inevitably leading to changes in social institutions and relations. This possibility was delineated most directly by Marshall McLuhan, who wrote, “In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message. This is merely to say that the personal and social consequences of any new medium -- that is, of any extension of ourselves -- result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology.” (1964, 23) This belief is known as technological determinism. Its precedents lie in the work of Harold Innis (1951), a mentor of McLuhan’s at the University of Toronto. Current proponents of the idea include Derrick de Kerckhove (1997) and Christopher Dewdney (1998); the theory’s most vociferous champions can be found in the pages of Wired magazine (which claims McLuhan as its “patron saint”).

Technological determinism offers important insights into how technologies affect society. However, there are at least three problems with the way in which deterministic theories posit the relationship between technology and society which diminish their usefulness.

1) Deterministic theories do not have anything to say about how technologies are developed before they enter society. There may be some validity to the position that technologies take on a life of their own once they are introduced into the world; they certainly make some options for their use more likely than others (you would find it hard to talk to a friend in a distant city if the only machine at your disposal was a coffeemaker, for example).
However, technological determinists do not address issues such as the assumptions which go into the development of technologies, or even why research into some technologies is pursued by industry and the academy while research into other technologies is not.

2) Deterministic theories do not have anything to say about what happens when competing versions of the same technology are introduced into a society. We can all agree that the VCR changes people’s television viewing habits (by allowing them to watch programs on their own schedules, not those of the network; to fast forward through commercials; to avoid scheduling conflicts by watching one program while taping another; etc.). It is less clear, from a purely deterministic point of view, why VHS machines should have captured the marketplace rather than Beta. Yet, one system will arguably have somewhat different effects on society than the other.

3) Deterministic theories do not have anything to say about the ongoing development of technologies after they have been introduced into society. Anybody who has purchased a computer in the last few years knows that memory and computing speed increase on a regular basis, which allows for increasingly sophisticated software. If you were to be charitable, you could say that we are in a state of constant innovation (if you were to be cynical, you could say that we are in a state of constant built-in obsolescence). Improvements in memory and computing speed which allow for increasingly sophisticated software are not merely changes of degree but, in fact, change the nature of computing. For example, one important threshold was passed when processors became powerful enough to make a graphical interface feasible, making computer users less dependent on line commands. Prior to this, only experts with the time to learn all of the esoteric commands of the computer could use it; now, of course, it is not necessary to be a programmer to use a computer.
Other thresholds might include networking computers and the ability to call up video, both of which involve incremental changes in existing technologies which hold the potential to change the way the technology is used and, in turn, to affect individuals and human relationships.1

Antithesis: Social Constructivism

Any theory attempting to draw a relationship between technology and society must explain how technologies develop, both before and after they are introduced into the marketplace. Such a theory must take into account the fact that decisions about how technologies are developed and employed are made by human beings. As David Noble describes it,

And like any human enterprise, it [technology] does not simply proceed automatically, but rather contains a subjective element which drives it, and assumes the particular forms given it by the most powerful and forceful people in society, in struggle with others. The development of technology, and thus the social development it implies, is as much determined by the breadth of vision that informs it, and the particular notions of social order to which it is bound, as by the mechanical relations between things and the physical laws of nature. Like all others, this historical enterprise always contains a range of possibilities as well as necessities, possibilities seized upon by particular people, for particular purposes, according to particular conceptions of social destiny. (1977, xxii)

Different individuals will want to use technology for their own purposes; where such purposes conflict, how the technology develops will be determined by the outcome of the negotiations around the conflict. This view of technological development is part of a field known as social constructivism.

One traditional way of looking at technological development is that scientists are given a research problem and, through diligent work, find the best solution to the problem. Social constructivists begin by arguing that “social groups give meaning to technology, and that problems...are defined within the context of the meaning assigned by a social group or a combination of social groups. Because social groups define the problems of technological development, there is flexibility in the way things are designed, not one best way.” (“Section 1 Introduction,” 1987, 12) The myth of “the one best way” creates a linear narrative of technological development which tends to obscure the role negotiation plays. It has also allowed historians of technology to avoid the issue of the interests of scientists in the work they pursue, giving them the chance to portray scientists as disinterested “seekers after truth.”

To understand the contested nature of technology, it is first necessary to define what are known in the literature as “relevant social groups.” “If we want to understand the development of technology as a social process, it is crucial to take the artifacts as they are viewed by the relevant social groups.” (Bijker, 1995, 49) How do we draw the line between relevant and irrelevant social groups for a given technology?
Groups who have a stake in the outcome of a given technology can be said to be relevant; for instance, relevant stakeholders in the development of interstate highways would include auto manufacturers and related industries, construction companies, rail companies, drivers’ groups and the like, because each group’s interests will be affected. It is hard to see, by way of contrast, the stake which beekeepers or stamp collectors hold in interstate highways. Because they describe more or less the same thing, I will be using the terms “stakeholders” and “relevant social groups” interchangeably throughout this dissertation.

What actually holds groups together? That is to say, what is their common stake? Social constructivists suggest that they share a common way of defining and/or using a technology, a “technological frame.”

A technological frame is composed of, to start with, the concepts and technique employed by a community in its problem solving... Problem solving should be read as a broad concept, encompassing within it the recognition of what counts as a problem as well as the strategies available for solving the problems and the requirements a solution has to meet. This makes a technological frame into a combination of current theories, tacit knowledge, engineering practice (such as design methods and criteria), specialized testing procedures, goals, and handling and using practice. (Bijker, 1987, 168)

In later writing, Bijker would argue that the concept of the technological frame should apply to all relevant social groups, not just, as one might think from the above quote, engineers. (1995, 126) To this list, then, we might wish to add additional qualities for consumers of technology, qualities such as existing competing technologies, availability, cost, learning curve, et al. No one list of attributes will describe the technological frame of every possible relevant social group, but every relevant social group will have its own technological frame.

The impetus for technological change begins with a problem. This makes sense: if there were no problem, there would be no reason to change the status quo. The farmer who wants to get his or her produce to a wider potential market will embrace solutions like a better interstate highway system. This may give the mistaken impression, however, that scientists respond to the real world problems of others. In fact, as will become apparent in Chapter Three, it sometimes happens that the only stakeholder with an interest in the development of a technology is the corporation which is developing that technology, and the problem to be solved is the rather prosaic one of how to increase the company’s profits.

Our common sense understanding of the world would suggest that a technological artifact has a single definition upon which all can agree. A car is a car is a car is a car... Yet, constructivists would argue that, seen through the filter of different technological frames, artifacts can be radically different. “Rather, an artifact has a fluid and ever-changing character.
Each problem and each solution, as soon as they are perceived by a relevant social group, changes the artifact’s meaning, whether the solution is implemented or not.” (ibid, 52) A car means one thing to a suburbanite who needs it to get to work; it means quite another thing to a member of MADD (Mothers Against Drunk Driving) who has lost a child in an automobile accident. The same artifact which is a solution to the problem of one relevant social group may, in fact, be a problem which needs a different solution to the member of another relevant social group. This is a direct challenge to the “one best way” model of scientific development, since it suggests that whether or not a technology “works” depends upon how you have defined the problem it is intended to solve. As Yoxen puts it, “Questions of inventive success or failure can be made sense of only by reference to the purposes of the people concerned.” (1987, 281)

This multiplicity of interpretations, or actual artifacts, does not, cannot last. We require technologies to work in the real world; therefore, they must, at some point, take a fixed form. The process by which this happens is referred to by social constructivists as “closure.”

Closure occurs in science when a consensus emerges that the ‘truth’ has been winnowed from various interpretations; it occurs in technology when a consensus emerges that a problem arising during the development of technology has been solved. When the social groups involved in designing and using technology decide that a problem is solved, they stabilize the technology. The result is closure. ("Section 1 Introduction,” 1987, 12)

That is to say, when there is general agreement on the nature of a technology, when different stakeholder groups adopt a common technological frame, the technology takes on a fixed form. It is possible that various frames will be combined before closure occurs, but, “Typically, a closure process results in one relevant social group’s meaning becoming dominant.” (Bijker, 1995, 283)

Occasionally, the process of technological innovation is compared to Darwin’s theory of natural selection. “[P]arts of the descriptive model can effectively be cast in evolutionary terms. A variety of problems are seen by the relevant social groups; some of these problems are selected for further attention; a variety of solutions are generated; some of these solutions are selected and yield new artifacts.” (ibid, 51) This theoretical analogy is misleading, however: nature is unconcerned with the outcome of the struggle between species for survival, but relevant social groups are, by definition, concerned with the struggle between different forms of technology for supremacy in the marketplace. Natural selection is blind and directionless, whereas technological selection is interested and purposefully directed.

This is not to suggest that the work of Pinch and Bijker, et al. is the only worthwhile approach to technological change. The area known as the Social Shaping of Technology (SST), of which what I am calling social constructivism is only one aspect, has yielded many worthwhile studies and ways of looking at technological change. A good starting point would be Langdon Winner's The Whale and the Reactor, a collection of articles which includes the important "Do Artifacts Have Politics?" Using the example of highways built in the 1930s, Winner convincingly argues that the intentions of those who create a technology condition how it can be used.
(Winner may be surprised to find himself a proponent of SST, given that in an earlier work, Autonomous Technology, he took the deterministic position that the unintended consequences of a new technology will change society in ways which could not have been foreseen by the technology's creators and, therefore, that our vast modern technological systems had evolved beyond human control. Below, I hope to show that there need be no necessary contradiction between these two positions, that both intended and unintended consequences occur with the introduction of a new technology. However, in these two very different works, Winner does not address the apparent tension.)

Once one has accepted the idea that artifacts have politics, it is possible to apply existing political theory to the relationship between technology and society. For example, Harry Braverman, in "Technology and capitalist control," takes a Marxist position on technological change: "Machinery offers management the opportunity to do by wholly mechanical means that which it had previously attempted to do by organizational and disciplinary means. The fact that many machines may be paced and controlled according to centralized decisions, and that these controls may thus be in the hands of management, removed from the site of production to the office – these technical possibilities are of just as great interest to management as the fact that the machine multiplies the productivity of labor..." (1985) In a similar vein, Tine Bruland (1985) and William Lazonick (1985) examine how the introduction of the automatic spinning mule changed labour conditions in the 19th century textile industry, while the writing of David Noble, Progress Without People, and Jeremy Rifkin, The End of Work, brings an examination of the ways in which new technologies change the workplace into the present.

Another approach to the politics of technological artifacts derives from feminist analysis.
In When Old Technologies Were New, for instance, Carolyn Marvin argues, through an exploration of the creation and adoption of the light bulb and electricity, that electrical engineering became a profession which dominated discourse on the emerging technology, giving engineers the power to shape the technology. Since the profession was overwhelmingly male, the technology developed with the interests of men in mind. (1988) Cynthia Cockburn extends this argument, pointing out that by designing tools for "larger, more physically able" men (even though physical differences between the sexes are exaggerated), many professions end up excluding women. In this way, the domination of male values in engineering becomes self-perpetuating. (1985)

It is also possible to explore the politics of technological artifacts without recourse to existing theory, of course. Robert Babe, for example, intriguingly argues that media began in a converged state (for instance, the telephone was used to broadcast programmes as well as for point to point communication) and that government regulation, at the behest of corporations which saw profit in controlling them, forced an artificial and unnecessary division between media. (1996)

These are all important pieces of the puzzle, vital ways of answering the question, "What is the relationship between technological change and social change?" However, they are not the whole puzzle. Applying existing political theory to new technologies, for instance, tends to posit technological change as a struggle between binary sets of interests (whether male-female or worker-management). It may be that one or the other of these binaries is all-important, with other interests being irrelevant (Orthodox Marxists, in particular, tend to view all aspects of human existence as an outgrowth of the class struggle, reducing all other considerations to relative unimportance).
Again, while valid as far as they go, I do not believe such analyses give the whole picture. To me, the attraction of constructivism is that it does not determine, a priori, what the most important social groups are in the development and adoption of new technologies. Rather, it is a method which allows for the discovery of the interests of a variety of groups in the course of the investigation. It may turn out that the gender or class based view is correct, but this has to be proven rather than taken as a given.

Babe's analysis is probably closest in spirit to the current work, although it has an important limitation: it does not address the issue of competing groups whose vision of the technologies he considers did not prevail. As we shall see in the final chapter, there was substantial resistance to the development of radio as a commercial mass medium, but this does not figure in Babe's narrative. Without this information, however, the story of radio is incomplete, even in terms of the groups (government and industry) which Babe looks at. How did those groups define their interests against citizen groups with a very different vision of the medium? What strategies did they adopt to deal with this resistance? These are important questions that get to the heart of the issue of how new technologies are shaped by existing social arrangements.

Social constructivism, as I have developed the idea in this chapter (and will discuss further in the concluding chapter), helps us get at the complexity of the relationship between society and technology, especially as both change, in ways which the other SST theories, as useful as they are, do not.

There are problems with social constructivism, however. They include:

1) Constructivists have nothing to say about what happens after a technology is introduced into a society. The problem goes beyond the fact that most of us have experienced significant changes to our lives as a result of our introduction to new technologies, an experience which would give us an intuitive sense that technology does have effects once people start using it. Constructivists ignore the consequences of the actions of the groups they study, devaluing their own work. It is interesting, to be sure, to know how Bakelite or the safety bicycle developed, but what gives such studies import is how those technologies then go on to affect the world. If the effects of new technologies are a matter of indifference, the process by which they are created isn't an especially important research question.

2) The term closure suggests complete acceptance and permanence of a technology; I would argue, however, that closure is never perfect in either of these ways.

For one thing, some social groups may never accept the dominant definition of a technology. Controversy over the nature of radio, it is argued, was effectively closed when the corporate, advertising-based structure with which we are all familiar turned it into a broadcast medium. However, from time to time stakeholder groups have developed which did not accept this form of radio, whose interest was in using radio as a one-to-one form of communication (thus, the CB craze of the 1960s and 1970s, or the use of ham radios since the medium's inception), or in simply opposing the regulation of the airwaves which supports the corporate domination of the medium (pirate radio stations).

To be sure, these groups are marginal and do not pose a serious threat to the general consensus as to what radio is; however, they illustrate the point that closure is not complete, that relevant social groups will continue to exist even after debate about the nature of a technology seems to have been closed.

Even more telling is the fact that technological research and development continues even after conflict over a specific artifact seems to have been closed. In its earliest form, radio programming included educational and cultural programs and entertainment, including all of the genres (comedy, western, police drama, etc.) which we now take for granted. However, when television began to become widespread in the 1950s, it replaced radio as an entertainment medium; people could see their favourite stars as well as hear them! Many of radio's most popular performers (Jack Benny, Burns and Allen, et al.) moved to television. The introduction of television reopened the debate about what radio was, a debate which had seemed closed for over 20 years; stakeholders had to find a new definition of the medium (which they did, moving towards all-music and all-news formats). Thus, even though the technology itself had not changed, I would argue that radio was a different medium after the introduction of television than it had been before.

A more dramatic example is the emergence of digital radio transmitted over the Internet. Many stations which broadcast via traditional means are, at the same time, digitizing their signals and sending them over the Internet. On the one hand, this increases their potential listenership (although the value of this is debatable given that most radio stations' advertising base is local). On the other hand, it opens radio up to competition from individuals who can set up similar music distribution systems on their personal Web pages. Where this will lead is anybody's guess.
For our purposes, it is only important to note that digital distribution of radio signals opens up, again, the seemingly closed debate about what radio is. (In a similar way, I hope that this dissertation will show that online distribution of text has opened up the debate about publishing technology which seemed to have been closed when the printing press established itself in Europe 500 years ago, and especially as it became an industrial process in the nineteenth and twentieth centuries.)

These are, for the most part, conservative changes in Hughes' sense, where "Inventions can be conservative or radical. Those occurring during the invention phase are radical because they inaugurate a new system; conservative inventions predominate during the phase of competition and system growth, for they improve or expand existing systems." (1987, 57) Nonetheless, even conservative changes reintroduce conflict over the meaning of a technological artifact which had been considered closed.

3) Constructivist accounts of technological creation tend to be ahistorical. To allow that new technologies have developed out of existing technologies would be to admit that closure is an imperfect mechanism, as well as possibly requiring the recognition of some deterministic effects in the way old technologies create social problems which require the creation of new technologies to solve. However, new technologies frequently develop out of combinations of existing theories and technologies.

Synthesis: Mutual Shaping

Technological determinism and social constructivism suffer from a similar problem: they both posit a simple, one-way relationship between technology and society. With technological determinism, technology determines social structure. With social constructivism, social conflict ultimately determines technology.
I would suggest that the situation is more akin to a feedback loop: the physical world and existing technologies constrain the actions which are possible for human beings to take. Out of the various possibilities, stakeholder groups vie to define new technologies. Once the controversy has died down, the contested technology becomes an existing technology which constrains what is now possible. And so on. (See Figure 1.1.) We can determine the positions of constructivists by analyzing their stated claims and actions; we can construct the effects of determinism, however provisionally, by linking consensus about the shape of technology to wider social trends. Boczkowski calls this process "mutual shaping."

Often, scholars have espoused a relatively unilateral causal view. They have focused either on the social consequences of technological change, or, most recently, on the social shaping of technological systems. Whereas the former have usually centered upon how technologies impact upon users' lives, the latter have tended to emphasize how designers embed social features in the artifacts they build. In this sense, the process of inquiry has fixed either the technological or the social, thus turning it into an explanans rarely problematized. However, what the study of technology-in-use has ultimately shown is that technological and social elements recursively influence each other, thus becoming explanans (the circumstances that are believed to explain the event or pattern) and explanandum (the event or pattern to be explained) at different periods in the unfolding of their relationships. (1999, 92)

This feedback model is roughly the situation described by Heather Menzies in the quote above. It is also hinted at in some of the work of pure social constructivists. For instance, Bijker writes that "A theory of technical development should combine the contingency of technical development with the fact that it is structurally constrained; in other words, it must combine the strategies of actors with the structures by which they are bound." (1995, 15) Surely, the structural constraints Bijker alludes to must include existing technologies; but this would allow deterministic effects in by the back door, so he cannot pursue the line of thought.

Mutual shaping solves the problems faced by each of the theories taken on their own. It explains what happens before a technology is introduced into a society and after it has been shaped by various stakeholders. It allows us to understand what happens if different technologies are sent out to compete in the marketplace. ("Empirical studies informed by this conceptual trend have revealed that users integrate new technologies into their daily lives in myriad ways. Sometimes they adapt to the constraints artifacts impose. On other occasions they react to them by trying to alter unsuitable technological configurations. Put differently, technologies' features and users' practices mutually shape each other." (ibid, 90))

[Figure 1.1: The Iterative Relationship Between Society and Technology. Adapted from Boczkowski, 1992.]

We can also use this way of looking at the relationship between technology and society to go beyond the concept of closure, which doesn't seem to account for the way definitions of technology continue to be contested. I would suggest that technologies go through periods of "stability," periods where the definition of the technology is widely (though perhaps not universally) agreed upon, and where the form of the artifact does not change. Stability roughly corresponds to the period in Figure 1.1 between when a technological change is introduced into a society and the recursively triggered mediations which lead to the creation of new technologies. Periods of stability can last centuries or a matter of years.

Mutual shaping is not likely to satisfy deterministic or constructivist purists. However, those who are less dogmatic about such things (for instance, what Paul Levinson might call "weak determinists" (1997)) may be willing to allow some of the effects of the other theory, to their benefit.

It isn't necessary to equally balance deterministic and constructivist effects in every study of technology. Those who are most interested in social effects will emphasize certain aspects of the relationship between society and technology; those interested in technological development will emphasize other aspects. The important thing is to recognize that both processes are at work in a larger process, and to make use of one theory when it will help illuminate the other. In this dissertation, for example, I will take a primarily social constructivist approach. For this reason, I have to answer at least two questions.
1) What are the relevant social groups/stakeholder groups, and what is the technological frame which they use to define their interest in the technology?

2) How are conflicts between stakeholders resolved en route to the stability of the technological artifact?

The final section in this chapter will give a brief overview of the stakeholders in traditional and electronic publishing. The body of the dissertation will then elaborate in much greater detail on the interests of specific stakeholder groups. What I hope to accomplish is a "thick description" of the social forces which are at work shaping this particular use of this particular computer mediated communication technology, and what various possible configurations of the technology do and do not allow different stakeholder groups to achieve. Thick description involves "looking into what has been seen as the black box of technology (and, for that matter, the black box of society)... This thick description results in a wealth of detailed information about the technical, social, economic, and political aspects of the case under study." ("General Introduction," 1987, 5) Where necessary, such description will include the possible effects of specific forms of technology on individuals and their social relationships.

Before I do that, it is useful to consider a brief history of the technologies involved in digital publishing, which I do in the next section. These histories are by no means definitive. My intention is not to close off debate about what the technology is before it has even begun. Instead, I hope to set the table for the reader, to give a brief indication of how we got to the point where the conflicts described in the rest of the dissertation are possible.

The Story So Far...

Electronic publishing comes at the intersection of two very different technologies: the printing press, over 500 years old, and the digital computer, over 50 years old. This section will look at some of the relevant history of the two technologies.

The Printing Press

Imagine a pleasant summer's day in 1460. Through the window of your monastery in the German countryside, you can see fluffy white clouds in a pale blue sky. But you only catch fleeting glimpses of the world outside, because you are engaged in serious business: the transcription of a Latin text. You work in what is known as a "scriptorium": your job is to take an ancient text and copy it, word for word. The work is detailed, painstaking; a single text will take several years for you to complete (after which you must hand it over to the illustrators and illuminators, who will take additional years to perfect the volume). You are not allowed to emend the text in any way, and you certainly are not allowed to comment on what you are reading; the quality of your work will be based solely on its fidelity to the original. Yet, the work is satisfying, because you know that your small efforts are part of a great project to keep knowledge alive during a period of intellectual darkness. If you are lucky, you will go to your grave without learning that the world you believe you live in no longer exists.

Five years earlier, in 1455, the son of a wealthy family from Mainz, Johannes Gutenberg, published what was known as the 42-line Bible (because each page was made up of 42 lines). The work itself was of generally poor quality; it wasn't nearly as esthetically pleasing as a hand-copied Bible. Nonetheless, it may be the most significant book in publishing history, because it is the first book historians acknowledge to be printed with a press. The Gutenberg printing press consisted of two plates connected by a spring.
On the bottom plate was placed the sheet of paper on which text would be imprinted; the top plate contained metal cubes from which characters (letters, punctuation marks, blanks for spaces between words) were projected. The surface of the upper plate was swathed in ink; when pressed onto the paper on the bottom plate, it left an impression of the text. The spring would then pop the upper plate back in place so that another piece of paper could be placed underneath it. And another. And another. Whereas a monk in a monastery would take years to complete a single volume, those who ran a printing press could create several copies of an entire book in a matter of days.

Gutenberg's press was not created out of thin air, of course. The press itself was a modification of an existing device used to press grapes to make wine. Furthermore, the concept of imprinting on paper had existed for thousands of years: woodcutting, where text and images were carved into blocks of wood over which ink would be spread so that they could be stamped on paper, had existed in China for millennia. Gutenberg's particular genius was to take these and other ideas and put them together into a single machine.

One of the most important aspects of the printing press was moveable type (which, again, had existed in Asia for perhaps centuries before Gutenberg employed it); instead of developing a single unit to imprint on a page (as had to be done with woodcuts), the Gutenberg press put each letter on a cube, called type. The type was lined up in a neat row, and row upon row was held firmly in a frame during printing. Woodcutting had a couple of drawbacks: it was slow, detailed work, which meant that publications with a lot of pages would take a long time to prepare. Furthermore, if a mistake was made, the whole page had to be recut.
Pages of moveable type, on the other hand, could be developed quickly, and mistakes caught before publication required the recasting of a single line rather than the recreation of an entire page.

It is not an exaggeration to say that printing modeled after Gutenberg's press exploded: printing is believed to have followed the old trade route along the Rhine river from Mainz. Strasbourg soon became one of the chief printing centers of Europe, followed by Cologne, Augsburg and Nuremberg. Within 20 years, by the early 1470s, printing had reached most of Europe; for example, William Caxton set up his printing shop in Westminster Abbey in late 1476. Within about 100 years, printing presses had been set up throughout the world: by 1540, there was a print shop in Mexico, while by 1563, there was a press in Russia. The fact that it took a century for the printing press to travel throughout the world may not impress modern people who are used to hardware and software being updated on a virtually daily basis; however, given the crudity of transportation facilities and lack of other communications networks at the time, the speed with which the press spread was astonishing.

Of course, wherever the printing press went, a huge increase in the number of books available soon followed. Within a couple of decades, most monasteries were relieved of their responsibility for copying books and the scriptoria were closed down, although some held out for as much as a century.

The printing press had almost immediate effects on the social and political structures of the day. The scriptoria concentrated on books in Latin, a language which most laypeople (who were illiterate) did not understand; interpretation of the word of God, and the power over people which went with it, was jealously guarded by the Church. That institution's monopoly of knowledge (to use Innis' insightful phrase) was all but airtight.
Soon after the printing press began to spread, copies of the Bible and other important religious texts were printed in the vernacular of the people in various countries; it now became possible for individuals to read and interpret the Bible themselves. This is believed to have been a contributing factor in the Protestant Reformation: Martin Luther's famous challenge to the Catholic Church was accompanied by the widespread dissemination of inexpensive copies of the Bible in languages people could understand.

From the perspective of the current work, one important aspect of the printing press (although it would take hundreds of years to manifest) was the way it created a mystique around the individual author. The monks who copied manuscripts were not considered originators of the work, and are now almost completely forgotten. Before that, information was, for the most part, transmitted orally. While some cultures developed rich histories through the stories which were passed down from generation to generation, these stories were not considered to be the work of any individual. The printing press, on the other hand, made it possible to identify works as the sole creation of an individual, since they went directly from the author to the printer.

Another important aspect of printing was that it made knowledge sufficiently uniform and repeatable that it could be commodified. Oral stories were considered the property of the tribe; because they often contained morals about acceptable and non-acceptable behaviour, it was important that everybody in the tribe know them. Printed books, especially when the printer (and, later, the author) could be identified, were seen as a source of income, and were immediately marketed as any other product.

Books also required a new skill of people: literacy. In the early days of print, only approximately 25% of men and far fewer women knew how to read.
Many more had a command of "pragmatic literacy" (knowing how to read just enough to get by in their day-to-day lives), although how literate this made them is debatable. The people who could read were primarily clerics, teachers, students and some of the nobility; reading also quickly developed among urban residents in government or those involved with trade, law, medicine, etc. This latter trend is only logical: these were areas of knowledge where information was being published, and the ability to read was necessary to keep up with the latest writing in one's field.

Universal literacy only became an issue toward the end of the 19th century. At the time, some people argued that literacy would benefit people by giving them direct access to classical texts which would expand their minds. However, it has been argued that broader literacy was actually necessary for the industrial revolution, since workers needed to read technical manuals to better help them run the machines of the new age.

In the 18th century, methods of copper engraving made it possible to print sophisticated drawings on the same page as text. In addition, the move from wooden to metal presses meant an increase in how much could be printed: 250 pages an hour! In the 19th century, mechanical processes of paper production were introduced which further increased the production of paper 10- to 20-fold. (This was accompanied by a change in materials, from pulped rags to pulped wood, which helped speed up the process, but meant that the paper would begin to disintegrate after 50 or 100 years.)

In the 19th century, innovations in printing fixed the technology more or less as we know it today. In 1847, R. Hoe & Co. introduced the rotary press, where a flexible metal plate was rotated over a moving sheet of paper. It was inked after every turn, and paper was automatically fed into the machine as the plate turned.
The rotary press was also the first press not to be run by human power: it was powered by steam, at first, and eventually by electricity. Unlike the hand press, which could only produce 300 to 350 sheets a day at that time, a power rotary press could print from 12,000 to 16,000 sections in the same day. These improvements led to an explosion of printed material which continued unabated through the 20th century: "There might be a panic about the passing of print and the growing popularity of the screen, but the reality is that more books are being published now [in 1995] than ever before." (Spender, 1995, 57)

For more on the history of the printing press, see Lehmann-Haupt (1957), Butler (1940), Katz (1995) and Pearson (1871).

The Digital Computer

To fight World War II, the Allied Forces developed bigger and bigger artillery to deliver larger and larger destructive payloads to the enemy. However, the army faced a difficult problem: how to ensure that the shells hit their targets? Calculating projectile trajectories was an extremely complicated process which involved the weight of the vehicle, the terrain over which it was to travel, fuel consumption, velocity, weather conditions, the speed of the moving target and a host of other variables. Human beings computed these trajectories, but their work was slow and riddled with errors. This meant that enemy aircraft were virtually free to roam Allied airspace at will. What was needed was a mechanical device which could calculate trajectories much more quickly, and with a much smaller margin of error than human computers. The answer, of course, was the development of mechanical computers.

This problem had plagued armies since the 19th century, and one solution had, in fact, been proposed at the time.
In 1822, Charles Babbage proposed a mechanical device which could calculate sums, which he called a "Difference Engine." The Difference Engine used the most sophisticated technology available at the time: cogs and gears. As designed, it would have calculated trajectories much faster than human beings, and, since Babbage designed a means by which the calculated tables could be printed directly from the output of the Difference Engine (an early form of computer printer), human error was all but eliminated. The Difference Engine had a major drawback, however: it could only be designed to calculate the solution to one mathematical problem. If conditions changed and you needed the solution to a different mathematical formula, you had to construct a different machine.

Understanding the problems with this, Babbage went on to design something different, which he called the "Analytical Engine." The Analytical Engine employed a series of punched cards (perfected in 1801 by Joseph-Marie Jacquard for use in automating looms for weaving) to tell the machine what actions to perform; the action wasn't built into the machine itself. The alternation between solid spaces and holes in the card is a type of binary language, which is, of course, the basis of modern computers. With the Analytical Engine, branching statements (if x = 10, then go to step 21) became possible, giving the machine a whole new set of possible uses. The first person to develop a series of cards into what we would now call "programs" was Lady Ada Augusta, Countess of Lovelace (daughter of the poet Lord Byron).

Neither of the machines envisioned by Babbage was built in his lifetime. However, his ideas can, in retrospect, be seen as the precursors to the first digital computer, the ENIAC. Research in the United States leading to the creation of ENIAC (Electronic Numerical Integrator and Calculator) began in 1943.
When completed, the machine consisted of 18,000 vacuum tubes, 1,500 relays, 70,000 resistors and 10,000 capacitors. ENIAC was huge, taking up an entire large room. It had to be built on a platform, its wires running beneath it. The machinery was so hot that the room had to be constantly kept at a low temperature. Ironically, given its intended purpose, ENIAC didn't become fully functional until after the war had ended in 1945.

At around the same time, researchers in England were developing a different digital machine in order to solve a different military problem. German orders were scrambled by a machine code-named "Enigma"; if the Allies could break the German code, they could anticipate German actions and plan their own strategies accordingly. A key insight in this effort came from a brilliant English mathematician, Alan Turing: he realized that any machine which could be programmed with a simple set of instructions could be reproduced by any other machine which could be so programmed. This has come to be known as the Turing Universal Machine (the implication being that all such machines can be reduced to one machine which can do what they all could do). The original designs of the "Ultra," Britain's decoding machine, were electromechanical, calling for mechanical relays (shades of Babbage!), but they switched to electronic relays as the technology became available. This work was an important part of the war effort.

These two threads of research paved the way for modern computers, but a lot of changes had to be made before we could achieve the systems we have today. One of the first was the development of transistors out of semiconducting materials at AT&T in the 1950s. Replacing vacuum tubes with transistors made computers smaller, faster and cheaper. However, the transistors had to be wired together, and even a small number of faulty connections could render a large computer useless.
The solution was the creation of the integrated circuit, developed separately by Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor; this put all of the transistors on a single wafer of silicon. This has led to generation after generation of smaller and faster computers as more and more transistors are packed onto single circuits.

The very first computers had toggle switches on their fronts which had to be thrown by users at the right time. These were soon replaced by piles of punched cards which contained the program to be run and the data to run it on. You sent the cards to a data center, where your programme would be put at the end of a line with the programmes of others and run in its turn; the next day, you would get a printout of your programme and see if it, in fact, worked. An important development in the history of mainframes, as these room-sized computers were called, was the concept of time-sharing: instead of running programmes in sequence, 10 or 15 programmers could use the mainframe at the same time. Moreover, by using keyboard inputs rather than punch cards, programmers could communicate directly with the machine.

Still, to use a computer, you had to be trained in its language. Two very important technological developments had to occur before computers could be used by untrained people: the mouse and the iconic visual display. The visual display allowed computer users to see what they were doing on a screen. By using a mouse for input, users could point at an icon on the screen which represented what they wanted to accomplish and click on it to command the computer to do it. This more intuitive method of using computers was developed in the 1960s by Douglas Engelbart at the Augmentation Research Center (although practical working models weren't developed until the late 1960s and early 1970s at a different institution, Xerox Palo Alto Research Center, or PARC).
In the late 1970s, Apple Computer introduced its first desktop personal computer. Their machine was a unit small enough to fit on a person's desk; it included a screen, keyboard and mouse. Unlike the International Business Machines computers of the time, Apple's computers were created to be used by individuals, not businesses. Unlike machines for computer hobbyists (such as the Altair), Apple computers did not require engineering knowledge to assemble. The company's biggest breakthrough came in 1984, with the popularity of the Macintosh.

A single computer can be powerful enough, but connecting computers together not only creates an even more powerful computing entity, it also gives computers a new function: tools with which to communicate. This was the impetus for the American Department of Defense's Advanced Research Projects Agency (ARPA) to develop methods of connecting computers together. This research in the 1960s and 1970s led to the creation of ARPANet, a computer network designed to link researchers (primarily on military projects, although it soon expanded beyond this group) at educational institutions.

Computer networks are based on an architecture of nodes and a delivery system of packets. Nodes are computers which are always connected to the network through which digital messages flow. Information from a computer is split up into packets, each with the address of the receiver; each packet is sent through the system separately, and the original message is recreated at the receiver's computer. The system is designed so that each packet flows by the quickest route available at the time it is sent; in this way, various parts of a message may flow through a variety of different nodes before reaching their destination.
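The packet scheme described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the packet fields, the eight-character payload size and the address "node-42" are invented for the example, and real protocols such as TCP/IP are, of course, far more elaborate.

```python
import random

def to_packets(message, receiver, size=8):
    """Split a message into addressed packets. Each packet carries the
    receiver's address and a sequence number, so the message can be
    reassembled no matter what order the packets arrive in."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"to": receiver, "seq": n, "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Packets may travel by different routes and so arrive out of order;
    sorting on the sequence number restores the original message."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Literature at lightspeed", "node-42")
random.shuffle(packets)      # simulate packets arriving out of order
print(reassemble(packets))   # prints "Literature at lightspeed"
```

The shuffle stands in for the network's routing: because each packet carries its own address and position, the receiving node can rebuild the message regardless of the path each piece took.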
Legend has it that ARPANet was designed to withstand a nuclear attack: if some nodes were knocked off the system, messages could still get through by finding the fastest route through whatever nodes remained. As the technology spread, other computer networks began developing. By the 1980s, a movement to connect all of these networks had developed, which led to what we now know as the network of networks, the Internet.

ARPANet had originally been designed to allow researchers to share their work by making papers and other documents available. Early designers were surprised to find that the technology was used primarily for personal communications, yet email was, and remains, one of the most popular uses of computer networks.

By the 1990s, huge amounts of information were becoming available on the Internet. This created a serious problem: how to find what one wanted in the mountainous volume of information available. Some tools, such as Archie and Gopher, had been created to search through indexes of material. However, a different approach to information was sought to alleviate this growing problem, an approach for which the theoretical groundwork had been laid in 1945 by a pioneer in computing research, Vannevar Bush. In an article titled "As We May Think," which first appeared in The Atlantic Monthly, Bush described a machine, which he called a Memex, which could call up documents requested by a user. The user could then annotate the documents, store them for future use and, perhaps most importantly, create links from one document to another which made explicit the connections between them which had been made by the user. The Memex was an unwieldy mechanical device which was never produced, but the article suggested a new way of organizing information.

This concept was developed in the 1960s by a man named Theodore Nelson. He suggested a computer system where documents contained links to other documents.
Clicking on a link would open a window containing the text referred to in the first text. Of course, the second text would have links to tertiary texts, which would have their own links, and so on. Nelson named his system Xanadu, a reference to Coleridge’s poem “Kubla Khan.” His ideas circulated on the Internet throughout the 1960s and 1970s; by the time they were published in the 1980s as Literary Machines, Nelson had worked out an elaborate system, which he called hypertext, which included payment schemes for authors, public access and human pathseekers to help users find useful information. The electronic Xanadu was never built; it was superseded by a similar, although slightly less ambitious, system: the World Wide Web. In the early 1990s, Tim Berners-Lee and researchers at the Conseil Europeen pour la Recherche Nucleaire (the European physics laboratory known as CERN) developed HyperText Mark-up Language (HTML). HTML allowed links from one document to another to be embedded into text; links were highlighted (often in blue) to indicate that they were active. Using a browser (an early and influential example of which, Mosaic, was developed by Marc Andreessen at the American National Center for Supercomputing Applications), computer users could click on interesting links and be immediately transported (electronically) to the document. HTML has a variety of useful features: it is simple enough to be written in a word processing programme; in its original form, it takes very little time to learn; it is designed to be read by whatever browser a user employs; and so on. Web browsers (today primarily Microsoft’s Internet Explorer and Netscape Navigator) also have a lot of useful features: users can store links; they can access the HTML code of any page (if they find a use for HTML which they would like to employ); etc.

For more on Charles Babbage, see Moseley (1964) and Rowland (1997).
For more on the development of computers, see Rheingold (1985 and 1993), Selkirk (1995), Rowland (1997) and Edwards (1997). For more on the history of hypertext, see Bush (1945) and Nelson (1992).
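The embedded link at the heart of HTML is a very small amount of markup. The following sketch, using Python’s standard library HTML parser, shows the kind of anchor tag Berners-Lee’s language introduced and how a program (such as a browser or search engine) can pull the link’s destination out of running text. The page fragment and the URL in it are invented for illustration.

```python
from html.parser import HTMLParser

# A hypothetical fragment of an HTML page: a hyperlink embedded
# directly in a sentence, the feature that defines the Web.
PAGE = '<p>Read the <a href="http://example.org/story.html">next story</a> now.</p>'

class LinkFinder(HTMLParser):
    """Collects the destination of every <a href="..."> link on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

finder = LinkFinder()
finder.feed(PAGE)
print(finder.links)  # prints ['http://example.org/story.html']
```

A browser does essentially this, then highlights the anchored text and, on a click, fetches the document at the stored address.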

The Story To Come...

Traditional academic work based on the scientific method requires the researcher to start with a well-formulated question, gather evidence relevant to the problem and offer a solution to the question. The traditional method is useful when the problem to be solved has well-defined parameters, which usually occurs in what Kuhn (1962) calls “normal science.” Unfortunately, these are not the conditions under which research on computer mediated communications networks, and especially the World Wide Web, is conducted. Although various researchers are fruitfully borrowing ideas from their home disciplines, there is no paradigm (set of values, methods, research questions, etc.) for this kind of communications research. In truth, we don’t know what the proper questions are, let alone the most appropriate method(s) of answering them. The literature on the subject is growing in a variety of directions without, to this point, cohering in a single agreed-upon way. For this dissertation, I have borrowed a technique from social constructivism called “snowballing.” Start with any single stakeholder group in a developing technology. It will become apparent that other stakeholder groups either support or, more often, conflict with this group. Add them to your list of research subjects. Iterate until you have as complete a picture of all of the stakeholders as possible. Starting with individual fiction writers, I expanded the scope of the dissertation to include other stakeholders whose interests may affect those of the writers. While the strength of the traditional method is the way it shows the causal relationships between variables, the strength of the snowball method is that it helps uncover a web of relationships. This makes it ideal for research into the interactions between individuals and/or social groups.
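The iterative logic of snowballing can be rendered schematically as a traversal of a relationship graph. The rendering below is mine, not the dissertation’s: the stakeholder groups and the relationships among them are hypothetical placeholders, and real snowballing involves qualitative judgment at every step, not a mechanical lookup.

```python
from collections import deque

# Hypothetical map of which stakeholder groups support or conflict
# with which others -- for illustration only.
RELATED = {
    "writers": ["ezine editors", "readers", "conglomerates"],
    "ezine editors": ["writers", "readers"],
    "conglomerates": ["writers", "governments"],
    "governments": ["conglomerates"],
    "readers": ["writers"],
}

def snowball(start):
    """Begin with one stakeholder group; keep adding any group related
    to one already on the list; stop when no new group turns up."""
    found, queue = {start}, deque([start])
    while queue:
        group = queue.popleft()
        for other in RELATED.get(group, []):
            if other not in found:
                found.add(other)
                queue.append(other)
    return found

print(sorted(snowball("writers")))
# prints ['conglomerates', 'ezine editors', 'governments', 'readers', 'writers']
```

Starting from the writers, the procedure pulls in every group connected to them directly or indirectly, which is exactly how the scope of this dissertation expanded from individual fiction writers outward.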
A weakness of this method is that, sooner or later, everything is related to everything else, making the whole of human experience a potential part of one’s research. While the traditional method carefully delineates its subject, this method does not. It becomes important, then, to ensure that all of the stakeholder groups which become part of a research project are, in fact, related to a central object. Typically with social constructivist studies, the technology under dispute becomes the one factor which binds all of the groups together in their web. For the present study, I narrowed the focus further: stakeholder groups were chosen because they affected, or were affected by, the interests of the individual writers who use the specific technology (the Web). Further, rather than focusing on where research is taking place, the traditional site of social constructivist studies, I intend to focus on what Schwartz Cowan calls the “consumption junction,” the point where technology is actually used. “There are many good reasons for focusing on the consumption junction,” she writes. “This, after all, is the interface where technological diffusion occurs, and it is also the place where technologies begin to reorganize social structures.” (Schwartz Cowan, 1987, 263) Or, to put it another way, the consumption junction is the place at which the mutual shaping of technology and society is most apparent. Focusing on the consumption junction seems, at first glance, to ignore all of what traditionally is considered necessary to understand technological development: processes of invention, innovation, development and production. Instead, it focuses on the importance of the diffusion of a technology into society.
However, as Schwartz Cowan argues, “Most artifacts have different forms (as well as different meanings) at each stage in the process that ends with use, so that an analysis that ignores the diffusion stage does so only at its peril. In any event, a consumer-focused analysis that deals properly with the diffusion stage can also shed important light on invention, innovation, development, and production.” (ibid., 278-279) This last point is invaluable. The mutual shaping model which I posited above can be read to show that consumers of a technology give feedback to its producers. This occurs not only in terms of whether they accept or reject it, but also in terms of the circumstances under which it is used; the telephone, for instance, was considered by its creators as primarily a tool for business, until women took it up in large numbers in the home and turned it into an instrument for personal communication. Moreover, consumers can have a direct effect on the shape of a technology, as when consumer groups threaten to boycott a producer unless action is taken to change a specific product, or as when they simply refuse to use an innovation, as we shall see in Chapter Three’s exploration of “push” technology. “Thus different using practices may bear on the design of artifacts, even though they are elements of technological frames of non-engineers.” (Bijker, 1987, 172) In a sense, it does not matter which part of the mutual shaping process one takes as a starting point. One must work both backwards and forwards in the loop to give the fullest possible picture of how a technology develops. Starting from a point of consumption, as I intend to, one can then work backwards to give a sense of how the technology developed before it was put on the market to be consumed, as well as moving forward to show how the consumption of the technology may affect the next iteration of the technology, as well as other existing technologies.
In this case, the important process is the one by which the work of writers is purchased -- or otherwise consumed -- by readers. Complex systems of publishing have evolved in order to achieve this transfer of knowledge. The most important stakeholders in publishing will be those groups which are directly involved in this process. Determining the stakeholders in traditional publishing is relatively easy, given that the technology is well established (see Figure 1.2). It starts with an individual with a story to tell, a writer. The writer submits this story to a publisher (whether of magazines or books). The publisher function can be divided into at least two specialties: an editor or series of editors who work with the text of the story; and a designer or series of designers who take the text and develop its visual presentation on the printed page. Once a book or magazine has been designed, it goes to a printer, whose job is to make copies of it. This requires physical inputs: paper and ink, obviously, but also staples and glue (or other binding materials) and even electricity (to run the machines). Once the book or magazine has been fixed in a physical form, it must then be distributed to readers. A distribution network must be established (which may include trucks for local distribution and trains or even planes for national or international distribution). Sometimes, as with mail order books, the distribution network is sufficient. However, most people buy books from retail outlets (which include card shops and general stores as well as stores specializing in books). All of the functions described here, the various people who work on a manuscript after it leaves the hands of a writer and before it reaches a reader, are increasingly being concentrated in the hands of a smaller number of large corporations. In fact, as we shall

Figure 1.2 Stakeholders in Print Publishing

see in Chapter Three, many are owned by the very largest of transnational entertainment conglomerates. The reader is the final link in the publishing chain. Readers may know exactly what they are looking for when they enter a bookstore (for instance, the latest book by a favourite author). However, more often, they require help to know what might interest them out of the vast number of books which are available to them. For this reason, a critical establishment often intercedes between the book buyer and the outlet (represented by a broken line in Figure 1.2 to indicate that it isn’t always necessary). Every person who performs one of these functions has a stake in the publishing industry as it is currently configured. Contrast this with how stories are distributed over digital networks (Figure 1.3). We start, once again, with an author with a work. The author requires a computer and a physical connection to the World Wide Web. This physical connection might be over phone lines, cable modem or some form of wireless Internet connection, such as a satellite dish. It must also include an Internet Service Provider.

Figure 1.3 Stakeholders in Online Publishing

The writer can put her or his material directly onto a Web page. Many, however, are publishing their material in electronic magazines, or ezines. With a personal Web page, the writer is her or his own editor and page designer. With ezines, additional people will (probably) edit the story and design the Web page on which it will appear (represented by a broken line, again, to indicate that they are not always necessary). Unless the reader is a close personal friend of the writer, she or he will require some sort of filtering device (most often a search engine, although other devices are currently being developed -- see Chapter Five) to find material she or he will enjoy. The computer has been hailed in the past as the means by which we would achieve a paperless world. So far, this has not happened. Most people find computer screens difficult to read from for any length of time. When confronted with long passages of text, most people print them so that they can be read off sheets of paper. For this reason, the physical inputs shift from being the responsibility of the printer to that of the reader. This quick look at the stakeholders in traditional and online publishing is provisional: we must, in our investigations, be open to the possibility that other stakeholders will be uncovered, or that what appeared at first blush to be a relevant social group was either not a definable social group or not relevant. Nonetheless, this gives an initial shape to our inquiries. Chapter Two will tell the story of writers who choose to place their fiction on the World Wide Web.
It is the story of an eighteen-year-old who “started off writing self indulgent teenage poetry and funny essays to make my teachers laugh” and “ran away with the circus to go to film school, where i studied a lot of screenwriting.” (Poulsen, 1998, unpaginated) It is also the story of “a retired chemistry professor, whose writing was confined to technical journals until two years ago, when I began writing fiction as a kind of hobby.” (Steiner, 1998, unpaginated) In addition, it is the story of a 61-year-old man who has published “two novels in the U.S. one with Grove Press and the other with Dalkey Archive Press..., poetry in various journals including Beloit Poetry Journal [and] articles in The Review of Contemporary Fiction.” (Tindall, 1998, unpaginated) What unites these, and the other people surveyed for this dissertation, is the desire to write fiction and have it read. However, as we shall see, these writers belong to different sub-groups whose aims and interests are not always the same. These groups are: writers who put their fiction on their own Web sites; writers who have their fiction published in electronic magazines; writers of hypertext fiction; and writers of collaborative works of fiction. Along the way, we shall identify, and explore, an important related stakeholder group: editor/publishers of online magazines. Many of the writers and editors surveyed for Chapter Two claim varying degrees of interest (ranging from curiosity to concern) in the possibility of generating income from the publication of their writing on the Web. In response to this concern, the economics of information will be the broad subject of Chapter Three. In that chapter, I will delve into the issue of what information is worth, and consider the application of various economic models to information delivered over the Internet.
In order to properly explore these issues, another stakeholder group will have to be considered: the transnational entertainment conglomerates which have an interest both in exploiting the Web for their own profit and in ensuring that it doesn’t seriously interfere with the revenues which they can generate from their stake in existing media. In this chapter, I also intend to show the efforts the corporations have made to shape the new medium to their advantage; in so doing, I hope to make it clear that the interests of these corporations are often in conflict with those of individual writers. Another concern of many of the writers surveyed in Chapter Two was the effect government intervention could have -- in both positive and negative ways -- on their ability to publish online. Chapter Four will look at the role governments have in Web publishing. This includes: extending traditional financial support for the arts to the new medium of communication; and enforcing legal frameworks such as copyright (an issue which, as we shall soon see, is of concern to many of the individual authors). The chapter will also look at a couple of barriers to effective government legislation: the international nature of the medium, which, because it crosses national borders, plays havoc with traditional notions of jurisdiction; and the chameleon-like nature of digital media, which does not fit comfortably into traditional models of communication and, therefore, does not fit comfortably into traditional models of communication regulation. As we saw in Figure 1.2, many different groups of people are involved in the production of print books and magazines. To the extent that online publishing threatens to change the nature of publishing, those who work in print publishing industries should be considered stakeholders in the new medium. Chapter Five will look at some of these other stakeholders who will be affected by (and may affect) the new technology.
These include: publishing companies; designers; bookstores; and critical online filtering mechanisms. By the end of Chapter Five, we will hopefully arrive at an understanding of the complexity of the various interests involved in this new technology. Chapter Six, the conclusion, will attempt to synthesize these interests so that we may better understand how the conflicting interests of stakeholder groups affect the development and use of a new technology. In addition, I will revisit the theoretical question of the relationship between technology and society. The dissertation will conclude with a cautionary story about the history of another technology which seemed to allow individuals to communicate with each other: radio.

Note about the evidence

This dissertation employs three very different kinds of evidence: individual responses to a survey I conducted, data from popular media and academic writing. These three sources are used in different ways. The surveys were an essential part of an ethnographic study of a nascent online community.

Reports in the popular media (especially newspapers, but also magazines and popular literature) were an essential part of a structural analysis of transnational entertainment conglomerates (Chapter Three) and the publishing industry (Chapter Five).

It is important to note that both the surveys and the popular literature contribute to the empirical basis of the dissertation, and that much of their value rests on the timeliness of the information which they contain.

The academic writing referred to in this dissertation, by way of contrast, is intended to shape, explain or otherwise give deeper meaning to the empirical evidence presented. On the one hand, theoretical considerations structure the dissertation (i.e., the stakeholder model of social constructivism). On the other hand, a theoretical construct such as Ursula Franklin's holistic/prescriptive technologies dichotomy helps further understanding of how digital communications technologies differ from existing print publishing technologies.

Freedom of the press belongs to those who own one. (A. J. Liebling)

My husband likes to point out that while a typical small-circulation magazine may expose 500 readers to half a dozen stories every three months, about a hundred of my stories are downloaded every day. And every few days somebody reads them all (that’s nearly a novel, sizewise). So on the Net I have roughly the equivalent of a small lit-mag all my own, as opposed to the print journal that just held a manuscript for NINETEEN MONTHS and returned it with a form rejection slip. (Youngren, 1998, unpaginated)

Let me emphasize, however, that “virtual” should never be understood as meaning “almost” or “not quite” a community. As we first begin to think about or experience such communities, they may not seem as vibrant or vivid or muscular as some communities and identities more familiar or habitual to us, such as Canadian, Judaist, Muslim, Quebecois, or British Columbian. My argument, or hypothesis if you prefer, states that they have the potential to be just as fundamental to the identities of the some people as the existing ethnic communities whose existence we have taken for granted for decades or even centuries. (Elkins, 1997, 141)

Chapter Two: Fiction Writers on the World Wide Web

Introduction

Prose fiction writers on the World Wide Web are a small tribe. I was only able to identify about 1,500 of them. Given that the best search engines only catalogue about a third of the sites on the Web, as many as 4,500 writers may have posted stories there. The fact that I combined a Yahoo! search with an additional search using lists of pages on fiction Web rings means I definitely found more writers than if I had just done the one search. However, there may be many sites which contain fiction but are not listed with search engines or Web rings. Moreover, many pages which are listed with search engines may contain fiction, but, because it is but a small aspect of their larger site, may not use fiction as one of their keywords, and, therefore, not be picked up by the type of searches I was conducting. In addition, many of the sites which I was unable to find may contain the work of more than one writer. Let’s say, then, that 4,500 is not a completely unreasonable estimate of the number of fiction writers posting their work on a Web page. In 1999, there were an estimated 150 million people using the Internet. (Cerf, 1999, unpaginated) A very small tribe, indeed. To fully understand this tribe we must answer several questions. Who are the people who make up the tribe? This is not an obvious question, since, unlike traditional groupings of people, groupings of people on the Internet cannot be defined by simple geography. Most often, online people group themselves according to common activities or interests. It is necessary to go on to ask, then, what are the activities or behaviours which bind these people together? The intuition which spurred my initial interest in this subject was that the group consisted of people publishing their fiction writing on the World Wide Web.
While this remains the thread which binds the people in this chapter together, we shall soon see that this one activity does not define an entirely homogenous group: what they do and how and why they do it are all variables with a variety of parameters. Finally, in order to fully understand our subject, we must ask how the people engaged in these activities view them. Or, more simply, why are these people engaged in this activity? Somebody who has written a short story has a number of established options for getting it to readers, including having it published in a magazine or in a book-length anthology of stories, or publishing it in print themselves. Why would writers choose one medium over the other? That is, what advantages does publishing online offer writers over traditional media (and what disadvantages does it have which must be overcome)? Moreover, digital publishing comes in many forms: stories can be emailed to subscribers, placed on discs or sent to newsgroups. What advantages does publishing on the Web offer writers over other forms of digital publication? In order to explore this phenomenon, I conducted a survey, using email, of prose and hypertext writers and prose ezine editors. Since this is a relatively new tool, I shall begin this chapter with a discussion of my methodology. Using the responses to the survey, supplemented by statements the correspondents made in pages on their Web sites, I hope to answer these questions in this chapter.

Survey Methodology

Fiction writers on the World Wide Web may seem like a very specific subject. This dissertation is not about the Internet as a whole, for instance, but about a single technology through which people access the Internet. Nor is it about what people using the Web do generally (although, hopefully, some general principles will emerge); rather, I chose a very specific activity as my subject. Despite this, it is necessary to begin by showing how I further delineated my subject. This dissertation, for example, is exclusively about prose fiction writers. I chose not to consider writers of poetry because I knew some evaluation of the work would be necessary, and I didn’t feel competent to conduct even the most basic analysis of poems. In addition, this dissertation does not deal with fan fiction (referred to as “slash” fiction when the story revolves around the sexual adventures of characters who are not sexually involved, often two or more male characters), a genre of prose in which characters, settings and situations from popular media franchises (usually, but not always, science fiction -- the Star Trek franchise is a very popular basis of fan fiction) form the foundation of the stories. Fan fiction raises many interesting questions about how individuals position themselves within the larger culture; however, to do these questions justice would have required a lot of writing which would have taken me too far away from the issues I felt needed to be explored. Finally, for purposes of this dissertation, I define a fiction writer as anybody who has written a piece of fiction. This may seem obvious, but in most other contexts it is not. Many people define writers by income, for example: if you make money from your writing, you can call yourself a writer, but if you don’t, you can’t.
Other people define writers by genre: those who write literary fiction are writers; those who write science fiction, fantasy or romance are not really “serious” writers. These and other distinctions are artificial and, for my purposes, obscure the subject of interest, so I do not use them.

Comparison of Different Research Methods

Given the subject of prose writers on the World Wide Web, I was, as all researchers are, presented with the problem of how to collect information. Part of the fascination of the subject is that it is a relatively new area of research: the Internet is only 25 years old, and the graphical interface of the Web was, at the time of my research, less than five. “The existence of the Internet and the World Wide Web (WWW) clearly provides new horizons for the researcher. A potentially vast population of all kinds of individuals and groups may be more easily reached than ever before, across geographical borders, and even continents. This is particularly true in relation to comparative social survey research.” (Coomber, 1997, unpaginated) Since I was most interested in the practices of people who are actually putting their work online, it became apparent early on that I would have to use them as a primary source of information. Having settled this issue, the next question was how best to gather information from this group. As Rosenberg explains, “The most convenient way is of course to ask them. Failing that, we can experiment: We try to arrange their circumstances so that their behavior will reveal their beliefs and desires.
But usually the only way to discern the beliefs and desires of others is to observe their behavior.” (1988, 32) Since the Web is an international communications network, I expected most of the people I would want to study to be scattered throughout North America, with some possibly in other parts of the world; this made observation more costly, in terms of both time and money, than I could afford. Experimentation, as Coomber suggests, should be considered a last resort, since laboratory conditions can never precisely duplicate the real world, a problem which can seriously bias results. The obvious method of collecting information would be some kind of survey of those doing the work online. One of the advantages of digital communications networks is that they can be not only the subject of study, but also the tool by which the subject is studied. As it happens, computers themselves have been used increasingly over the past 20 years in this type of social science research. “Electronic data collection is a growing area of application of computer technology.” (Helgeson and Ursic, 1989, 305) Computers have both been introduced into traditional interview settings and created possibilities for interviewing which did not previously exist (see Chart 2.1). An example of the former is the use of portable computers in face-to-face interviewing. Here, either the interviewer types in the respondent’s answers as he or she gives them (Computer Assisted Personal Interviewing), the interviewer gives the portable computer to the respondent, who types in answers to questions him or herself (Computer Assisted Self-Interviewing with Interviewer Present), or some combination of the two. Another example of computers being used to aid traditional surveying methods is when interviewers type responses given to them over the telephone directly into a computer (Computer Assisted Telephone Interviewing).
An example of an interview format which was not possible before the advent of computers is Voice Recognition, where the computer calls a respondent, asks the first from a menu of prerecorded questions, listens for the respondent’s response and (presumably) understands enough of it to ask an appropriate follow-up question. A different example is the use of computer networks such as the Internet to distribute questionnaires online (Electronic Mail Surveys). Although they differ widely, the various uses of computers in research have some characteristics in common. “Characteristic of all forms of computer assisted interviewing is that questions are read from the computer screen, and that responses are entered directly in the computer, either by an interviewer or by a respondent. An interactive program presents the questions in the proper order, which may be different for different (groups of) respondents.” (ibid.) Which method a researcher uses will depend upon, among other things, the quality of the software (voice recognition software, for instance, still being in its infancy, isn’t very reliable) and the cost and availability of software and hardware. Intuitively, I decided to use the Internet to distribute questionnaires to writers who had placed their fiction on the Web.

Specific method -- Computer assisted form

Face-to-face interview -- CAPI (Computer Assisted Personal Interviewing)
Telephone interview -- CATI (Computer Assisted Telephone Interviewing)
Self-administered form -- CASI (Computer Assisted Self Interviewing); CSAQ (Computerized Self-Administered Questionnaire)
Interviewer present -- CASI or CASIIP (Computer Assisted Self-Interviewing with Interviewer Present); CASI-V (question text on screen: visual); CASI-A (text on screen and on audio)
Mail survey -- DBM (Disk by Mail); EMS (Electronic Mail Survey)
Panel research -- CAPAR (Computer Assisted Panel Research); Teleinterview (electronic diaries)
Various (no interviewer) -- TDE (Touchtone Data Entry); VR (Voice Recognition); ASR (Automatic Speech Recognition)

(de Leeuw and Nicholls II, 1996, unpaginated)

Chart 2.1 Taxonomy of Computer Assisted Interviewing methods

Email interviews have a lot in common with traditional mail interviews: they are received by the correspondent in text form, which requires that the respondent be literate; the respondent can do them in her or his own time, which allows for more complex and numerous questions; and errors do not creep in through interviewers subconsciously “leading” respondents to specific answers (nor can the interviewer intentionally “cheat” by leading the respondent to answers conforming to the interviewer’s expectations). Email interviews compare favourably to in-person interviews, where the respondent has to answer in the time the interviewer is present (requiring simpler and less numerous questions), and where the interviewer can consciously or subconsciously bias the respondent. They compare unfavourably to in-person interviews, which do not require respondents to be literate. (Since my survey was of writers, however, I assumed that literacy was not an issue.) Mail and in-person surveys have certain advantages over email surveys. For one thing, they do not require special equipment (computers), which is both expensive and requires that the respondents have the skills to use it. (Here again, though, if one is researching the behaviour of people online, as I am, one must assume that potential respondents have access to computers; otherwise, they would not be part of the phenomenon being researched in the first place.) Furthermore, in-person surveys have advantages over both forms of mail surveys: for one, the respondent of the former is not free to ask others for help answering questions, as he or she is in the latter. For another, non-verbal behaviours may be noted during in-person interviews, whereas they are impossible to note in either form of mail survey. Email surveys do have some advantages over both of the other two forms, however.
It is a relatively simple matter to customize survey questions for specific sub-groups within the research population; this is somewhat more difficult for regular mail surveys, and very difficult for in-person surveys. Where answers are numerical, it is a relatively simple matter to input the numbers into spreadsheet programs which can perform a variety of calculations on them; with the other two types of surveys, since the information is usually collected on print forms, it has to be input into computers before calculations can be made, which adds substantially to the amount of work the researchers have to do, as well as adding a potential source of human error. The most important advantage that email surveys have over the other forms of survey is that they are far less time-consuming and, therefore, far less costly. In-person surveys are very labour-intensive, which can make them highly expensive to undertake. Regular mail interviews require postage, of course, both from the researcher to the respondent and back again; depending upon the size of the population to be researched, this can be very expensive. Email surveys avoid these costly steps. For this reason, under the right circumstances, they have the potential to “democratize” social science research, giving individuals or small groups the ability to do large-scale research. I am not exaggerating when I say that the time and cost of doing the research on which this dissertation is based would have been prohibitive for me, a single graduate student, using any other method. Note my qualification that email research must be conducted in the right circumstances. As has already been pointed out, one needs a computer and the skills to use it to answer an email survey. In 1999, slightly less than a third of American adults, 92 million (INT’L.com, 1999b, unpaginated), and under half of Canadians, 13.5 million (INT’L.com, 1999a, unpaginated), used the Internet.
Any survey of the general population of either of these countries by email, then, would necessarily be skewed, because over half the population would not be eligible to respond. (By way of contrast, telephone surveys generally ignore the three per cent of the North American population which does not have phones, while mail surveys do not count the relatively small number of North Americans who do not have permanent home addresses.) Until computers have the same home penetration that telephones do, they will not be appropriate for general surveying. However, as a tool for learning about groups of people who are already online, the email survey appears to be the best available. These and other similarities and differences between the three methods of research are shown in Chart 2.2.

How the Email Survey Was Conducted

For the present survey, the questions were written to be as general as possible in order to elicit the widest possible response. Some of the respondents objected to this. “If you don’t mind I won’t respond to your questions,” one writer stated. “In general I think you are asking the wrong questions. Simple questions usually get simple answers, and the subject you are aiming to clarify does not lend itself to simple questions.” (Beardsley, 1998b, unpaginated) As it happened, I had conducted a less ambitious version of the survey in 1996, and was satisfied that the answers to simple questions could reveal complex patterns. The reader of the current volume can, of course, judge the results of the survey for her or himself. For this project, I identified five groups whose work contributed to the distribution of fiction on the World Wide Web: writers who put their work on their own Web pages; writers whose work appears in ezines; editors of ezines; writers of hypertext fiction; and writers of collaborative fiction. I defined an ezine as a Web page with the writing of more than one author.
I considered collaborative fiction to be a subset of hypertext where the segments are written by different people rather than a single author. The questionnaire sent to writers with their own pages was basically the same questionnaire I used in 1996. The questionnaire sent to the other four groups used this questionnaire as the template, adding or removing questions to reflect what I thought would be the specific interests of each group. (The five questionnaires are reproduced in Appendix A.)

in-person | email | mail
interviewer present | interviewer absent | interviewer absent
oral | print | print
literacy not important | literacy necessary | literacy necessary
equipment unnecessary | equipment necessary | equipment unnecessary
no incompatibility problem | compatibility problem | no incompatibility problem
special skills unnecessary | special skills necessary | special skills unnecessary
population not an important issue | population an important issue | population less of an issue
synchronous | asynchronous | asynchronous
limited by time | unlimited by time | unlimited by time
completed at once | completed at leisure | completed at leisure
physically invasive | not physically invasive | not physically invasive
date of interview precise | date of interview less precise | date of interview imprecise
interviewee’s comfort no issue | interviewee’s comfort assured | interviewee’s comfort assured
questions must be simple | questions can be complex | questions can be complex
summarization difficult | summarization possible | summarization not possible
additional research unlikely | additional research possible | additional research possible
interviewer control | interviewee/programmer control | interviewee control
branching errors (interviewer) | few branching errors | branching errors (interviewee)
customized survey difficult | customized survey possible | customized survey difficult
random question order difficult | random question order possible | random question order difficult
calculations can be difficult | calculations are simple | calculations can be difficult
hard not to answer questions | easier not to answer questions | easy not to answer questions
non-verbal behaviour noted | non-verbal behaviour unnotable | non-verbal behaviour unnotable
probing possible | probing not possible | probing not possible
potential interviewer errors | no interviewer errors | no interviewer errors
intimate subject discomfort | discomfort lessened | discomfort lessened
social desirability bias | social desirability bias lessened | social desirability bias lessened
validation after interview | some validation during interview | validation after interview
interviewer cheating possible | interviewer cheating not possible | interviewer cheating not possible
interviewee cannot be helped | others can help interviewee | others can help interviewee
irrelevant info takes up time | irrelevant info more easily ignored | irrelevant info more easily ignored
addressing not an issue | bad addresses are usually known | bad addresses aren’t as easily known
interface not an issue | screen hard to read off of | paper easier to read
information limited to speech | information limited to screen | information limited to page
expensive | potentially least expensive | potentially less expensive

Chart 2.2: Comparison of Surveying Techniques

Finding the subjects was a relatively straightforward matter: I conducted searches using the Yahoo search engine, using the terms “online fiction,” “fiction ezines” and “hypertext fiction.” This yielded a large number of pages. Going through the pages of individual writers, I found that many belonged to Web rings. A Web ring is a list of pages connected by a common theme: “In each of its tens of thousands of rings, member web sites have banded together to form their sites into linked circles... Through navigation links found most often at the bottom of member pages, visitors can travel [to] all or any of the sites in a ring. They can move through a ring in either direction, going to the next or previous site, or listing the next five sites in the ring. They can jump to a random site in the ring, or survey all the sites that make up the ring.” (Starseed, Inc., 1998, unpaginated) By accessing Web ring lists of pages with fiction on them, I was able to gather many more names. From there, it was a matter of going to each page, identifying the author of each story or editor/publisher of each online magazine and collecting his or her name, email address and the URL of the story. In this manner, I harvested 1678 names in the five categories. I began emailing the surveys on Saturday, June 27, 1998. One round of surveys was sent out per week; each round contained between 100 and 150 questionnaires. Thirteen rounds of surveys were sent in total. Respondents were given two weeks to reply, which meant that there was some overlap in responses coming in. To help me organize the responses, each email was sent with a number in the subject line identifying the round it was a part of. Initially, the surveys were sent in batches of five to an email message, with a generic name in the subject line. Unfortunately, some people found this problematic.
For one thing, the salutation “To Whom It May Concern” struck some as impersonal. “To Whom It May Concern? If you don’t take the time to find out who you are talking with, why should I take the time to fill out your questionnaire? Sorry for the tone--this miffed me a little.” (Hunter, 1998, unpaginated) Others recognized that this was the only way to open a letter going to more than one person: “I nearly took umbrage at your salutation until I read that you were sending this email to a number of people.” (Jennings, 1998, unpaginated) Of greater importance is the fact that some people felt that this kind of survey was “UNSOLICITED and therefore annoying.” (Kay, 1998, unpaginated) Because of the ease with which email can be sent, many people find their inboxes flooded with messages from people they do not know on subjects in which they are not interested. Such messages are often referred to as “spam,” after a sketch by the comedy troupe Monty Python’s Flying Circus in which the word spam is repeated ad nauseam. “I nearly deleted your message without reading it because I thought it was spam :-),” one person wrote. (Nixon, 1998, unpaginated) Aware that this was a problem, I had written in the covering letter to the questionnaire that it was an academic exercise, and that the information collected was not going to be used for commercial purposes. I thought that this would satisfy most people, whose objection was to unsolicited commercial email. What Nixon’s letter made me realize, though, was that some people would assume from its generic subject line that my survey was spam, and delete it from their inboxes before they ever got the chance to read my disclaimer. There is no way of knowing how many people didn’t respond to the survey for this reason, but it is possible that many didn’t.
This is in accord with the experience of Witmer, Colman and Katzman, who wrote: “Our results indicate that attaching an introductory paragraph with no forewarning to a full, on-line survey instrument is inadequate and inappropriate to the electronic environment.” (1999, 156-157) Had I been aware of it at the time, I would have applied their solution to this problem: sending a short email asking potential respondents if they would be willing to participate in a survey before sending them the survey itself. As it happened, Nixon suggested a solution himself: “If I were you I’d try to personalize your message by including a reference to the author’s story in your subject line.” (1998, unpaginated) Starting with round six, I sent the questionnaire to each individual in a separate email and named the story that each had written in the subject line of the email. I hope this reduced the number of people who interpreted the survey as spam, as well as minimizing the impersonality of this initial contact. Still, one person wrote: “I see that I am the only recipient on that particular distribution of your survey, and I am curious why you chose to query me, versus the other writers who appear in that edition of the on-line literary magazine. Or did you survey all the writers in that edition?” (Cochrane, 1998, unpaginated) Ultimately, no method is going to satisfy all research subjects. In the covering letter of the survey, I told potential respondents that I would be willing to answer any question they had, so I responded to the above query. A couple refused to fill out my questionnaire because they didn’t believe me. “I would like to help you,” one explained, “however I have participated in several things of this nature in the past, where people said they would share the results of whatever project they were doing by sending me a writeup, or references, or something similar, and I have yet to see one person make good on this promise.
I understand that things don’t always get finished or sometimes commitments get forgotten, but I’ve decided I’m tired of empty promises.” (Robert, 1998, unpaginated) In fact, I placed the name and email address of every one of the respondents to my survey who asked to be kept informed as to the progress of the dissertation into a file. When the first draft of the dissertation was written, they received a notice telling them about it, and giving a tentative date when it would be complete. I see no reason why I won’t be able to email them the URL of the dissertation if it is published online, or send them a copy of this chapter by email if it isn’t, as I promised anybody who asked. To me, this is an issue of “fair treatment” of research subjects. If they devote the time to answer a survey, they have the right to know the results of the survey. The Internet makes this particularly easy, since email is neither as costly nor as time-consuming as mail; and, if the results are published online, notification of respondents can be as simple as an email containing a single line with a URL. As Robert’s quote suggests, being unresponsive to requests for information from subjects can result in their being less willing to participate in future research. If too many people on the Internet are burned by unethical researchers, the majority of people online may become hostile to research, to the detriment of everybody who may wish to conduct research on digital communications in the future, as has been noted: “research that violates an online group’s sense of privacy may leave ‘scorched earth’ behind for prospective future participants and future researchers as participants seek more private online spaces to carry out their group’s business or simply scatter under the scrutiny of researchers. [note omitted]” (Marc A.
Smith, 1999, 211) The request I received most often from respondents was, “Please tell me how you got my name.” (Greenstein, 1998, unpaginated) At first, the answer seemed self-evident, given the public nature of the Internet. However, I realized that some writers have material in several electronic magazines and, therefore, would be curious about which venue I had found each of their stories in, especially in the first five rounds of the survey, when the story was not named. I dutifully responded to each of these queries. Another important lesson for me was that some survey subjects will not share my understanding of what they are doing. “What e-zine?” one writer asked. “Are you talking about Duct Tape Press? That’s an e-zine?” (Muri, 1998, unpaginated) The line between an ezine and a personal page is fuzzy. Some writers put their stories on the Web page of a friend; they wouldn’t consider this an ezine although, by my definition, it is. In retrospect, I should have defined my terms more clearly, which would have made the intent of my questions less open to misunderstanding. Finally, it should be noted that the ease with which email can be sent works both ways: whereas somebody who didn’t like being sent a paper survey through regular mail would likely simply throw it away, email makes it trivially easy for respondents to online surveys to express their displeasure. This sort of vituperative email is known as a “flame.” In the course of the survey, I received two. One read in part:

For you to truly understand e-zines, web-zines, and the web as a form of media for communicating the many forms of art you should go and view this art. You should read what zinesters have to say in their articles. I sincerely find it hard to believe that you are a PhD student. If you decide to get serious about this and do your own research (instead of depending on zinesters to do [sic] just inform you) contact me again. If not, buzz off. (Kay, 1998, unpaginated)

The other was from science fiction writer Norman Spinrad. In answer to the question “Has your writing been published in traditional media?” Spinrad wrote: “Insulting!” When I asked where, he wrote “More insulting!” (1998, unpaginated) Spinrad seemed angry that I didn’t know who he was. In fact, I have known his work since I was a teenage science fiction fan; however, I had decided that all of my potential subjects would be asked the same set of questions. Flames are the online equivalent of somebody shouting at you in person and slamming the door in your face. They don’t happen often, and the best response is to shrug and move on. Of the 1678 surveys sent out, 300 were not deliverable. The most common reason for returned email, by far, was “User unknown.” Most likely, this is because the person switched her or his account to a new service provider or dropped off the Internet in the time between when I harvested the email address and when I sent out the questionnaire. Less frequently, surveys were returned with the message “Host unknown -- Name server: man.network: host not found.” This usually means that the Internet Service Provider’s computer is temporarily out of service because of a hardware or software malfunction, that the ISP has been bought by a larger ISP which subsequently changed all of its addresses, or that it has simply gone out of business. Returned mail also sometimes came with the message “MAILBOX FULL” or “Mail quota exceeded,” both of which are self-explanatory. The majority of returned email came from contributors to ezines; relatively little came from writers with their own Web pages. A moment’s reflection should show why this would be.
Since a writer’s story resides on the ezine’s Web server, the story will remain even if the writer drops off the Net; not only can it be read when the writer is no longer online, but the return email address will continue to accompany the story even if the writer can no longer be reached there. When an individual moves to a different service provider or simply stops using the Internet, by way of contrast, his or her page will be removed from the original server. Thus, the server on which a writer’s work resides turns out to be an important consideration when conducting this kind of research. In addition to the mail which could not be delivered, 12 people wrote to say that they refused to take part in the survey. When these numbers are subtracted from the number of surveys sent, that leaves 1366 potentially answerable surveys. Of these, 444 were returned filled out (32.5%). Despite all the rookie mistakes I made in the design and conduct of the survey, I am satisfied with this result. I suspect that this rate of return was based, in part, on the fact that subjects are more likely to respond to a survey if they have “a higher personal investment in the subject or a higher interest level in the general study.” (Witmer, Colman and Katzman, 1999, 156) The fact that the survey was on a subject of import to my respondents likely contributed to a higher response rate than I would have received if it had been on, say, what type of soap they use or what car they drive.

Interpreting Online Survey Responses

Interpreting the information collected in an online survey is a challenge. To determine whether the sample of writers which I have collected is representative of people on the Internet as a whole, it is necessary to know how many such people there are. However, there is no way of knowing for certain how many people use the Internet. A computer may be used by a single person in his or her own home.
Or the person may invite his or her friends over to use it. Or the person may live with his or her family, in which case several people may use it. In addition, businesses and schools often have pools of computers which may be accessed by dozens or hundreds of employees or students. Finally, public terminals are springing up in libraries and cybercafes (in addition to commercial terminals such as can be found at copy houses like Kinko’s) on which anybody can anonymously access the Internet. In order to determine the number of people with Internet access, researchers usually start with figures which can be more easily calculated: the number of hosts or domains on the Internet or the number of networks connected to it. A host is usually a single computer which connects many people at different terminals to the Internet. The name of a given host computer is usually the first thing after the @ sign in an email address; in my email address, [email protected], for instance, the host computer is “po-box.” The domain name is usually made up of the rest of the address (i.e., “mcgill.ca”), but is sometimes taken to mean everything after the @ sign. There may be many host computers within a single domain (McGill, for instance, has Music, Musica, Musicb and CC in addition to Po-box). A network is a somewhat arbitrarily defined collection of computers; it is often synonymous with the domain, but it need not be. Estimates of how many people are connected to the Internet use host, domain or network counts as their base, multiplying these known numbers by an estimate of how many people on average use each host, domain or network. These estimates can vary drastically (sometimes by as much as 100%) depending upon the assumptions used in the calculation.
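The arithmetic behind such estimates can be sketched in a few lines. To be clear, the host count and the users-per-host multipliers below are hypothetical placeholders of my own, not figures from any of the sources cited here; the sketch only illustrates how strongly the final figure depends on the assumptions.

```python
# Sketch of the host-based estimation method described above.
# Both the host count and the users-per-host multipliers are
# hypothetical, chosen only to show the sensitivity of the result.

def estimate_users(host_count: int, users_per_host: float) -> int:
    """Estimate total users as hosts times an assumed average per host."""
    return round(host_count * users_per_host)

HOST_COUNT = 40_000_000  # hypothetical number of Internet hosts

low = estimate_users(HOST_COUNT, 2.5)   # conservative assumption
high = estimate_users(HOST_COUNT, 5.0)  # generous assumption

# Doubling the assumed users-per-host doubles the estimate --
# the kind of 100% variance between estimates described in the text.
print(low, high)  # 100000000 200000000
```

The calculation itself is trivial; the point is that every figure cited for the size of the Internet population embeds an unverifiable multiplier of this kind.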
Just as there is no entirely trustworthy way of knowing how many people are on the Internet as a whole, there is no accurate way of knowing what percentage of these people are engaged in any specific activity. Even if it were possible to get an accurate measure of how many people are connected to the Internet at a given moment, this figure would quickly become obsolete. Because computers are added to or removed from the network on a daily basis, “We can trace out network connections throughout the world, even as we realize that the network’s constantly changing parameters ensure no printed map, not even an electronic one posted online, can be completely up to date.” (Gilster, 1993, 18) Individual users are coming on and dropping off the Internet all the time. Moreover, Web pages appear, move and disappear on a regular basis. This last phenomenon had a direct bearing on my research. In order not to get swamped by the work, I decided to read only the stories of the people who responded to my survey. For this reason, I waited until after the survey data had been collected before I began downloading the stories to read. Unfortunately, in the year and a half that passed between the time I sent out the questionnaires and the time I began writing the dissertation, many of the individual Web pages and some of the ezines could no longer be found at the addresses I had for them. If I had it to do again, I would store each story as I harvested the name and email address of the person who wrote it, so that I would have it whenever I decided to read it. There is a valuable lesson here, however. In order to rectify this problem, I did a Yahoo search for each ezine and individual Web page which could not be found at the URL where I had originally encountered it. I was able to find an additional 60 stories (21.0% of the 286 stories I read). This substantial number of stories had moved in the year and a half since I had first found them.
In addition, other sites which had changed URLs left notice of their new addresses at their original URLs. I did not catch all of these because some sites leave change of address notices on their home page, while I was trying to access interior pages with specific stories directly. In future, I will keep the URL of a site’s home page as well as of any more specific pages I may want, in order to be able to reach a change of address notice in case the specific page is no longer there. Perhaps most important, regardless of when I collect information online, I will revisit as many sites as I can as close to publication as possible to ensure that the links to them still function. Owing to this fundamental counting problem, I do not believe it is possible to do anything more than a cursory quantitative analysis of the survey results. In statistics, it is not always necessary to know the population of a group under study; there is a threshold number of responses past which a survey can be said to be representative of a population. However, although the total number of responses to the survey is above this threshold, none of the five individual survey segments is. Since the surveys are substantially different, I cannot say that any of them is representative. Therefore, when figures are used in the analysis which follows, they are meant to be suggestive rather than definitive. In addition, the Internet is something of a moving target for any researcher. As has already been noted, pages are placed on and taken off the Web on a literally moment by moment basis. This means that any conclusions drawn about it at any given time are likely to be out of date soon after. In this dissertation, I have tried to tease out general principles which are likely to apply for a long time to come.
(And, indeed, many of the concerns of writers in this survey echoed the concerns of those in the 1996 survey, suggesting that they are consistent over time.) However, it must be understood that this survey is a snapshot of the Web taken at a specific point in time.
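A pre-publication link check of the kind proposed above can be sketched as a short script. This is a minimal illustration under stated assumptions, not the procedure actually followed in 1998: `link_is_live` and `check_links` are hypothetical helper names, and a real run would read the harvested URLs from a file and retry transient failures before declaring a link dead.

```python
# Minimal sketch of a pre-publication link check, as described above.
from urllib.request import Request, urlopen

def link_is_live(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL still answers with a success or redirect status."""
    try:
        # A HEAD request avoids downloading the whole page just to test the link.
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (ValueError, OSError):
        # Malformed URL, DNS failure, HTTP error or timeout: treat as dead.
        return False

def check_links(urls):
    """Partition the harvested URLs into live and dead lists."""
    live, dead = [], []
    for url in urls:
        (live if link_is_live(url) else dead).append(url)
    return live, dead
```

Run against the harvested address list shortly before publication, the `dead` list identifies exactly the pages that need a fresh search for their new locations.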

On the Ethics of Online Research and the Relationship with Research Subjects

To begin, the researcher should clearly identify him or herself, the institution with which the researcher is affiliated and the purpose of the research. I did this in the covering letter which I sent to all of my potential subjects (see Appendix A). This is essential for “informed consent”: the right of any individual to know what he or she is being asked to become involved in. In addition, I offered to answer any questions if any aspect of the work was unclear; my responses to all such queries were sent within two days of receiving the question.

In cases where the information being collected is very personal, the researcher may grant his or her subjects anonymity. I did not feel that the information I was asking for was so sensitive, so I did not offer anonymity, nor did any of my respondents ask for it. It is worth noting, however, that some of the subjects used pseudonyms in their work on the Web, a convention which I accepted without pressing to learn their real names.

Another aspect of dealing fairly with research subjects is to allow them to see the results of the research after it has been completed. I offered to do this in my covering letter. Over 100 of my respondents asked to see the results when I had them. I put their names and email addresses into a file. I have updated them regularly: in the summer of 2000 to let them know how the writing was going, and again in late November 2000 to let them know some details of the finished dissertation and the successful defense. I have been told the dissertation will be published online in early 2001; if so, my final email to my survey respondents will be to give them the URL.

In all other ways, my dissertation research was conducted in accordance with the relevant sections of the Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans.

Ezines

More than twice as many of the writers who responded to my survey had contributed to electronic magazines (227) as had put fiction on their own Web pages (109). To better understand what they were doing, it is necessary to look at the ezines themselves before turning to what the writers said about their online experiences.

A Note About Terminology

Because the terms “zine” and “ezine” are close, it would be natural to assume that they are analogous phenomena. In fact, this is not the case. To avoid the confusion that may arise, it is necessary to take a brief look at the two terms.

In print, zine is a short form of the term “fanzine.” Such publications became popular in the 1960s, when fans of science fiction films, television series and prose began putting out small magazines about their favourite works. Eventually, the word “fan” was dropped from the term when it became apparent that a wide variety of publications, not all of them devoted to a single medium or cultural artifact, shared common features with science fiction fanzines.

Ezine is a short form for the term “electronic magazine.”

Owing to the large number of print zines (estimated at between 20,000 and 50,000 in 1995) (Ardito, 1999, unpaginated), it is hard to find a definition which will cover all examples. However, one which seems appropriate is that “zines are noncommercial, nonprofessional, small-circulation magazines which their creators produce, publish, and distribute by themselves.” (Duncombe, 1997, 6) Mark Gunderloy, founding editor of a zine review publication called Factsheet Five, and Cari Goldberg Janice claim that “Generally they’re created by one person, for love rather than money, and focus on a particular subject.” (1992, 2)

As we shall see, many ezines, by way of contrast, try to recreate the editorial process of print magazines, where there are a variety of editors and page designers who prepare the material for electronic publication. This is a far cry from the one-person operations of most print zines. Furthermore, again as we shall see, many ezines have tried to develop means of generating revenue for their work, and the publishers of others would like to be generating revenue; thus, they are not analogous to print zines, which are not commercially oriented.

The closest online analogy to print zines, I would argue, is the personal home page. Such pages are the work of individuals. They are clearly not commercial. Although of widely varying quality, they do not profess to any form of professionalism.

Duncombe’s claim that “In zines, everyday oddballs were speaking plainly about themselves and our society with an honest sincerity, a revealing intimacy, and a healthy ‘fuck you’ to sanctioned authority – for no money and no recognition, writing for an audience of like-minded misfits” (1997, 2) could just as easily be referring to these pages.

There are points at which this analogy breaks down: it is ambiguous, for instance, when a personal page that solicits work from others stops being analogous to a print zine and becomes more like a print magazine. Nonetheless, I believe the distinction generally holds true.

For this reason, when I refer to zines in this dissertation, I am referring to print publications as defined by Gunderloy, Goldberg Janice and Duncombe. Unless otherwise stated, when I refer to ezines, I am referring to online publications which are close in spirit and structure, if not always in results, to print magazines.

The Variety of Subject Matter of Ezines

Print magazines exist on a continuum of specificity based on the content they provide for

what they perceive to be their audience. There are general purpose magazines (such as

news magazines Time or Newsweek) with wide circulations. On the other hand, there are

magazines with very specific subject matters (such as Radio Control Modeler or

Stamping Arts & Crafts) which are targeted at much smaller, niche audiences. Such a continuum exists in the magazines which are devoted to fiction on the World Wide Web.

There are ezines which, with one typical limitation, are willing to accept anything:

“We hate to be vague, but Utterants... does not have a preference in terms of style or content. The only thing we look for is quality.” ([[email protected]], undated, unpaginated). In these publications, links to literary stories can be found next to links to genre stories; there is no distinction between “high” and “low” literary forms, a distinction that leads many professionals to assume that genre writing, by definition, cannot be “quality” writing. By breaking down the distinction between high and low forms, these ezines try to reach the widest possible audiences, offering a little something for every taste.

Slightly down the continuum are the literary ezines. “The Richmond Review,” for example, “was established by novelist Steven Kelly in October 1995 as the UK’s first literary magazine to be published exclusively on the World Wide Web.”

([[email protected]], undated, unpaginated) Like their print counterparts, literary ezines contain fiction which is about lived human experience. The subject matter can vary widely (although it is rarely allowed to drift into pure genre subjects), giving these publications a wide potential audience (keeping in mind that such an audience does not necessarily include fans of specific genres). Other examples of literary ezines include The Barcelona Review [http://www.web-show.com/barcelona/review/] and Eclectica [http://www.eclectica.org/].

Various genres are represented in the ezines available on the Web. Genre ezines are further down the continuum than literary ezines because, although sometimes quite popular, their potential audience is limited to fans of the specific genre. There are ezines devoted to the biggest genres: science fiction (Aphelion [http://www.aphelion-webzine.com/], for example, or Jackhammer E-zine [http://www.eggplant-productions.com/]) and fantasy (DargonZine [http://www.dargonzine.org/] and Faerytales [http://www.geocities.com/Area51/Shire/3951/door1.html]).

As the subject matter becomes more focused, the ezine moves further down the continuum. Thus, there are ezines such as HistOracle, which “focuses on historical fiction, blending historical fact with intriguing characters,” ([[email protected]], undated, unpaginated) and Cafe Irreal, which specializes in “absurdist and surreal fiction.” (Whittenburg and Evans, 1998, unpaginated)

Not surprisingly, some of the fiction deals with subject matter of specific interest to computer users. “The Scarlet Netter,” for example, “began as an e-mail exchange between friends and lovers. Our frank discussions, log files, letters, and erotic fiction about On-Line Love Affairs and Internet Adultery began to take on a life of its own. It evolved into a slightly sophisticated, pointedly explicit, deeply personal and very modern Newsletter.
With all the interest generated by the first few issues, it became apparent that a web site was called for (begged for, actually).” (Hester, undated, unpaginated) The defining feature of fiction in StoryBytes, to use another example, is that “Story length must fall on a power of 2. That means 2 words, 4 words, 8 words, 16 words, etc. That’s not simply an even number. To get a power of 2, you start with the number 2 and keep doubling (2*2=4, 4*2=8, 8*2=16, 16*2=32, etc.).” (Bubien, 1997, unpaginated)

Sometimes, the subject matter can be very specific: Dark Annie [http://members.aol.com/darkannie/], for example, features fiction on the subject of Jack the Ripper. In a similar vein, The Inflated Graveworm offers a very specific type of dark fantasy: “My question was, ‘Where in the world would H.P. Lovecraft, Lord Dunsansy--even Edgar Allen Poe--get published today?’ The answer was, unfortunately, ‘Nowhere.’” (David Powers, undated, unpaginated) According to its publisher, Idling “was started as an experiment to see what kind of material was being written on a particular subject: in this case, unemployment.” (Wakulich, 1998, unpaginated) Publications which have very specific content probably can muster only a small readership. As we shall see, the Web gives publishers of such niche magazines some important advantages over print.

Moving back up the continuum, it should also be noted that there are several ezines which, while not devoted to a specific type of content, put limitations on who can contribute.
“The TimBookTu Homepage,” for example, is “designed to be a showcase for up-and-coming African-American writers and poets who desire a place to have their works made available to the World Wide Web audience.” (Vaughan, Jr., 1997, unpaginated) To use another example: “Blithe House Quarterly considers unpublished short stories by emerging and established gay, lesbian and bisexual authors for publication.” (Alvarez, undated, unpaginated) There is no restriction on who can read these publications, of course, but they are more likely to be read by members of the minority group who are their intended writers. For this reason, they should be placed near genre publications on the continuum.

Various other niche audiences are served by fiction ezines. Some may be written by and for religious groups: “MorningStar is a quarterly electronic publication of the Writing Academy, a not-for-profit organization of Christian writers.” (Kyrlach, 1997, unpaginated) Others may be written by and for people who live in a specific area: “Welcome to the website of Border Beat, the Border Arts Journal, a quarterly publication presenting literary and visual arts from and about the U.S.- Mexico border region, Mexico, and the American Southwest.” (Carvalho, undated, unpaginated)

There is at least one ezine, The Twilight Times, that positions itself as catering to a niche taste which is not accommodated by other magazines, in print or online: “I’ve been on the internet a few months and have met dozens of unpublished writers who have real talent. Twilight Times was created to present the works of those writers whose stories ‘blend’ genres, are too ‘literary’ for other zines or seem too mainstream or ‘quirky’ in tone.” (Quillen, 1998, unpaginated)

One other category worth noting is ezines which contain fiction in more than one language.
“PARK & READ,” for example, “is the European Internet Literature Magazine which is open to every language spoken on this continent. While the first issue of PARK & READ was mainly in German we are happy to announce the second issue containing a lot of texts which were originally written in English or Spanish. Most texts were translated at least into one other language.” (Zinner, 1996, unpaginated) The Barcelona Review, perhaps not as ambitious, claims to be “the Web’s first electronic review of international contemporary cutting-edge fiction in English/Spanish bilingual format. (Original texts of other languages, such as Catalan, the official language of Catalunya, are

presented along with English and Spanish translations as available.)” (Jill Adams, undated, unpaginated) I’m not certain where on the continuum to place multi-lingual publications. On the one hand, they have an increased potential readership: the combination of those who speak the various languages in which their stories are written.

On the other hand, they may still encounter cultural barriers: are there a lot of readers interested in works that describe how other societies are structured, how other people live? If not, the potential increased readership may be ephemeral. This is a subject which calls for further investigation.
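As an aside, the StoryBytes “power of 2” length rule quoted earlier is the one editorial constraint surveyed here that can be restated as a simple computation. The following sketch is my own illustration of the rule, not anything published by the magazine, and the function name is invented:

```python
def is_valid_storybytes_length(word_count: int) -> bool:
    """StoryBytes accepts stories of 2, 4, 8, 16, ... words:
    start with 2 and keep doubling. A count qualifies if it is
    at least 2 and is a power of 2 -- that is, it has exactly
    one bit set in its binary representation."""
    return word_count >= 2 and (word_count & (word_count - 1)) == 0

# A 16-word story qualifies; a 6-word story is even, but not a
# power of 2, so it would not.
```

This captures the magazine’s point that “not simply an even number” will do: only repeated doubling from 2 produces an acceptable length.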

As the World Wide Web becomes a place known for the publication of fiction, the number and diversity of niche publications will grow. This can only benefit writers, who will have a greater opportunity to find a place where they can be published, no matter what the subject matter or style of their work, and readers, who will be able to find exactly the kind of writing they are looking for.

Age

As the World Wide Web is a new medium of communication, it is to be expected that most of the publications on it are also new. Nine (15.3%) of the 59 ezines I studied had been in existence for six months or less; 14 (23.7%) had been in existence for more than six months and less than a year; 21 (35.6%) had been in existence for more than a year and less than two years. Interestingly, 15 (25.4%) of the ezine publishers claimed that their publication had existed for more than two years. Why interestingly? The graphical interface of the World Wide Web, the most important factor in making it accessible to a broad public,

only really started catching on in 1995. Many of the publications which claimed a longer history would be older than this aspect of the Web itself.

There are a couple of reasonable explanations for this. As David Sutherland, editor of Recursive Angel, explained, “We moved from print media to electronic due to ever rising costs of both paying contributors and printing issues.” (1998, unpaginated) Thus, a publication which had a print counterpart could claim the print publication’s history as part of its own for purposes of calculating its age. Moreover, some publications had existed in other digital forms before they had migrated to the Web. DargonZine claims, with some credibility, to be the oldest continuous publication on the Internet: it had started publishing in 1985 (“About DargonZine,” 1998, unpaginated) on FSFnet (“DargonZine Writers’ FAQ,” 1998, unpaginated). Thus, although the popular graphical interface of the Web is relatively new, one cannot assume that everything that appears on it is.

“Perpetual Proliferation”

Traditional print magazines, newspapers and newsletters are collectively known as “periodicals” because new issues appear at the end of a given period of time. Many of the ezines attempt to emulate print magazines by holding to a schedule. Of the 59 ezines represented in the survey: one (1.7%) published daily; one (1.7%) published twice a week; two (3.4%) published weekly; one (1.7%) published every two weeks; six (10.2%) published monthly; one (1.7%) published eight times a year; five (8.5%) published once every two months; nine (15.3%) published quarterly; two (3.4%) published three times a year; and two (3.4%) published twice a year.

While these figures suggest a great stability, in fact, they are not as solid as they may appear. Many publishers stated a frequency preference, then added that they may or may not make their avowed schedule.
“I try to publish every two months,” one publisher admitted, “but sometimes fall behind and skip a month now and then.” (Carroll, 1998, unpaginated) This is the nature of small publishing: in print, small magazines and journals are notorious for missing deadlines and dropping issues completely. Two of the publications (3.4%) were no longer publishing at the time of my survey; the stories remained on the Web for archival purposes. Four of the zines (6.8%) had started with one publishing schedule and, over time, changed to another. Twilight World, for instance, “used to be released every two months until early 1997. Then I started running out of stories and needed to wait longer until people would send them to me. It’s really irregular now.” (Karsmakers, 1998, unpaginated) Four of the publishers (6.8%) didn’t answer this question.

The remainder, 20 ezines (33.9%), do not have regular schedules. Since stories do not have to be bundled together (as they do in print), they need not be placed on a Web page at a specific time as a group. They can just as easily be placed on the page individually as they are ready. This completely eliminates the need for a set publishing schedule, which makes some people rethink the nature of periodical publishing: “Currently it [The Pseudo-Magazine of Writings] is only a one-issue publication, in that if someone sends something to post, I post it when I have the time.” (G. Murphy, 1998, unpaginated) As the publisher of what used to be a monthly ezine put it: “I basically call the magazine a weekly, although I add updates on Monday, Tuesday, Wendsday, [sic] and Friday. So, it is bascially [sic] Perpetual Proliferation...” (Rick, 1998, unpaginated) Regular schedules may be an artifact of print publishing which, in time, will be discarded by online magazines.
However, some publishers argued that regular schedules had value even in the online world: “In order to give the issues ample time to be read, the publication still needs a regular schedule that the readership can count on. To constantly update the issues would not be fair to the writers or the readers (who would have time for a daily magazine; and how many people would read it daily?).” (Dave, 1998, unpaginated)

Readership

With print publications, defining readership is a relatively simple matter. We assume that when a publication claims that “X” number of people read it per month, that number of people own discrete copies of the publication. Again, when a publisher claims that a book has been read by a given number of people, we assume that that many people have physical copies of it. Defining the readership of online publications is not so simple. What does it mean when an online publisher claims that “We get about 150-200 hits/day”? (Green Onions, 1998, unpaginated) Hits measure the number of times a remote computer asks a server to send it any element of a page. If a Web page is made up of plain text, then a person who accesses it counts as one hit (a request for the HTML). If, on the other hand, a page contains 20 graphics, the person who accesses it will count as 21 hits (20 for the graphics and one for the HTML). In this way, the most complexly designed pages tend to register the most “hits,” although hit counts are not really a good measure of how many people actually access a page. The number of individual readers must be assumed to be somewhat less than the number of hits a site gets; how much less is impossible to know for somebody who doesn’t have access to the output of the publication’s tracking software. Some organizations use the term “unique readers” to refer to what, in print, would simply be readers, allowing them to talk about readers in a more specific way than when publishers talk about hits.
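The arithmetic behind hit counts described above can be made concrete with a small sketch. This is my own illustration of the principle, not any server’s actual logging code, and the function name is invented for the purpose:

```python
def hits_per_visit(num_graphics: int) -> int:
    """Each visit generates one request for the HTML file,
    plus one additional request for every graphic embedded
    in the page. Every such request is logged as a 'hit'."""
    return 1 + num_graphics

# A plain-text page registers 1 hit per visitor; a page with
# 20 graphics registers 21 hits per visitor. Dividing a site's
# hit total by this figure yields a rough ceiling on visits,
# not a count of unique readers.
```

The point of the sketch is that the same number of visitors can produce wildly different hit totals depending on page design, which is why hits inflate the apparent readership of graphics-heavy ezines.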
Sometimes readership is more clear-cut. One publisher wrote: “The actual ‘zine has done very well for the month and a half it has been in existence, now averaging about 4,000 hits to the home page a month.” (Farber, 1998, unpaginated) Measuring the hits to a single page, rather than every page on a site, gives a more accurate measure of readership, much closer to the idea of unique readers (although there is no guarantee that somebody who accesses a home page actually proceeds to read any of the contents of the ezine). When the editor of Jackhammer E-zine stated that “Our readership is somewhere around 600 right now,” (Henderson, 1998, unpaginated) I read it to mean that it had 600 unique readers, although she didn’t use that term. Still, most publications count their readership by the number of hits they receive.

Let us assume, for the sake of argument, that the number of hits equals the number of readers, even though we know that this isn’t likely to be the case. By this definition, readership of ezines which contain mostly fiction varies considerably. The publisher of The ShallowEND stated that it received “about 1000 hits per month.” (Matteson, 1998, unpaginated) According to Heather Hoffman, editor, “Since its beginning three years ago, Interbang has doubled in print circulation, and gets about one hundred hits a day [3,000 per month] on the web site.” (1998, unpaginated) The publisher of the now-defunct ezine think said “We finished with a circulation of about 3,000 quarterly in print and about 25,000 hits per month on the Web.” (Sandvig, 1998, unpaginated) John Mahoney, creator of The Log Cabin Chronicles, claimed that “I now get about 100,000 hits a month from all over the world.” (1998, unpaginated) How does this compare to other sites on the Web?
According to an advertisement in the Globe and Mail, the combined readership of CANOE and its recent acquisition i|money is over 1 million unique readers and 30 million page-views a month. (“The ultimate synergy of tools and content,” 2000, B7) This suggests that the range of hits (1,000 to 100,000) for ezines with fiction is, in fact, not that large, relative to what is possible on the Internet. Perhaps more importantly, we see that the fiction ezine which claims the largest number of readers is still pretty small.

In terms of fiction publishing, however, the numbers are quite impressive. The Web unquestionably increased the readership of Shadow Feast Magazine according to its publisher: “It started off with only the work of friends to publish and only friends to read it. Now it has over 100 subscribers and approximately 3000 hits per issue.” (Kirkwood, 1998) (The term subscribers usually refers to people on a publication’s mailing list. Some publications mail plain text versions of the content of each issue to their subscribers who have trouble accessing the Web; others mail notices that the new issue is now available on the Web. Subscriber numbers are a good measure of how many people are interested in a publication.) Another publisher argued that “We get about 150-200 [readers] a month when a new issue goes up. Not bad considering on paper I can only sell about 50 without going to a major magazine distributor to get it out there.” (Kline, 1998, unpaginated) The advantages of the Web over paper as a distribution medium for small press publications (and the work of self-publishing individuals) will develop as an important theme of this chapter. Thus, although the number of hits fiction ezines get may not seem large compared with commercial Web sites, relative to the number of readers they could get in print, ezine publishers feel they are further ahead.
“I intend for it to get *really big*,” one publisher stated, “with a few thousand hits per day.” (Karsmakers, 1998, unpaginated)

Monetary Considerations

Financial considerations are an important aspect of any publication, on the Web no less than in print. Most of the publishers who responded to my survey (41 -- 69.5%) stated that they had no sources of revenue and no plans to get a source of revenue. The other 18 (30.5%) claimed that they either had one or more sources of revenue, or were hoping to have them in the future. Since few of them were actually able to generate revenue at the time of the survey, this overestimates the number who do; it would be fair to say that virtually all of the publications in my survey had no income.

Of the 18 publications which had or hoped to have revenue, almost all (15 -- 83.3%) expected it to come from advertising. The publisher of one, Pif’s Camille Renshaw, claimed that it was “already profitable” owing to its advertising revenues. (1998, unpaginated) Given the relatively low number of readers for most online publications, though, it is hard to see how they would be able to generate enough revenue from advertising to be financially self-sustaining. Only three of the publications (16.7%) expected to make money from subscription sales. An equal number expected to make money from the advertising and subscriptions of a print counterpart to their Web publication; in a similar vein, two (11.1%) were planning on making money from selling print anthologies of the writing which had appeared on the Web. One publisher (5.6%) said he was going to sell t-shirts and other merchandise. Another publisher was hoping for government support: “We have applied for funding from the Catalan and Spanish Ministry of Culture.” (Jill Adams, 1998, unpaginated) Given the newness of the Internet, Adams wasn’t optimistic about getting the funds, however.
The example of the Canadian government’s funding of artwork on the Internet, from which a couple of general principles about government funding can be derived, will be explored in Chapter Four. Finally, four of the publishers (22.2%) said that they were seeking corporate partners; two of them named Amazon.com specifically. Online bookseller Amazon.com has a policy whereby any site which refers customers to it will be given a percentage of whatever sales Amazon.com makes to them. This strikes me as being akin to the symbiotic relationship between the bird and the hippopotamus whose teeth it cleans: a beneficial deal for both sides as long as the hippo doesn’t decide to close its mouth. We need more experience before we can tell if this is a sustainable source of revenue for ezines. (Percentages may add up to more than 100% because respondents could choose more than one answer.) These and other financial issues will be taken up again at greater length in Chapter Three.

Not surprisingly, given the general lack of income of ezines, few can afford to pay their writers. Fully 49 of the 59 publishers (83.1%) did not pay their contributors. With the exception of two that paid a penny a word, each of the 10 (16.9%) ezines which paid contributors had a different rate: from 3 cents a word to $5, $15, $15-$25, $20-$40 or $5 to $50 per story (depending on length and, sometimes, how long the publication intended to archive the story). One publication offered writers 1/4 cent per word or $5, whichever was greater. This is not a lot of money.

The main reason for not paying writers was, of course, that the publications themselves have no revenue. “I don’t have the resources” to pay writers, one publisher, speaking for many, stated. (Vary Stark, 1998, unpaginated) Many of the publishers said that their aid in promoting authors was valuable: “I feel the free publicity I give is worth something to the writers,” was a common claim.
This promotion comes not only in the form of publishing the work itself, but also in linking the writer’s work in the ezine to his or her home page. As we shall see, many writers do value these things. Other publishers pointed out that publication itself was a form of payment since “We offer our megabytes which cost us...” (Bardelli, 1998, unpaginated) By publishing a writer’s work, an ezine saves the writer the cost of producing and maintaining her or his own Web page. Some publishers tried to compensate for their lack of funding by offering other advantages to writers. “In lieu of pay,” the publisher of the bilingual Barcelona Review wrote, “we offer a translation of the writer’s work -- worth quite a bit of money in itself (between 150 and 300 dollars).” (Jill Adams, 1998, unpaginated) Those who avail themselves of this form of payment are getting more value than those who are paid in cash by other ezines.

One publisher, though, was staunchly opposed to the practice of not financially compensating writers. “I consider non-paying publishers,” asserted Ana Maria Gallo, “principally those that have a paying subscriber base, to be reprehensible. The role of the publisher is to finance the project. If they can’t do that, *and* pay the contributors, I don’t feel they should be in the ‘game’.” (1998, unpaginated) Gallo seems to be objecting in particular to publishers who are making money from their venture but not sharing it with their writers. To my knowledge, none of the publishers of the ezines I studied were engaged in this practice. How this lack of revenue affects the decision of writers to publish their work in ezines will be explored later in the chapter.

The “Accidental Publisher”

Few of the editor/publishers of fiction ezines had editing or publishing experience prior to putting out their magazines. Of the 59 publishers who responded to the survey, 18 (30.5%) claimed no previous experience whatsoever.
Of those who had experience, 27 (45.8%) had had some of their own writing published, while 12 (20.3%) had been editors. Thirteen respondents (22.0%) had previously acted as print publishers, all of them for small presses: professional newsletters, print zines or small runs of their own writing. Only one of the ezine publishers (1.7%) claimed to have had experience with a major publisher, as a reader. (Figures might add up to more than 100% because some people had experience in more than one of the categories.) In this way, it would appear that most of the publishers of ezines are amateurs. This impression is reinforced by what I think of as the “accidental publisher” phenomenon.

Common sense would suggest that publishing an electronic magazine is an intentional act; that is, the publisher makes a conscious decision to solicit the writing of others and present it as a literary package. Many ezines are not created by this process, however; they begin as a Web page with another purpose, and slowly evolve into literary ezines. This process can take many forms. Usually, the publisher starts with a home page for his or her own writings, which then grows to encompass the writing of others: “I started it [her ezine] to showcase my own stories. I now include work by others, and the webpage is about 10 times bigger than when I started it.” (Janine Smith, 1998, unpaginated) In one case, the original impetus was to showcase the work of another writer: “The main author (Craig) started writing these very funny stories on one of the iMusic bulletin boards out on the web. I loved them, and when I was ready to do my page I asked him if I could put his stories up. He said yes and since he’s a very prolific writer Story Land was born.
Other people then started writing stories and I was able to get other peoples [sic] works out there” (Sandi, 1998, unpaginated) In another case, the magazine began as a technical exercise: “TW3 began as a vehicle by which to demonstrate my then new company’s abilities in digital publishing. It worked, too, gaining us clients among nonprofits in the humanities and technology research. Over the last couple of years, however, it has grown into an entity in its own right and currently logs + or- 50,000 page views a month.” (Bancroft, 1998, unpaginated) There was also a case where technical considerations spurred a writer to create an ezine: “It started out as a section of my personal homepage to share my writing and some writing by my friends with the rest of the Internet. When I ran out of room in my account, I took the whole writing section and moved it to a free homepage. Setting it up as a zine was accidental. Free homepages require that you not just use them as loading space, so I essentially created an entire separate page for that writing section, and only later realized that it kinda fell under the category of e-zine.” (Darkshine, 1998, unpaginated)

Finally, one of the ezines was created out of the ashes of a failed print project. “I was approached to edit a print ‘zine,” the publisher explained, “but the backing fell through. I had already solicted [sic] some writing and art, so I decided to publish it on my own, on the web (as that required little backing).” (Farber, 1998, unpaginated)

The common thread to the genesis of these and other Web ezines is that the publishers backed into them; their original intention was not to become fiction publishers. The low cost of placing material on the Web (which, although disputed, will become a common theme in this chapter) is one obvious reason: adding a friend’s work to a print publication entails adding more pages, which drives up the cost.
Once an online publication is established, adding pages does not add to the publisher’s cost (unless he or she hits his or her server limit, in which case the publisher will have to pay for more space). Perhaps a less obvious reason is the ease with which digital information can be edited. To change the content of a print zine from all one’s own work to one’s work and that of others is not possible if copies have already been printed; even if caught before the print stage, it requires time and effort to redesign the physical layout of the publication, shoot new negatives for printing (or photocopy more pages), etc. Adding new material to a Web page, by way of contrast, may be a simple matter of uploading it to one’s server and adding a few lines of code to link existing material to it.

The accidental publisher phenomenon helps make sense of something that, at first blush, seemed odd. In the section on methodology, I mentioned that a few publishers were surprised that I considered their activities “zine” publishing. I suggested that the reason was that I considered home pages with writing by more than their creator zines, even though their publishers might not define their activities in that way. The accidental publisher phenomenon suggests an additional reason: even those who publish what are undeniably zines may not take themselves seriously as publishers. “My site isn’t an e-zine,” one publisher insisted, “merely a collection of music, writing, and artwork that people send me.” (Johnson, 1998, unpaginated)

Editing

In traditional publishing, material is usually edited before it is made available to the public. For the most part, this is true of ezines. Of the 59 editors who responded to the survey, 32 (54.2%) claimed that each story they published was edited once. In all but one of the cases, this edit was done by the publisher her or himself (in the other case, it was sometimes done by the editor’s assistant).
This makes sense: because the majority of zines have no revenues and no plans to ever develop any revenues, they cannot afford staffs of editors. Some of the ezines do have enough volunteers, or generate enough revenue, to be able to give stories more than one edit. At thirteen of the ezines (22.0%), each story is given two edits; three ezine editors (5.1%) claim to edit each story three times; and one ezine editor (1.7%) claimed his publication edits four or more times.

It is worth noting that ten of the ezine publishers (16.9%) claim not to edit at all. “We rarely edit at this point...” one publisher stated. “I don’t do this for a living and no longer have time to correct sloppy work. If work is poorly written, we do not accept it.” (Bardelli, 1998, unpaginated) These publishers won’t accept just anything; stories submitted must meet their standards of originality and/or craft. However, they will not work with the writer of a marginal story to make it publishable. This makes submitting to these ezines an all-or-nothing proposition: “I take ‘em like they is, or not at all. At this level, it’s not a matter of changing little things to make a work suitable for publication. Either you’ve got it, or you’re so far off there’s no point.” (Darkshine, 1998, unpaginated)

While most ezine publishers who didn’t edit stories cited practical reasons for not doing so, at least one offered an ideological reason: “The Inditer is not in the business of editing or censoring.” (Loeppky, 1998, unpaginated) Censorship? Editors? That’s not how we usually think of the editorial process, so we might want to ponder this point for a moment. The common belief is that editors help writers improve their texts by pointing out to them where their writing has not satisfactorily achieved what the writer had set out to achieve. To be sure, most writer/editor relationships are based on the idea that the editor’s goal is to help the writer fulfill her or his “vision.”
However, sometimes an editor has her or his own agenda which competes with this need to help the writer create his or her best work. Most often these days, this has financial roots: the editor does not see a market for certain subjects, treatments or writing styles, and gives the writer the choice of conforming to the publisher’s expectations or not being published. If there is a pattern of certain subjects, treatments or writing styles not getting published, some people believe that a form of “commercial censorship” has taken place. Others take the more extreme position that all editing is censorship, since it necessarily interferes with the writer’s freedom of expression. Some of the publisher/editors who did edit stories were also wary of editing the work of others. Richard Karsmakers, editor of Twilight World, stated that he only edited “typographically and gramatically. [sic] I don’t believe in editing someone’s work. I wouldn’t want others to edit my work either.” (Karsmakers, 1998, unpaginated) Since many of the publisher/editors of online magazines were originally writers, or people with no publishing background who backed into their role as publishers, it makes sense that they would not have a traditional approach to editing.

Other Ezine Practices

A couple of other aspects of ezines should be mentioned. All of the ezines that published editorial guidelines had a policy of claiming first publication rights to a story. In the vast majority of cases, these rights reverted to the writer upon publication; in a small number of cases, they reverted to the writer a short period (three to six months) after publication. What does this mean? From the moment a story was accepted to the point at which the rights reverted to the author, the author could not sell it to, or otherwise have it published by, another publication.
This is fairly standard in print publishing, although it has implications for writers which will be explored later in the chapter. As one might expect, most of the publications conducted the majority of their business by email, although a small number insisted upon regular mail submissions. Because there are a large number of word processing programmes, each ezine had to specify the formats of file attachments it was capable of processing. To avoid this problem, some ezines only accepted plain text versions of submissions which had been pasted into the body of an email. However, other ezines would not accept submissions pasted into email, their publishers arguing that important formatting information was lost. Finally, most of the publishers limited the length of submissions they would consider to between 1,000 and 5,000 words. However, a few of them allowed that they were prepared to make exceptions. “Articles and fiction can be up to 3000 words,” the submission guidelines of one publication read, “however, for good content, we will be flexible.” (“Submission Guidelines,” undated, unpaginated) A small number of ezine publishers either made room specifically for novels, or stated on their submission guidelines pages that they would consider serializing longer works.

Individual Writers

Let us move our investigation of fiction publishing on the Web to a consideration of writers who publish their work on their own pages and those who publish in the ezines of others. As we shall see, these writers have many common -- as well as divergent -- concerns.

Who Publishes on the Web?

The image of the typical computer user has long been that of a young man, possibly a computer science student at a university, who is also into fantasy role playing games and science fiction books and movies. This person doesn’t have any professional publishing experience, but has been writing stories for his own pleasure, usually in the fields of fantasy and science fiction, which are his passion. How accurately does this reflect the reality of those who publish their fiction on the World Wide Web? To be sure, there are some people who fit this description. “i’m eighteen, started off writing self indulgent teenage poetry and funny essays to make my teachers laugh. from there it became like an addiction,” one said. (Poulsen, 1998, unpaginated) Another explained: “I don’t have a writing background. If you mean ‘career’ or resume, I don’t have those either. I just write when I think there is something to be said and I may have an original way of saying it.” (Tasane, 1998, unpaginated) Some people who publish their fiction on the Web are computer professionals. “I work in the WWW industry,” wrote one, “and thought it would be nice to have a few of my stories online at web sites I have visited and liked.” (Levens, 1998, unpaginated) Another claimed that “I’ve worked in Web development professionaly [sic] for over 3 years now. I used to host my own online writing workshop (participants [sic] work was protected by password) and became interested in Ezines.” (Atkinson, 1998, unpaginated) Moving away from the stereotype, we find that some of the creators have degrees in English, Creative Writing or a related discipline.
“I began writing stories before I even knew how to write,” one author said.

I made scribbles on the page and drew pictures to go with it. No one could read them, but I knew what they said. I was discouraged from writing -- it, like my commercial art studies, was a “waste of time” according to my mother. I stopped until highschool, began writing short pieces for the school lit mag and then played around with it in college. I found myself

writing fiction during much of my free time, began winning small prizes, and even saw myself ripped off by a semi-famous writing instructor at one point. I started writing alternative fiction at Binghamton, and challenged myself to change my style each semester. I surprised myself as much as my professors and fellow students I think. I’m now close to completing my PhD in Creative Writing, and am starting to win fellowships and awards based on my work. (Shirley, 1998, unpaginated)

Another writer has an “M.A. in English from San Francisco State University.” (Hearne, 1998, unpaginated) A small number of creators are even professors: “I teach classes at the University of Richmond on electronic publishing, and the developing relationships writers have with the Internet...” (Trammell, 1998, unpaginated) The assumption is that because they are young and inexperienced, Web writer/publishers use the medium because they cannot be published in print. Occasional comments such as “I have tried to get a couple of things published, but to no avail” (Barber, 1998, unpaginated) would seem to bear this out. However, writers who had not previously been published in print were in the minority, and the number who claimed to have decided to publish their work on the Web after having been rejected by print publishers was much smaller. In fact, some of the writers who have placed fiction on the Web have extensive writing backgrounds. “I have been writing fiction -- five published novels, four short story collections; one biography (opera), edited one anthology; one volume of plays - since my first book (theology) in 1951,” wrote David Watmough. (1998, unpaginated) Another writer simply forwarded his press release to me:

DANIEL CURZON is one of the principal gay writers to walk the minefields of literary and social criticism to make it easier for those who have followed. His works include the landmark novel Something You Do in the Dark (1971), The World Can Break Your Heart (1984), Superfag (1996) and Not Necessarily Nice: stories (1998) as well as the plays My Unknown Son (Circle Rep Lab, New York, 1987) and 1001 Nights at the House of Pancakes (San Francisco, 1998). He has also written and published non-gay fiction and plays. His plays, both gay and non-gay –

some winning contests -- have been produced in several cities. (1998, unpaginated)

As these quotes suggest, some of the writers on the Web have not only been published in print, but also have professional experience writing for other media. Those who publish on their own Web pages are less likely to have print publishing credits than those who publish in ezines. Thirty-nine per cent of individual Web page creators (43) had not had work published in print, while only 19.5 per cent of ezine contributors (44) had not been previously published; the proportion of previously unpublished writers was thus almost twice as high among individual page creators. In addition, 41 of the writers with individual Web pages (37.6%) identified themselves as amateurs or hobbyists, as opposed to 66 zine contributors (29.1%). This suggests that those who create their own Web pages for their work are closer to the stereotype of the typical computer user (especially young and with little experience) than those who get published in ezines. Other evidence supports this. Some of the contributors to ezines claimed that it was, for them, a logical extension of the work they had been doing in print. “Friends who had Zines started doing them [ezines]. It’s hard to express how completely natural it seemed to everyone in Zines.” (Jeff Weston, 1998, unpaginated) This was not true of the people who put their writing on their individual pages; they were more likely to claim that it was the beginning of their efforts to reach an audience. “I write to express myself,” was a typical comment, “and my website I consider a first step to ‘communication’.” (Sonnenschein, 1998, unpaginated) Reflecting the uncertain status of zines in the print publishing hierarchy, one respondent wrote: “I’m not sure if you can really call the places I’ve been published in print traditional: mostly they’ve been low-circulation zines and other such journals published by private individuals (usually from their garages), i.e.
Lucid Moon in Chicago.” (Switaj, 1998, unpaginated) Another respondent had been a print zine publisher: “I and a friend had a Rocky Horror Picture Show Fanzine called In Your Pants. We put out 4 issues. Subscription was international. It was mostly my friends artwork with a lot of my fiction. Was a nice zine, but was way too costly.” (Wombat, 1998, unpaginated) While it might seem logical that people who had print zine experience would migrate to the Web, this does not, for the most part, seem to have happened. Only 11 of the 336 individual writers claimed to have experience writing for or publishing zines (3.3%). This suggests that, rather than writers bringing zine creation experience to the Web, the Web is recreating the conditions for individual expression originally created in print by the photocopier, and that these conditions are being taken up by a new generation of writer/publishers. Although not necessarily computer professionals, many of the writers in the survey had placed their fiction on digital communications networks other than the Web. “Actually,” one writer explained, “the particular series of stories I’ve been putting on the Web originally started online, on a BBS, before the Net was widely available (1991). My husband pestered me into calling a BBS to make the computer ‘fun and friendly,’ and I found I liked the people. About a year or so later I got the idea of writing some very short stories and posting them anonymously, without warning... [W]hen the Net became more popular (1995) I got pestered to do something there. The stories I’d been putting on the BBS were the obvious choice.
For a while they appeared both there and on the BBS; now they’re only on the Web.” (Youngren, 1998, unpaginated) Other writers who claimed to have begun by publishing on bulletin boards (stand-alone computers which users dial into directly) include Craig Lutke (1998, unpaginated), Jon Lindsay (1998, unpaginated), Kira Fremont (1998, unpaginated) and Rupert Goodwins (1998, unpaginated). Another pre-Web venue for digital fiction was the system of newsgroups on Usenet. “I sent a poem to a rec.art poems newsgroup before the internet was operative on a commercial level,” one writer explained, “and recieved [sic] a response [from an ezine] asking me to submit for their publication. this was about ‘94 or ‘95, I suspect.” (Garni, 1998, unpaginated) Another writer stated: “I had published some of my writing on UseNet years ago... As soon as I saw the Web, I recognized its usefulness as a ‘self-publishing’ venue.” (O’Neal, 1998, unpaginated) Others who had published in newsgroups include John Aviott (1998, unpaginated) and Ace Starry (1998, unpaginated). Some writers first published their electronic works on commercial services, most notably America Online. “I posted some of my short stories to the fiction boards in the Writer’s Club on AOL, so I could get some needed reviews of my work,” a writer stated. “This lead to my writing group which lead” to the creation of an ezine in which his work subsequently appeared. (Schmitz, 1998, unpaginated) Sometimes, publishing a story on AOL led directly to getting published in a Web zine: “The person who puts out the e-zine The Rose & Thorn approached me.
The story that she published, One Last Yearning, caught her eye when it won an online contest on America OnLine in the area called the Amazing Instant Novelist at Keyword: NOVEL And she thought it was right for her magazine -- so I looked over her magazine, liked what I saw and gave her the go-ahead.” (Minton, 1998, unpaginated) The commercial services have a disadvantage, though: only those who pay to be members can access work within them. Even the largest, with 10 or 20 million members, has a fraction of the number of people who are currently on the Internet as a whole (over 150 million, as previously stated). (Cerf, 1999, unpaginated) Thus, one writer began putting stories on a Web page because “i wanted my stories to be available to everyone, not just aol people...” (Ulmen, 1998, unpaginated) Finally, some people published their work digitally outside of the Internet. “I’ve been into electronic publishing since I was about 18, where I published some of my own work and the work of others on floppy disk. The collections I produced were reviewed by magazines and well received in the Public Domain. The WWW/Internet seemed like the next step up...” (Campbell, 1998, unpaginated) The survey did not ask for the nationality of the writers, but there are methods of determining this. One is to look at the email addresses of the correspondents; those outside of the United States frequently end with a two-letter nation designation. Looking for “.ca,” which represents Canada, for example, I found that 12 surveys were sent from this country. In addition, 23 surveys were sent from the United Kingdom, Australia or New Zealand. One was sent from Japan.
This likely underestimates the number of writers working outside the United States: some may subscribe to Internet Service Providers (such as America Online) or free email services (e.g., HotMail) whose email is sent with a generic domain name which does not refer to the sender’s country of origin. There is also evidence within the surveys: some people identify themselves as living outside of the United States. Thus, there were respondents from Denmark (Tindall, 1998, unpaginated; Shafir, 1998, unpaginated), Korea (Potts, 1998, unpaginated; Wallis, 1998, unpaginated), Germany (Kaeser, 1998, unpaginated), Sweden (Nilsson, 1998, unpaginated) and Singapore (Qining, 1998, unpaginated). Note that looking at their email addresses would not necessarily have been sufficient to tell where the writers were located. Rolf Potts’ email, for example, is [[email protected]], which does not end with a national designation. For this reason, again, it is likely that I have underestimated the number of people who write outside the United States. We can say, though, that at least 43 of the 336 individual writers of fiction came from outside the United States, approximately 13 per cent. Around the time the surveys were conducted, June, 1998, the online population totaled 122 million people, approximately 52 million (42.6%) of whom came from outside the United States and Canada. (Everard, 1999, 26) This is a substantial difference. It probably has to do with my limitations as a researcher: since I was looking primarily for English-language literature (because it is the only language in which I feel comfortable communicating), I would not have been able to find writing from other countries which had been published online in languages other than English. Had I been able to include stories written in other languages in my survey, the number of writers from outside the United States would likely have been higher.
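The country-inference heuristic described above -- counting two-letter country-code endings on respondents' email domains -- can be sketched in a few lines. This is only an illustration of the method, not the procedure actually used in the study; the addresses below are invented, and generic domains (.com, .net) fall through as unknown, which is precisely the undercounting problem just noted.

```python
# Sketch of the country-code tally described in the text.
# Sample addresses are invented for illustration only.
from collections import Counter

def country_code(email):
    """Return the trailing two-letter country code of an email address,
    or None for generic top-level domains such as .com or .net."""
    tld = email.rsplit(".", 1)[-1].lower()
    return tld if len(tld) == 2 else None

emails = [
    "writer@example.ca",     # Canada
    "author@example.co.uk",  # United Kingdom
    "poet@example.com",      # generic domain -- country unknown
    "novelist@example.au",   # Australia
]

counts = Counter(c for c in (country_code(e) for e in emails) if c)
print(sorted(counts.items()))  # [('au', 1), ('ca', 1), ('uk', 1)]
```

Note that the third address contributes nothing to the tally, mirroring how writers using America Online or HotMail would be invisible to this method.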
Politics may also play a part in the lack of representative content from countries outside North America. As we shall see in Chapter Four, many countries attempt to control what their citizens can upload to or download from the Internet, making certain kinds of content illegal. Such bans would apply to fiction as well as non-fiction. One of the consequences of such a policy may be that creative voices from those countries are stifled online. In a similar fashion, I did not ask the survey respondents to state their gender, but we can infer that information from their names. It is a simple matter of going through the list of respondents and counting the number of those with masculine and feminine names. Well, perhaps not quite so simple. Many of the names of the respondents could not be counted because they were not gender-specific: this was either because the first names were not included, just the first initial(s) of the author (for example: D. K. Smith); the first names were ambiguous, possibly belonging to either gender (Chris Bernard, for example, which could be either Christopher or Christine); or the identifier was obviously a pseudonym (e.g., Lachesis January). In this last case, I did not assume that the pseudonym referred to a specific gender even when it appeared to (for instance, Lady In Black). Of the 336 responses from individual prose writers, the gender of 57 could not be clearly established. Of the 279 names which could be identified, 174 (62.4%) were masculine while 105 (37.6%) were feminine. According to a 1996 Georgia Tech survey, 31.5% of computer users identified themselves as female. Assuming that the unwillingness of female users to identify themselves as such was constant for both surveys, it would seem that women are a little more likely to post fiction to the Web than men.
The number of women detected by this method, in both absolute and percentage terms, is likely to be lower than the actual number of women publishing their writing on the Net, since women are more likely to use gender-neutral means of identifying themselves. Why? “It’s nice knowing that I am noticed,” one woman remarked about the survey, “without being asked to meet at ‘intimate’ restaurants etc. *L* That happens on occassion [sic].” (Winkler, 1998, unpaginated) Many women find that the potential anonymity of the Internet gives some men license to send them sexually crude or harassing emails. “I also got some cyber fans,” one woman said, “who did stuff like send me their cyber underwear (could it be the stuff I write??) I was also propositioned by cyber fans...” (Shirley, 1998, unpaginated) There is no reason for a man to believe that a woman who writes erotica and posts it to the Net is looking for a sexually aggressive response (just as it is inappropriate to assume that a woman is sexually available if she wears revealing clothing). In any case, it doesn’t seem to matter what the woman writes; anybody who presents as female opens herself up to potential harassment. Another woman stated that a disadvantage of publishing on the Internet was that “now and then I get some idiot asking me [if] I write porn...” (Midnight, 1998, unpaginated) It is likely that some women writers, like women more generally, either do not post their fiction to the Web, or have posted it and subsequently removed it, because of these kinds of unwanted responses from men. There is, unfortunately, no way of knowing how extensive this is given the survey which was conducted. However, it does suggest that women are more highly motivated than men to use a pseudonym or their initials on their Web pages, which, as I suggested, would lead me to underestimate the number of women who are publishing fiction on the Web.
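The name-classification procedure described above can likewise be rendered as a toy sketch. The name lists here are tiny invented stand-ins for the reference judgments a researcher would actually make, and the sample names are drawn from citations in this chapter purely as examples; the real tally (174 masculine, 105 feminine, 57 unknown) relied on case-by-case judgment, not code.

```python
# Toy illustration of the name-based gender tally described in the text.
# MASCULINE/FEMININE are hypothetical stand-in lists, not real data.
MASCULINE = {"david", "daniel", "richard"}
FEMININE = {"lida", "christie", "michaela"}

def classify(name):
    """Return 'masculine', 'feminine', or 'unknown' for a respondent name.
    Initials-only names, ambiguous names, and pseudonyms all fall through
    to 'unknown', mirroring the 57 respondents who could not be counted."""
    first = name.split()[0].rstrip(".").lower()
    if len(first) <= 2:        # initials only, e.g. "D. K. Smith"
        return "unknown"
    if first in MASCULINE:
        return "masculine"
    if first in FEMININE:
        return "feminine"
    return "unknown"           # ambiguous name or obvious pseudonym

names = ["D. K. Smith", "David Watmough", "Lida Broadhurst", "Lachesis January"]
print([classify(n) for n in names])
# ['unknown', 'masculine', 'feminine', 'unknown']
```

The deliberate fall-through to "unknown" is the important design point: as the text argues, refusing to guess at ambiguous names systematically undercounts women, who are more likely to publish under initials or pseudonyms.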
(For further discussion of the relationship between gender and participation in online communication, see Stewart Millar, 1998.) Given all of this, I think it should be clear that, contrary to the stereotype of the computer user, those who publish fiction on the Web are a diverse group.

What Is Published on Individual Pages and in Ezines?

As mentioned in the section on methodology above, I was not, for technical reasons, able to download samples of writing from all of the writers who participated in the survey. Of the 336 writers who responded to the survey, I was able to download at least one writing sample from 219 (65.2%). Where there were novels, I downloaded five chapters, reading at least two. For individual Web pages, wherever possible I downloaded three stories (some writers wrote in more than one genre; reading more than one story gave a better idea of their work, as well as of the work available on the Web as a whole). The stories varied widely in length. There were fragments which were less than a page (for example, Michael T. Gilbert’s “Atmospherics 1: the Club” (undated, unpaginated)). There was one novel which, when opened in my word processor, was over 400 pages long -- so long it had to be downloaded as a .ZIP file (Don Phipps’ Avatar (1991, unpaginated)). Without having done the calculation, my impression is that the average story was between three and six pages long. I was able to place each of the stories in one of six literary genres: fantasy/horror, science fiction, romance, humour, mystery/hard boiled and literary/other. These genres conform to the general sense of them. Science fiction stories involve extrapolations of scientific hardware and principles in futuristic settings. Fantasy and horror stories involve supernatural beings or events. Romance stories feature the development or disintegration of intimate relationships between human beings, the search for intimacy.
The main function of a humorous story is to make the reader laugh. Mystery stories contain a puzzle which needs to be solved, while hard boiled fiction (usually, but not always, focusing on a detective) deals with aggressive characters in gritty urban settings plumbing the depths of human depravity. The literary/other category is, of course, a catch-all for stories which do not comfortably fit into any of the other genres; however, most of the stories in this category can be considered to have the “serious” literary purpose of exploring the human condition in ways which do not conform to any genre expectations. Classifying a given story based on these categories is not always straightforward. One story, for instance, was about a man whose features had been scientifically altered and were slowly coming apart:

Martin 29 Dash CompuG looked into the mirror and saw himself not quite assembled; one of his eyes, the new blue one, was afloat; his nose, glowing a nice cadmium yellow, was on the mark but not yet attached; his mustache was fluttering mockishly in the air just ahead of his upper lip; and his hairpiece, which was usually on time, was only just beginning to settle in under the cap he imagined himself to be wearing. Only his red paper ear and business-like lips were fully in place and standing ready. The tentative aspect of all the rest suggested he was in for another altogether disagreeable day. (Beardsley, 1998a, unpaginated)

Science fiction or humour? Where such conflicts arose, I always tried to place the story in the category which seemed predominant, but such decisions are necessarily arbitrary judgment calls. (My arbitrary judgment in this case was that it was a science fiction story.) How the various stories are distributed by genre is shown in Chart 2.3. While there was a substantial amount of science fiction and fantasy, the largest category, by far, was literary/other, which, at 45.8% of the whole, contained more stories than those two genres combined (35.6%). This goes against the popular perception of the Internet as a source of writing geared primarily towards the interests of “techies,” writing dominated by specific sub-genres of science fiction and fantasy. Somewhat surprisingly, there was little difference between what is published in ezines and on individual Web pages, which suggests that individual writers are just as serious about what they write as those who have to meet the criteria set up by editorial boards.

                      Zines         Individual pages    Total

Fantasy/horror        28 (19.1%)    25 (17.9%)           53 (18.5%)
Science fiction       26 (17.8%)    23 (16.4%)           49 (17.1%)
Romance               11 (7.5%)     14 (10.0%)           25 (8.7%)
Humour                 9 (6.2%)     13 (9.3%)            22 (7.7%)
Mystery/hard boiled    2 (1.4%)      4 (2.9%)             6 (2.1%)
Literary/other        70 (48.0%)    61 (43.6%)          131 (45.8%)

Total stories        146           140                  286
Individual writers   146            73                  219
Novels                 7            26                   33

Chart 2.3: Distribution of Stories in My Sample of the Web, by Genre

The only significant difference between those who published in ezines and those who published on their own Web pages was that the latter were almost four times more likely to publish novels. In this case, I counted novels “in progress,” where two or more chapters had been written and the writer had expressed the intent of continuing until a work of sufficient length to be considered a novel had been completed. I also included in this category novels which had only sample chapters on a Web site and for which interested readers would have to pay (for example, the Avram Cohen mystery Crimes of the City (Rosenberg, 1991, unpaginated)). This difference is easily explained. Like print magazines, ezines primarily contain short stories. As we have seen, most have guidelines specifying a maximum acceptable story length, which falls far short of novel length. Where novels were published in ezines, they were usually serialized (again, like traditional magazines). This isn’t, strictly speaking, necessary since, as we shall see, the cost of publishing more material online is negligible compared to the cost of publishing more material in print; it is partially a legacy of the print model, partially a method for editors (most of whom are unpaid volunteers) to keep the amount of writing they have to work with small enough to fit into their schedules. Individuals with fiction on their Web pages, by way of contrast, are not bound by the expectations of previous models of publishing, and can, therefore, take advantage of the economics of Web publishing to make longer works available. In this regard, their only limitation is how much work they are able to produce. It should be noted that there are advantages to serializing long pieces of writing.
One is that it encourages readers to keep returning to the site in order to read the latest installment of a story they enjoy. Another is that it makes reading the stories easier, since most people are uncomfortable reading large blocks of text off screens. Most of the individuals who posted entire novels, aware of these considerations, divided them into a chapter per Web page, some serializing their stories by posting chapters as they were written. That having been said, what kind of stories are published on the World Wide Web?

It had only been a month since Greywolf had arrived back in Selenor, a Wood Elven village north on the Serpent River in the upper regions of the Sylvan Elf Kingdom. He had journeyed there from his keep far to the south to begin his two months active duty with the militia. He had been a General in the King’s Army but with the death of good King Elemmakil some eighty years ago, he had served the High Council that ruled in the absence of an elected king. He commanded an elite group of battle hardened veterans, some of which had served with him from as far back as the Demon Wars. It was a hand-picked brigade, The Wolf Brigade. And though the kingdom was at peace, and had not kept a standing army in almost thirty years, he and his soldiers did not use their active duty time

for happy reunions; nay, he and his men trained, and they trained hard. So, it had been only natural to send out messenger hawks to reconnoiter the borders. And when reports had returned indicating activity in the land of an old enemy, it had been a foregone conclusion to concentrate one’s efforts on discovering more. And when the reports had ceased altogether, it was time to act. So it made good military sense to send a spy to investigate, so he had sent the best one he knew; a spy who’s eyes he could trust. He had sent himself. (Greywolf the Wanderer, undated, unpaginated)

Most people associate fantasy stories with the legacy of Tolkien’s Lord of the Rings trilogy: races of elves, ogres and other creatures, epic battles of good and evil, magi of various levels of ability with various intentions towards the human race. To be sure, there are many of these to be found on the Web: Mike Adams’ “A Daughter’s Duty” (1998, unpaginated); Brandon Mitchell’s The Dark Saviour (1998, unpaginated); Stuart Whitby’s “A Spell of Rain” (1998b, unpaginated); Mercuti777’s “Aruss Returns” (1998, unpaginated). Occasionally, a writer would try to vary the typical fantasy story by writing from the point of view of an exotic creature. Lida Broadhurst’s “Reunion,” for example, is written from the point of view of a dragon. (1998, unpaginated) Rowan Wolf’s “Boil a Manchild for Odin” is a novella told from the point of view of an ogre. (1999, unpaginated) Both stories delve into the social structures and mores of their protagonists, and use their unique points of view to comment on the social structures and mores of human society. There are also a couple of fantasy stories about warriors returning home after battle which are serious meditations on the horrors of war. “All, of course, want to hear stories, what I’ve seen, where I’ve been,” one reads. “But how can I tell them of the things I’ve seen? What the faces of ten year old girls look like after they have been raped by a dozen men. What it smells like when people are herded like cattle into huts, which are then set alight. What the air over a battlefield tastes like when so many men lie dead or dying in the blood soaked mud.” (Brown, 1997, unpaginated) The proliferation of this type of fantasy fiction can be attributed, at least in part, to the fact that there are ezines (DargonZine, Faerytales, et al.) devoted specifically to it.
It can be attributed further to the fact that, for some of the writers, fantasy fiction was an offshoot of the role playing games in which they had participated. “Wings of Destiny came into being as the character back story for a young avariel elf I had created for an AD&D Forgotten Realms campaign...” one writer explained. “As it turned out, the campaign itself was not very successful, but the back story seemed to stick. I changed it a bit, altering locations and names, but the emotion and spirit of adventure remains constant.” (Farmer, 1998, unpaginated) This is not the only form that fantasy stories can take; there are far more stories with fantastic or supernatural elements which do not feature elves or dragons. In Archie Whitehill’s “The Prodigal Son,” a man who caused his father’s suicide finds the old man’s battered briefcase can bring him whatever he wants. (1998, unpaginated) In “Welcome to Endsville,” written by Jeremiah Pond, a child warns a murderer that he’ll come to a bad end if he stays in the city’s run-down hotel. (1998, unpaginated) These are fantasy stories in the mold of The Twilight Zone, where unexpected endings are to be expected and fate often metes out an ironic justice. Several fantastic fables can be found on the Web. Kevin Paul Smith’s “The Search for Common Sense,” about a man who believes that common sense can be bought, is one example. (undated (b), unpaginated) “The Lone Fur,” written by Matt Ulmen, is a short meditation on the nature of choice. One of the more poignant stories was a fable called “The Monster in the Wood”:

The monster in the secret wood lived in fear of the villagers of course and tried never to be seen, half-believing all the terrible things that were said about it, and thinking it was the only one of its kind. In its heartsickness and loneliness, the creature began to sing, and discovered to its amazement that its voice was very beautiful. “What is that sound?” the villagers asked. “It is the wind,” said one. “It is the voice of God,” said another. “No, it seems to come from the direction of the place where the monster lives,” a brave man ventured. “Don’t be ridiculous!” the village elders snapped. “How could anything beautiful come from such an unspeakable thing as the monster that lives in the secret wood?” No more was said. (Curzon, 1997, unpaginated)

The monster in the story is a metaphor for gay men; the music a metaphor for all of the beautiful artistic work which gay men create. In this context, the hostility of the villagers needs no explanation. Fantasy stories often shade into horror; the line between the two is fuzzy at best. Thus, you can find stories like Christie Gibson’s “Higher Learning,” in which a boy is stolen away by shadows to become one of them. The shadow race is a recent creation of god to make the world more interesting; its purpose is to interfere with the smooth operation of the human world. (2000, unpaginated) In another story, Michaela Croe’s “Tracks,” a child believes that he can hear voices coming from beneath the train tracks behind his house. In fact, there seems to be a whole community down there of people who have been run over by trains... (1996, unpaginated) The popular horror sub-genre of vampire stories is well-represented on the Internet. “The problem with being a vampire--” according to one story, “the real problem, not the glamorized Gothic shit people like Ann Rice [sic] epic about -- isn’t the sun, the loneliness or even other vampires at all -- It’s the moon.” (Bernard, 1997, unpaginated) Another story expands on the idea of the vampire as tragic romantic figure popularized by -- yes -- Anne Rice: “After the first time I saw him twice a week. He always met me in the same place. I would get into the car then we’d drive to a lonely spot where he’d take out a syringe and draw out maybe a quarter cup of my blood. Then he would drink it.” (Baumander, undated, unpaginated) Science fiction writer Norman Spinrad offered an amusing take on the sub-genre in “The Fat Vampire.” Set in Hollywood, where you can never be too rich or too thin, the story is about a man who cannot seem to stop eating, but whose female companions are the ones who gain all the weight.
(undated (a), unpaginated) Satan, the ultimate villain of much horror fiction, appears in two very different guises in two very different stories. In Christian Bertrand’s Dorom, he is the beautiful fallen angel who tempts a couple of innocent angels to question their place in heaven. (1996, unpaginated) In Robert Paxton’s Between Heaven and Hell, he is the horned beast who commands armies of evil demons. (undated, unpaginated) In both stories, his goal is, of course, to overthrow heaven and install himself on the throne of god. Finally, there are several stories which do not fit comfortably into any of the fantasy/horror sub-genres. One such is Jacques Servin’s “I Was Living in a Gay Condo,” which boasts hallucinogenic prose:

Filippo charged ahead with his scimitar, decimating several armies of marauding estate-dwellers then en route to a mirthful afternoon among heathens. Cut: to the mirthful afternoon, never in fact occurring, but nevertheless quite real. The heathens are disporting themselves on the lawn, undoing knots as is their wont at top gallop atop fine creatures not quite equine but savage nevertheless, shouting at the same time stanzas of excellent poets to the assembled tribes of their vanquished, who can only look at the mounted victorious heathens with downturned head (eyes as far up as possible) and murmur little ditties from their own tongues, all about survival. (1997, unpaginated)

Another is David J. Wallis’ “Our Schools Are Burning.” In it, a spirit is charged with ensuring that spirits who have come to Earth are not killed before they have learned what they entered mortal bodies to learn. When a pair of kids start shooting up a high school, he wanders through the carnage intent on ensuring that four souls whose lessons have yet to end remain in living bodies. (1999, unpaginated)

On an unknown, mysterious planet somewhere in the middle of the galaxy stood a priest named Abmar from the Family of Ab. The wind moved his white, wispy hair. He was deep in thought thinking of the Runes of Kale, the voice of his god, Ao. The One Prophecy out of many contained in the Prophecies of Ao consumed his mind. Two great galactic powers lay in peace across from each other, but the one called Ral was ready to strike. All it took was a subtle shift in the balance of power. The millennia of peace were going the way of the approaching dusk and the coming of the two moons. The One Prophecy would start it all. (Shaffer, 1997, unpaginated)

Galactic battles between different branches of human civilization, or between human and alien civilizations, have been a staple of science fiction since the pulp magazines of the 1930s and 1940s. They were given new life by George Lucas’ Star Wars and its sequels. Many of the science fiction stories on the Web fit comfortably into this category. The more serious ones include Roland Mann’s “In the Trenches” (1998a, unpaginated), Sandra Tseng’s The Jandorian Chronicles (1998, unpaginated) and Kaleen Weston’s A Meeting of the Minds (1998, unpaginated). Sometimes this theme was dealt with in a more comic way. Zach Smith’s “It’s All in the Translation,” for instance, deals with the problem of communication between alien races:

‘I hate you, all of your ancestors, and whatever things may excrete from the bellies of your harlots,’ the voice crackled over the headset. ‘I welcome you, in the name of a people of a great and long history of tolerance and charity, with open arms. Over.’ (undated, unpaginated)

Rick Underwood’s “The Honeymoon is Over” (undated, unpaginated) also deals in a light-hearted way with the floundering of relationships between different species due to communication problems. A strand of “dystopian” science fiction, with its visions of a future Earth as an ecologically destroyed shell where survivors battle each other for scarce resources, is also represented on the Web. In Richard Cumyn’s “The Effort,” for instance, the surface of the Earth is uninhabitable, and human beings living underground survive by rationing their resources and cultivating a fungus for food. (1994, unpaginated) Another strand of science fiction, one which developed in the 1980s, was cyberpunk. Stories in this sub-genre usually take place in the near future, extrapolating technological advances in robotics, computer science, nano-technology and other fields. There are examples of this type of story on the Web, as well. In Confessions of a Reluctant Hacker, a psychic detective captures a man responsible for the destruction of a robot manufacturing facility in which 30 people were killed. Despite having hunted him for over a decade, the detective finds himself having to let him go in order to get him to cooperate in an investigation of a hacker threatening to release a virus into the city’s mainframe computer that would wreak havoc with the city’s major functions. (Coleman, 1999/2000, unpaginated) As one might expect, computers play a major role in many of the science fiction stories. In one, a lonely woman meets a man she hopes is her soulmate online; it turns out he’s an alien who only wants her for her brain. Literally.
(Dyderski, 1998, unpaginated) In another story, reminiscent of Harlan Ellison’s classic “I Have No Mouth and I Must Scream” and the more recent film The Matrix, human beings spend their lives in a fantasy world created inside a monolithic computer: “The main hard drive 2 everything iz in what wuz once called Detroit, which iz now called The Brain. & that iz what it iz. The universes largest computer ever, capable ov thought, ideas, emotions, U name it. & it controls pretty much our entire lives. & it doesn’t even no it. But it iz true. 99% ov the human race spends up 2 10 minutes out ov cyberspace in their entire lives.” (Blenman, undated, unpaginated) Finally, there are stories which are difficult to place into sub-categories. For example, “Oops!” is a fable about scientists with god-like powers (and names like Yahweh) who accidentally create a universe in an advanced physics lab. (Jennings, 1997, unpaginated) In a similar vein, Andrea Tavlan’s “Life Goes On” is a comic fable about the way an alien race might interpret human behaviour from their distant vantage point. (1997, unpaginated) On a more serious note, there were stories like “Time Walk,” in which a person with the ability to travel through time moves from one human atrocity to another. (Weindorf, 1996, unpaginated) “Time Walk” is an attempt to comment on the unchanging nature of the periodic horrors the human race inflicts upon itself. There are also stories like Tom Oliver’s “Anarchus:”

Located on the very edge of known space, Del3 was originally a mining world. In the past, before the world’s slow process of terraforming, the planet had been extensively exploited for the heavier elements it contained. Tantalum and Actinium, present in usually high concentration on the surface, brought the furnaces and engines of humans to this world. The elements, vital for manufacture of hull and fuel stabilisation, were soon scraped in vast quantities away from the earth and transported in the huge Stellarships back to the industrial worlds of the Inner Core, where second and third generation stars (and therefore the heavier elements) were distributed far more sparsely. Now, after the only remaining elements lay deep beneath the hard rocks, where neither economics [n]or necessity could reach them. The mines shut down, the domes [were] left to decay, and humanity left. The wrecks of the old, nucleated mining towns formed jagged shapes on the horizon, deserted and left to the winds. (1999, unpaginated)

This story reminded me a little of John Steinbeck’s The Grapes of Wrath in the way it combined physical descriptions of a used-up land, knowledge of the political and economic forces which lead to ruined communities, and compassion for individuals trapped in desperate lives by those forces.

From some darkened deep did the rains pour. Our image of each other was dripping wet. We cried..laughed, and were terrified. I, king of fools..you, queen of gestures. And the tears became suspended in air...in their stillness the light reflected, and we saw. We violated the taboo of generations. The radiant light penetrated the cold images, and they broke like fine crystal glass. We panicked cutting our feet as we ran. Frantic bleeding hands try to put the pieces together. After so long, suddenly, it seems, we do not know each other. Bewildered..manic laugh. All this from a simple gesture of loving. Cry, and die a little. Laugh, and die a little. I want to live! Let our bodies heal. Let the pieces lay. (Sprague, undated, unpaginated)

Because of the general impression that the Internet is dominated by young people, when one thinks of romance fiction on it, one is tempted to think it is made up mostly of stories of first love written by adolescent girls. There are some examples of this on the Web (Grace Chong’s “Love Trouble’s” (undated, unpaginated), for instance). However, a much greater variety of stories in this genre is available. Some of the stories are classic adult romances with elements of fantasy. Rhonda Nolan’s page Rhondavous is dedicated to this type of story, containing many examples of it. (undated, unpaginated) Judy Ossello’s “The 11th Arrondisement” details the budding of a relationship between a pair of strangers in a foreign land. (undated, unpaginated) Mostly, though, I found stories which attempted to deal with romantic relationships in a more “literary” way, eschewing fantasy for a more realistic appraisal of relationships between more fully developed characters. David Watmough’s “The Beautiful Landlord,” for example, tells the story of a man attracted to the man who owns the building in which he lives. The landlord is heterosexual, but there is a certain sexual tension between them. While this is the central thread of the story, Watmough also develops the characters of other tenants in the building, creating a web of relationships beyond the central, romantic one. (1997, unpaginated) In Joseph Flood’s “Eire,” a young man is encouraged by his family to take a trip to their Irish homeland to find himself a mate; when he gets there, however, his relationship with the woman (and the country) is not what he has been led to expect. (1996, unpaginated) Many of the stories were about the sadness and anger of the end of relationships. Some, like Tom Crisp’s “Nor a Lender Be,” dealt with this subject in a humorous way:

"Hello, darlin’!” he said as he opened the door. How convenient it is to have a sweet li’l ole accent to play like background music. He wore a big smile and smallish black briefs. “Come see the puppy!” It certainly should be enough to have to see your ex married to someone you end up liking even more than you like the ex, ensconced in a perfect apartment with a big square terrace partially overlooking Riverside Park and working at your dream job with your former favorite magazine. That should be plenty. Then this. "Puppy? When did you get a puppy?” (1998a, unpaginated)

Other stories attempted to describe the pain of betrayal in a more serious way:

Walking toward me was my ex, John. I stopped made nice and tried to walk away queitly [sic]. On the way up the ramp I had to pass by one of those guys you always see playing guitar in the station. He was singing and stupid me I listened -- “the first time I saw you I could tell by the look in your eyes we’d be together forever. I could tell the first time I saw you how much you would mean to me.” That’s when I lost it. Tears streaming from my face, choking back sobs I hurried into the night air. Everywhere I looked faces stared. I felt lost in a funhouse maze and I couldn’t find home. I just wanted to get home and curl up with my puppy and pretend I never met the Bastard. Pretend I never loved him. Instead I hit obstacles. (Macris, 1998, unpaginated)

One tale, “Say Anything...But Don’t Say Goodbye,” starts off as a seemingly storybook romance and ends as a harrowing story of wife abuse and murder. (Swann, 1999, unpaginated) Some of the stories were sexually explicit:

She arose and stood before him. It was time. She removed her clothes, first revealing her torso as the dress and bra fell to the floor. Her nipples hardened in his gaze. She felt her nakedness exposed in the candle light. For the first time in years, she experienced her body as sensual and voluptuous in the eyes of another. She knew he liked her shape as much as she did. He took her by the hand and led her to the bed. She slid between the sheets which were cool from the forest night air. He undressed in the candlelight. For the first time since taking off her clothes she openly returned his gaze. He climbed into the bed, and they entwined, her breasts rubbing against him. She felt an unexpected level of energy in his body... She wasn’t thinking anymore. She was aware only of his attentive touch. She held his penis and massaged it gently to erection. He kissed her nipples, producing an electrifying tingling sensation like the breeze he was invoking in the forest of her soul. He caressed her body. Gently she opened her legs wider as her abdomen responded to his presence. He kneaded his fingers between her thighs as she moaned softly in response to his touch. (Davis, 1998a, unpaginated)

While this passage may not seem out of place on a pornography site, I actually felt it to be tremendously sweet in the context of the story, which is about a woman in her 50s being awoken to the possibilities of life by a man in his 80s.

Since her divorce she’d had several relationships with younger or middle- aged men but had found that they were either too needy, sex-obsessed, or just not her type. She found herself thinking more and more about the old man. He filled an empty spot -- a longing for some kind of recognition, security, protection, and comfort. Eventually she decided to take him up on his offer. It was more of an intellectual decision than an acknowledgment of a physical need. She could use someone to help relieve the isolation and boredom of her mundane existence... (ibid)

“The Old Man” is really a story about the human need for emotional connection. Rereading the first passage with this knowledge, you will likely find that it has a completely different meaning from the one you imagined when you first read it. Some will argue that sexually explicit material cannot, by definition, have literary merit; however, most of us recognize that the two are not mutually exclusive. By denying the literary qualities of such stories, their critics infantilize literature, making one of the most basic of human experiences off limits for serious writers. Context is an important element in determining the literary merit of sexually explicit material. In Jan Setzer’s “When the Wisteria Bloom,” for instance, a romanticized sexual encounter between two people we are led to believe are strangers turns out, in the end, to be a way of rekindling the spark in a 10-year marriage that had been flagging. (1996, unpaginated) In this instance, the sex scene has psychological import for the characters. Noah Masterson, by way of contrast, uses a scene of sex in “Stella: A Fictional Haircut Story” to comment on the act itself: “Plus, women will almost always suck your finger if you put it anywhere near their mouth while you are eating them out, and that’s pretty funny if you ask me. Sex is meant to be funny.” (1998b, unpaginated) This appreciation of context is perhaps most important to the writers of sexually explicit material whose work appeared on the Blithe House Quarterly site. Since gay sexuality is still largely a taboo in North American society, being able to express it can be both a form of personal empowerment and a political statement. This issue will become important in Chapter Four’s look at the American Communications Decency Act.

At the rededication ceremony one bright spring morning, the Service Area Director stood on a picnic table, one hand in his pocket, and said, “Her love for trees is the quintessential spirit of our New Jersey Turnpike." "Joyce was a man,” his assistant whispered. "His,” the Director said. It was official. (Brooks, 1995, unpaginated)

Many of the stories in the other categories contain elements of humour. However, there are stories on the Web whose primary function is humorous. Some, like Brooks’ “The Joyce Kilmer Service Area,” quoted above, contain political or social satire. Doug Powers, to use another example, has a regular column of satire on the Inditer site. (undated, unpaginated) Most of the stories were humorous “slices of life.” Shan Anwar’s “Water Buffaloes,” for example, is about a man who thinks he is a ladies’ man until he tries to seduce a woman who won’t play his game. (1998b, unpaginated) Steven J. Frank’s “The Gelato Affair” is about an American entrepreneur suffering through a trip to Italy in the hope of discovering the secret to making that perfect Italian dessert. (1997, unpaginated) A couple of the stories had literary antecedents. W. S. Mendler’s The Screwdisk E-Mail was patterned after C. S. Lewis’ The Screwtape Letters. In the latter, a demon writes to his cousin on Earth advising him how to corrupt mortals; the former updates this concept, employing email instead of print mail, and suggesting that new technologies give demons new methods of corruption. (1996, unpaginated) The other was William F. Orr’s Any Other Season. (1997, unpaginated) This novel unfolds as a series of reviews of New York stage events written over the course of a season by a cantankerous journalist. As we get deeper into it, we find that we learn more about the writer and his relationships to the people in the New York theatre scene than we do about the plays he is ostensibly writing about. I found this conceptually similar to Vladimir Nabokov’s Pale Fire in the way it used an unlikely literary form to create a narrative (although I’m sure Orr will not feel slighted when I suggest that in execution the story lacks Nabokov’s sly subtlety). There was also a pair of stories which could be considered absurdist.
Michael Mirolla’s “Pulling One’s Leg” is about a stage play about an execution in which none of the actors seems entirely clear how to properly inhabit their role. (1998, unpaginated) Kevin Paul Smith’s “The Man Who Licked the Pope” is probably self-explanatory: “I am the man who licked the pope. My name, Eric, will go down in history. Ever since that day, I have been called Eric the pope licker.” (undated (a), unpaginated) Finally, as one might expect, some of the stories were just silly, bordering on juvenile. “One fretful and fateful day, it just so happened to be Max’s twenty-second birthday,” one read. “If this fretful and fateful day had happened a couple hundred years earlier, Max would probably have his own family and farm by now, but this fretful and fateful day had not happened a couple hundred years earlier, and that means Max doesn’t have his own family and his own farm, just a messy dorm room and a mold-covered piece of bread he calls Fred.” (Henning, 1998, unpaginated) Dangerously Psycho’s series of stories about The Strange Society (née the Strange Table), a group of high school friends who wreak havoc on various British institutions, is another example of this kind of humour. (2000, unpaginated)

"I’ll cap your fuckin dogs, you don’t call em off.” She started to drop one arm to her side, but the dogs inched closer. "Don’t move. And don’t even touch your fuckin piece." "This is uncool, T. I’ll nine em, I swear.” (Shirley, 1997, unpaginated)

I only found a small number of stories which could be considered classic mysteries. Rosenberg’s The Avram Cohen Mysteries is a series of novels about a Jerusalem detective. (1991, unpaginated) B. E. Fraser’s “Madeline Deerstalker” is a classic locked room story with a twist: the murder victim locked a baby in a room before she was killed and thrown off Niagara Falls. (undated, unpaginated) Hardboiled fiction was slightly better represented. Shirley’s “Men at Work” (quoted above) is about a tough female mob enforcer who has to deal with a shady character while keeping her lesbian lover unaware of what she does for a living. Katie McCarty’s “Wish Upon a Star...” is about a man who is convinced that a woman who betrayed him isn’t dead, and the vengeance he hopes to wreak on her. (1996b, unpaginated)

Sometimes the people who live on the coastline lose their fight against melancholy, like the captain’s widow who sat down on an ice-floe one morning and let herself drift into the chilly North Sea. Old sailors sit at the harbour all day and wistfully look out at the river Weser; they noticed her but none of them took any action. The woman sat on her ice-floe, quiet and content, as if she fulfilled an old wish. Like a flame, her large red woollen scarf flew around her neck. Near Blexen, at the the [sic] mouth of the Weser, the second officer of an upstream riding freighter from Singapore saw her once more, but she seemed so determined that he didn’t dare to give the alarm. Moreover he thought: “Different countries, different customs -- maybe that’s a special German kind of sport -- ice-floe-sailing.” They never found the captain’s widow, but that doesn’t matter anyway. (Sellier, 1998, unpaginated)

Taking the cap off of a pen is almost precisely the same as taking the cap off of a syringe. (Garni, 1997, unpaginated)

A man wonders what a war photographer’s last hour of life must have been like after reading an article about her death. (Sanders, 1998, unpaginated) An elderly woman contemplates the remainder of her life after the death of her husband of 47 years. (Mandel, undated, unpaginated) An artist is only able to sculpt perfect fragments of human bodies, not wholes. (Church, 1998, unpaginated) After 40 years, an elevator operator has to adjust to retirement. (Russell, 1997, unpaginated) A man who judges the people around him in a supermarket changes his opinions when he takes a closer, second look at them. (Fritz, 1997, unpaginated) Various people exchange their views on urban decay while waiting for their stalled subway car to start up again. (Annechino, undated, unpaginated) In the Occupied Territories, an Israeli soldier becomes suspicious about what may happen to him because he is wearing a dead man’s boots. (Shafir, undated, unpaginated) As you might expect, the subject matter of the stories in the “Literary/Other” category is quite broad. There were a lot of stories about young people. “American children,” one unromantically describes adolescence, “lithe and feral, completely unforgiving, smell intelligence and hunt it down, persecute it, try to purge it like an impurity with continual rituals of social darwinism.” (De Lancey, 1997, unpaginated) James Muri’s The Plains Diaries appears, at first blush, to be a sentimental coming of age story set in the 1930s. However, many sophisticated events are strained through the point of view of the somewhat naive adolescent narrator. (undated, unpaginated) Other coming of age stories included Louis Greenstein’s novel Mister Boardwalk -- about a kid who summers with his parents at Coney Island (1997-1998, unpaginated) -- and Sally Poulsen’s “1956,” about a girl who goes to San Francisco in search of her love, Neal Cassady.
(undated, unpaginated) There was also a gay coming out story, Dean Kiley’s experimental “Eight Answers, Four Replies, A Peepshow and an Epilogue.” (1998a, unpaginated) The conflict between youths and their parents appeared in many stories. In one, a troublemaking boy is punished for a transgression by not being allowed to go fishing with his family; his parents are unaware that he has more fun at home. (England, undated, unpaginated) In Breves Itineres, Tucker McKinney baldly wrote: “The whole point of this ordeal is that I cannot explain my personal life to my parents at all.” (1999, unpaginated) Work was the subject of some of the stories. Its portrait was generally not flattering. “Margot and I were working overtime, stuffing envelopes at five o’clock in a tiny basement in the bowels of a law firm,” wrote one author. “Water spots made rusty circles on the ceiling tiles above our heads, and every few minutes the walls shook from the vibrations of people walking up the stairs to leave. I had worked there for about six months and I hated it, hated everyone, hated myself, for being so average.” (Heidi Moore, undated, unpaginated) In these stories, work is spiritually deadening, and workers have no respect for their jobs, or their fellow workers, as Michael James Erdedy made clear in his story, “Daily:” “Greg has a cubicle in the bowels of an accounting department. He spends a lot of time coming up with nicknames for his coworkers, most revolving around how anal and stiff they are. Banal Bob is his favorite creation, though he doubts that the roots are the same. None of the nicknames have caught on yet. A few of his coworkers barely nod or imperceptibly shrug when Greg walks by. These are his friends.” (undated, unpaginated) Some of the stories dealt with the urban poor, what one author called “the invisible city.” (Stokes, 1999, unpaginated) “The steed-cat’s name was Hasbeen...he has been everywhere with me,” went one story.
“He gallops aside me like a wounded horse chasing a rabid sugarcube. We were on our typical, daily search of Utter Rapture; and some damn fine grub too. But, we inevitably settle for damn lousy grub. We live in the cesspool of culinary cutthroats. Everywhere we eat, we demand stale marshmallows.” (Recker, 1996, unpaginated) In another, a woman describes the deterioration of public housing and the effects this has on the souls of people who live there. (Morrigan, 1998a, unpaginated) Other stories seriously probed issues of life and death. One is about the emotions of the friends and family of a man who is in a vegetative state, being kept going by life support machines. (Friesen, undated, unpaginated) Another contains discussion of when euthanasia is appropriate, although, oddly, that may turn out to be part of a teenage suicide pact. (Switaj, 1997-2000, unpaginated) A third story sympathetically details the reactions of friends and family to a boy who slowly dies from AIDS after contracting it from an operation. (McCarty, 1996a, unpaginated) A couple of the writers set their fiction in the past. Afshin Rattansi’s “Caprice,” for example, dealt with the human wreckage of political suppression in Spain. (1997, unpaginated) And in Sylvia Petter’s “Viennese Blood,” a Jewish tourist in Germany cannot believe the warnings of his friends that, after the election of Hitler’s National Socialist Party, he may no longer be welcome. (1998, unpaginated) Sometimes, the researcher, having too little time to process too much information, creates dubious categories. Cruelty to animals, for example. “My visit with Laura was already going badly when I killed her dog,” Marcy Dermansky opened her story “Drop It.” (1998, unpaginated) In Alison Gaylin’s “Getting Rid of January,” the cat isn’t killed, but disappears under suspicious circumstances. (1998, unpaginated) Or plane crashes.
David Ellis Dickerson’s “Crash” describes in detail the deaths of several passengers whose airplane falls to the ground. (undated, unpaginated) The story is not meant solely to shock, however: Dickerson reveals how many of the characters were connected to each other in life, although most of them didn’t know it, inviting the reader to ponder the web of relationships in which she or he is enmeshed. Cary Tennis’ “The Journalist Responds Incorrectly to an Airline Crash,” as the name implies, is about the reaction to a crash by people on the ground. (1998, unpaginated) Undoubtedly coincidences, not trends. A couple of the authors experimented with content in ways similar to well known writers. Ralph Robert Moore’s “Big Inches” was a Kafkaesque nightmare about a man stopped at an unnamed border being searched by a relentless guard who is convinced he is hiding something: “‘And we can examine all of you, and re-examine all of you, and then re-examine what there is left to re-examine after we re-examine you, until there is none of you left. All of you that we’ve looked over already: if you weren’t tied to the floor, and could examine this room, or examine us, do you think you could find any of it now? Your clothes are no longer in this room, Pottah, not because we brought them out of the room, but because we examined them so well they no longer exist.’” (1997, unpaginated) In Nigel Tasane’s “Schrodinger’s Nobody,” a man becomes increasingly obsessed with a mathematically precise description of the whirring of a fan; in terms of style and content, the story reminded me of the prose of Samuel Beckett. (undated, unpaginated) A couple of the other stories experimented with form as well as content. For example, Griffin Rand’l’s “Timothy Jordan’s Conscience Springs a Pop Quiz” unfolds as a series of questions:

Define libido. Compare and contrast eroticism and perversion. List three crimes against nature. (BONUS: Two extra points for each additional response.) Why would a mother, under any circumstance, allow her son to go to school wearing a dress?

When did you first realize you were different? To what degree do you now accept the difference? (1999, unpaginated)

As the questions become increasingly specific and personal, the primary conflict (between the main character’s sense of himself as a homosexual man and society’s messages that homosexuality is immoral) is revealed.

Finally, there were a couple of stories for children. Jeff Meyer’s “Gilbert Henry Tries Again” is about a boy who has to learn how to slow down and not do everything too fast. (1997, unpaginated) Steve Karr’s “Levitation (or How to Float)” is a sweet, illustrated story about, well, how to float. (undated, unpaginated)

* * *

This has been a brief overview of some of the fiction available on the World Wide Web. I have not attempted to quantify the amount of fiction available in each of the sub-genres or to describe how frequent various types of story are; rather, I have tried to give the reader a sense of their variety. Contrary to what one might expect, stories on the Web are highly diverse, with something for virtually every taste.

Why Publish on the Web?

1) Cost

The most frequent reason individual writers cited for putting fiction on the World Wide Web was that it was not as expensive as publishing in print. “It doesn’t COST anything to post your stories on the web!” one writer explained. “I already had the web space, and I just decided to take advantage of it to post my stories.” (Darnell, 1998, unpaginated) “It’s as free as one can get and reach such a vast amount of people,” wrote another. (Shadow NightWolf, 1998, unpaginated)

However, the situation is not quite so simple. “There are other resources involved in online publishing,” one writer pointed out, “such as the internet access and the phone line.” (Doucette, 1998, unpaginated) These costs are borne by the writers. If they have their own page, writers have to pay for space on a server to house it (although, as Darnell pointed out above, some people get free server space when they sign up for their account, so they may not have this expense).
Finally, there is the cost of the computer itself: “It can be cheaper to publish on the web -- after you discount the few thousand dollars worth of computer equipment you need to do it.” (Platt, 1998, unpaginated)

How can we account for this apparent contradiction? Of the 336 individual writers, 10 accessed the Web from school, 23 from a combination of school and home and 17 from school, home and work. That means that 50 respondents (14.8%) accessed the Internet from school at least part of the time. This is important because students and teachers do not have to pay directly for either the hardware or the connectivity time needed to put work on the Web (some schools also offer server space). Students, the majority of people in this category, do have to pay for these things in the form of higher fees, but this cost is hidden, and many do not realize it exists. Professors, on the other hand, are supplied with equipment and Internet connections as a condition of their employment, so the cost of their access to the Internet is taken up by student fees and/or government levies.

A similar argument can be made for those who access the Internet from their place of employment. Thirteen people said they used their work computers to get on the Net; 93 said they used a combination of work and home computers; and, as we have already seen, 17 people accessed the Internet from home, school and work. Thus, 123 (36.6%) of the survey respondents accessed the Internet some or all of the time from work. Workers do not have to pay for the equipment, which is supplied by the company they work for (although many have to pay for their own server space); as another cost of doing business, this is passed on to consumers in the price of the product.

Still, the majority of survey respondents use their home computers to get on the Web: 173 (51.4%) exclusively, 213 (63.4%) in some combination. Since they directly bear the costs of computers and connectivity, they must be aware of them.
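The arithmetic behind these figures can be checked directly. A minimal sketch, using only the counts quoted above (note that the text appears to truncate rather than round some percentages, so 14.88% is reported as 14.8%):

```python
# Checking the survey arithmetic quoted above; every count is taken
# directly from the text, nothing here is new data.
respondents = 336

school_any = 10 + 23 + 17   # school only, school+home, school+home+work
work_any = 13 + 93 + 17     # work only, work+home, school+home+work
home_only = 173
home_any = 213

for label, n in [("school (at least partly)", school_any),
                 ("work (at least partly)", work_any),
                 ("home exclusively", home_only),
                 ("home in some combination", home_any)]:
    print(f"{label}: {n} of {respondents} = {100 * n / respondents:.2f}%")
```

Run as written, this reproduces 50 respondents (14.88%) for school, 123 (36.61%) for work, and 173 (51.49%) and 213 (63.39%) for home.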
Yet many of these people would also claim that publishing on the Internet is less expensive than self-publishing in print. How to account for this?

As it happens, most people who publish fiction on the Web use their computers for a variety of tasks (see Chart 2.4). As you would expect, the people who accessed the Internet from computers at work used them for work. Game playing was very popular among others. Other uses of the computer included digital art/graphic design/image processing, data processing, computer programming and desktop publishing.

To properly determine how much publishing one’s fiction on the Web costs, we would have to determine how much time a person spent using her or his computer and how much of that time was spent on publishing. The resulting percentage could then be applied to the cost of the equipment to determine how much money was spent on Web publishing specifically. In a similar vein, not all connect time is used to upload pages to the Web or surf for writers’ resources; some may be used to download email not related to the writer’s Web page, for example. Here, again, it would be necessary to determine how much of the connect time was devoted to Web publishing to determine its true cost.

Writers do not make these calculations, of course. Many of them apply the cost of their equipment to the use for which it was primarily purchased. Thus, a student might buy a computer and Internet connection to further his or her schoolwork; the possibility of publishing fiction on the Web is a “bonus” (much like the server space Darnell found he had bought with his Internet account) that hadn’t entered into the decision to buy the equipment in the first place. If no additional expenses are incurred, it is not unreasonable for these writers to argue that publishing cost them nothing, since it did not, in fact, cost them any more money than they would have spent anyway for other reasons.
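The time-share apportionment described above can be sketched as a simple calculation. All of the dollar and hour figures below are hypothetical, invented purely for illustration; the survey did not collect them:

```python
def web_publishing_cost(equipment_cost, hours_total, hours_publishing,
                        access_cost, connect_total, connect_publishing):
    """Apportion shared equipment and connectivity costs to Web publishing
    by time share, as the text proposes. All inputs are illustrative."""
    equipment_share = equipment_cost * hours_publishing / hours_total
    access_share = access_cost * connect_publishing / connect_total
    return equipment_share + access_share

# A hypothetical writer: a $2,000 computer used 20 hours a week, 2 of them
# on the fiction page; $240 a year of access, a tenth of it spent on
# uploads and writers' resources.
print(web_publishing_cost(2000, 20, 2, 240, 10, 1))  # 200 + 24 = 224.0
```

On these invented figures, the "true" cost of Web publishing is a small fraction of the sticker price of the equipment, which is precisely why writers who bought the machine for other reasons experience publishing as free.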
This seems to bring us back to the initial statement that publishing on the Web is less expensive than publishing in print. Why go through this exploration rather than simply letting that statement stand? I can see two reasons. The first is that the analysis of costs and benefits which a writer applies in order to decide whether to publish online is not as simple as a statement such as “It’s cheaper” would lead one to believe. For me, exploring the complexity of this decision is inherently important.

A second, even more vital reason has to do with the limits of my methodology. Since my survey was of writers online, it was impossible for me to learn anything directly about writers who do not publish online. However, we know that over half of North Americans do not have Internet access, and we can reasonably assume that a fraction of them are writers. If we accept the logic that it costs next to nothing to publish writing on the Web, we have to wonder why everybody isn’t doing it. On the other hand, if we acknowledge that there are costs to publishing online, then we can infer that at least some of the writers who do not publish on the Web refrain because their analysis of the costs and benefits leads them to a different conclusion: they cannot afford to.

computer uses          zine writers    individuals    totals
games                  78 (34%)        50 (45%)       128 (38.0%)
spreadsheets           18 (7.9%)       8 (7.3%)       26 (7.7%)
work/research          61 (26.9%)      25 (22.9%)     86 (25.6%)
digital art            30 (13.2%)      23 (21.1%)     53 (15.8%)
desktop publishing     7 (3.1%)        5 (4.6%)       12 (3.6%)
data processing        11 (4.8%)       6 (5.5%)       17 (5.1%)
programming            10 (4.4%)       12 (11%)       22 (6.5%)
financial/shopping     2 (0.9%)        5 (4.6%)       7 (2.1%)
music composition      3 (1.3%)        2 (1.8%)       5 (1.5%)
IRC                    0               5 (4.6%)       5 (1.5%)

Chart 2.4: How Survey Respondents Use Their Computers

Having thus discounted the cost of equipment and connectivity, many writers made a direct comparison between the cost of publishing on the Web and that of publishing in print. “[T]he readership numbers can far exceed what paper would cost to reach the same audience,” one writer stated. (Merz, 1998, unpaginated) Traditional publishing relies on material processes (making paper out of trees, developing inks, bringing them together in printing) which have fixed costs; if you add a thousand copies to a print run of a magazine, for instance, your per issue cost will likely go down slightly because of economies of scale, but your overall printing bill will increase. In electronic publishing, by contrast, the main costs are incurred in producing a work; copying is a trivial expense. In fact, as you make more copies, distributing the cost of creating the original among them, the per copy cost of an electronic work quickly approaches zero.

The savings in producing works can be increased substantially by the relative ease of distributing fiction through electronic networks. Most traditional self-published work is distributed by hand, limiting the possible audience to the author’s immediate circle. In some cases, distribution at local bookstores occurs and, rarely, a local distributor can be found (to whom the writer/publisher can expect to pay between 40 and 60 per cent of the cover or asking price of the work). In either case, the potential readership is limited to the number of people in the author’s city. The electronic network, by comparison, is worldwide: “More people read it! Seriously. And all kinds of different people -- anyone in the world can bump into your story, which never happens in a traditional journal.” (Shinn, 1998, unpaginated) Comparable distribution for a print work would be prohibitively expensive for small publishers.
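The contrast in cost structure can be put in simple per-copy terms. A minimal sketch, with hypothetical dollar figures chosen only for illustration:

```python
def per_copy_cost(fixed, marginal, copies):
    """Average cost per copy: the fixed cost of creating the original,
    spread over the run, plus the marginal cost of each copy."""
    return fixed / copies + marginal

# Hypothetical figures: $500 to produce the original in either medium;
# $2.00 per printed copy versus an effectively zero cost per download.
for copies in (100, 1000, 10000):
    print(copies,
          round(per_copy_cost(500, 2.00, copies), 2),   # print
          round(per_copy_cost(500, 0.00, copies), 2))   # electronic
```

As the run grows, the electronic per-copy cost falls toward zero, while the print per-copy cost can never fall below its marginal cost of paper, ink and presswork.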
As we have seen, some writers claimed that their work on the Web was read by more people than a comparable work would be in print. “I get more readers than my brother, who has a book in hard copy through a writers’ collective (Gecko Press) and who works his arse (ass?) off as a performance poet...” one writer stated. (Tasane, 1998, unpaginated) Another cited the problem of local distribution: “With the net you can reach people everywhere, while literary magazines (at least in Italy) have a little number of readers.” (Bianchi, 1998, unpaginated) Some writers offered numbers to support their belief in the wider readership of the Web: “About a 1,000 people a month hit our site. Most of the stories get less than 100 hits.” (Schmitz, 1998, unpaginated) While this may not seem like much, many self-published works have a print run in the hundreds, which means that such stories can get more readers on the Web in a matter of months. Moreover, with few exceptions, most literary journals have a print run of 5,000 copies or less; at this rate, a story would only have to be on the Web for roughly four years to get more readers than its print counterpart.

An important reason for believing more people will read something online relates to cost: “[N]o one has to pay, and thus [the] audience is bigger, exposure is bigger.” (Brundage, 1998, unpaginated) Because readers have a limited amount of money to spend on books and magazines, they have to choose only those which they most want to read. Since online fiction is, in most cases at present, not charged for, this limitation on what readers can access does not apply. “[J]ust about anyone can read just about anything, without having to decide if it’s good enough to be worth what they’re charging in bookstores these days.” (Johanneson, 1998, unpaginated) In theory, this could encourage people to experiment with reading work they might not ordinarily be attracted to.
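Schmitz’s figures can be run out directly. A back-of-envelope sketch using only the numbers quoted above (the 300-copy small press run is an invented example of “a print run in the hundreds”):

```python
# Back-of-envelope readership arithmetic from the figures quoted above.
hits_per_story_per_month = 100   # upper bound reported per story
small_press_run = 300            # an example of "a print run in the hundreds"
journal_run = 5000               # typical upper bound for a literary journal

months_to_match_small_press = small_press_run / hits_per_story_per_month
months_to_match_journal = journal_run / hits_per_story_per_month
print(months_to_match_small_press)    # 3.0 months
print(months_to_match_journal / 12)   # just over four years
```

The calculation makes the asymmetry plain: a Web story overtakes a typical self-published print run within months, but matching a well-circulated literary journal takes years at the same rate.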
Moreover, the granularity of writing online is different from that of print. “Granularity is a concept involving the size of the pieces of information. Think of granularity in this context as the degree to which information can be broken up and still be worth more to the consumer than the price at which it can be profitably delivered.” (Whittle, 1997, 311/312) A magazine is a package of different works; when deciding whether or not to buy it, the reader has to decide whether the work she or he wants is worth the price of the entire package, including the work she or he does not want. This is not the case on the Web, where “People who otherwise wouldn’t spend money on a publication with your work in it, might take the time to read it on a free website.” (Michael T. Gilbert, 1998, unpaginated) On the other hand, the lack of a mechanism by which writers can be paid is a serious problem, as discussed below.

Not every writer agreed with this assessment. “After mentioning the large readership of internet publications, I should say that while the potential is larger, actual readership is still smaller than that of traditional publications. The problem, as I have met it, is that there are so many people, professional and ameteur [sic], publishing on the internet that the competition is stifling. On the net, you become one page out of, who knows, maybe millions. Publishing is simplistic. Getting people to come and read it, that’s hard. Really hard.” (Swan, 1998, unpaginated) As another writer, asked who he thought read his page, plaintively responded, “I don’t think anyone does.” (Keller, 1998, unpaginated) It is some writers’ experience, therefore, that the perceived lower cost of publishing on the Web is actually a disadvantage, since it allows larger numbers of people to publish, making it more difficult for writer/publishers to find readers.
In any case, the cost of publishing in print is clear and immediate, while the cost of publishing online is anything but. The complexities of this type of analysis, in which the costs associated with using a specific technology are weighed against its benefits, are an important part of the decision a writer makes about whether or not to publish online; in particular, the perception that publishing on the Web is cheaper than publishing in print is likely an important factor in assessing its benefits.

Some financial considerations are not obvious. One writer claimed, for instance, that publishing on the Web “is environmentally sound as you can publish as much as you like without using a single piece [of] paper.” (Wardrip, 1998, unpaginated) Since paper is an important cost of print publishing, eliminating it would make online publishing relatively less expensive. It is true that the Web works differently from print in this regard. If you add more stories to a magazine, you have to add paper pages, which increases your cost. If you add more stories to a Web page, as long as you have not exceeded the server space you pay for, you incur no additional cost.

Is paper really eliminated from the process, though? As many commentators point out, the promises of a “paperless office” made by computer enthusiasts as early as the 1970s have failed to materialize (see, for example, Landauer, 1997 or Fuller, 1998). As we saw in chapter one, readers most often print out material they have downloaded from the Web because it is easier to read off paper than a screen. Thus, the cost, monetary and environmental, of paper has not been eliminated so much as shifted to the reader.1 And if you add more material, it costs the reader more to print it out (assuming the reader wants to read the additional writing, of course).

Another stated advantage of electronic publishing is that higher production values do not necessarily cost more.
To add pictures to a print publication, one has to pay for the mechanical process which prepares them for print. To add a picture to a Web page, one simply has to write an additional line of HTML code. (Assuming one has previously scanned the picture, turning an analogue image into digital code, of course. The availability of a scanner has the same cost considerations as other digital equipment, as outlined above.) “I have found that the Web afforded me the opportunity to illustrate my novel with over one hundred original color pictures,” one writer wrote. “This would have been prohibitively expensive in a printed version, especially for a first novel.” (Orr, 1998, unpaginated) Printing colour photographs is much more expensive than printing black and white photographs: four negatives have to be made and the paper must be run through the press four times (once for each colour of ink: cyan, magenta, yellow and black). On the Web, however, “Use of color is a[s] cheap as black and white.” (Sprague, 1998, unpaginated)

As we have seen, the writers publishing their fiction on the Internet come from a variety of countries. Some mentioned that submitting stories to online publications was easier than submitting to their print analogues: “I live overseas (in Korea) and the response time and mailing hassles made traditional submissions difficult.” (Potts, 1998, unpaginated) Owing to the small number of English speakers in the countries in which they lived, this group of writers was forced to find publications in other countries, which significantly raised their postage costs.

Finally, there are some advantages, financial and otherwise, to publishing in an electronic magazine rather than on one’s own page. Since the story will reside on the ezine publisher’s server, for example, the writer does not have to pay for server space of her or his own.
In addition, publication in an ezine has the potential to increase an individual writer’s readership: readers who come seeking the work of other writers may be encouraged to browse and find his or her story, something which could not happen if the writer had published only on a personal page.

Why Not Publish on the Web?

1) It Doesn’t Pay

The most commonly cited disadvantage to publishing fiction on the Web was that there didn’t appear to be any money to be made from it. A typical comment was that it was “Hard to figure out how to make money via the web.” (Mann, 1998b, unpaginated) As it happens, most print literary journals have such small circulations that most of the income they make must go into their production costs. As a result, they pay little, if anything, to their contributors; economically, therefore, there would seem to be little to lose by publishing online rather than in print. “I wasn’t paid for it,” Andrew Burke pointed out about his Web page, “but poetry pays badly at the best of times.” (1996, unpaginated) “Traiditional [sic] sci-fi magazines aren’t well read -- and they don’t pay a lot for the stories,” Steven Schiff added. (1996, unpaginated)

The economics of publishing on the World Wide Web may, in fact, make publishing in an electronic magazine more lucrative for writers than publishing in a print journal: “Less expensive for publishers than print -- much less. As a result, many online magazines are paying more than print pubs.” (Atkinson, 1998, unpaginated) Of course, an online publication which doesn’t have any revenue cannot take advantage of this, which describes the majority of the ones I read. In any case, this is a relative advantage: since most literary magazines in print cannot afford to pay their contributors, an online publication need only pay a token amount to pay more than its print cousin and, as we have seen, most do.
Still, of the writers who had published extensively in print, some claimed that they published online for money: “for$ this is what i do for a living, i only publish in e-zines that pay.” (Goldberg, 1998, unpaginated)

Writers who put fiction on their individual pages don’t have this source of income. They have tried to make Web publishing pay in a variety of ways. “I saw book companies were selling through the Web,” one survey respondent wrote, “publishers were looking at writer’s samples through the Web, and some people were just posting there [sic] own works for the fun of it, and so I thought, hmmm, I can combine all three. Selling my own fictional works and those of friends puts all the control in our hands and all of the profits. Other than my monthly access fee and a great deal of work, expenses are at a minimum.” (Clay, 1998, unpaginated) She went on to point out, though, that since electronic books were a relatively new phenomenon, many people were wary of buying them.

Another possible source of income for writers was borrowed from computer science: “I was working for a computer company and was trying to think of ways to use electronic media for publishing. I had seen lots of shareware software and decided that a shareware novel would be a good idea. By the time that I published my shareware novel, I had found two other such electronic books on local dial-in bulletin board systems...” (Lindsay, 1998, unpaginated) Shareware refers to computer programmes which are given away at no cost; if users find the programmes useful, they are asked to send a fee to the programmer or company responsible for their creation. This is, at best, a highly uncertain method of compensating people for their work.
Only one writer claimed to have had any success with a system which resembled shareware: “On my page, I jokingly stated that ‘if you really liked it, instead of writing a critique, just send me money.’ One women [sic] sent me a check for $100.” (Starry, 1998, unpaginated)

One final possible method of receiving payment would be to offer a small amount of free material and ask the reader to pay for more. “You would need to give a potential reader just enough of a sample to hook the reader onto your work. There would have to be a form of electronic cash you could receive for the work. Or you could work it out so you can receive credit card payments.” (Bamberger, 1998, unpaginated) The work to be charged for would reside on a part of the writer’s site which could only be accessed with a password. I have come across few fiction sites which actually do this. However, one writer did say that “I intend to charge (micropayments) for access to full text, with a sample novel and a few short stories available free.” (Aviott, 1998, unpaginated) As we shall see in Chapter Three, electronic cash and micropayment schemes have, to date, not worked, leading to the question of how they can be made practical.

Even those who did not have a plan to make money from the work they published on their Web sites expressed the hope that they would be able to parlay it into paid work in print.
Typically, one writer “began putting my page together as a way to get myself known (hopefully [to] attract an agent/editor who’d be interested in me) and increase my readership.” (Merz, 1998, unpaginated) Some writers go so far as to have special sections on their Web sites which are not available to the general public: “I have a semi-private web page that describes some of my fiction and which also has some fiction, but I use that as something to show potential agents (I give them the URL).” (De Lancey, 1998, unpaginated)

At first, I was skeptical of this claim: most magazines and publishing houses already have far more submissions than they can publish, I reasoned, and so would see no advantage to searching the Internet for yet more material. “I have 100s of letters from agents and editors all saying they’re far too busy to even look at my work. They don’t say it’s horrible; they can’t because they refuse to even look at a sample. They seem content with stables of safe writers.” (Stowe, 1998, unpaginated) The poor reputation of fiction published on the Internet made it seem even more unlikely that publishers and agents would be searching the Web for publishable material. As one writer put it: “I doubt whether publishing houses are surveying the internet for a cyber Hemingway [sic].” (Abdulrazzak, 1998, unpaginated)

However, some authors claimed just such a thing happened to them. “The story in Blithe House,” one wrote, “is being reprinted in Best AMerican [sic] Gay Fiction 3 which is edited by Brian Bouldrey and published this fall from Little, Brown.” (Currier, 1998, unpaginated) According to another, “My novel, The Magic Life, which first appeared on the Internet, is currently being published in hardback, $19.95, Rare Bird Press, ISBN 0-0996281-6-6. It is scheduled for role [sic] out in January, 1999.
It was a finalist in the Hemmingway [sic] First Novel Contest since it’s [sic] introduction.” (Starry, 1998, unpaginated) Apparently, Abdulrazzak and I were wrong.

Others had received interest from agents and publishers in the print world, although they had yet to record a sale there. “Hi -- Yes, I will be happy to reply to your questionnaire,” one wrote. “I’ll get to it next week. I had two publishers contact me about publishing my novel after they found the sample chapters on my website.” (Linda Adams, 1998, unpaginated) Another claimed that as a result of being published on the Web, “i’ve been contacted by a literary agent who wants to represent one of my novels, as well as other magazines inviting me to send them work.” (Tyree, 1998, unpaginated)

It may be worthwhile to draw a distinction between small press publishers and their larger mainstream relatives; it is possible that smaller presses with fewer submissions are seeking work on the Internet while the larger publishers are not. Moreover, as I expect the reputation of work on the Web to improve over time, print publishers may find there is less of a stigma attached to “discovering” writers who started their careers by publishing online. Already, one writer found that “with more credits gained from net published stories, more [print] editors pay attention to me now when I list credits. It helps, it really does.” (Merz, 1998, unpaginated)

In an earlier section of this chapter, we saw that many writers who have published in print are migrating to the Web. The number of writers going the other way, that is, from Web to print, is, at present, but a trickle. However, there are reasons to believe that it will increase in the future, to the point where writers are free to move back and forth between the media, seeking the best possible solution to the cost/benefit analysis for any given work at any moment in time.
A discussion of the economics of information, arising out of the concern writers expressed about generating income from their work, will be taken up at length in Chapter Three.

Promotion: Disadvantage or Advantage?

A report in December, 1997 concluded that there were 320 million Web pages; by 1999, a different publication claimed that there were 800 million. (The Censorware Project, 2000, unpaginated) Given the Web’s continuing expansion, it is only a matter of time before the number exceeds one billion. As mentioned previously, this poses a problem for Web publishers: “[I]f you were looking for fame, you might get lost in the muddle of millions just like you.” (Recker, 1998, unpaginated) Because of this, some survey respondents claimed that a disadvantage of publishing on the Web was that “You must put energy into advertising your own page, or no one will see your writing.” (Keller, 1998, unpaginated) This was much more of a problem for people with their own page; those who published in ezines could assume that the publishers would take care of the promotion (although it would certainly be to their benefit to do their own promotion as well).

This is actually a facet of self-publishing generally: without the promotional budgets of publishing houses, and their ability to generate publicity, print authors who publish their own works find they must also be their own promoters. If they can afford it, they can advertise. More often, their promotional efforts will involve more labour-intensive activities: taking a booth at a small press fair; putting up posters; selling on the street; and so on. The Web offers a variety of means of promoting a work which do not require as much labour.
For example, one writer stated that “I keep my page registered with many of the major search engines to improve exposure for myself and the others.” (Stokes, 1998, unpaginated) Search engines such as Yahoo!, Lycos and AltaVista are databases which contain the names, URLs, keyword descriptors and brief descriptions of a large number of Web pages. By submitting keywords, people looking for Web pages on a specific subject can find them.

There are things to keep in mind when registering with search engines, though. “Search engines can be a great help finding what you’re looking for,” one writer commented, “[in] genres, that is.” (Sonnenschein, 1998, unpaginated) The more specific a reader can be in couching the terms of a search, the more likely he or she will be to find a satisfying work: searching for romance fiction, to use one example, is more likely to get you a story you specifically want than a general search on fiction. When registering your work, therefore, it helps to be as specific as possible. If you don’t write fiction that can be fit into a specific genre, your page will be dumped into a very large group of fiction pages (which will include all of the genres), making it that much harder to find.

Another problem with search engines is that “everyone else is doing the same thing.” (Swan, 1998, unpaginated) Thus, while registering with a search engine gives a writer more opportunity to be read than not registering, it doesn’t necessarily give one an advantage over other writers who are also registering. Placing one’s page with a search engine should, therefore, be seen as a first step in online promotion.
"I have also spent a lot of time joining and maintaining my presence in web rings,” another writer claimed, “my main strategy for getting readers to my site.” (Bess, 1998, unpaginated) As was previously mentioned, Web rings connect pages with similar content; they are a good method of making readers with an interest in a specific genre aware of a writer’s work. Moreover, any person can start a Web ring; writers whose work fits into smaller and smaller genre categories/audience niches may be able to use them to connect with other writers with similar interests for their mutual promotional benefit. Writers can submit their page to more than one Web ring. An author of fantasy fiction, for instance, can become a member of Web rings devoted to fantasy fiction, general fantasy and general fiction. In fact, although it is common practice to list Web rings to which a page belongs on one’s home page, some of the writers in the survey had to have a separate page where they claimed membership in half a dozen or more Web rings. The advantage to belonging to more than one Web ring is that it creates multiple pathways which lead to one’s work; the more such pathways a writer creates, the more likely readers will be to find her or his writing. Some Web rings offer an additional promotional advantage: “I have also been featured as the ‘Page of the Month’ on the web ring to which I belong.” (Darnell, 1998, unpaginated) Such awards not only alert potential readers to a writer’s page, but they act as a form of authority which vouchsafes the quality of the work on the page. As we shall see, this is an important consideration. Web rings work on the principle that if a reader likes a specific aspect of one Web page, that reader will be interested in other pages which share that aspect; it uses the linking ability of the Web to connect pages with similar content. It is by no means the only way to use the linking ability of the Web to promote writing. 
“More sites are devoted to specific genre’s [sic] and most are willing to link together to provide readers of specific genre’s [sic] together for convenience.” (Winkler, 1998, unpaginated) Trading links with other Web pages which have similar content is a common occurrence on the Web. The advantage of this approach is that it can open up a large number of paths to your story, whereas, with Web rings, your story is only linked to the ones immediately before and after it. The disadvantage of this approach is that you have to find and make contact with all of the different pages you would want to link to; registering with a small number of Web rings is less laborious. Links need not be to other fiction sites to be effective. “I have stories about bartenders,” one author said. “I go to Bartenders Magazine and leave little notes to lure their readers to my stories. I’ve had over 1000 hits from bartenders mag alone . . . and more feedback/fanmail than I get from writers sites!)” (Alt, 1998, unpaginated) As with any form of promotion, creativity is required. Another possible means of promoting one’s writing is to send an announcement to an appropriate Usenet newsgroup. This has to be done very carefully, however. Etiquette around newsgroup postings prohibits commercial announcements and discourages self-promotion (I learned about spam in the course of my surveying, as recounted above). As far as I can tell, a simple message including one line of description of the story and a link to it is acceptable to most Usenet users, but anything more risks a negative backlash. Listings specifically for fiction are being created. “You may wish to check out an e-zine named ‘Exodus’,” one writer suggested. “They have a special subsection called ‘WoW’ which stands for ‘Web of Writers.’ WoW is essentially classified ads for web writers, which include bios, websites, and email addresses, plus samples of their work.” (D. K.
Smith, 1998, unpaginated) This type of page can help readers find just the type of writing which they are looking for, making it an important place for writers to be listed. Finally, writers can use email to keep in touch with readers who have responded to their work. For example, every two or three months I get an email message from Duncan Long, one of my survey respondents, informing me that his page has been updated. Making people who have already been to one’s page aware of when new material is posted to it is a useful form of promotion. Unfortunately, some limits to this type of publicity exist. “[T]he mailing list has grown to such a size,” one ezine publisher stated,

that I’ve started getting bounced from some servers as if I was sending out Spam! (OUCH!) So, I may have to discontinue the mailing list. AOL and some of the other services have already given me problems about the list and refused to deliver the messages - if you’re on AOL and haven’t been receiving the ‘New Issue’ notices, that’s why. (Hollifield, 1997, unpaginated)2

Despite its problems, the Web offers unique opportunities for writers willing to put in the time and effort to promote their work.

Why Publish on the Web? 2) Bypassing Traditional Publishing

Given the proliferation of publications on newsstands, one often assumes that print publishing is vast, with opportunities for all. In fact, although there are far more avenues for getting work to potential audiences in print than in, say, television or film, it is also true that there are far more people trying to get their work published. As a result, for writers hoping to see their work appear in print, rejection is the rule rather than the exception.

As mentioned above, some of the writers in the survey mentioned that they had tried to get published in print, but had failed. “I was having trouble finding a print publisher for my first novel,” Louis Greenstein wrote, “my agent was getting nowhere, so I figured maybe the book would get some exposure [online].” (1998, unpaginated) Sometimes, the rejection could be quite cruel: “When I was seventeen, I sent a story about a cat to Isaac Asimov’s Science Fiction Magazine. It wasn’t the best science fiction story about a cat ever written, I grant you, but it didn’t deserve the treatment it got... I received my SASE [self-addressed stamped envelope] back, full of the ashes of my story, without a cover letter.” (Pylman, 1998, unpaginated) Although this may be an extreme example, rejection stories are not uncommon; there are only two or three major science fiction magazines, and the volume of submissions they have to deal with is substantial. They are known for having a cavalier attitude towards hopeful writers. One might assume that this is just sour grapes from writers who couldn’t make the professional cut. However, this isn’t necessarily the case. Greenstein, for example, had had 10 plays produced and had been published in many print magazines before he tried to sell his novel. Whatever reputation he may have garnered from these successes did not make it any easier for him to interest a publisher. Changes in the publishing industry over the past two decades may have something to do with this, as we shall see in Chapter Five. Some writers directly stated that the reason they put their work on the World Wide Web was because they couldn’t get published in print. “When one of my stories was rejected by [science fiction print magazine] Analog,” one writer stated, “I decided to try it with [science fiction ezine] InterText, and lo, it was published.
Since then I’ve had two other stories published in InterText.” (Johanneson, 1998, unpaginated) Whether or not a work is published is not simply a question of how many venues there are in which to publish it; the important question is how many people are attempting to sell stories relative to the number of venues in which they can be published. Here, the Web has a tremendous advantage over print publishers: “The web offered an alternative which was not oversubscribed -- websites are looking for writers, unlike print publishers who have more hopefuls than they can handle and therefore place severe restrictions on receiving manuscripts.” (Allan, 1998, unpaginated) Competition between writers is not as fierce online for many reasons: there are, or can be, far more publications; there are, at present, fewer writers relative to the number of publications; and writers always have the option of publishing on their own pages. In print, many writers are chasing few opportunities; online, fewer writers appear to be chasing a greater number of opportunities. Rather than see Web publishing as a sign of failure, however, these writers by and large view it as a means of bypassing an existing system which does not serve their interests. Where some people look at the print publishing industry as a series of gatekeepers whose purpose is to assure the quality of published work, these writers argue that it involves levels of bureaucracy whose main purpose is to keep the writer’s vision from being delivered to the reader. “You do not have to deal with agents, editors, actors, composers or a hundred different talentless people with nothing but inflated egos. It is a godsend particularly for a writer like myself who writes because he MUST write.” (Kruger, 1998, unpaginated) Relationships with gatekeepers are seen by many writers as a more important factor in getting published than the qualities of one’s work.
Publishing on the Web, on the other hand, is seen as a means of reaching readers for those who do not have such relationships: “You don’t have to be part of any Old Boys’ (or Girls’) Network. One’s nose need not turn brown.” (Tasane, 1998, unpaginated) Or, as another writer put it, “You have a better chance of attracting 100 readers out of the internet pool than you do one editor out of the publishing pool.” (Stazya, 1998, unpaginated) Not surprisingly, this type of sentiment is expressed far more often by those who have only published online than by those who have published both online and in print. As we have seen, some writers feel that some editors demand changes in work which, rather than strengthening it, fundamentally weaken it.

Editors can and often do perform an important function -- I’ve certainly been helped by more than one -- but when the editing goes beyond “it’s not clear what the antecedent to that pronoun is”, or “you have him in a blue shirt on page 56, but suddenly his shirt is red on the next page” -- when the editing attempts to tone down a story, or make it more conventional -- no one benefits. Provided the writer is his own good editor -- and many are -- WWW publishing allows for a much richer diversity of expression, and expression which has not been well-rounded by too many ink-stained hands. (Ralph Robert Moore, 1998, unpaginated)

By publishing on the Web, many writers feel they will be able to maintain the artistic integrity of their work. As well as having complete control over the content of a story, self-publishing on the Web allows a writer to “control the way it’s presented”. (Case, 1998, unpaginated) This is in marked contrast to print publishing, where a magazine or book publisher usually designs the work with little or no input from the writer. However, the writer gains only partial control when publishing on the Web. There may be no publisher to insist upon a certain page design, but, because of the way the Web works, much of the power over design is shifted to the reader. “As a web publisher,” one writer commented, “I’m concerned with everyone being able to see my website just as I see it on my screen. But I know that not everyone is running the same equipment and software, so sometimes your website may be a jumbled mess to those with outdated hardware and/or software.” (Michael T. Gilbert, 1998, unpaginated) Even if the reader has the same equipment as the writer, however, the page may look radically different: the reader can change the type size and style, for example, or turn off the graphics. To be sure, writers have more power over how their work will look online than in print, but perhaps not as much as some may think. One of the advantages of publishing on the Web is that it allows writers to reach audiences they could not reach in print. “Publishing on the WWW allows you to reach niche markets, or small groups of special-interest readers that would not be economically possible to publish for by conventional means.” (Aviott, 1998, unpaginated) Larger publishers and publications seek work which will appeal to wide audiences; small presses and publications, while having greater editorial freedom, have a much smaller market and, usually, a much smaller geographic reach.
Publishing on the Web holds out the possibility of increasing the size of the market for increasingly targeted fiction. This leads to the paradoxical conclusion that “Providing that it can target its audience successfully, Web publishing can serve a much wider audience.” (Cotterill, 1998, unpaginated) There are many criteria by which such niches could be defined. One is subject matter. “My novel concerns the New York Theatre,” one writer explained. “One editor rejected it with the comment ‘no one is interested in the theatre’. On the Web I can directly reach those who are.” (Orr, 1998, unpaginated) Another is writing style. “[M]uch of the fiction at the ‘better’ quality sites seem more experimental and obscure than in traditional print journals and mags.” (Levens, 1998, unpaginated) A third is fiction written specifically by and for minority groups. “[T]here [are] alot [sic] of niche publications on the web, designed for audiences I think my writing targets. My stuff has been on web zines designed for South Asians, South Asian Americans, and Asian Americans.” (Anwar, 1998a, unpaginated) In addition, it is worth noting that the creation of niche readerships on the Web can be a boon for readers. This can occur by giving them easy access to work which they would like to read but which would be difficult, if not impossible, for them to find in print (Levens’ experimental writing, for example, or Orr’s novel about the theatre). It can also expand the amount of available work in an existing niche: by making more science fiction stories available, for instance, the Web can augment what is available to fans of the genre in print (or other media). The picture is not all rosy. “Many audiences are still not ‘wired,’” one writer pointed out. “Certain age groups, economic backgrounds, etc.
are not reachable.” (Fogel, 1998, unpaginated) Thus, fiction written by and for, say, the elderly, is less likely to find its audience because the Internet tends to be dominated by younger people. As personal computers become more widespread, perhaps some day achieving the ubiquity of televisions or telephones, every writer should ultimately be able to find a niche to fill, and every reader, writing of interest. An interesting point emerges from this: although some writers who publish online may rail against the control of large mainstream print publishing corporations, their most obvious competition is not the large houses or publications, but small presses and small circulation magazines, publications which also see themselves as catering to niche readerships. As smaller publishers become targets for takeovers by large entertainment corporations with publishing arms, the distinction may become less and less meaningful. Still, it is worth noting, and will come up again in Chapter Five’s discussion of the changing nature of print publishing. One other problem with traditional publishing is the time between the submission of a piece of writing and its publication. Most magazines require a work to be submitted two to six months before it will be published. A novel may take anywhere from six months to two years to see its way into print. There are many reasons for this. One is the time necessary to prepare a manuscript for publication. Another is that most publishers are so swamped by submissions that they are backed up and require a lot of time simply to read them and determine which to publish. “I had a story accepted by Midstream, a respectable magazine that comes out of New York,” one writer claimed. “That was over ten years ago. I’ve still not seen it in print.” (Mandel, 1998, unpaginated) This is an extreme example, of course.
Still, it does illustrate the point that writers must often wait a long time before they get to see their work in print. By way of contrast, publishing on the Web gives “Nearly instant gratification.” (Crisp, 1998b, unpaginated) It does this in two ways. One is that it speeds up the publication process, so that there need be no delay between the time a story is accepted and the time it becomes available to the public. “We go out to everyone instantly. There is no shipping delay.” (Schustereit, 1998, unpaginated) The other is that it speeds up the submission process. Some writers lauded “The ease of submitting [to an ezine] (compared to print journals).” (Jeremiah Gilbert, 1998, unpaginated) Rather than go to the trouble and expense of making a paper copy of a story and mailing it to a magazine (or, probably, several magazines in the hope that one will be interested in publishing it), all the writer has to do is attach the story’s file to an email and send it. “No using the mail,” one writer enthused. (Via, 1998, unpaginated) Of course, publishing on one’s own Web page can make even the submission of a story to an ezine seem slow, since there need be no delay between when a story is written and when it goes out to readers.

Why Not Publish on the Web? 2) Lack of Authority

Bypassing traditional print media channels is a double-edged sword: although it offers the advantages stated above, it also leads to a general perception that the World Wide Web is filled with a lot of bad writing. “I think there’s an assumption that because almost everything published on the web is rubbish there is nothing of any quality,” one writer remarked. (Tasane, 1998, unpaginated) Another writer summed up the views of many when he stated that the disadvantages of publishing on the Web included: “The unevenness of quality, the degree of publishing and the excess of self-promotion by frustrated writers masquerading as editors.
Much of these are growing pains I believe but there is still insufficient professionalism for my taste.” (Watmough, 1998, unpaginated) In fact, those who had published in print and electronic magazines were the ones who most often used the term vanity press to describe those who also published fiction on their own Web pages. In this way, a professional hierarchy seems to be developing on the web: those who publish only in ezines are more “professional” than those who publish in both ezines and on their own pages, and they are in turn more “professional” than those who publish solely on their own pages. Vanity press is a pejorative term for self-published writing. It is based on the assumption that if you cannot interest a traditional publisher in your work, it must be of inferior quality, and publishing it yourself is mostly a way to salve your own ego. This perception ignores the long history of self-publishing, which includes works by many now-famous authors, including Charles Dickens (Epstein, 2000, unpaginated), Walt Whitman, Edgar Allan Poe (The Writer’s Centre, undated, unpaginated), Mark Twain, Henry Thoreau, Herman Melville, James Joyce and Carl Sandburg. (Phoenix Publishing Group, undated, unpaginated) It is only in the twentieth century, when publishing became a truly industrial process and the division between writer and publisher became rigidly defined, that this stigma of failure was attached to self-publishing.3

The general lack of respect for writing on the Web has real world consequences. “SFWA [Science Fiction Writers of America] for example still does not accept electronic publication as a criterion for membership.” (Sirois, 1998, unpaginated) Among other things, this means that stories published online are not eligible for the organization’s annual awards, an important form of legitimation (not to mention promotion). However, some real world institutions are changing to include work published in the new medium: “The Best American Short Stories anthology is now considering material which appears on the Web.” (Hubschman, 1998, unpaginated) This sort of recognition is both effect and cause of the authority of the medium: as respect for the quality of writing on the Web grows, institutions which arose to support print media will give it more attention; as print media institutions give writing on the Web more attention, they will raise the level of respect for it. Traditional publishing has two means of deriving authority. The first comes from the fact that several people may be involved in the editorial process; this leads many people to assume that the quality of the writing of traditional publishers is high. This may not be the case, as Tasane noted: “It is...the case that almost everything published on paper is rubbish, but the paper literati seem to think that the two cases [print and electronic publishing] are somehow different.” (1998, unpaginated) A second means of establishing authority is simple longevity: if you have read several issues of a magazine, you come to know what to expect from it. The more frequently a publication supplies you with what you consider to be good writing, the more you will come to expect it. This process has come to be known as branding; it will be looked at in more detail in Chapter Three.
The Web is too young for any solely Web-based fiction publications to have been branded as assuring quality. However, the proliferation of ezines devoted to fiction is seen by some as a way of asserting traditional notions of editorial integrity on the Web. “Acceptance to an e-zine is much like acceptance to a magazine...it denotes prestige...” (Case, 1998, unpaginated) The assumption is that editors and publishers of ezines will help raise the level of writing on the Web. But do they? According to one writer, “It must be admitted there’s a lot of poorly developed ezines with low quality writing. Editorial assistance leaves something to be desired on many of them.” (Bamberger, 1998, unpaginated) Another writer has a term for poor quality online publications: trash zines. “You don’t have the editorial guidelines as much on the web as you do in print, so occassionally [sic] you run into what I refer to as trash-zines, stuff thrown together without any thought on content. That’s really sad to see things like that, because people who really put a lot of effort into their sites are hurt by it.” (Winkler, 1998, unpaginated) As we have seen, stories in the majority of ezines go through one or no edits, and even when they are edited, the edits may be for simple issues of spelling and grammar rather than more complex problems such as structure or meaning. In traditional publishing, editors usually have to spend years copy editing or doing other low-level jobs in order to earn the right to vet manuscripts and work with writers. Because publishing on the Web is as easy as uploading some material to a server, anybody can be an ezine publisher (and, in fact, some people may even have had no previous intention of doing so, as is the case with accidental publishers). “Most of the ‘editors’ are people who just love a particular genre and have no formal training in editing so they do no editing per se.
Many of the e-zines will publish anything submitted to them and often the stories need to be re-written or at least proofread. Sometimes the stories published are blatant rip offs of works published in hard copy or from movies or tv. Often a short story will be the germ of a good idea but it is not enough to develop into a freestanding story.” (Blann, 1998, unpaginated) This seems to bring us full circle. Is the Web full of inferior quality writing after all? Hardly -- there is good and bad fiction being published in both media. It is important to keep in mind, though, that authority is a matter of perception, not reality. A lot of bad writing appears in print, but print is generally accepted as a medium where quality work can flourish because of the editorial functions of publishing houses and magazines. Transferring these functions to online zines may not ensure the quality of writing, but, given time, it is likely to change the perception of readers, making them more likely to accept the legitimacy of online work. One additional factor which will lend legitimacy to online publishing is the migration of print publications and authors to the Web. Print publications such as Mississippi Review (Frank, 1998, unpaginated), Oyster Boy Review (Harrison, 1998, unpaginated) and Moonshade Magazine (Jennings, 1998, unpaginated) place original stories on their Web sites. Sometimes, the publications bring their writers online with them. Kevin McGowin, explaining why he published his stories on the Web, wrote, “Magazines and journals in which I appear began to put out online editions.” (1998, unpaginated) In the short term, such publications will encourage their print readers to see the Web as a legitimate form of publishing. In the long term, as movement between print and online publication becomes common, they will likely be seen as equally valid venues. A cost/benefit analysis comes into play here.
On the one hand, making more work available to a greater number of readers is what many literary journals are all about, so adding Web publishing to their activities seems natural. On the other hand, if people can get material free off the Web, what incentive will they have to shell out money for a magazine? Maximizing paying customers has kept most mass market fiction publications from offering much original content online. Smaller, though often no less prestigious, publications do not have to sell as many copies since, among other things, they pay their contributors less; thus, they are more likely to take advantage of online publishing. A similar analysis comes into play when considering individual authors. Some writers put fiction on the Web in order to promote their print publications. “Grandstanding is part of wanting to be a writer, I think,” Duncan Long, who has had 13 fiction and 50 non-fiction books published, stated. “It seemed like a good way to show people what I do and maybe get a few to buy more of my books.” (Long, 1998, unpaginated) Kenneth Tindall (1998, unpaginated) and Richard James Cumyn (1998, unpaginated), both of whom have had two novels published in print, put fiction on their Web pages as a way of promoting that print work. Self-promotion notwithstanding, however, the most successful print writers have little to gain by making their work available for free over the Internet, and much potential income to lose. There have been only highly tentative efforts by very well known writers to publish on the Web. Stephen King, for example, published a 16,000 word novella, Riding the Bullet, exclusively on the Internet. The story, which could be read on Windows, Palm or Rocket eBook machines, cost US$2.50.
In the first 48 hours after the story was made available, sales of and requests for the story reached almost 500,000, much greater than the 40,000 to 75,000 first-day sales of most blockbusters, according to a representative of publisher Simon and Schuster. Yet King was described as “sanguine about the future of the form. ‘I’m curious,’ he says, ‘to see...whether or not this is the future.’” (Archer, 2000, D5) A catch-22 is in operation here: the Web won’t get a reputation for having good writing if authors whose quality has been proven don’t publish there; but quality writers won’t publish there as long as the Web has a reputation as a haven for bad work. Those who are not in the upper echelons of print publishing, however, may find that they can gain substantially more readers, and potentially increase the buyers of their works in print, by publishing some work on the Web. In fact, for reasons I shall explore in Chapter Five, so-called mid-list authors are being dropped or no longer picked up by many traditional publishing houses. For these authors, the Web may be the only means by which they can regain the opportunity to reach the readers which they have lost because of the changing nature of the print publishing industry. The experience of well known science fiction author Norman Spinrad is relevant in this context. For over a decade, his novels had been published by Bantam Books. However, he claims that by 1994 “Bantam Books had undergone a spiritual devolution that mirrored the general devolution of American publishing from a cultural enterprise into a ruthless corporate machine dominated by a few large conglomerates and a handful of book store chains and national distributers.” (Spinrad, undated (b), unpaginated) By that year, 200 people had been fired and the list of books which the company published had been cut almost in half.
Spinrad claims to have had a verbal agreement with Bantam that it would publish one of his books as a quality trade paperback. A new editor replaced the one who had entered into this agreement with him and, in order to cut costs, reneged on it, offering, instead, to publish the book as a Spectra paperback. Spectra is Bantam’s science fiction arm. Unfortunately, the book, Pictures at 11, was not a science fiction novel. In addition, the company planned on a much smaller print run than Spinrad had been led to expect. When he found out about Bantam’s plans, Spinrad threatened to go to booksellers and tell them that they would be party to a lawsuit he would bring against Bantam if they distributed the book. Knowing that this would have killed the book, Bantam relented. Still, the company had no faith in the book and, while adhering to the letter of its agreement with Spinrad, did nothing beyond it to ensure the book’s success: “Bantam published Pictures at 11 as a mainstream trade paperback with a schlocko cover I unsuccessfully fought every inch of the way, did no advertising and a minimal printing. The reviews were excellent to rave and a film option was taken, but by the time it made the New York Times Book Review’s Recommended Summer reading list, it was virtually out of print.” (ibid) This was only the beginning of Spinrad’s problems with Bantam. His contract with Bantam gave them the option of publishing his next two books. They accepted He Walked Among Us, his next book, but rejected the outline for the following one, Glass Houses. Fearing that “the powers that be there [at Bantam] would trash the publication the way they did Pictures at 11, only worse,” (Spinrad, undated (c), unpaginated) Spinrad asked to be let out of the contract. At first, Bantam agreed, as long as it would be reimbursed for the advances it had given him, a standard industry practice.
However, he then received a letter from the publisher’s legal department which “called for me to repay them not only out of first proceeds of any resale of He Walked Among Us but out of first proceeds of the sale of my ‘next novel,’ the so-called option book. Which they had already rejected.” (ibid) In essence, he would not be reimbursed for his work on this third book, since the proceeds would go straight to Bantam. Since he cannot sell another novel in the United States, Spinrad claims that his career is “in limbo.” (ibid) Spinrad’s case is ugly and, since he details it in various pieces of writing on his personal Web page, quite public. Most writers who have had experiences like this simply disappear from public consciousness. Due to the changing nature of the publishing industry, their numbers are undoubtedly increasing. This may ultimately resolve the catch-22 by pushing established writers, like Spinrad, into putting some of their fiction writing on the Web. In the context of the authority of writing on the Web, the important lesson of this story is that writers who have developed a reputation in print and who choose (or are forced) to make their writing available on the Web will bring their readers along with them. As with the migration of print magazines, the migration of authors who have published work in print should begin to lend authority to online publishing. Both readers and writers will reason that if authors they respect are publishing on the Web, the medium itself is deserving of their respect.
One author has already experienced something like this: “I realized after a short period of doubt about the legitimacy of web publishing that many respected writers and editors were working there.” (Hearne, 1998, unpaginated) Moreover, the fact that known writers publish on the Web may ease the worries of lesser known writers about what they see as the potential disadvantages of the medium: “I overcame fear of plagiarism by seeing better known writers on the web.” (Weinberg, 1998, unpaginated) One final source of legitimacy is the development of a critical establishment in symbiosis with a given artistic community. Film and literary critics help legitimize their respective art forms by establishing critical criteria by which works can be judged and, in a more practical sense, warning potential audiences away from work which does not stand up to the ideal. Since fiction on Web pages garners “No reviews in [the] mass culture/literary scene” (Rattansi, 1998, unpaginated), there is nobody to vouch for the quality of online writing. In this vacuum, many people assume the quality is low. Although there are many print magazines devoted to listing Web sites, few take a theoretical or critical approach to their subject. In fact, the only analogue to a critical establishment currently on the Web itself is the “Best of...” pages, which at least reward excellence with recognition. There can be no doubt that online awards boost the legitimacy of individual Web pages. “[M]y site was chosen as Aol Member Site of the week and 13000 people trooped through the front door in two weeks.” (Cross, 1998, unpaginated) While this is a good start, it won’t be until a truly critical establishment develops, especially one which reports in media other than digital networks (as when movie and book reviews appear in newspapers), that stories delivered online will begin to be respected. I shall return to this issue of criticism in Chapter Five.
Throughout this discussion, the reader may have noted that I have tried to avoid judging the quality of the fiction on the Internet myself. De gustibus non est disputandum. I tend to subscribe to the critical theory attributed to science fiction writer Theodore Sturgeon: 90% of art is crap. Though the reader may quibble with the amount, it has generally been true regardless of the medium. It is likely true of the Internet. However, what Web publishing does is increase the absolute amount of writing available to potential readers, which means it increases the amount of good writing as well as bad. Put another way, it is more likely that a reader will find a satisfying work the more work is available; since the Web may make substantially more work available, it increases the likelihood that readers will find work which satisfies them, whatever their interest or taste.

The Relationship Between Ezines and Personal Pages

Of the 227 survey correspondents whose work had been found at an ezine, 81 (35.7%) had also published fiction on their personal Web pages. We have already seen a couple of the advantages of publishing in both places: the ezine publisher is usually responsible for promotion, for example, relieving individual writers of this responsibility. One writer argued that there was no advantage to publishing in ezines: “In all actuality, it doesn’t make too much sense to actively seek out to be publish in e-zines, unless you are after money, because for all practical purposes their audience reach is only fractionally greater than an individuals.” (Schwartz, 1998, unpaginated) While this is true, the decision need not be either-or; many writers felt that it was best to publish in both venues.
The most common reason for publishing work on both a Web-based magazine and a personal page was “More readers, I suppose.” (Tasane, 1998, unpaginated) As we saw earlier, the more linked routes a writer can have to his or her stories, the greater the possibility that they will be read; if the stories are on more than one Web page, each one can develop a different set of links, making them that much more likely to be read. “The web is a big place,” one writer explained. “If you want people to find you the more places you can stick your name and your work the better off you are.” (Baumander, 1998, unpaginated) The most obvious links are between the different venues where the writer’s stories appear: “The idea is that once people become interested in my writing through the e-zines, they’ll come to my page and read more. Nice theory. :-)” (Diana Evans, 1998, unpaginated) Often, ezines will include a link to a writer’s home page; however, I came across few instances of an ezine which linked to stories in other ezines. Even if a writer does not put stories on her or his own page, therefore, it is a good idea to create one in order to have a central place with links to all of the writer’s stories on the Web. Another advantage to publishing in both venues is that “People who visit my page regularly find out what kind of writer I am, and people who visit the e-zine can read the story too.” (Vinyard, 1998, unpaginated) While most ezines allow writers to include a brief bio with their work (usually no more than a paragraph long), some writers may wish to say more about themselves than the ezine will give them room to. (And, although I have no evidence to support this, it is certainly possible that some readers will want to know more about writers whose work they like.) Publishing in both venues gives writers the readership of the ezine with the potential for more personal information on their own pages.
Writers generally did not try to publish the same stories in both venues. Even where there may be no money involved, online publishers seem to have a sense of the “first sale” value of a story. As a result, “if i have some of my work published on my site, its likely that an ezine won’t accept it because they consider it previously published.” (Poulsen, 1998, unpaginated) Furthermore, if there is no content on a personal page which cannot be read in an ezine, there is no incentive for a reader to move to a person’s home page. One writer suggested publishing the same work in both venues did offer a technical advantage: “No point in duplicating other than quicker access times for those using servers in nearer continents.” (Rattansi, 1998, unpaginated) Because of the way the Internet works, data travels in a series of jumps from computer to computer until it reaches its destination; by cutting the number of jumps the data has to make, you cut down the time it takes. By putting a story on a server in a country closer to the reader, Rattansi argued that it would get to the reader faster. This advantage seems to be outweighed by the disadvantages, however; as a result, most writers placed different stories on their home pages than were available in ezines. Another reason for publishing in both venues is that “Sometimes it is frustrating when you know you have a first class story, but for some reason, no one accepts it. It might relieve the frustration to publish it on your own web site.” (Harth, 1998, unpaginated) Thus, having a home page can be a fall-back position, a way of publishing stories which are not accepted by ezines. As another writer put it: “What one won’t take, the other will!” (Owens, 1998, unpaginated) This echoes the original rationale for publishing on the Web as opposed to publishing in print.
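Rattansi’s point can be put as simple arithmetic: if total transfer time is roughly the sum of the delays at each jump, then a route with fewer jumps finishes sooner. The short Python sketch below illustrates the idea; the delay figures are invented for illustration and are not measurements of any real network.

```python
# Toy model of network latency: total transfer time as the sum of
# per-jump delays. All numbers are invented for illustration only.

def transfer_time(hop_delays_ms):
    """Total one-way latency, in milliseconds, over a route."""
    return sum(hop_delays_ms)

# A reader fetching a story from a server on a nearby continent
# (few jumps, short links)...
near_route = [5, 8, 6, 7]
# ...versus the same story on a distant server (more jumps,
# including slow intercontinental links).
far_route = [5, 8, 6, 7, 40, 35, 30, 25, 20]

print(transfer_time(near_route))  # 26
print(transfer_time(far_route))   # 176
```

Mirroring a story on a server nearer the reader shortens the route, which is the “quicker access times” Rattansi describes; in practice, of course, real latency also depends on congestion and link speeds, not on the number of jumps alone.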
A couple of writers claimed that they “get more feedback from the works published on my personal pages, from web friends, real world friends and family, etc. who read there.” (Davis, 1998b, unpaginated) In an ezine, a writer’s story will be competing with the stories of many other writers for the attention of readers, lessening the probability that they will respond. “If you get a reader into your own page,” one writer explained, “you know that they’re just reading your stuff, and don’t have a choice between 12 other writers.” (Powers, 1998, unpaginated) Other writers had a different experience, however: “On some sites, [I received] lots of good feedback. I think traffic on the site is the key to that.” (Trammell, 1998, unpaginated) In this view, since ezines generally attract more readers than home pages (Wardrip, 1998, unpaginated), they offer greater potential for reader response. Finally, ezine contributors with their own fiction pages often stated that they had “more control on my own pages.” (Davis, 1998b, unpaginated) On your personal page, you can design the material, as well as ensure the integrity of the text. Occasionally, a writer would claim to have had a bad experience when a story was published in an ezine, outside of the writer’s control: “It came out within about a month or two of acceptance, and I didn’t even know it was out until somebody quite unconnected wrote to me about it. I wrote to the editor, who did not answer, and I finally tracked the story down on my own. It looked fine, but it had an annoying grammatical error, which I didn’t put there and would have caught, given the opportunity.” (Harth, 1998, unpaginated) However, maintaining control over one’s writing by publishing it on a home page comes with costs, both in skills required (HTML coding, for example) and time (uploading material to a server, designing pages, and so on).
“Obviously, taking care of your own page takes tons of time,” one writer observed, “time I’d rather spend writing and having someone else post on their page.” (Powers, 1998, unpaginated) For a lot of writers, the ease of submitting to an ezine and seeing one’s work published outweighs the possible editorial problems. In fact, it should be kept in mind that almost two thirds of the people who had published fiction in ezines did not have their own Web pages. As we have seen, many of them argued, sometimes quite vehemently, that publishing on their own page was tantamount to “vanity” publishing, a form which would diminish their reputations. Generally, the arguments made for publishing on one’s own page (greater potential readership and feedback, more control, etc.) tend to be made by writers with less print publishing experience, reinforcing the idea that those who are relatively new to publishing are more concerned with becoming known than maintaining a critical reputation.

Evolving Relationships: 1) Authors and Readers

Writing is a solitary craft; there is much truth to the stereotyped image of the writer slaving away on a manuscript in an obscure room. Traditional publishing tends to put barriers between readers and writers. The most frequent form of communication, letter writing, can be mediated by several layers of a magazine or publishing house’s bureaucracy before it is received by the writer (if, in fact, it ever is). In addition, because the process of publishing both fiction and reader responses to a work of fiction can take a long time, it may be many months, sometimes years, between the time a writer finishes a story and a reader responds. This is not the case with work published on the World Wide Web. The Web gives writers tools by which they can measure how many people access their stories: page counters and reader tracking software.
This is an advance over paper magazines, where the writer can know how many people buy a publication, but has no idea how many of them actually read her or his story. This is still an imprecise measurement; Net surfing makes it easy for people to move through one’s page without giving one’s work more than a casual glance. “You’re never sure people actually READ your things,” one writer explained. “They may end up in your pages by chance, and go away, without even [taking] a look at your writings.” (Bianchi, 1998, unpaginated) Still, tracking software does seem to offer the writer more precision than print. Moreover, as one writer noted, “One of the more exciting things about being published electronically is the immediacy of the feed-back...” (London, 1998, unpaginated) Most stories published on the Web include the email address of the author. In addition, many personal Web sites and a few ezines have “guestbooks,” pages which include a form which allows visitors to the site to post comments which others can subsequently read. Thus, readers can respond to a story right after reading it, and such responses go directly to the author. “There are no middle men; there is a direct line from writer to reader.” (Rowan Wolf, 1998, unpaginated) Some writers do get more feedback from readers. “I have received e-mails from readers who read and appreciated my work or wanted to comment on it which is very gratifying to a writer who otherwise would not get any feedback from a print journal,” one commented. (Hearne, 1998, unpaginated) Another stated: “Every week I get hits from at least 3 continents and 20 different locations. Some folks have read the novels and given me feedback.” (Weindorf, 1998, unpaginated) Responses to stories published on the Web are almost universally positive.
“I get the occasional email praising my stories -- so far nothing negative, but that’s not surprising, as people that don’t like it probably don’t bother wasting their time.” (Miller, 1998, unpaginated) This can be attributed to the ease with which people can move from Web page to Web page. If at any point a reader finds he or she doesn’t like a story, he or she will move on rather than read it to its conclusion and comment negatively on it. There are, however, methods other than negative email for readers to show their displeasure. One writer cautioned: “[T]here’s always a jerk out there who likes too much to spam you if you say something he/she doesn’t like; no one wants her creative work cut down by someone who can’t spell but can blow up a mail box.” (Back, 1998, unpaginated) Another writer, whose page contains a lot of stories for children, claimed that “The disadvantages are the nameless people who send porno to my e-mail address thinking what fun it is to poke at Mr ED.” (Mr ED, 1998, unpaginated) Still, these are rare exceptions to the general rule that if somebody doesn’t like a story, they will simply move on. We should keep in mind, though, that feedback from readers was the exception, not the rule. When asked what their sense of their readers was, a much more common response was that the writers did not know; the majority claimed to have received few or no email responses from readers. It is possible that Web surfers read fiction without feeling the need to contact the author. A more likely explanation, though, is that people generally don’t go online to read prose fiction. There are many reasons to believe that this is the case. As has already been mentioned, the Web has a reputation for being flooded with poor quality writing; it would not, therefore, be a reader’s first choice of place to look for an aesthetically pleasing experience.
Another factor is the well-known and much-commented-upon phenomenon that “many people do not like reading works from their computer screen -- there is no neat hard copy that they can take with them on the airplane to read (unless they own a handheld computer!)” (Phipps, 1998, unpaginated) One survey respondent went so far as to suggest that this was a particularly critical concern to fiction writers: “...words on the page are easier to read in the relaxed-yet-alert way in which fiction/poetry ought to be read. Have you ever meditated on anything you ever saw on a computer screen?” (Sato, 1998, unpaginated) Finally, promotion of the Internet, and particularly the Web, has not focused on finding original prose fiction; advertisements are more likely to concentrate on games or, increasingly, e-commerce. For this reason, many people connected to the Internet may not be aware that they can find fiction on the Web; it certainly wouldn’t be a motivating factor in their surfing the Net. Taken together, these factors discourage people from looking for prose fiction on the Web. “I don’t think Web authors are taken very seriously...” one writer said, “and I don’t think very many serious readers go to the Net for fiction.” (Youngren, 1998, unpaginated) Because of this, the advantage of immediate reader feedback is more potential than actual for most writers of Web fiction. Despite this, two groups of readers were revealed in the surveys. “I write stories for and about my friends,” one writer stated, “and hand them out to read at school. Obviously, there isn’t a lot of time during class for my friends to read them, so I started sending them out over email.
Then, it occurred to me that I could just put them on a website (well, not me, really; I got my great friend David to do it for me) so that my friends could read any part of any story at any time.” (Beck, 1998, unpaginated) Publishing for friends on the Web eliminates the time and expense of photocopying (which could be large if you’re popular). There is also an advantage to publishing a story on the Web if one’s friends are geographically dispersed: “...to allow my friends to read my work without me having to print it and send it to them (sometimes in the post)” (Kelly, 1998, unpaginated) Most often, writers claimed to have received feedback from other writers. This suggests a Web of relationships which can, in fact, be considered a community.

A Community of Writers on the Web

One survey correspondent wrote that his readers were “often fellow writers who are participating in a sort of ‘community’ project, i.e., publishing on the web.” (Zach Smith, 1998, unpaginated) According to another writer, one of the advantages of publishing on the Web is that “It is a great way to meet writers and poets from around the world.” (Abdulrazzak, 1998, unpaginated) Whereas most print publications have a geographic base, online publications can be based anywhere in the world and can accept and publish work from writers anywhere in the world; as we have seen, many individual writers also post their own pages to the Web from around the world. Given the ease with which people can communicate directly with each other, this allows for a hitherto unheard-of level of communication between writers of a variety of nationalities.
As one author stated, “a friend of my father’s who is an established writer in Israel became interested in my [Web published] fiction and gave me great encouragement and inspiration.” (Shafir, 1998, unpaginated) This must be tempered, though, by Abdulrazzak’s observation “that mostly means [writers] from the US.” (1998, unpaginated) Another facet of this group of writers is that it somewhat levels the hierarchy which has developed in print publishing. Traditionally, the more you publish, the more status you have; being published in certain highly respected venues, or by well known publishing houses, also increases a writer’s status. Writers are generally not encouraged to communicate with those whose status is much different, although the willingness to do so varies from writer to writer. Online, on the other hand, “It doesn’t matter if you have been published, as long as you’ve written something we all acknowledge each others lame attempts at writing. This helps in giving each other support and all that. I don’t know a lot of those who have gotten published in traditional print media that bond together that closely. I think it has to do with the fact that your wallet’s on the line.” (Qining, 1998, unpaginated) Some would argue that this is merely an expression of the camaraderie and information sharing which has characterized the Internet since its inception. Qining’s last point is worth keeping in mind, however. The competition for a very small number of paying positions in magazines and publishing houses is an important factor in the segregation of writers of differing experience levels in the real world. Because so little fiction is paid for online, this kind of segmentation has yet to happen; however, as the Net becomes increasingly commercial, and methods of paying for online information become more effective, it may come to pass there.
The comments travelling back and forth between writers with different levels of skill and experience result in Web publishing being “the ultimate workshop.” (Evans, 1998, unpaginated) In fact, some of the ezines have been created specifically as writers’ workshops, where stories are commented upon by other writers and improved. One example is The Dargon Project, publisher of DargonZine, a shared-world collaborative publication. “The project was founded in 1985 as a way for amateur fantasy writers on the Internet to meet and become better writers through mutual contact and collaboration.” (“About DargonZine,” 1998, unpaginated) Participation in mutual critique is an integral part of the Dargon Project: writers are “expected to critique others’ works and contribute to the shared world. People who don’t want to participate in a communal project should consider submitting to other emags.” (“DargonZine Writers’ FAQ,” 1998, unpaginated) At least one participant lauded this approach: “In the environment of the Dargon writers group, I get constructive criticism on my work, rather than ‘No thank you’ letters. Much more friendly.” (Whitby, 1998a, unpaginated) Another site which functioned in this way was the Short Story Collective. “ShortStoryCollective was the first site that posted my work and they used to have a page for comment. I got an enormous amount of response from other writers there.” (England, 1998, unpaginated) Unfortunately, the Short Story Collective no longer exists. “It has now closed,” according to England, “due to its abuse by some people who used it as a platform for views other than writing.” (ibid) As people who have followed discussion groups can attest, this is often one of the drawbacks of the ease with which communication over the Internet is possible: discussions can drift from their original purpose, particularly due to people with strong views who may alienate many of the group’s original members.
Given all of this, we can begin to see how people relate to each other through writing on the World Wide Web. Certain pages are nodes which bring together writers, editors and readers in a common goal: the production and consumption of fiction. Many of these pages link to each other, potentially creating a connection between the various people who take part in them. It is also possible for individuals to be the links between pages (as when a writer publishes stories on more than one page, or when a writer/reader travels through a variety of different pages). As Wellman stated, “...ties in networks are often transitive. If there is a tie from A to B and from B to C, then there is an implicit indirect tie from A to C -- and an increased probability of the formation of a direct tie at some time in the future.” (1998, 42) Finally, as the example of the email exchanges quoted above suggests, individual writers may be in direct contact with each other. (It can also be argued that writers who do not connect with others in any of these ways still benefit from the sense of community that exists, in the same way that people who do not use certain social services indirectly benefit from the stronger society to which those services contribute.) For an idealized illustration of these relationships, see Figure 2.1.

Figure 2.1: Idealized Representation of Web-based Community Relationships. The squares represent Web pages, the circles represent individuals. For the most part, individuals meet and connect with each other through Web pages (represented by a circle with an “a”). Some writers contribute to more than one Web page (perhaps an ezine and a personal page); in this way, they become bridges between different groups (represented by a circle with a “b”). Some Web pages connect to each other directly through reciprocal links or such devices as Web rings (represented by the line connecting “B” to “C”). Finally, individuals may connect to each other through reciprocal links or Web rings (represented by a circle with a “c”).

Clearly, writers have developed a large and complex web of relationships online. Does this constitute a community? The term has become highly contentious through overuse in public discourse, as Russell Smith observed: “The debate over Napster, and the indignation over the hacker attacks on CNN, consistently turn on the idea of a ‘community’ of computer users. The phrase ‘cyber community’ or ‘the Napster community’ recur in the rhetoric of netheads. This kind of phrase is a very contemporary tic: the word community now attaches itself to almost any idea, just to puff it up a bit. Is there really such a thing as a cyber community, or are we talking about a population of users as diverse as the world itself? What the hell do we mean, now, when we say this word?” (2000, R5) Or, as another writer put it, “...the idea of community is, as the term suggests, so loose that any sort of common concern may in principle give rise to the locution of the ‘x-ing community,’ where x can be almost anything you care to think of -- the knitting community or the snorkelling community as readily as the Heidegger-reading community. Think of an activity that people might have a common interest in pursuing, and they can plausibly describe themselves as a community.” (Ryan, 1997, 1168/1169) This is exacerbated by the fact that researchers who take community as their subject, especially those in the field of sociology, cannot, themselves, agree on a definition of community. “Indeed, in a celebrated article in 1955 in which he surveyed definitions of community, George A. Hillery came up with ninety-four definitions and claimed that the only feature they had in common was that they all dealt with people!
[note omitted]” (Plant, 1978, 80) Some of the definitions of community have included the following features: “locality; interest group; a system of solidarity; a group with a sense of mutual significance; a group characterized by moral agreement, shared beliefs, shared authority or ethnic integrity; a group marked by historical continuity and shared traditions; a group in which members meet in some kind of total fashion as opposed to meeting as members of certain roles, functions, or occupational groups; and finally, occupational, functional, or partial communities. Clearly, not all of these meanings are compatible...” (ibid, 82) Plant argued that ultimately any definition of community would necessarily contain normative and ideological assumptions. One possible way out of this problem is to consider how relationships between people are arranged. This is sometimes referred to as “structural analysis.” Community is a social construction of a network of relationships. To understand the community, one must understand the structure of the relationships: “In our view, an important key to understanding structural analysis is recognizing that social structures can be represented as networks -- as sets of nodes (or social system members) and sets of ties depicting their interconnections.” (Wellman and Berkowitz, 1988, 4) Using this definition of community, the relationships between writers identified above and idealized in Figure 2.1 would qualify as constituting a community. Structural analysis of communities seems to contain no normative or ideological content; it could easily be used by anybody looking at community. Unfortunately, it also has several problems. If, for instance, community is defined solely as a network of relationships, then community stops being a useful concept; we may as well simply call all communities networks. However, not all networks are communities. 
Opposing armies in a battle share a network of relationships, but we would hardly call them a community. Look at it a different way: I am currently part of an academic community in Montreal and a filmmaking community in Toronto. Since both of these groups are part of my personal network, they would, by definition, form a single community. Yet, the only significant thing they have in common is my membership. Furthermore, since we are all interconnected by interlocking personal networks, it could be argued that everybody is a member of a single community. However, this is also known as “the human race;” without some limit on the network, it becomes meaningless as an attempt to define community. It becomes necessary, therefore, to add functional features to this structural definition. I accept that there will likely be hidden ideological assumptions in these ideas. What conditions are necessary to forge a network of ties into a community? “First, community is based on a unity of shared circumstances, interests, customs, and purposes...” (Lindlof, et al, 1997, 2) If this were the only criterion on which community was based, then we could reasonably talk about a Napster community or a knitting community, despite the dismay of the writers quoted above. However, since part of the definition of community is that it is a network of relationships, shared interest, while a condition of community, is not, itself, sufficient to create a community; people with such an interest must communicate with and relate to each other. Thus, Napster users or knitters who do not communicate with each other cannot be considered a community just because they share an interest. Although they differ in many ways, the individuals in this survey could certainly be said to share a common interest and purpose, as well as the common circumstance of publishing online, as this chapter has shown, thus fulfilling these conditions of being a community.
“A second characteristic is the moral obligations that the members observe toward each other, manifested by social rules, etiquette, and ethical codes.” (ibid, 3) The rules by which writers critique each other’s work on DargonZine may be a form of shared norms for behaviour, and the Internet generally has its own etiquette, sometimes referred to as Netiquette. However, there are likely many informal rules for how individuals who contribute to these sites should behave, and this doesn’t even begin to touch on the social rules which govern interaction between writers who only communicate with each other by, say, email. These are areas which require further research. One of the arguments that online groups cannot be considered communities is that there is no method to enforce the social rules which may be proposed for them. “Free speech is free when it is responsible -- not in the sense of being dreary and commonplace, but in the sense of the utterer having to live with the consequence of their utterances.” (Ryan, 1997, 1170/1171) While the Internet generally is seen as a place where actions have no consequences, this isn’t entirely true. Flames, for example, while often written about by researchers in terms of negative interpersonal communication, can sometimes have the effect of enforcing the norms of online groups. This is true, to take one example, when somebody flames a newbie (somebody new to the group) for asking a question that has already been answered in a Frequently Asked Questions file. In MUDs, there is something called “toading,” which can mean changing the appearance of somebody who has broken a rule to make them look ugly (sort of an online Scarlet Letter). At its most extreme, toading can mean cancelling somebody’s account, effectively barring them from participating in the community. (Dibbell, 1996) As with the rules themselves, how social norms are enforced in the community of writers requires more research.
“Third, if both unity and moral obligations are to form, a community must have stability: it has to maintain a structure (usually more horizontal than hierarchical) over time in order for common traditions and rituals to develop. Geographic or social network boundaries also enable the members of a community to know who is inside and who is outside their own kind, and therefore promote a collective identity.” (Lindlof, et al, 1997, 3) This is an especially problematic issue for online groups. Writers may become members of this community by placing a story on their own Web site and linking it in some way to the Web sites of others, by placing their stories in ezines or by contributing to a collective work. In this way, the community is dispersed across many different sites on the Web. Aside from the Web, writers may also become part of the community by emailing each other; furthermore, if a writer who began by posting stories to a Usenet newsgroup continues to be active in the newsgroup after publishing stories on the Web, everybody in the newsgroup is connected to, and becomes a member of, the larger online community of writers by the transitive process described above. In this way, the community is facilitated by a variety of different digital communication technologies. Writers on the Web, therefore, can be said to form a distributed community, with a large number of modes and sites of interaction. While this may not be immediately intuitive, we have all experienced this in the physical world: we often meet different configurations of friends in a variety of locations; getting together in a cafe or a restaurant is as much a part of community-building as meeting in City Hall or partying in our backyards. Just as community in the physical world has become geographically dispersed, community online is distributed across many sites.
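Wellman’s structural view, and the transitivity of ties it describes, can be sketched in a few lines of code. In the hypothetical network below (all names and venues are invented for illustration), two writers are tied if they contribute to the same page; anyone reachable through a chain of such ties belongs to the same distributed community, even writers who share no page directly.

```python
# A minimal sketch of "structural analysis": writers as nodes, shared
# pages as the source of ties. All names and venues are hypothetical.

from collections import defaultdict

contributions = {
    "ezine_A": ["alice", "bob"],
    "ezine_B": ["bob", "carol"],        # bob bridges the two ezines
    "personal_page_C": ["carol"],
}

# Direct ties: writers who appear on the same page.
ties = defaultdict(set)
for page, writers in contributions.items():
    for w in writers:
        ties[w].update(x for x in writers if x != w)

# Transitivity (Wellman): a tie from A to B and from B to C implies
# an indirect tie from A to C. Follow chains of ties outward.
def community_of(writer):
    seen, stack = set(), [writer]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(ties[w])
    return seen

print(sorted(community_of("alice")))   # ['alice', 'bob', 'carol']
print("carol" in ties["alice"])        # False: no direct tie, only an indirect one
```

This is, of course, only the bare structural skeleton; as the discussion above argues, a network alone does not make a community until shared purpose, norms and stability are added to it.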
Traditionally, community is thought of as tightly bounded, with ties which stay within a neighbourhood; densely knit, where most residents interact with each other; and broadly based, with each relationship providing a wide range of benefits for those involved. Whether or not community really functioned this way is an open question; such definitions may be a form of nostalgia for a time which didn’t exist exactly as remembered. In any case, it certainly doesn’t describe the lives of people in developed nations, most of whom have loosely bounded ties which go beyond their immediate neighbourhood; whose relationships are sparsely knit, with many of the people they know not interacting with each other; and whose ties are specialized, with each providing a limited set of benefits. (Wellman, 1990, xiii) “Current research suggests that North Americans usually have more than 1,000 interpersonal relations, but that only a half-dozen of them are intimate and no more than 50 are significantly strong. Yet, in the aggregate, a person’s other 950+ ties are important sources of information, support, companionship, and a sense of belonging. [note omitted]” (Wellman and Gulia, 1999, 183) This gives a sense of how the distributed online writers’ community functions. Those connected to active nodes such as DargonZine or The Short Story Collective likely feel a relatively strong tie to the community, not necessarily intimate, but perhaps significant. Although most individuals come to the space looking for help with writing, they may well find themselves developing relationships which go beyond such instrumentality. “Even when online groups are not designed to be supportive, they tend to be. 
As social beings, those who use the Net seek not only information but also companionship, social support, and a sense of belonging.” (ibid, 173) On the other hand, those who have their own Web page and are only connected to other writers through comments left online or email are likely to have relatively weak ties to the community. For the moment, these are speculations. The survey of writers which I conducted did not ask about online community; in fact, I was surprised to find that some writers felt themselves to be part of a community. More research will have to be done to determine how strong and weak ties are distributed throughout this (or, perhaps, any other) online community. Distributed community has not been dealt with in the literature on online communities, which has tended to focus on single sites and individual modes of communication on the Internet. Thus, there have been studies of Usenet newsgroups such as alt.cyberpunk (Giese, 1997) and rec.arts.tv.soaps (Baym, 1993); MUDs (Multi-User Dungeons or Domains) such as LambdaMOO (Curtis, 1996; Dibbell, 1996) and Habitat (Farmer and Morningstar, 1991); mailing lists such as METHODS (Babbie, 1996); and conferencing systems such as the WELL (Rheingold, 1993b) and the Canopus Forum (Whittle, 1997). Some studies of online communities describe general characteristics of given technologies while implying that communities form around discrete sites: Internet Relay Chat (Reid, 1996a); Usenet newsgroups (Hill and Hughes, 1997); and MUDs (Turkle, 1996; Bruckman, 1996; Reid, 1996b). Few studies have looked at distributed communities; one exception is Lindlof, et al (1997), which looked at Web pages and newsgroups devoted to the comic book X-Men, although the distributed nature of the community was not considered noteworthy. 
Most of these studies take for granted that their subjects form a community, ignoring the essentially contested nature of applying the concept to online groups. If community is distributed, as I am suggesting for writers, then stability need not be tied to a single site. If a Web site closes, the people who used it as their entry into the community will have to find other connections. While this may alter the balance of their ties (from the relatively strong ties to the Web site to the weaker ties of emailing contacts at other sites), they can always develop stronger ties with remaining community members. The stability of a distributed community resides in the relationships between its members, not in any specific site. This, too, has an analogy in the physical world: when a restaurant or other establishment where people gather closes, they often find another venue in which to continue their communal relationships. Stability has a second meaning for those online: the continuity of identity. Because it is relatively easy to use multiple names online, it is possible to present oneself as different people. This makes rules about appropriate group behaviour harder to enforce: transgressors can avoid punishment by adopting a new identity. (Dibbell, 1996) It also makes relationships harder to maintain if the name one associates with a certain history and set of personal characteristics is used by a different person, or if a person decides to use a different name. It is not necessary for people to use their own names to have stable identities; however, if they choose to use pseudonyms, it is necessary for them to be used consistently. This did not seem to be a problem for the writers I studied, but, again, I would caution that I was not looking specifically for it. 
So, to sum up, the definition of community which I will be using, and which I believe applies to online groupings of people, is that it is a stable network of relationships bound by a common interest or purpose which shares a set of rules governing behaviour within the group. Such a group can be brought together in a single place, whether online or off, but it can also be distributed across many different places. Those who define community differently will object to this definition. The traditional way of understanding community is that it must be based in a geographic location. Some argue that physical copresence is so essential to our definitions of community that no set of relationships which lacks it can properly be called community: “...removing the notion of geography from community -- which must be done for virtual communities -- guts a core aspect of the concept of community.” (Kilker and Kleinman, 1997, 70) However, this may simply reflect the fact that, for most of human history, human beings had little physical mobility; before the 19th century, most people’s life travels did not take them further than 50 miles from the place of their birth. The only relationships possible, under these circumstances, were relationships with people in one’s immediate surroundings. Except for the very wealthy, for whom distant travel was possible, the only community possible was local community. Geography-based definitions of community were challenged well before the creation of the Internet, as individuals became increasingly mobile in the 19th and 20th centuries. The widespread use of the train in the 19th century and the car in the 20th extended the web of people’s personal relationships beyond their immediate geographical area. The telephone made it possible to communicate with -- and, thereby, develop or maintain relationships with -- people regardless of their geographic location. 
The Internet is merely the latest communications technology to extend human relationships beyond what is possible in a single location. What, after all, is a community? Almost all definitions of community agree that it is a set of relationships between individuals (where they tend to disagree is on what the important aspects of those relationships are). As I have just shown, geography, although historically an integral part of such relationships, need no longer be. As Barry Wellman, a chief proponent of network analysis of community, explained, “The traditional approach of looking at community as existing in localities -- urban neighborhoods and rural towns -- made the mistake of looking for community, a preeminently social phenomenon, in places, an inherently spatial phenomenon. Why assume that people who provide companionship, social support, and a sense of belonging only live nearby?” (1990, xii) Although geographically defined communities are certainly a subset of my expanded definition of community, to say that they are the only acceptable form of community is to place unnecessary restrictions on the concept, making it conform less and less to our lived experience. Another argument claims that modern urban communities force people to have ties with a diverse range of people because of their physical proximity, while people online tend to choose to congregate with people who are similar to them. This objection is perhaps overstated: even in the most culturally diverse urban centres, people often congregate in ethnically segregated neighbourhoods or choose to interact with those with similar interests. It also ignores the historical reality that before travel became widespread enough to allow migrant communities to develop in single geographic locations -- that is, for most of human history -- people gathered in small, homogenous tribes. 
Thus, this view of community is already a compromise between traditional notions and the changed circumstances brought on by modernity. A related objection is that “proponents of cyber-community do not often mention that these conferencing systems are rarely culturally or ethnically diverse, although they are quick to embrace the idea of cultural and ethnic diversity. they rarely address the whitebread demographics of cyberspace...” (humdog, 1996, 439) Since certain demographic groups are over-represented online, they can make for insular online groupings. Neither of these objections undermines the idea that there is a community of writers online. As we have seen, other than sharing an interest in their craft, writers online are substantially diverse. “People on the Net have a greater tendency to base their feelings of closeness on the basis of shared interests rather than on the basis of shared social characteristics such as gender and socio-economic status. So they are probably relatively homogenous in their interests and attitudes just as they are probably heterogenous in the participants’ age, social class, ethnicity, life-cycle stage, and other aspects of their background.” (Wellman and Gulia, 1999, 186) While some groups may presently be underrepresented in the community of writers, as they are on the Net generally, there is no reason to believe that, as digital communication technology diffuses into the general population, their numbers won’t rise in the online writers’ community. Nodes specific to certain minority communities (such as “The TimBookTo Home Page” or Morningstar) add a different wrinkle to this discussion. To the extent that they encourage writers to congregate in homogenous groups, they may inhibit the diversity of the community of writers at large. This would depend upon the number and strength of ties the focused node has to the broader community. Unfortunately, this is beyond the scope of the current work. 
Another objection is that communication online, which is based largely on text, is an impoverished form of in-person communication, which has greater bandwidth. When we are talking to another person, their tone of voice, body posture and other non-textual cues give us additional information by which we can judge the validity of what they are saying, cues which are not available online. Without being able to read such cues, we cannot truly know people we meet online, and if we cannot know people, we cannot form communities with them. A number of counter-arguments exist. For instance, “Textual substitution for traditionally non-verbal information is a highly stylized, even artistic, procedure that is central to the construction of an IRC community.” (Reid, 1996a, 398) Emoticons (also called smileys) and signature (sig) files are visual cues which give additional information to readers (although they are still relatively poor in information compared to physical cues). As Baym points out, “The nonverbal cues necessary to frame performance are reinvented within the limits and possibilities of the ascii text format.” (1993, 158) In addition, while it may never be possible to create truly strong ties through simple text (epistolary romances notwithstanding), the case has not been made that weak social ties cannot be conducted through text. Since relatively weak social ties make up the bulk of online interactions, if community is defined as being largely made up of weak social ties, online community is possible. Finally, the necessity of non-textual cues to the development of relationships may be overstated.

Our knowledge of the other individuals we interact with is not complete nor does it come as a single coherent package for us to interpret in one sitting. This knowledge accretes over time but is never complete. Very few people would ask me how I know the ‘facts’ about the individuals I interact with at my university or question their validity. I know these things because they are all facts that have been stated, and, in some cases re-stated in the course of their interactions with me and each other. On the other hand, in part because the medium of interaction is so new, the ‘facts’ I know about the punks [in the newsgroup alt.cyberpunk] have been questioned because I have never ‘met’ them. In both cases the knowledge I acquired about the punks and about my cohorts and colleagues were gathered in a similar way and my sense of their validity and of the personalities of the individuals involved are similar despite the medium of interaction. I know all of the things detailed above because, over the course of time and a series of personal interactions, people have, for various reasons, told me about themselves or told me about people that they know of. (Giese, 1997, 24/25)

We build up our knowledge of who a person is mostly through what they tell us. Non-verbal cues may give us an indication of the validity of any given statement, and it may be necessary for us to develop other methods of determining truth in environments where there are no non-verbal cues; but the consistency of statements (the content of verbal communications) over time is what ultimately allows personal ties to develop. As Giese suggests, this process occurs online much as it does in the physical world. Taken together, I think these arguments show that the lack of non-textual cues is not an impediment to the formation of personal relationships or communities online. As I have mentioned, I was not looking for community among the writers I studied; for this reason, there are many questions which I cannot resolve, only point to as possible subjects for further research. For example, “...we would expect a group to transmit norms and values to aid in the maintenance of the group’s cohesion. Shared norms and values give a group shared identity, without which it becomes less of a group and more of a crowd. [note omitted]” (Hill and Hughes, 1997, 4) How are norms and values transmitted through a distributed community? I have suggested some mechanisms (FAQs, flaming and toading), but how do they actually function in structuring online relationships? Are there any differences between the interactions necessary for a single-site community to form and those necessary to create a distributed community? Another important observation is that “Much on-line contact is between people who see each other in person and live locally.” (Wellman, et al, 1996, 222) Because the focus of my survey was on the experience of writers online, I have little information about whether there is any interaction between writers offline, and, if so, how the interactions in the two realms differ and compete with or complement each other. 
The concept of distributed communities can include sites of interaction in both the offline and online worlds, further complicating what being part of a community may mean. The relationship between these two realms is another area where more research would be useful. While the possibility of a Web-based community of writers was thought by all to be a positive development, one writer did caution that “Ideally, I would like to have a readership that extends beyond the closed circle of aspiring writers.” (Wardrip, 1998, unpaginated) I suspect many writers would agree with this sentiment. On the other hand, it’s not an “either/or” situation: over time, one would hope that there would be both a network of writers and an increasing readership of non-writers. A network of Web writers could help diminish the problem of the authority of Web writing. As more experienced writers interact with those with less experience in informal “apprentice” relationships, the quality of writing on the Internet will steadily improve. Whereas editors are the guarantors of quality in print, other writers may prove to be the guarantors of quality online. Initially, ezines like DargonZine and the Short Story Collective will be places where readers can go with some assurance that the stories they find will have gone through some form of collective editing process; eventually, this will also occur for individual pages, although in a less formal manner. At some point, these efforts may help change the perception of the Web as a sinkhole for poor writing.

Why Not Publish on the Web? 3) Uncertain Regulatory Environment

Many people believe that the ease with which digital material can be reproduced makes it easier to steal such material. Some of the writers surveyed had experienced the publication of their work without their knowledge or permission. 
“There was once this moron [who] took one of my stories and placed it on HIS website claiming that he wrote it,” one writer claimed, “and the WORSE [sic] thing was that he got good comments for it in his guestbook! I just happened to stumble upon his website for some unknown reason (must have been surfing within geocities, i do that from time to time).” (Qining, 1998, unpaginated) Another claimed that “someone tried to post a story w/o my permission and they had to delete it.” (Shirley, 1998, unpaginated) Electronic reproduction is only one possible means of losing control of one’s work. Another writer pointed out that digital publishing on a worldwide network “presents an enormous opportunity for someone to steal your work, put their name on it and publish it in print in another country.” (Gubesch, 1998, unpaginated) While this may strike some as unduly pessimistic, at least one writer claimed that this had happened to her: “Recently I found that a certain Thai celebrity (who is known for surfing the Internet) published a book of essays, one of them bearing incredible resemblance to one of my own. It turns out that she saved my essay to her hard drive and altered a small part of it and passed it as her own.” (Truman, 1998, unpaginated) The odds of this kind of activity being discovered are not great, so there is no way of knowing how often it happens. Many writers look to governments to help them protect their work from such theft. In print, the copyright regime has developed in order to give writers control of -- and help reward them financially for -- their work. Many writers expressed the concern that copyright would not protect their works on the World Wide Web. “The copyright laws do not properly reflect pieces published on the internet,” was a typical comment. (Steffensen, 1998, unpaginated) The writers pinpointed some of the problems with attempts to apply copyright to digital communications media. 
One is the transitory, unfixed nature of digital work. “If someone were lifting it from a book,” one writer explained, “there’d be a book to substantiate my claim [of copyright]. The internet is too changeable a medium to prove much in the way of ownership.” (Stazya, 1998, unpaginated) One way around this problem would be to print out a copy of one’s story and “register your work with the Library of Congress just as if it were an ordinary book. In the future, they may become evolved enough to accept electronic manuscripts or web pages.” (Bamberger, 1998, unpaginated) This would be harder to do for hypertext works, since part of their value lies in their structure; perhaps a map of the structure would have to be included with the content of the nodes when such a work was submitted for purposes of copyright. Furthermore, it is uncertain whether this procedure would cover subsequent drafts of a work posted to a Web page. How many changes could a writer make and still be able to claim that a later draft of a work was similar enough to an earlier one to fall under the original copyright? How much would need to be changed before a new copyright registration would be required? Legislators have yet to deal with such problems. Another problem, arising from the international nature of the Internet, is that different countries have different copyright regimes, offering different levels of protection for authors. As Truman argued, “Unfortunately in Thailand copyright laws fall on deaf ears. It’s absolutely infuriating to find something like this happen and that you can’t really do anything about it even if you did place a copyright.” (1998, unpaginated) Occasionally, a survey respondent would make an untenable claim about copyright. 
For instance, one writer claimed that “...an obvious disadvantage is a certain vulnerability to copywrite [sic] infringement by unscrupoulous [sic] plot miners.” (Muri, 1998, unpaginated) That is, somebody may read what you have written, like one of your plot devices, and use it for a story of his or her own. Copyright cannot stop this; in fact, it was specifically designed to allow it. Copyright covers the expression of an idea, that is, the language used by a writer. Ideas themselves are not copyrightable. Another writer stated that “I’m leery about publishing my poetry on my home page, because of copyright violations. I’ve moved away from publishing poetry on BLAST, and have chosen to submit only non-fiction essays. They do have a copyright, so I’m not too concerned. But I don’t have a copyright on my home page, or anything.” (Blum, 1998, unpaginated) Copyright is assigned to a writer as soon as a story is fixed, however; it wouldn’t matter whether the story appeared in an ezine or on an individual’s page. Furthermore, until recently, all one had to do to assert copyright was to put a line on the page with the work claiming copyright and naming the date and the author in whose name the copyright was being asserted (and even this minimal requirement has since been dropped). Uncertainty about how copyright will be applied to material published online is keeping some people from fully embracing the Web as a publishing medium. “I’ve published summaries of my short stories adn vagueties [sic] about my novels, but I don’t really trust people not to rip stuff off yet.” (Lachesis January, 1998, unpaginated) There is no way of knowing how many people have written work but do not place it on the Internet for this reason. This is not the only uncertainty attributable to government action which may be inhibiting use of the Internet. The other is control over content: censorship. 
Although the 1998 survey generally mirrored and amplified the concerns of writers who responded to a previous, less ambitious survey which I conducted in 1996, this was the one major area of difference. “Internet censorship laws could also [p]ut a significant strangle hold on creativity on the net, but currently there is no real inforcement [sic] of such censorship and the government would probably loose [sic] in court anyways...” one respondent to the original survey wrote, reflecting the concern of several of the writers. (Calef III, 1996, unpaginated) By way of contrast, not only did none of the respondents to the 1998 survey say that censorship was a disadvantage of publishing on the Web, but one went so far as to claim that one advantage of publishing on the Web was the “lack of censorship...” (Morrigan, 1998b, unpaginated) How to account for this difference? In 1996, the United States government passed the Communications Decency Act, which criminalized a broad range of online speech. There were online protests (one involving changing the background colour of Web pages to black), as well as a generally unfavourable response from the offline press. This was undoubtedly in the minds of the respondents when they filled out the survey. By 1998, the CDA had been struck down by the US Supreme Court as unconstitutional, leading many to believe that the battle against government censorship had been won. Thus, one of the 1998 survey respondents could write that “Since the internet remains a kind of anarchist arena (despite recent attempts at legislation), virtually *any* kind of writing can find publication online.” (Wardrip, 1998, unpaginated) As it happens, government efforts to control Internet content are still a problem: not only have attempts been made to introduce new versions of the CDA into the Senate, but various State legislatures have passed their own censorship laws. 
Furthermore, even if the United States government does not ultimately pass laws to censor content on the Internet, many other countries have such laws, a fact which will limit the Web’s application as a publishing medium for some writers and the potential readership for the work of others. Thus, while perhaps not recognized by the 1998 survey respondents, government censorship is still an important issue. On this subject, one writer brought up an issue in 1998 which had not been raised in 1996: private, corporate censorship of writing. “[Y]our server could disaprove [sic] of your essays and drop you (Which HAS happened to me before and a friend of mine who writes erotica).” (Bandy, 1998, unpaginated) At the very least, this could lead the writer to experience the inconvenience and possible expense of moving to a different server. Some writers, however, may have few alternatives, or may not want to go to the trouble of reestablishing themselves elsewhere. The result is that some writing may disappear from the Net, a loss not only to the writers, but to potential readers of that material. The question is: what exactly is an Internet Service Provider? If it is seen as a publisher, then, like a newspaper editor, it has the right to control content on its server. If, on the other hand, it is seen as a common carrier, like the telephone system, then it does not have this right. Here, it seems important for the government to step in and define the nature of the industry, if for no other reason than to give individual content creators such as writers a clear picture of what they can expect when they sign up with an ISP. Copyright, censorship and government regulation are areas of uncertainty which are, or should be, of concern to writers who place their work on the Web. They will be dealt with in greater depth in Chapter Four. 
Evolving Relationships: 2) Web and Traditional Publishing

As we have seen, migration from print publishing to the Web is substantial, while migration from Web publishing to print is growing. There are some wrinkles to the relationship between the two forms of publishing, however. A disadvantage of publishing writing on one’s own Web site cited by more than one writer was “not being able to sell first publication rights since the writing has already been self-published.” (ShanMonster, 1998, unpaginated) First rights are part of a contract which writers enter into with print publications; they guarantee the publications that they are the first to publish a story and that they have exclusive rights to it for a set period of time (six months not being uncommon). By publishing a story online, the writer forecloses the possibility of being paid first rights fees by a print publication. Here, a writer’s cost/benefit analysis comes into play: first rights fees are weighed against the possibility of actually getting published in print. Many authors surveyed had either given up on the possibility that they would be published in print, or had been published primarily in print publications which paid little or nothing; for them, loss of first publication rights was less important than increasing their readership. Moreover, many writers hoped that the reputation they gained by publishing online would be parlayed into payment for subsequent stories. As we have seen, this has happened. First rights are not the only rights; some publications will accept stories which have already been published, although at a reduced fee. Even this may be jeopardized by publishing online: “The contract laws concerning electronic publishing are still very fuzzy. A person has to be very careful to have it clearly specified just HOW LONG a piece will be posted. Some publications archive for a very long time. 
Even though the author still has the residual rights, the marketability of a piece is greatly reduced if it is readily available somewhere for free.” (Richardson, 1998, unpaginated) Here, again, every writer will apply a cost/benefit analysis to determine whether or not to publish online. So far, the discussion has been about digital stories migrating into print. Print stories are also becoming available in digital form. This is most obvious when print magazines create online versions which contain the same content as the originals. However, individual writers have found their own reasons for republishing (the term reprinting seems inappropriate in this context) their work online. For example, asked why he published his fiction on the Web, one writer responded, “Purely for archival purposes. Zine ran its course in print, decided to put highlights on the web.” (Masterson, 1998a, unpaginated) Another writer answered, “my first novel was out of print, so i decided to give away the text, rather than have it gather dust in a drawer.” (Kadrey, 1998, unpaginated) In this way, publishing online is a method of keeping a work available to the public after it stops being available in print. Underlying the republishing of print material online is the belief that digital writing has a “Longer ‘shelf life.’ No one rips the covers off an e-zine and returns it. It isn’t replaced each month by a new one; or, if it is, it usually archives the material on the web site, easily accessible to browsers.” (Sirois, 1998, unpaginated) Back issues of magazines can usually be ordered from the publisher, as can backlisted books which may not be on bookstore shelves. However, once a book goes out of print or the magazine publisher has sold its last copy of the publication, the potential reader can only search through used bookstores or libraries or employ specialized search services to find a particular work. 
(Assuming that the person knows that it exists; when a book goes out of print, it drops out of catalogues and other promotional material which would lead a potential reader to it.) By publishing on the Web, its proponents argue, these problems are alleviated. However, other writers argue just the opposite. According to one author, “fiction on the Web has a pretty short lifespan.” (Sherwood, 1998, unpaginated) As mentioned above, Web pages are highly impermanent. As one writer asked, “What happens to the data when a site folds...?” (Jeremiah Gilbert, 1998, unpaginated) Common sense would suggest that the rights should revert to the author, but this is still highly untested ground. Another writer pointed out that “On the internet, texts can appear and then disappear without a trace. I read recently that Noam Chomsky refuses to use the internet for this very reason. If an article appears in the NY Times, for example, it is essentially set in stone -- I can always go back and find that article. Obviously, the same is not true of the internet. Texts can be deleted or modified, or what have you.” (Wardrip, 1998, unpaginated) In fact, writers may be given the illusion that their work exists on the Web when it doesn’t, since, “Sometimes an e-zine folds without informing those who have submitted works.” (Weiss, 1998, unpaginated) As with most of the conflicting opinions between writers that we will come across in this chapter, there is some truth in both positions. At the moment, the Web is highly unstable, and many pages which are here today will be gone tomorrow; on the other hand, there are pockets of stability, and some work will have a longer life online than it would have in print. This, too, must figure in the cost/benefit analysis a writer performs when considering whether to publish online.

Hypertext Fiction on the Web

The word in oral cultures is ephemeral, evanescent; it is literally a puff of air. If you are not there when it is spoken, you miss it. The word in print cultures, by way of contrast, is fixed and, as long as the physical artifact exists, it can always be consulted. Unlike stories in oral cultures, which are never exactly the same from one telling to the next, and, in fact, can be changed according to the input of listeners, printed stories are fixed.

Print media are essentially linear. The nature of the medium is such that readers are encouraged to start at the beginning and read until the end. This is not to say that there haven’t been experiments with non-linear forms of printed text: ancient religious works, for example, featured a central text surrounded by commentaries which referred to highlighted passages. Indeed, the notes and references in academic works, including this dissertation, give the reader the opportunity to move in and out of the main text. Furthermore, it is always possible for the reader to skip pages, or even chapters, moving back and forth at will. Having noted these exceptions, I maintain that a primary characteristic of print is that it encourages a linear reading of a text, and that these exceptions work against the logic of the medium. As readers, our experience of text is largely linear.

As we saw in Chapter One, computer mediated communication allows for a different method of arranging text: what Theodor Nelson called “hypertext.” The units of textual language are well known: letters, words, sentences, paragraphs. The units of a hypertext language include all of the units of text, but add two additional units: the node and the link. A node (which is sometimes referred to as a “lexia”) is essentially a chunk of text; it can be as small as a single word or as large as a novel. A link is a device which connects nodes.
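The Web’s implementation of these two units can be sketched in HTML, the markup language discussed later in this chapter: a node is simply a page (a file), and a link is an anchor tag pointing at another page. The file names and story text below are invented for illustration:

```html
<!-- door.html: a single hypothetical node of a hypertext story -->
<html>
<body>
<p>The hallway ended at a door she had never noticed before.</p>
<!-- the anchor tag is the link: it connects this node to another -->
<a href="cellar.html">She opens the door.</a>
</body>
</html>
```

A reader who follows the link is delivered to cellar.html, a second node; the two files, joined by the anchor tag, form a minimal hypertext.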
Whereas print text encourages a linear reading, digital text encourages a non-linear reading. (Following this reasoning, in the rest of the dissertation, I will alternate between the terms “print” and “linear” writing.) I believe that the addition of links and nodes to textual language completely changes our experience of texts. In addition, the ability to link nodes inherent in digital communications systems makes new esthetic experiences possible, and requires new esthetic criteria. In short, it is a new art form, with all that that entails. Because it is so different from traditional text, I have chosen to look at hypertext fiction separately from other forms of fiction available on the World Wide Web.

Before we look at hypertext, however, we have to differentiate between two different types of it: individual and collaborative. With individual hypertext, all of the nodes and links are created by a single writer; with collaborative hypertext, the content of the nodes and, sometimes, the links are written by different people. Although they have many aspects in common, we shall see that individual and collaborative hypertexts have some practical differences.

What Hypertext is Available on the Web?

In my Master’s thesis, which is about telling fictional stories in hypertext, I argued that the possible structures of hypertext formed a continuum based on complexity and connectivity. On one end, there were stories which were largely linear and highly schematic, with few links, offering little choice to the reader. On the other end, the stories were completely non-linear, a web of links with no discernible structure, with a large number of links relative to their nodes which gave the reader a large number of paths through the work. In between were works which contained increasing numbers of links relative to their nodes and decreasing structural schematization, leading to increasing narrative complexity.
(Nayman, 1996) Although relatively small in numbers at the time of my survey, hypertext stories on the World Wide Web ran the gamut from one end of the continuum to the other; in form, most of the stories are unique.

At the basic end of the spectrum is Matthew Gray and Jake Harris’ Matthew and Jake’s Adventures. (undated, unpaginated) The text in each node is less than a screen long; usually, there are single links to the next part of the story, until the ending, at which point there are two links, one to a “happy ending” and one to a “true ending.” In terms of the narrative, the interactivity is minimal. One thing that Matthew and Jake’s Adventures does have is text links embedded in the narrative to non-fictional material. Click on the fictitious characters’ names, for instance, and you will be taken to a page with a brief description of the real people on which they are based, a page which includes a link to the person’s home page. There are also links to off-site non-fictional Web pages: areas of the college where the two study, for example, and meteorological information. Other interactive stories which had links to off-site non-fiction Web pages included The Electronic Chronicles (Wortzel, 1995, unpaginated) and Cutting Edges (Nestvold, 1997, unpaginated). Linking a fictional narrative to non-fictional Web sites blurs the distinction between the two forms of information.

Linking to information off of one’s site carries the risk that the Web page one links to will disappear. One of the links in Cutting Edges, for example, to Webster’s Dictionary, was dead when I tried to access it. Links to off-site material can enrich a work, but they require constant monitoring to ensure that they work. There is an additional risk: that once a reader has moved to a page off-site, she or he will not return to read any more of one’s work.
Further along the continuum were the stories which consisted of chunks of text at the bottom of which the reader was given two or more choices. These are digital versions of the “choose your own adventure” stories which have appeared in print. Examples of this type of story included: If We Even Did Anything (Wilson, undated, unpaginated), An Interactive Cyberpunk Tale (Dessart, undated, unpaginated) and A Further Xanadu (Robert, undated, unpaginated). Each of these stories has its own unique features. A menu on the opening page of If We Even Did Anything, for example, gives the reader the option of beginning to read the story on any of its 38 nodes. In contrast to most of the other stories, which had a single start page, this approach offered a much greater level of narrative complexity by allowing the reader to jump in at any point. (It also gave readers a sense of how big the work is, something which is taken for granted in print but is rare in hypertext.)

The most ambitious of these stories may have been A Further Xanadu, which boasted approximately 160 discrete chunks of text and 240 links. The chunks of text did not reside in separate files, however; they were divided into six very long files. Each chunk was separated from the others by two screens of single dots down the left hand side. Robert used some conventions which limited the amount of interactivity: when the only choice is “next,” for instance, the text is really linear; when the only choice is “back,” the link is obviously a dead end which does not advance the narrative. Moreover, since the chunks of text are placed into a small number of large nodes, it’s too easy to simply scroll down the text and read it linearly; this cannot be done when the chunks are given separate nodes.

One of the stories, 24 Hours With Someone You Know, contained an example of a major problem with choose your own adventure stories.
At one point, the character we’re following is walking down the street; we are given four choices of store for the person to enter. Three of the choices lead to a node with a single paragraph of description, after which the reader must choose to enter one of the other three establishments. (Burne, undated, unpaginated) The problem with this section of the work is that it will become obvious to the reader that only one of the four original choices moves the plot forward; the others are there to give the illusion of choice. This is likely to alienate the reader who, as we shall see below, needs to feel like her or his choices are meaningful. (This particular problem is easily solved: further develop each of the underdeveloped scenes in the three stores, adding two or three more nodes before the character leaves them; this would make it less obvious which path was the one the writer wanted the reader to follow.)

So far, the reader has been given text links at the bottom of each node. Further complicating the possible structure of a hypertext narrative are links embedded within the text of the node itself. Under the Ashes (Inglis, undated, unpaginated) is an example of this type of linkage. These types of links have the same potential problems as links at the bottom of a node: for example, if there are no further links in a node, the reader has been led to a dead end. Too many dead ends will frustrate readers, making them feel that the writer is leading them on rather than allowing them to direct their own experience. Where links at the bottom of a page tend to have the purpose of moving the story forward, however, embedded links offer a variety of literary purposes, some of which will be explored below. Embedded links can also increase the complexity of a hypertext by creating a thicker web of connections between chunks of text.
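The two styles of linkage can be sketched in HTML (the node contents and file names here are invented for illustration). In the first fragment, the links are gathered at the bottom of the node, as in the choose your own adventure stories; in the second, the link is embedded in the prose itself:

```html
<!-- choice.html: links gathered at the bottom of the node -->
<p>Alfie hesitated on the landing, listening.</p>
<a href="room.html">Alfie enters the room.</a><br>
<a href="shins.html">Alfie kicks John in the shins.</a>

<!-- embedded.html: the link woven into the text of the node -->
<p>Alfie thought, not for the first time, of
<a href="mother.html">his mother's kitchen</a>, and kept walking.</p>
```

Structurally the two are identical anchor tags; the difference lies entirely in where the writer places them, and therefore in how the reader experiences the choice.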
One aspect of text links is that the reader, knowing what has already been read, can avoid revisiting nodes. Some may find this an advantage. However, the writer may have reason to want certain information repeated at strategic points in the narrative. One solution to this problem is to place generic icons at link points within a node (as Greenwald does in Fields of Night (undated, unpaginated)). Generic icons do not telegraph the content of the node to which they are linked, so they offer an element of surprise. (In addition, the same icon can link to different nodes depending upon where it is placed in the text, lowering the chance that the reader will be able to anticipate the direction of the narrative.) More important, graphic links offer a second method of navigating through a text. This increases the complexity of the work, moving it further up the continuum. Jack Tar, for example, contains links embedded in the text and a bar of graphics at the bottom of each node; each graphic actually represents the background of the node to which it links. (Vinik, undated, unpaginated) Following the text links creates one set of meanings; following the graphic links creates a different set of meanings. Using some combination of the two to navigate through the work offers a richer set of possibilities than either would on its own.

The most ambitious work I came across, the one furthest along the continuum, was Marjorie Luesebrink’s The Probability of Earthquake... The text was sparse, ranging from two or three paragraphs to a few short, aphoristic lines; one page had no text at all, but a graphic of a handwritten note. The text contained embedded links. Each page had a lot of graphics, many of which were links to other pages; one graphic, called the “Star Maps,” contained seven distinct links. Finally, at the bottom of the page was a graphic with half a dozen additional text links.
Thus, there were three different modes of travelling through this work, each carrying its own meaning. How these different interactive narratives compare in terms of their linking structures and interactivity is summed up in Chart 2.5.

Compared to their single author relatives, collaborative hypertexts were conservative, linear texts with little linkage which would be placed at the beginning of my continuum. Collaborative hypertexts come in two forms: ongoing narratives and shared worlds.

types of links                  site example                        interactivity
-----------------------------   ---------------------------------   -----------------
few at node ends; embedded      Matthew and Jake’s Adventures       minimal
at node ends; menu              If We Even Did Anything             minimal to medium
icons, embedded in text         Fields of Night                     medium
text, embedded in text          Under the Ashes                     medium
icons; embedded text            Jack Tar                            medium to complex
embedded text; two graphics     The Probability of Earthquake...    complex

Chart 2.5: Online Hypertext on the Continuum of Linking/Narrative Complexity

With an ongoing narrative, new writers simply add chunks to the end of what has already been written. Tales from the Vault [http://www.talesfromthevault.com/] contains over 100 such stories. There is no need for collaborative stories to take this form. The technology allows for links to be embedded in chunks; an author could simply ask the person maintaining the story to place links at one or more places in the existing parts of the story. This would take effort, however, and, as we shall see, ongoing narrative fiction writers tend to be less invested in their work than individual hypertext authors.

Shared worlds collaborative works take place in the same universe, with shared history, geography and/or characters. As we have seen, DargonZine is an example of a shared world. Another example is The Company Therapist [http://www.thetherapist.com/]. Here, the common element is that each writer creates a patient for fictional psychiatrist Dr. Charles Balis; each story unfolds as a series of encounters between doctor and patient. The writer can also supply additional material, such as drawings made by the patient, stories written by the patient, correspondence by or about the patient, etc. Shared worlds grow in complexity as the number of contributors grows. However, each writer’s contribution is discrete and linear. They are similar to print anthologies of short stories on a given theme or other shared characteristic. If they are in any sense non-linear (by the choice of story order, perhaps?), they are at the very beginning of my continuum.

Why Hypertext?

Linear, print text is at least 500 years old. Writers know how to use it; readers are used to experiencing it. Digital hypertext fiction, by way of contrast, is, depending upon how one measures such things, between 10 and 20 years old.
The principles by which writers of hypertext fiction may create the most pleasing experiences for readers have yet to be determined. For their part, readers are not used to non-linear storytelling, and may find hypertext fiction confusing or frustrating, inasmuch as they are conditioned to read linearly. In this very uncertain environment, why would people choose to write hypertext, rather than linear, fiction?

Many writers are excited by the possibilities of experimenting with a new form of storytelling. “Print texts are wonderful, and I love them,” stated one author. “As a writer, however, I feel that hypertext is an interesting experiment into narrative structures.” (Luesebrink, 1998, unpaginated) The fact that the esthetic principles of non-linear narrative are, for the most part, still to be discovered appealed to some writers: “The conflict-crisis-resolution model has proved satisfying for many people. To find something as satisfying [in hypertext] would give me great joy.” (Sanford, 1998, unpaginated) Another writer had an interesting metaphor which summed up this attraction of the medium: “I consider myself to be a craftsman, and hypertext offered a new set of implements to use for telling a story. It’s sort of like a carpenter -- always looking for a new tool to hang in the shed.” (Sorrells, 1998, unpaginated)

Traditional narrative requires that the writer make choices: Phil can marry Joan or he can marry Mary, but he cannot do both. One of the attractions writers have to hypertext fiction is that this need not be the case: “You can take all the ‘what ifs’ and make them happen and it will still make sense.” (Burch, 1998, unpaginated) The possibility hypertext allows for alternative narratives need not be confined to plot; the same story can be told from different points of view, or even in different narrative styles.
As one author put it, “sometimes stories need to be told more than one way.” (Crumlish, 1998, unpaginated) One example of this flexible aspect of hypertext occurred in No Dead Trees, a collaborative fiction where authors wrote stories in a shared fictional universe which, taken as a whole, can be considered a novel. “We can go anywhere we want at any time,” one of the editors claimed.

For example, in an earlier portion of NDT, Monica killed a non-character (Aileesha). In traditional print, this non-character would have had no life beyond the printed page and plot development. In the Novel she was killed simply to show that Monica is a vampire without feeling. But hypertext allows us to give that non-character a life, to tell her story. Out of that one act of murder, a whole section of the Novel grew. The girl Monica killed was given a name -- Aileesha -- a past, a mother and father. She’s since grown to become a popular character in the Novel. (Benson, 1998, unpaginated)

In addition to illustrating how flexible hypertext is, this anecdote also shows an advantage of digital text over print text: in print, there is a limit to how many pages a given volume can hold, so writers are encouraged to focus only on the main conflicts in their story. Because digital media are, in theory, infinitely expandable, the lives of secondary, tertiary or even more minor characters can be profitably explored; in effect, they can become primary characters in a different branch of the story.

Some writers had experience with non-linear forms which made it easier for them to envision creating non-linear literature. A small number, for example, had used hypertext in contexts other than fiction writing. “I’d been working on, and in, hypertext environments for some time,” one writer described his experience, “having produced a large ‘appendix’ to my Honours thesis in the form of a Hypercard project, and had internet access since 1994, and was interested in the ways in which hypertextual writing and reading could function as a model for critical theory and postmodern interpretive modes. Publishing on the Web was a logical next step, after designing several literary sites and writing analytically about hypertextual fiction.” (Kiley, 1998b, unpaginated)

A couple of writers stated that they were influenced by their experience of experimental non-linear forms of print literature. “As a kid I liked Choose Your Own Adventure books and computer adventure games. I’ve always liked shapes and spatial relationships. I suppose the two came together.” (Inglis, 1998, unpaginated) Choose Your Own Adventure books contained chunks of narrative one to several pages in length which ended with a choice of narrative direction (“If you want Alfie to enter the room, go to page 37; if you want Alfie to kick John in the shins, go to page 62.”).
People who had read these books claimed that they helped make them open to exploring the possibilities of digital non-linear literature.

Writers of collaborative fiction had somewhat different motives. “Collaborative prose is sooooo much more fun to write...” one author stated, “because you NEVER know what’s going to happen next... sometimes I lie awake at night, wondering, ‘What is “so-and-so” gonna do to my character in “Story X”?!... Oh, man, if he kills him, I’m not gonna be able to go on...’” (Douit, 1998, unpaginated) Collaborative fiction can be like an elaborate game where writers develop story segments which either help or hinder the writers who create later segments. Thus, a fundamental difference emerges: whereas solo hypertext writers generally are serious about exploring the creative possibilities of a new medium, collaborative hypertext writers generally are out to have a good time. (Keep in mind, though, that they may advance the creative possibilities of the medium whether it is their intention or not.)

One of the most often cited reasons for participating in collaborative fiction was the “two heads are better than one” concept: “Traditional fiction is usually more boring because the writer can lose track of what he wants to say and ends up writing a whole lot of ‘filler’ than an actual story, [but] this doesn’t really happen in collaborative writing because there are many writers that can add to the story to make it interesting if it appears to be getting boring in content.” (Kira Moore, 1998, unpaginated) Individual writers were prepared to admit that they were not always inspired to write, and were happy that others would be there to continue a story when they did not feel they could. On a more positive note, different writers bring different experiences and sets of knowledge to a work, expanding its possible scope beyond what a single writer can imagine based on what she or he has experienced or knows.
Writers who felt this way believed collaborative writing pages were cauldrons of creativity: “The combined brains of many have the capacity to come up with brilliant new ideas.” (Thomas, 1998, unpaginated) For this reason, they argued that the collaborative process resulted in better works of fiction: “These stories are much richer in creativity [than] work written by a single person.” (Anya, 1998, unpaginated) One writer even went so far as to claim that collaborative writing is “almost a paradigm of the net itself.” (Filion, 1998, unpaginated)

As with individual hypertext authors, some collaborative fiction writers had had previous experience which helped them see the possibilities of the Web. “I’m already involved in fantasy role-playing,” one writer explained, “which really is a collaborative work. Jumping from there to a shared-world environment isn’t that big of a leap.” (Knowlton, 1998, unpaginated) In role playing games, one person creates a fantasy world in which players can have adventures; each participant gives life to at least one character, whose actions that participant will determine in the course of the game. The most well-known role-playing game is Dungeons and Dragons. Role playing games can be seen as a form of narrative which is constructed by the moment-to-moment decisions of each of the players interacting with each other and the environment which was created for them. (Nayman, 1996) The creation of this narrative is a collaborative effort among the players, in much the same way, Knowlton claimed, that collaborative fiction is the collective creation of all of the writers involved. The two do not have to be entirely analogous for us to see that those who had experience with role playing games would more easily appreciate collaborative fiction than those without such experience.
Finally, one writer stated that “I prefer collaboration simply because I learn so much from the other writers.” (Milano, 1998, unpaginated) As we have seen, a community of writers appears to be emerging on the Internet. In a way similar to ezines which encourage writers to critique each other’s work, Web pages which feature collaborative fiction can be seen as nodes around which small groups of writers collect in order to work with other writers to improve their fiction.

Why the Web?

As we have seen, writers of traditional prose have the choice of publishing in analog print or digital online formats. Because hypertext relies on digital media for its very existence, writers of hypertext generally do not have the option of publishing their work in print form. (It is possible, of course, but the links which are such an integral part of digital hypertext seem artificial in print.) When considering potential publishing venues, creators of hypertext have a different choice: the World Wide Web or CD-ROM.

Many of the arguments favouring the Web over CD-ROM echo the arguments favouring the Web over print. For example, one writer pointed out that the Web offers writers “an automatic conduit to readers” which is not possible with CD-ROM, which requires a substantial publishing and distribution industry. (Robert, 1998, unpaginated) Another claimed that he placed his hypertext on the Web because “I didn’t see any way to sell it at the time. So I thought I might as well publish and get some feedback from readers.” (Inglis, 1998, unpaginated) This isn’t exactly the same as the situation in print, a mature industry where a lot of writers are competing for a relatively small number of spots in magazines. CD-ROM is an immature industry, a recently created technology with a small market and uncertain economic future.
Still, the sense that hypertext writers put their work on the Web because the alternative offers little compensation is a familiar one.

As with print, some writers see the immediacy of publication as an advantage of the Web. “I don’t have to wait a year for the new innovations I’ve used to appear,” one writer explained. “CDs are prestigious but not keeping up with the revolution.” (Sanford, 1998, unpaginated) By the time a CD-ROM has gone through the editorial process and reached the market, new technical methods of creating hypertext may have been created which may make it passé. In addition, a writer may be experimenting with new narrative structures or other esthetic innovations and wish to have immediate feedback; as with linear text, the Web makes this possible in ways other media do not (as Inglis stated above).

This may be changing, however. Until recently, to get CDs burned one had to go through a company with that capability. Relatively inexpensive CD-write drives are now making it possible for individuals to burn their own CDs. This could mean that an individual could create a CD-ROM and immediately have copies for distribution. (This would be analogous to self-publishing in print using computers as desktop publishing tools.) This would still leave the problem of distributing the material on the CD-ROMs, a problem alleviated by the Web.

Other reasons for publishing on the Web are specific to hypertext. “The web is still mostly text,” one author stated. “Great for a writer.” The CD-ROM industry has bypassed straight text and is known mostly for creating works in “hypermedia,” works which use the ability to link digital information to build complex webs of text, graphics, video and audio. This advantage may not be long-lived, however: as graphics on the Web become more sophisticated (a function, at least partially, of increasing bandwidth), computer users will likely see it less and less as a medium for distributing text.
Another advantage of the Web over CD-ROM is that “the latter is operating system dependent.” (Robert, 1998, unpaginated) CD-ROMs can usually only be played on computers using either a Macintosh or Windows operating system; those who do not have the appropriate operating system are effectively barred from accessing the work. Creating a second version which can be used with the other major operating system (and even a third version which can be used with, say, the Linux operating system) can be costly and/or labour-intensive. The Web, by way of contrast, was designed to be accessible independent of a computer user’s operating system, so placing a work there can increase the work’s potential readership without additional time or resources.

This is not to say that the Web is perfect in this regard. “Other than the economics,” one writer commented, “the web’s...main limitations are...the incompatability [sic] of web browsers, and the need to assume a technologically ‘lowest common denominator’ among your audience. With television, for example, all the sets are basically compatable, [sic] and you can assume your audience can receive picture and sound and probably color. On the web, you can’t even assume that the audience is downloading the images.” (Pipsqueak Productions, 1998, unpaginated) Not only will Netscape Navigator users see a slightly different page than Internet Explorer users, but browsers allow users to customize how they view Web pages, which means that no two users may see a page in exactly the same way. Because they are platform specific and do not allow users very much leeway for customization, CD-ROMs are viewed exactly the same way by all who can access them.

Perhaps the most common reason for favouring the Web was that, “It can evolve over time... It is never finished.” (Crumlish, 1998, unpaginated) CD-ROM, like print, is a fixed medium, which means that a work must be finished before it is committed to the medium.
The Web, on the other hand, is expandable, which makes it a perfect place for works in progress, since new nodes can always be linked to existing nodes. For this reason, the Web is particularly well suited to collaborative works of fiction:

We structured The Company Therapist as a web experience. It couldn’t exist in a CD-Rom as a continuing phenomenon. Writers, who’ve [sic] I’ve never met, create work which is published and which acts as an incentive to create more work in a serialized story. In a CD-Rom, we could retain the navigational elements of the underlying hypertext structure, but we would lose the new work of the authors’ themselves. Perhaps, if The Company Therapist ends its run, we’ll publish a CD-Rom of the whole. But then it will be a document of what was rather than a continuing expression of what is. It will be fixed like a snapshot in the past: a tantalizing glimpse of a sunny beach frolick. (Pipsqueak Productions, 1998, unpaginated)

Not all of the writers felt that this was an advantage, however: one claimed that the Web “loses by its impermanence (it takes active effort to maintain a site, whereas books, etc. once printed are out there for a long time).” (Robert, 1998, unpaginated) In this view, the potential for continually adding new material, which excites some writers, actually becomes a burden necessary to keep readers coming back to one’s site.

The Web has other disadvantages which the writers noted. One is that “Slow connections and slow systems can make downloading pages tedious.” (Burch, 1998, unpaginated) Using a CD-ROM, one’s experience is not affected by the level of traffic on a network (although the speed with which one’s computer transfers information from a CD drive to one’s screen is an important factor in how quickly one can access information on a CD-ROM). There are other potential “Problems with the server you may be using, ie. technical problems that aren’t actually your fault...” (Kira Moore, 1998, unpaginated) The problems may range from intrusive advertising, to ISPs getting bought out by other ISPs, forcing domain name changes which make it confusing for readers looking for the page your work is on, to ISPs which close, abruptly leaving a page without a home. None of these problems exist with CD-ROMs.

As with the decision over whether to publish regular text in print or online, the decision to publish hypertext online or on CD-ROM is complex.

HTML or Not HTML?

Whether to publish on the Web or CD-ROM is not the only technical decision a hypertext writer must make. There is also the question of what authoring tool to use. This may seem obvious, since the World Wide Web has become the sine qua non of digital communications.
It is worth remembering, however, that HTML (HyperText Markup Language) is the newest tool for creating hypertext: Storyspace, a different computer programme, was first used by Michael Joyce to create Afternoon, perhaps the first true hypertext novel, in 1989, while Hypercard, a third programme, began being bundled with the Macintosh computer in 1987. (Barger, 1996, unpaginated) The earliest hypertext works were created using these two programmes, not HTML. Of the 20 hypertext authors who responded to the survey, exactly half (10) had used programmes such as Hypercard or Storyspace in addition to creating work in HTML.

One of the advantages of HTML cited by one of these writers is that “Storyspace is over elaborate. It’s easier to learn HTML.” (Ryman, 1998, unpaginated) HTML uses sets of markers known as “tags” to create its effects; Web pages can be prepared in text editors common to all computers (although programmes which have been written to create Web pages are available). Storyspace, by way of contrast, requires a specific type of software which the author must learn how to use. This difference may be overstated, however. Having used both systems, I would say that while the basics of HTML can take five to 10 minutes to learn, the basics of Storyspace need only take 30 minutes to an hour to learn. (Of course, the subtleties of either can take a lifetime to explore.) In addition, a writer would have to take at least as long to learn how to use an HTML authoring tool as Storyspace.

A similar argument is sometimes made for readers: “I am primarily a hypertext writer for CD-ROM. The web was a choice because folks had access to electronic writing easily...
The advantage is that so many people know how to use the web and understand hypertext reading in that environment.” (Luesebrink, 1998, unpaginated) The Web appears to have superseded other hypertext environments; far more people -- readers as well as writers -- are familiar with it than with Hypercard or Storyspace. Using either of those programmes would limit the audience for a work to the people who own them and know how to use them. Furthermore, as has been noted, HTML allows “Universal readability through web browsers.” (Inglis, 1998, unpaginated) Works created in Hypercard cannot be read by Storyspace, and neither can be read by Web browsers.

A couple of writers claimed that “Storyspace is a much better authoring tool than anything I’ve seen for html because of the graphical view it gives you. I think this is an invaluable aid to anyone in dealing with the kinds of complexities that hypertext authoring presents.” (Robert, 1998, unpaginated) With Storyspace, the writer creates nodes and fills them with text, then uses a separate function to link them together. Unlike HTML, Storyspace contains a map of the set of links and nodes, giving the writer a graphical representation of the entire work. As Robert suggested, this is extremely useful because of the importance of structure to hypertext writing. For this reason, some writers use Storyspace to create a first draft of a work; they then copy the text into text files and recreate the links in HTML.

It is unfortunate that the Web has eclipsed other tools for non-linear writing (outside, perhaps, of universities, where Storyspace is widely available). It would be useful to determine whether different hypertext authoring programmes affect what writers produce; however, since the number of writers outside universities creating in formats other than HTML is dwindling (remember, half of the hypertext authors in the survey had used only HTML), this line of research is becoming increasingly difficult to pursue.
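The simplicity the writers ascribe to HTML’s tag-based markup can be illustrated. The sketch below is hypothetical (the node names, file names and story text are invented for the example); it simply shows that an HTML hypertext node is nothing more than a text file containing link markup, which is why any text editor suffices to produce one.

```python
# A minimal sketch of one node of a hypertext story as an HTML page.
# The node names ("library.html", "cellar.html") are hypothetical;
# the same file could be typed by hand in any text editor.

def make_node(title, body, links):
    """Return an HTML page whose hyperlinks are the story's branches."""
    anchors = "\n".join(
        f'<p><a href="{target}">{label}</a></p>' for target, label in links
    )
    return (
        "<html><head><title>" + title + "</title></head>\n"
        "<body>\n<p>" + body + "</p>\n" + anchors + "\n</body></html>"
    )

page = make_node(
    "The Garden",
    "Two paths diverge beneath the elms.",
    [("library.html", "Follow the gravel path"),
     ("cellar.html", "Take the steps down")],
)
print(page)
```

Each `<a href=...>` tag is one link out of the node; a whole Web hypertext is simply a folder of such files pointing at one another.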
Constructing Non-linear Narratives

Hypertext writing adds two new elements to stories: nodes and links. This is fundamentally different from traditional, linear textual fiction (where the reader is free to link different sections of the work in the process of reading by, for instance, turning pages back and forth, but which, nonetheless, encourages a beginning-to-end reading). If, as I believe, this means that hypertext is a new art form, it will require new practices and new esthetics. How, for example, should a writer approach constructing a narrative made up of nodes and links?

Nine of the hypertext writers surveyed (45%) stated that they began by creating the content of the nodes, and then decided how to link them. “I always start with writing,” one author typically explained. “If the form does not follow the content, I’m not interested.” (Crumlish, 1998, unpaginated) On the other hand, seven of the writers surveyed (35%) claimed that they began with the structure of the work and proceeded to fill in the content of the nodes. “I start with a global structure, which determines most of the main (‘longer distance’) hyperlinks, then more local ones are determined as I go along.” (Robert, 1998, unpaginated) The remaining four respondents (20%) claimed that they worked on structure and node content at the same time. Most writers likely go back and forth between the two elements at most stages of the process; nonetheless, the majority appear to begin by doing the bulk of the work on one or the other. There need be no contradiction here; both methods have their uses. Which approach a writer takes will be determined by, and subsequently determine, the shape the writer initially believes the narrative will take.

Broadly speaking, we can refer to two categories of hypertext narrative structure: rigid and fluid.
Rigid structures are highly schematic, with little flexibility in the placement and linkage of nodes (corresponding to the beginning of the interactivity continuum explained above). Parallel structure is a common form of rigid structure. In a parallel structure (Figure 2.2), two or more lines of narrative run alongside each other, allowing the reader to move between them. In Figure 2.2, line A could be a series of events told from one character’s point of view, line B could be the same events told from a different character’s point of view and line C could be the events told from a third point of view, a commentary on the first two views or some other, related series of nodes. Another possible use of the parallel structure: line A could be the events in an adult’s life; line B could be related events from the character’s childhood; line C could be social events which have an impact on the character in either (or both) time periods. The possible uses to which parallel narrative structures can be put have, to this point, barely been explored. For our immediate purposes, though, it should be noted that writers who work with structures in the rigid category are much more likely to begin with structure and then create the content for each necessary node.

Fluid structures can be conceived as a web (Figure 2.3). Unlike rigid structures, fluid structures are not schematic; there is no necessity to how the various nodes are linked (closer to the end of the interactivity continuum). Writers who begin by creating nodes and start structuring them after enough have been written are more likely to create works with fluid structures. With rigid structures, links are largely (although not entirely) predetermined by the structure. With fluid structures, on the other hand, writers are much freer to link various nodes.
One of the esthetic challenges for all hypertext writers, but especially writers employing a fluid structure, is determining how to link nodes. The authors who responded to my survey suggested several criteria. Sometimes, links are used simply to convey information. One writer said that he linked “character’s names with backstories for the characters, from previous writing.” (Rodebaugh, 1998, unpaginated) This need not be limited to characters, however:

Figure 2.2 Parallel Interactive Narrative Structure

geography, politics, national or regional histories, any aspect of a narrative which would enrich a reader’s experience can be linked into it. (As we have seen, such links need not be to fictional material. Imagine an E. L. Doctorow novel with links to Web pages with information on the real characters in the work, a science fiction novel with links to pages containing information on the real science on which it is based, historical novels with links to pages with the real history of the time and place in which their stories occur, and so on. This would continue the hybridization of fiction and non-fiction which became part of the 20th century literary landscape, further problematizing the whole notion of a divide between the two modes of narrative.)

One motivation for a link has to do with the prosaic issue of the direction of the narrative: “In this case, it [a link] was [created] when a decision had to be made.” (Dessart, 1998, unpaginated) We have come across this idea before: it is the form of the classic “choose-your-own-adventure” story: the detective/reader is given the choice of which clue to follow up on; the romantic lead/reader is given the choice of which lover to pursue, and so on.

Figure 2.3 Web Interactive Narrative Structure

For simplicity’s sake, I have labeled one node “H” for “Home.” As we have seen, although many interactive narratives start at a single page, others allow for multiple entry points into a narrative.

A similar use of linkages for simple plot advancement allows the reader to see the events of the story through the eyes of different characters. This was the intention of Sorrells’ The Heist. The question then becomes: at what point do you allow the reader to move from one point of view to another? According to Sorrells, “each little scene or chunk of a scene usually had some natural places where one character bumped into another, which I would then use as a point where the reader could make a choice of whose POV they wanted to follow.” (1998, unpaginated) This is not as simple as it may first appear, however: the reader can always backtrack and see subsequent events from a different point of view, creating a complex web of complementary and contradictory interpretations.

Some authors agreed that links should primarily be used in the service of the narrative, but in more complex ways. “I link where the story jumps to a different tone, character, thread of the plot, or location.” (Greenwald, 1998, unpaginated) Unlike the previous type of link, which was intended primarily to connect events, this type of link uses the connections between story elements to create richer meanings.

One writer argued that it would be necessary to order the links in certain ways in order to make narratives comprehensible to readers. “I think a linear writer know [sic] where the love scene comes and where the murder comes -- like, you can’t have the sex before the first date,” she explained. “Certain emotions are dependant on certain prior experiences. In the same sense, I know where the links go.” (Eisen, 1998, unpaginated) While how we view events in a narrative certainly depends upon the events which we have experienced up to that point, it doesn’t necessarily follow that events in a hypertext must follow a cause-and-effect logic. We do not know how much ambiguity a reader can withstand and still be able to create a coherent narrative.
Depicting sex, followed by a scene in which the participants meet for the first time, could be quite legitimate; the reader will find ways to connect the two events. However, the meaning the reader assigns the events if he or she encounters them in this order will likely be different than if he or she experienced them in the reverse order. (This is why a narrative which is structured in such a way as to allow readers to come across either combination will have different meanings for the readers who experience the different combinations. Multiply this by the number of possible paths through a hypertext, and it is easy to see how each reader will create her or his own meaning out of the narrative provided by the writer.)

To be sure, there is a limit to the amount of ambiguity a reader will accept. With non-linear stories, it is “More difficult to get artistic coherence.” (Deemer, 1998, unpaginated) Stories in which events seem to have no connection or characters come and go without motivation are likely to frustrate readers. Still, human beings are meaning-generating machines, and I believe we require much more exploration before we can state with any assurance what the limits of narrative ambiguity are.

Some writers claimed that the way they linked nodes was “Thematic and plot-related usually.” (Nestvold, 1998, unpaginated) One author offered that thematic links were “things that get repeated: the idea of grey, quaker oats, cat eyes...” (Rodebaugh, 1998, unpaginated) There are two ways of looking at this. The links could pertain to the images which connect nodes. This is akin to cuts in a film where a visual motif is carried from shot to shot: when one shot ends with a close-up of a spinning wheel, for example, and the next shot begins with a close-up of the moon. Where the first image is followed by a choice of secondary images, they could all follow this pattern (although it isn’t, strictly speaking, necessary).
Thus, the spinning wheel could link to the moon in one branch, to the sun in another, to the brim of a man’s hat in a third, and so on. More likely, though, Rodebaugh was referring to the content of the nodes, making the links indirect rather than direct. This is similar to the use of themes in traditional literature.

Finally, one writer, pondering how to link nodes, said he asked himself “Do they have something in common, do they shed light on each otehr, [sic] do they contradict each other.” (Ryman, 1998, unpaginated) As we have seen, contradiction can arise in the differing points of view of characters involved in a single event. It can also arise from the point of view of one character on an event at different points in her or his life. Moreover, the author can simply create different versions of events without recourse to the point of view of any of the characters. Thematic and contradictory links are even more complex than story-related links; the latter require the reader to work to develop an understanding of the forward movement of the story, while the former require the reader to work to develop an understanding of deeper levels of meaning within the story.

The types and purposes of links which we have seen to this point are summed up in Chart 2.6. From this chart, we can begin to extrapolate other forms of link; what, for example, is the esthetic effect of linking a node with narrative content to a node with purely descriptive information? Furthermore, there are probably other forms which links can take. These are, after all, early days, and we should assume that additional experimentation with the form of hypertext fiction will reveal new esthetic possibilities. Nonetheless, this should give the reader some idea of how links are a vital new creative tool.
content of nodes                  purpose of link
information to information        inform reader
plot point to plot point          move narrative forward; change location or tone
point of view to point of view    different interpretations of events
image to image                    poetic meaning
info to contradictory info        requires greater active reader interpretation

Chart 2.6 A Typology of the Uses and Meaning of Links

Another esthetic question in the development of hypertexts is how much writing an author should include in a work before it can be considered complete. “Is it ever,” one writer responded, “even in what you’re calling traditional prose?” A lot of writers felt that their story was complete when they had run out of things to say, comparing this to linear prose fiction. It seems to me, however, that the issue of what to include, what to omit and when the work is complete requires more consideration in non-linear fiction than it does in linear fiction.

Narrative structure in traditional fiction has been recognized since at least the days of Aristotle, who, in The Poetics, argued that a story was a succession of events which followed out of “logic and necessity” one from the other. (1987, 7) Furthermore, a story began at the point before which nothing needed to be said and ended at the point where any additional information would no longer add to the narrative. (ibid, 10) Although he was writing about theatrical narratives, Aristotle’s dicta can be very usefully applied to prose forms of fiction.

For our present purposes, the important thing to note is that, with the exception of a small number of non-linear narrative structures, events in hypertext need not follow out of logic and necessity. As we saw earlier, the sex scene can follow the date scene in certain paths through a story, but, in others, the sex scene can precede the date scene. In some ways, this is equivalent to an episodic narrative, which Aristotle claimed was an inferior form of narrative. (ibid, 13) However, this only scratches the surface of what is possible in hypertexts.
For instance, the sex scene mentioned above may be followed in one thread through the narrative by a scene of birds flying through a meadow; in another thread, it might be followed by a description of the workings of a space shuttle.4 While most writers think of hypertext linkage in terms of concrete effects such as plot or character development, their most profound effects may be poetic. Given all of this, it should be obvious that traditional ideas of what constitutes a narrative do not necessarily apply to hypertext.

So, we return to the original question: when is a hypertext story complete? With rigid hypertext structures, the answer is relatively simple: “The work is complete when all of the space outlined by the global structure is filled in.” (Robert, 1998, unpaginated) Thus, if you are working with a parallel structure which is three tiers deep and six nodes wide, you will have to create 18 nodes. Once all of those nodes are filled, the story is complete. Of course, the writer can always add another node to the width or even another tier to the depth, but this would require 21 total nodes in the former case and 24 nodes in the latter. Plopping a single node into such a rigid structure would destroy it (although there may be esthetic reasons for doing so).

In fluid forms of hypertext structure where the links are largely concrete, it is sometimes possible to adhere to a modified version of Aristotle’s conception of the ideal narrative structure. “i usually have an end point in mind before i start [to write],” one author stated, “so i know where i’m going and when i get there i know i’ve finished” (Burne, 1998, unpaginated) In this case, links may go off on tangents, but the narrative has a strong through line and most, if not all, of the paths through it move the story forward (reader use of backspacing notwithstanding).
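As an aside, the completeness arithmetic for rigid structures can be checked directly: the size of a parallel structure is simply the number of tiers multiplied by the width. A minimal sketch of the counts given above (the figures, not the method, come from the text):

```python
# Node counts for a rigid parallel structure: the grid of tiers x width
# must be filled completely before the work is "complete". A sketch of
# the arithmetic discussed above, not of any particular authoring tool.

def nodes_required(tiers, width):
    """Total nodes a rigid parallel structure demands."""
    return tiers * width

assert nodes_required(3, 6) == 18   # the example structure
assert nodes_required(3, 7) == 21   # one node added to the width
assert nodes_required(4, 6) == 24   # one tier added to the depth
```

The jump from 18 to 21 or 24 nodes shows why a rigid structure cannot grow one node at a time: any extension commits the writer to a whole new row or column.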
Thus, although not all of the nodes follow strict logic and necessity, enough do to make the narrative coherent in traditional terms.

Of course, a hypertext need not contain a single narrative line. Linear text forces a writer to make choices between different plotlines. Does the couple have sex, or do they fight and break up before they get to the bedroom? With the exception of a small number of works which explicitly deal with the theme of how our choices determine our lives, the writer of traditional prose fiction must choose one or the other. As we have seen, one of the advantages of hypertext fiction is that the writer need not make such choices. Even where there are multiple storylines, it is still possible to use Aristotle’s ideal narrative form to determine when a work is complete. In that case, Deemer claimed, the narrative could be considered complete “When all narrative lines are complete.” (1998, unpaginated)

With fluid narrative structures without one or more clear plotlines, deciding when a story is complete is much more complicated. One writer stops “When I can’t improve it anymore.” (Crumlish, 1998, unpaginated) For another writer, a story is complete “when [I] have nothing more [to] say, told all of story, told all of didactic message, fully played out themes, run out of memory, run out of time... whatever.” (Jones, 1998, unpaginated) This is an interesting mix of considerations. On the one hand, there are such esthetic concerns as ensuring the story’s themes are well developed; at the point where no additional variations on a theme will increase the reader’s appreciation of it, the story would be considered finished.

On the other hand, there are a couple of practical considerations which may end a writer’s involvement in the creation of a story, if not complete the story itself. Memory, presumably computer memory, is an interesting one.
A writer who wants his or her story to be portable enough to fit on a computer disk, for example, is limited to the effective storage capacity of the disk (1.38 megabytes). A writer who pays for a certain amount of storage space on the hard drive of an Internet Service Provider (usually in increments of five or 10 megabytes) cannot write more than will fit in that space without incurring additional cost. At first, this may seem like an unnecessary imposition on the creative process. However, it has well known print equivalents: the magazine which won’t accept contributions above a set word length, or the publisher who will not consider manuscripts above a certain length. All writers have to work within the limitations of their medium. In any case, Jones reminds us that the decision that a narrative is complete is multi-dimensional: it can involve a variety of considerations.

One author wrote that determining how much content to put in a story was “A judgement call about how much the reader can want to know.” (Luesebrink, 1998, unpaginated) Completely fluid narratives can become a maze in which the reader travels with no sense of how much of the work has been experienced, how much there is left to experience or whether he or she is even making progress towards a conclusion. Indeed, there need be no conclusion to this kind of narrative; the reader can theoretically travel from node to node, reinterpreting and re-reinterpreting events forever. Michael Joyce, one of the first theorists and practitioners of hypertext fiction, controversially asserted in his hypertext story Afternoon that “When the story no longer progresses, or when it cycles, or when you tire of the paths, the experience of reading it ends.” (Landow, 1992, 113) Thus, hypertext replaces traditional catharsis with ennui. This seems to me a poor trade-off which may lead to an unpleasant experience for the reader.
For this reason, Luesebrink’s suggestion that the limitations of the reader’s desire to navigate through a hypertext should also be a consideration in the size of a work is well taken. Determining how much of a narrative will create a satisfying esthetic experience for a reader is complicated by the fact that, unlike with most traditional narratives, the reader is not likely to read all of the text and, in fact, may not need to read all of it to feel it is complete. Making a choice in a hypertext means forgoing other choices; even where threads loop back to earlier nodes, there is no guarantee that a reader will choose a path through the hypertext which will allow her or him to access and read every node. In fact, this rarely happens, and as the number of nodes in the work grows, the likelihood that it will happen decreases dramatically. Therefore, as one writer explained, “a primary factor determining strategy is what percentage of the text you expect a reader to have gone through when they have (by whatever criteria) ‘finished’, since that will be what you have to convey your impression. (I planned Xanadu assuming the reader will visit every major ‘module’ in the story but only read about half of what is in each one.)” (Robert, 1998, unpaginated)

Out of this develops another important consideration for hypertext writers: how to ensure that readers will choose paths which take them to nodes with information which is important to an appreciation of the story. In completely fluid narratives, no information is necessarily any more important than any other (the effect of the collision of various narrative elements is the most important esthetic consideration), so this is not much of a problem. However, with fluid narratives with one or more strong storylines, and even with some rigid narratives, ensuring that certain plot developments or thematic elements are experienced by the reader is an important concern.
To date, two methods of ensuring that important information in a hypertext is accessed have been developed. One is to structure the work in such a way that several links lead into the node containing the necessary information (what could be called the “all nodes lead to Rome” approach). The other is to place the important information in several different nodes, assuming that sooner or later the reader will move through a thread which hits one of them. The problem with making important information redundant in this way is the possibility that the reader will find reading the same thing over and over again tedious; varying it enough while maintaining the basic information may not only mitigate this, but may enhance the esthetic experience by stressing the importance of the information or, if the variations are substantial enough, by asking the reader to create his or her own meaning out of different representations of the same events, characters or themes.

With some forms of fluid hypertext, another problem arising from the uncertainty of when a work can be considered finished is what has been referred to as the exponential branch explosion problem. (Rees, 1994, unpaginated) As one writer described it: “I think without some sense of what you’re trying to achieve, and how, structurally, you intend to execute it, you end up drifting into the morass of the geometrically expanding tale. This page links to two pages, which in turn lead to four, etc.” (Sorrells, 1998, unpaginated) Suppose at each branch you give the reader three choices of where to go. At the first branch, the number of choices is three. At the second branch, each of the three branches has three new branches, for nine choices. By the time you have developed 10 levels, the newest level alone requires 59,049 choices; with 20 levels, you would have to supply 3,486,784,401 choices. Each choice, of course, represents a node which has to be filled with story.
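The geometric growth can be checked directly. Assuming three choices at every node, level n of the story tree holds 3^n nodes; a short sketch of the arithmetic (not of any particular story):

```python
# Exponential branch explosion: with three choices per node, level n of
# the story tree contains 3**n nodes, each of which the writer must fill
# with story. A sketch of the arithmetic discussed above.

def level_size(choices, level):
    """Nodes required at a single level of the branching tree."""
    return choices ** level

def total_nodes(choices, depth):
    """Nodes in levels 1..depth, excluding the opening page."""
    return sum(choices ** n for n in range(1, depth + 1))

print(level_size(3, 1))     # 3
print(level_size(3, 2))     # 9
print(level_size(3, 10))    # 59049
print(level_size(3, 20))    # 3486784401
print(total_nodes(3, 10))   # 88572
```

The totals make the writer’s predicament concrete: even a modest ten-level, three-way tree demands tens of thousands of nodes before it is “complete.”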
This is a specific occurrence of a general problem with hypertext: while a navigable story can, in theory, be infinite, in practice it is limited by such factors as the time the writer has to devote to it, available digital storage space, et al. The reader of a hypertext may want to explore a part of the fictional world, or learn more about one of the characters, or otherwise seek information which the writer has not provided. In order to minimize the possibility of the reader going off in directions which the writer has not provided, the choices must be “naturalized;” that is, the writer must make the links seem so important to the forward motion of the story that the reader will not question why other choices were not offered. “Writing hypertext,” one author explained, “you need to either constrain the various routes that the reader can take through your work or write in a way that acknowledges the fact that you must convey your impressions in a kind of persistent, subtly accumulating, asynchronous kind of way. In general you do some of both. In my _Further Xanadu_ work, I tried to constrain reading order in certain ways locally while leaving it free -- and unimportant for the impression -- more globally.” (Robert, 1998, unpaginated)

A small number of methods have been explored which constrain the reader’s choices without seeming to. According to one writer, “I developed some techniques that I called ‘bottlenecks’ and ‘canebrakes’ for curtailing this geometrical expansion problem. But I wasn’t especially happy with them.” (Sorrells, 1998, unpaginated) These are also sometimes called cul-de-sacs; they are areas where the reader can do some exploring but which, instead of leading to further choices, eventually lead back to a main storyline. Constraining reader choice, while necessary, comes with its own problem.
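The “bottleneck” idea can be made concrete as a small graph exercise. The link map below is hypothetical (the node names are invented); the sketch simply verifies that every path from the opening node to the ending passes through a designated bottleneck node, which is what guarantees the reader cannot miss it.

```python
# "Bottleneck" checking for a hypertext link map (hypothetical node
# names): verify that every path from the opening node to an ending
# passes through a designated bottleneck node.

def all_paths(links, start, ends, path=None):
    """Yield every loop-free path from start to any node in ends."""
    path = (path or []) + [start]
    if start in ends:
        yield path
        return
    for nxt in links.get(start, []):
        if nxt not in path:  # skip links looping back to visited nodes
            yield from all_paths(links, nxt, ends, path)

links = {
    "home":   ["garden", "cellar"],
    "garden": ["chapel"],
    "cellar": ["chapel"],   # both branches funnel into "chapel"
    "chapel": ["ending"],
}

paths = list(all_paths(links, "home", {"ending"}))
print(all("chapel" in p for p in paths))  # True: no path avoids it
```

A cul-de-sac works the same way in reverse: its internal links all eventually point back to a main-storyline node, so exploration cannot carry the reader outside the structure.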
As one hypertext reader complained: “I felt very strongly as I read that I was being controlled as if the author was always one step ahead of me. Hypertext fiction usually gives the reader a certain amount of freedom, which you did, but it seemed to be so often not what I wanted to read but what you wanted me to know. Lets face it in reality the choices hypertext fiction gives us are fairly superficial at times.” (Winson, 1996, unpaginated) If the writer arbitrarily limits what information and choices are available to the reader, he or she may find the experience of reading the work unsatisfying. As Brenda Laurel revealed, to be satisfied, hypertext readers (like computer users generally) require a sense of “agency,” the belief that their choices are meaningful in determining the way they experience a work. (1993, 4) Thus, the author has to tread a fine line between fulfilling the needs of the story he or she wants to tell and the reader’s need to feel in control of the interactive reading experience. To be sure, not all readers will respond positively to a given solution to this problem; some will always want to explore outside the parameters set by the writer. However, the writer who is sensitive to this issue can construct narratives which will be more satisfying to a greater number of readers.

Many of the writers in the survey claimed that there was no point at which a hypertext could be considered finished. “Ive [sic] never decided a hypertext electronic work is complete,” was a typical response. “It never is....for me at least.” (Wortzel, 1998, unpaginated) The ease with which links can be made between existing and new nodes means that a writer can always add new information to a story.
“i usually end up rewriting parts of earlier works (implicitly) when later i write things about the same characters that slightly contradict the older material.” (Rodebaugh, 1998, unpaginated) Writers who begin with a structure and fill it in are analogous to bricklayers working from an architect’s plan; writers who continue to add nodes are akin to sculptors who build their work by adding clay and shaping it to fit what already exists. In such cases, though, an author doesn’t so much conclude a story as simply stop writing, and the reason usually has more to do with lack of inspiration than with the needs of the narrative: “in truth, i usually either get bored with a work, or can’t see what else to work on.” (ibid)

In addition to the ability to add new nodes to a structure, digital communication networks also allow writers to go back and rework the content of old nodes. It’s the old saw that nothing digital is permanent, of course, but with a twist:

I’ve made small changes based on how people have read the work. For instance, the first page of ‘Ashes’ has four links at present. Three originally led to nodes with no links. People found this frustrating because they couldn’t go forward. I can see exactly what they mean. But I had been thinking of those links like footnotes; detail or background on the objects they referred to. In the end I changed two. (Inglis, 1998, unpaginated)

In this way, readers can have some direct influence on the creation of digital narratives. More importantly, the World Wide Web becomes an early testing ground for hypertext theory. With print literature, what works for readers is determined, for the most part, by indirect measures such as how well books with different narrative forms sell. With digital literature on communications networks, not only can readers directly state what worked for them and what didn’t, but they can actually explain to writers why. With thousands of surfers reading dozens of works, the Web is a field for experimentation which should help authors and theorists develop an esthetic for hypertext fiction.

If hypertexts truly never end, we are left with the potential for a Borgesian nightmare in which a text meets up with other texts which ultimately combine into an uber-text mapping the world. Works by individual authors put a brake on this somewhat, since the time any single writer can give to a work is limited by commitments to the body (i.e., the need to sleep), responsibility to friends and family and, ultimately, the author’s limited lifespan. Collaborative hypertexts, by way of contrast, can be extended indefinitely.

Collaborative hypertexts offer the same sorts of esthetic challenges as hypertexts created by individuals. In addition, they offer their own unique opportunities and problems. One problem is writing a segment which is concrete enough to build on but open enough not to foreclose on the possibility of other writers continuing the narrative. “I think it is important to write segments that you can continue building a story upon,” one writer stated. (Breivik, 1998, unpaginated) Thus, as tempting as it may be to end one’s node with a nuclear conflagration which reduces the earth to a cinder, it wouldn’t be in the spirit of collaboration to do so.
Another problem cited by one author is that writers may have to compete with each other for plum writing assignments. Typically, a collaborative hypertext writer will end her or his segment by opening up possibilities for taking the narrative in a new direction. “It...encourages you to challenge the next writer by setting up a twist of your own near the end.” (Wood, 1998, unpaginated) Some twists will encourage many writers to continue the narrative. At that point, “The downside is that there are about twenty [people] wanting the same slot or parts of the story...” (La Gesse, 1998, unpaginated) With some collaborative environments, alternative storylines are possible; with others, they are not. As a last resort, writers can certainly publish their particular story segments on their own pages, perhaps developing them into full stories which stand on their own.

Another problem with collaborative fiction is that it can be “difficult to read different writers’ styles at times...” (Failing IV, 1998, unpaginated) This is not only a matter of differing writing abilities, although some writers will, of course, be more talented than others; it also reflects the fact that most writers develop their own unique “voice,” and that different kinds of writing will clash if contained in a single story. It may also be a question of tone: “Some guests always try to get the word ‘DICK’ in the story -- and they usually manage to succeed.” (Cornell, 1998, unpaginated) Some Web sites featuring collaborative work screen additions to ensure some continuity; others do not. A common complaint about collaborative fiction, related to that of voice, is that many stories lack continuity, often to the point of incoherence.
“Sometimes people introduce to [sic] many characters, the story lines are illogical, there are no climaxes,” one writer explained, adding: “Mostly, you can’t write a good story if you haven’t planned it out.” (Anya, 1998, unpaginated) Of course, planning is virtually impossible in environments where the next writer can create a node which takes a story in an unexpected direction. In fact, within a small number of nodes of its beginning, a story may become completely different from what the author of the original node intended it to be. Moreover, while some writers of any collaborative work may take great pains to maintain continuity, others may not: “The story may take illogical directions and be self-contradictive, [sic] because not all of the writers knows [sic] every detail that has already been written.” (Breivik, 1998, unpaginated)

Some writers adopt strategies which accommodate this. “I tend to write smaller ‘chunks’ that don’t need much background or build-up,” one stated. “Sometimes I just provide a line of background or build-up and let somebody else use it however their mind wants to use it.” (Cornell, 1998, unpaginated) Another stated that “I tend to concentrate more on character and relationships, rather than plot as such.” (Wittmaack, 1998, unpaginated) Having accepted that they lose control of their narratives, these writers put less effort into creating elements which they feel will not be respected by subsequent writers. We have too little experience with collaborative fiction at this point to know if this strategy leads to more pleasing works for readers, or if, in fact, it will allow the writers to be satisfied with their contributions to collaborative works.

Issues of voice and continuity are not a problem on a Web site known as The Company Therapist, which makes a virtue out of what most others consider the liabilities of collaborative writing.
There, each multi-part story takes place in a session with the therapist, a character common to all of the stories. As the creators of The Company Therapist explained:

In many collaborative ventures, the end product is choppy and unsatisfying, revealing a Frankenstein patchwork of different writers. In this project, however, the varying styles of different writers is a bonus rather than a detriment. Each writer creates and evolves his or her own

character. Each character in this story has a unique way of expressing him or herself and has an individualized voice revealed through transcripts of conversations and through the personal writings of that character. Instead of trying to lose the voice of the individual authors in service to the whole collaborative work, The Company Therapist revels in those individual voices to create unforgettably real characters. (Pipsqueak Productions, 1998, unpaginated)

While free in some ways, writers for The Company Therapist are constrained in others. According to the site’s creators, “We edit with particular care any items of an author’s session which might contradict other elements which have already been established as true in the world we’ve created. So, for example, the Doctor has a particular narrative voice which we strive to keep consistent. If one writer wishes to take an action which involves another author’s character, they have to get that author’s permission first.” (ibid) This process did not satisfy every writer. “[C]hanges have been made to the character & overall storyline by the editors which sometimes conflict with my plans,” one writer complained. “When the eds. make a seeminly [sic] minor change or error, it has caused a number of headaches for me, since I usually plan out the story several episodes in advance.” (Duffey, 1998, unpaginated) This type of problem could be avoided, however, if the creator(s) of a site featuring collaborative fiction worked more closely with writers to ensure that both parties are satisfied before a segment is published.

Another site with a similar approach, one which we have encountered before, is DargonZine. Stories on the site all take place within the same fictional world, with a common geography and history as well as several different races of fantasy characters to choose from (dwarves, elves, etc.); they may also have some characters in common. Each story is workshopped before it is published to ensure that it maintains continuity with previous stories as well as a high level of writing.

Collaborative works of fiction take many forms, from very loose structures to which anybody may contribute in any form, to much more highly developed structures which require writers to go through critiques and achieve a satisfactory level of writing before publication.
More experience is needed before we can even begin to consider what works -- for writers as well as for readers -- and what does not.

Who Controls Interactive Narratives?

Until roughly the 1970s, literary criticism worked on the assumption that the “meaning” in a text was created solely by the author. To understand a text, it was necessary, therefore, to pay very close attention to the words, which were assumed to be transparent vessels through which the author’s ideas were transmitted to passive readers. Are words so transparent, though? The assumption is that words refer to physical objects in the real world, and that each writer shares with each reader a sense of the meaning of words based on shared reference to their physical counterparts. However, this idea breaks down upon close inspection: as language becomes increasingly abstract, it becomes increasingly difficult to find objects which correlate to certain words. Where, for instance, can one point and say, “This is love,” or “This is justice,” or “This is truth”? By the 1970s, semioticians were arguing that all language works this way: we do not understand words by referring to their physical correlatives, but only in relation to other words. Given this, the semioticians claimed that it was no longer possible to understand a text in relation to the author’s intentions, since there was no longer a basis for agreement on what the words in it meant. In effect, because there was no longer an objective reality to which a text pointed and each reader understood words differently, the reader actively created the meaning of a given text. This was most forcefully argued in Roland Barthes’ seminal paper entitled “The Death of the Author.” (1977) Michel Foucault took up this argument, contending that what he called the “author-function” was a social creation rather than a historical fact.
(1979)

With hypertext, the reader navigates through pieces of narrative, choosing the order in which they are read. In this way, every reading of a text may be unique, and every reader will have a different experience of a work. Moulthrop (1989, unpaginated) and Landow (1992) argued that hypertext is an instantiation, a literalization, of the semioticians’ theory that a text’s meaning was created by its readers. In this view, all text is mutable, linear text being a special subset of the larger universe of text. Some writers of hypertext fiction in my survey agreed with this assessment of their work. “you invite the reader to take some control,” one writer stated. “in reality, the reader always had control, anyway. the problem i have with nonlinear text is that linear text is not all that linear to begin with; it struggles to be linear enough to draw the reader into believing the artifice.” (Rodebaugh, 1998, unpaginated)

This was, however, a minority opinion. While 6 of the 20 hypertext authors who responded to the survey stated unequivocally that they lost control of their work, nearly twice as many, 11, said that they did not. “the reader can choose various paths but by giving them the options and limiting those options i still have ultimate control over the characters and the story,” one writer explained, adding: “I don’t think i would work on a story which readers add too [sic] because then it would no longer be my creative work” (Burne, 1998, unpaginated) Thus, while different readers may have different experiences of a hypertext, even radically different experiences, they cannot be said to be the true “authors” of the text, since the writers created all of the conditions (nodes and links) which made each reader’s experience possible.
In addition, many of the writers claimed that readers who had been brought up on linear text would not be able to navigate through hypertexts: “It can be confusing for a reader who has never experienced it before” (Burch, 1998, unpaginated) claimed one. “Readers are not yet used to the ambiguity of hypertext stories,” argued another. (Luesebrink, 1998, unpaginated) By this argument, linear stories have transparent meanings, while non-linear stories require the reader to choose her or his own meaning from the many made possible between nodes in the network of text.

The World Wide Web has the potential to increase the involvement of readers in the actual construction of a story. One writer explored this possibility: “In my thing (novel? novella? hypertext courtroom drama?) How Finds The Jury, I had readers vote at various decision points as to what they wanted to happen next.” The democratization of literature? Perhaps. However, according to the writer, “I pretty much ended up hating this. Mostly for the same reasons I wouldn’t want to work at Wendy’s: I have a God complex and don’t like 23-year-old guys with acne and thick glasses telling me what to do.” (Sorrells, 1998, unpaginated)

Three of the authors fell in between these two camps, writing that they gave up some control over their stories, but retained some. This position appears to recognize that the reader does take some control of the experience of reading the work by his or her navigation through it, but that the writer retains control of both the content of the nodes and the links which connect them. As one author put it, “I control the aspects over which I am willing to relinquish control.” (Crumlish, 1998, unpaginated)

Writers of collaborative fiction, by way of contrast, had much more unanimity on this issue: 18 (75%) claimed that the writer gave up control of her or his work, while only 1 (4.2%) stated that the writer did not.
(The balance claimed that they only gave up a portion of their control.) “What a daft question,” one respondent wrote, “that’s the whole point of it” (Golding, 1998, unpaginated) For some, collaborative writing was a different process than solo writing: “[I]t’s more of a sport than an intellectual excercise [sic].” (ibid) According to one writer, this requires different skills. “You have to be flexible and creative to keep pace with the events in the story. It is more like a dialog then a [monologue,] as traditional writing is.” (Anya, 1998, unpaginated) Or, as we saw earlier, more of a game.

However, pace Barthes, writers of collaborative fiction do not claim to give up control of their narrative to readers; they believe they cede their control to other writers. “I think that for an effort to be truly collaborative you need to give up some of your control. Otherwise it’s just you telling someone else exactly what to do, which is NOT collaborative. It’s alot [sic] like a marriage.” (Knowlton, 1998, unpaginated) As we have seen, with digital communications networks any reader, with a little effort, can decide to become a writer; however, this is not quite what Barthes had in mind.5

Some writers cited disadvantages to this potential loss of authorial control. One saw it as esthetically inhibiting. “It is impossible to set up a plot twist when the next writer can come along and change the story so completely that you can not use your idea any longer.” (Wood, 1998, unpaginated) Another, speaking for many collaborative fiction writers, expressed the fear that “You might not like what someone else has written and think that you may have been able to do a better job.” (Kira Moore, 1998, unpaginated) Others cited benefits to this loss of authorial control.
“[Y]ou may be stuck on what you think should happen next,” was a typical response, “but someone else can come up with it for you.” (ibid) In either case, “if it is a story I enjoy I can always write my own version of it at home with no disruptions from the web authors.” (Towler, 1998, unpaginated) In this way, the individual authorial voice always has the option of taking back control of a work.

The experience of hypertext and collaborative fiction writers, then, would seem to go against this element of semiotic theory (or, at least, Landow’s extension of it to digital media). It is possible that the authors are unaware of the broader implications of their work; it may take a long time and a lot more experience before it is generally accepted that hypertext writers cede control of their work to their readers. Still, is it possible to reconcile this conflict between theory and practice?

I think it is. First, we must recognize that the semioticians who talk about the death of the author make the same mistake as the literary critics they were reacting against: reducing the complex communications act to a single, simple variable. Consider a simple model of communications: you would need a Sender (S) of a Message (M1), a Medium (M2) through which the message is sent and a Receiver (R). This is schematized in Figure 2.4. Traditional literary scholars simplify by concentrating on M1, the message, the text. Where they introduce biographical material about the author, they can be said to be dealing with S, the author of the text. Mostly, however, they infer the intentions of the author through examination of the text with little or no reference to the actual person. Semioticians, by way of contrast, focus entirely on R, the receiver of the message, the reader.

S -----> M1 -----> M2 -----> R

Figure 2.4 A Simple Communications Model

Neither the traditional nor the semiotic approach encompasses the entire act of communication through text. We can recognize, from our own experience as readers, that we create our own meaning as we move through texts. There is also much historical evidence to support Foucault’s contention that the role of “author” is a relatively new one, created for social purposes. However, the evidence of the writers in my survey reminds us that the author is also a very real human being who creates hypertexts as an intentional act. To fully understand the communication process of hypertext, then, it is necessary to look at all of the elements of the model rather than just one.

Conclusion

According to Karl Marx, “The instrument of labour strikes down the labourer.” (1985, 79) That is, technology is necessarily used by the owners of businesses against the interests of their workers. It does so by giving management control over the means of production: “Machinery comes into the world not as the servant of ‘humanity,’ but as the instrument of those to whom the accumulation of capital gives the ownership of machines. The capacity of humans to control the labor process through machinery is seized upon by management from the beginning of capitalism as the prime means whereby production may be controlled not by the direct producer but by the owners and representatives of capital [author’s emphasis].” (Braverman, 1985, 81) This is accomplished in many ways: the fragmentation of production processes, where labourers focus on a smaller and smaller part of a process which is controlled overall by the business’s owner; the increasing ability of employers to monitor employees’ work performance; etc. Marx claims that the end result of this is that “when capital enlists science into her service, the refractory hand of labor will always be taught docility.” (1985, 80)

While control of the means of production may have this effect in many industries, it seems inadequate to explain the production of cultural artifacts. In fact, individuals have had access to affordable means of production for many years. Anybody who wanted to make music could buy relatively inexpensive instruments and a tape recorder. Anybody who wanted to take still photographs could buy a relatively inexpensive still camera. Anybody who wanted to shoot a film could buy or rent a relatively inexpensive video or 8mm camera. And, of course, anybody with a pen and paper could write a work of fiction. Despite this easy availability of the tools to create works of art, the ranks of socially recognized artists have not swelled. How can we account for this?
Despite being able to create works of art, most people will not have their work seen, because they do not have access to the means of distributing it. Most videographers would not get their work shown on television because a small number of corporations operated the networks. Most filmmakers would not get their work shown in theatres because most theatres are owned by a small number of chains. As director Allison Anders commented, “We [directors] all figured we would be replaced by these kids running around with home video cameras, making movies that we couldn’t possibly compete with. But that didn’t happen [because] if you’re running around with a home video camera, you [have] no place to show your stuff.” (Cury, 1997a, 26) As we have seen, many writers who put their work on the World Wide Web do so because they cannot get published in traditional magazines, while electronic magazine publishers claim a higher potential circulation on the Web than they could achieve if they had tried to distribute a print version of their publication (partially because they cannot afford to print many copies, but also because most distributors will not carry and most bookstores do not stock zines).6 The advantage of publishing on the Web, then, is that it offers an inexpensive distribution system for artists whose work would otherwise not be available to the general public.

Of course, the technologies of publishing, especially the printing press, have undergone many changes since Gutenberg, and as those technologies changed, the workers’ relationship to the means of production changed with them. How can we account for these changes? Ursula Franklin’s work suggests one approach. In The Real World of Technology, Franklin classifies technologies into two categories: holistic and prescriptive. “Holistic technologies,” she writes, “are normally associated with the notion of craft.
Artisans, be they potters, weavers, metalsmiths, or cooks, control the process of their work from beginning to finish. Their hands and minds make situational decisions as the work proceeds... These are decisions that only they can make while they are working. And they draw on their own experience, each time applying it to a unique situation. The products of their work are one of a kind.” (1990, 18) Opposed to this is “...specialization by process; this I call prescriptive technology. It is based on a quite different division of labour. Here, the making or doing of something is broken down into clearly identifiable steps. Each step is carried out by a separate worker, or group of workers, who need to be familiar with the skills of performing that one step.” (ibid, 20) This is the process Marx described whereby workers were alienated from the means of production.

Initially, printing was a holistic technology: authors would set their own work in type, designing the pages and running the presses themselves. Over time, these became specialized processes, and the writer became disengaged from the production of his or her manuscripts, in most cases losing control over aspects of the publishing process not directly related to writing. Desktop publishing gave authors back the ability to design their own work, returning to them a measure of their original autonomy. Computer networks give authors control over distribution of their work. Because of the way they give writers complete control over the entire process, computers applied to publishing can be seen as a holistic technology, returning publishing to the state it was in at the time of Gutenberg.

Changes in technology change people’s relationship to work to the extent that they have access to information about production and distribution processes. According to Meyrowitz, there are two classes of knowledge around any technology: onstage and backstage.
Onstage knowledge is public knowledge which allows any individual to use a technology; for television, this would mean knowing how to turn the set on, change channels, etc. Backstage knowledge is private, available only to people actually in the industry, involving all the details of production; for television, this would mean knowing the complex process of creating and distributing shows. (1985) The introduction of the portable video recorder, to take one example, was hailed for its potential to turn the average person into a video director, but this didn’t happen, largely because vital backstage information, particularly information about distribution, continued to be closely guarded by the established television industry.

Backstage knowledge is more relevant to prescriptive technologies than it is to holistic technologies. For the artisans who use them, all of the knowledge necessary to use holistic technologies must, by definition, be available. The only people who may not be privy to backstage knowledge are the consumers of the artisans’ products. Digital technologies make even this distinction relatively unimportant. The goal of computer design is to make the interface transparent, to create a computer which virtually anybody, with a minimum of training, can use. The fact that most of the knowledge about a prescriptive medium is closely guarded by those in the industry makes entry into it difficult for the average person. In publishing, for instance, backstage knowledge of design, press techniques and distribution kept people from publishing their own work (issues of cost notwithstanding).
Desktop publishing and computer-mediated communication make knowledge of the techniques of publishing, previously kept backstage, available to anybody who wants it; it is the availability of this knowledge which turned publishing from a prescriptive into a holistic technology, allowing anybody to become a producer/distributor of fictional texts. Following Liebling, a computer connected to a communications network gives individuals the equivalent of a printing press. While this dissertation is about how complex this phenomenon really is, it is worth noting that the potential for democratic communication certainly exists within it.

The Internet is the hottest topic in the country... Companies of all kinds -- big and small, technology-oriented or service-based, employing hundreds of people or consisting of one person at a desk -- are trying desperately to take advantage of the benefits, both factual and perceived, of being online. Writers are no different. Surf the Net and check the web sites writers have put up in an attempt to draw attention to their services. You’ll be surprised at what you see. (Winchester, 1997, 23)

For a huge number of Americans, a single company will control the electronic pipeline into the house and most, if not all, of the content and services that are pumped through it. (Reguly, 2000, B1)

Walter Hale Hamilton: “Business succeeds rather better than the state in imposing restraints upon individuals, because its imperatives are disguised as choices.” (Herman and McChesney, 1997, 190/191)

Chapter Three: The Economics of Information on the Web

Introduction

As I write this, “A Year in the Life of the Digital Gold Rush” blares from the cover of the latest issue of Wired magazine. (Bronson, 1999, front cover) The comparison of efforts to make money off the Internet to the Alaskan or Californian gold rushes of the last century is common in the popular literature on electronic commerce. “[T]here are corporate prospectors on the electronic frontier, rubbing their hands at the trillion-dollar, digital goldmine expected by the year 2000,” goes one example. (Biocca and Levy, 1995, 21) It is true that a lot of people and organizations are putting up Web sites in the hope of making money. As we saw in Chapter Two, many of the writers in my survey said that they were hoping to figure out a way to make money from the writing they published on the Internet. However, people and organizations who supply digital information (as opposed to physical goods) over the Internet have, for the most part, not been able to find an economic model which works. “Prodigy, once the third-biggest online service in the U.S., practically vanished after blowing more than a million dollars on largely unwatched content. Ted Turner spent an undisclosed amount on his Web ‘zine, Spiv, before pulling the plug. And the Microsoft Network took a bath on the online magazine Mint.” (Thompson, 1998, 57) Announcements that “...in recent months, well-regarded sites Word and Charged have been forced to seek financial rescue” (Sandberg, 1998, B7) are almost as frequent as announcements of new efforts to create content which will be profitable. There seems to be a lot of wisdom in the observation that, “As [with the gold rushes], it is likely that more money will be made by those who provide the supportive infrastructure of hardware, software, and intellectual property -- the picks and shovels of cyberspace...” (Whittle, 1997, 42)

A mechanism for paying for digital information was, in fact, proposed as early as the 1960s.
As described in Chapter One, Theodore Nelson’s hypertext system, which he came to call Xanadu, involved a series of documents in different windows; when a user activated a link, the material would appear in a new window. Xanadu had an internal copyright regime to which everybody who signed a contract to be on the network agreed: anybody could link to anything to which they had legitimate access; once a document was published, you had no control over who linked to it or how; and, once a document was posted, you could not remove it (because you would mess up all the connections made to and from it), but you could publish versions which superseded it. Most important to the current discussion, “In our planned service, there is a royalty on every byte transmitted. This is paid automatically by the user to the owner every time a fragment is summoned, as part of the proportional use of byte delivery. Each publishing owner must consent to the royalty -- say, a thousandth of a cent per byte -- and each reader contributes those few cents automatically as he or she reads along, as part of the cost of using the system.” (Nelson, 1992, 2/43 and 2/44) If a person was quoted in one document, that person received a small percentage of the original page’s fee whenever the link to the quote was activated. If another link was activated within the quote, a third person would get a percentage of the second’s fee, and so on. Nelson’s Xanadu was never put into practice, and has largely been overtaken by the World Wide Web, so we will probably never know if it would have worked.

This chapter will look at the financial repercussions of the digital communications network which does exist, the World Wide Web. We must start by recognizing that writers are not the only people with a financial stake in the Internet.
Those who run the largest entertainment conglomerates in the world are also eagerly eyeing the Internet as a potentially lucrative addition to their revenue streams. As we shall see later in the chapter, the interests of the major corporations are often in conflict with those of individual producer/consumers. It is necessary, then, to begin the chapter with a description of these conglomerates and their interests in this medium. Having done that, we can begin to look at the problems which both individuals and corporations have in making money supplying content for digital communications media like the Web. I start by considering the question of what information, as a generic commodity, is worth, concluding that, as the amount of information grows, its value approaches zero. This accords with the early ethos of the Internet, which has been described as a “gift economy,” in which information was exchanged for reasons other than financial gain. Thus, I consider how the Internet compares to models of gift economies based on the physical world.

One technical solution to the diminishing value of information is known as micropayments, which would allow people to buy things over the Internet valued at fractions of a cent. This would be a boon to individual content creators, who would have an effective means of charging for their small inventory of individual pieces of writing. Unfortunately, as I show, there are problems with the technology which have yet to be overcome.

Without an acceptable form of exchange, it becomes important to be able to distinguish one’s content from the generic content flow. This can be accomplished by “branding,” making the name of the producer or the product stand out in the mind of the consumer. As I show in a section on the subject, large producers can take advantage of branding in a way which is not available to most smaller information producers, especially individual writers.
While branding can make a product more attractive, it isn’t an economic model, per se. So, next, I turn my attention to models from existing media -- advertising and subscriptions -- and find them mostly inadequate for generating revenue over the Web. One of the main problems with both models is that they require large numbers of consumers, while the billions of pages on the Web fracture audiences, making the audience for any given page too small to make it financially viable.

One possible method of dealing with this is to change the nature of the medium, to make it close enough to an existing medium that the model of revenue generation for the existing medium can be applied to it. Individuals do not have this power, but entertainment conglomerates might. Therefore, I next look at ways in which they have attempted, and continue to attempt, to turn the Web into a glorified form of television, introducing push technologies, streaming video and multicasting, Web TV and asymmetrical bandwidth transmission. These technologies are not necessarily sinister; there are good reasons for adopting them. However, I hope to show that the form in which they are introduced into the marketplace can have an adverse effect on individual Web content creators. The chapter ends with a look at a non-traditional form of exchange which has been suggested to deal with the fundamental problem of information as a commodity on digital networks: the attention economy.
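
Since both Nelson’s per-byte royalty and micropayments figure in what follows, the kind of accounting such a scheme implies can be made concrete with a short illustrative sketch. To be clear, this is not Xanadu’s actual design: the pass-through fraction (SHARE), the document sizes and the nested-dictionary layout below are hypothetical assumptions of my own, chosen only to mirror the cascade Nelson describes, in which a quoted author receives a percentage of the quoting page’s fee, and quotes within quotes receive a percentage of that percentage.

```python
RATE = 0.00001   # dollars per byte: Nelson's "a thousandth of a cent"
SHARE = 0.10     # hypothetical fraction of a fee passed through to a quoted source

def pay_royalties(doc, payments, fee=None):
    """Distribute the fee for one reading of `doc` along its chain of sources.

    `doc` is a nested dictionary, {"owner": str, "size": int, "quotes": [...]},
    a purely illustrative layout. In this simplified model, only the page the
    reader actually summons is metered by size; each quoted source then
    receives SHARE of whatever fee reached the document quoting it,
    recursively.
    """
    if fee is None:
        fee = doc["size"] * RATE            # what the reader is charged
    passed_on = 0.0
    for quoted in doc.get("quotes", []):
        pay_royalties(quoted, payments, fee * SHARE)
        passed_on += fee * SHARE
    # The document's owner keeps whatever was not passed downstream.
    payments[doc["owner"]] = payments.get(doc["owner"], 0.0) + fee - passed_on
    return fee
```

With these assumed figures, a fifty-thousand-byte page by writer A which quotes writer B, who in turn quotes writer C, would charge the reader fifty cents, of which A keeps 45 cents, B receives 4.5 cents and C half a cent -- fractional sums of exactly the kind that micropayment systems were meant to handle.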

Corporate Conglomeration in the Information and Entertainment Industries

The primary (some would say only) purpose of a corporation is to make money for its shareholders. The more it dominates a market, the greater the potential profit. There are two ways a corporation can attempt to dominate a market: through vertical or horizontal integration. A horizontally integrated corporation tries to corner the market on a given product. A corporation in this position is known as a monopoly. A vertically integrated corporation tries to control every aspect of production and distribution of its product (and, sometimes, related products). A film corporation which has production, distribution, marketing and exhibition capabilities is vertically integrated. Most often, a small number of large vertically integrated corporations settle into a comfortable competition with each other: this is known as an oligopoly. According to Herman and McChesney, this describes the current state of the entertainment industry.

The 1990s has seen an unprecedented wave of mergers and acquisitions among global media giants. What is emerging is a tiered global media market. In the first tier are around ten colossal vertically integrated media conglomerates. Six firms that already fit that description are News Corporation, Time Warner, Disney, Bertelsmann, Viacom and TCI. These firms are major producers of entertainment and media software and have global distribution networks... Four other firms that round out this first group include PolyGram (owned by Philips), NBC (owned by General Electric), Universal (owned by Seagram), and Sony. All four of these firms are conglomerates with non-media interests, and three of them (Sony, GE, and Philips) are huge electronics concerns that at least double the annual sales of any first-tier media firm. None of them is as fully integrated as the first six firms, but they have the resources to do so if they wish. (1997, 53/54)

These first tier corporations, which work on a global scale, not only control all of the steps for production and distribution of their products in a given medium, but own subsidiary corporations in a wide variety of related media. This gives these corporations tremendous advantages. “Disney’s 1996 Hunchback of Notre Dame generated a disappointing $99 million at the U.S. and Canadian box offices. According to Adweek magazine, however, it is expected to generate $500 million in profit (not just revenues), after the other revenue streams are taken into account... In sum, the profit whole for the vertically integrated firm can be significantly greater than the profit potential of the individual parts in isolation. Firms without this cross-selling and cross-promotional potential are at a serious disadvantage in competing in the global marketplace.” (Ibid, 54) These advantages lie not only in cross-promotion of a product across a wide variety of media owned by a single corporation, but can also occur with large non-media conglomerates: “In 1996 Disney signed a ten-year deal with McDonald’s, giving the fast food chain exclusive global rights to promote Disney products in its restaurants. Disney can use McDonald’s 18,700 outlets to promote its global sales, while McDonald’s can use Disney to assist it in its unabashed campaign to ‘dominate every market’ in the world. PepsiCo. signed a similar promotional deal for a 1996 release of the Star Wars film trilogy, in which all of PepsiCo.’s global properties -- including Pepsi-Cola, Frito-Lay snacks, Pizza Hut, and Taco Bell -- were committed to the promotion [footnotes omitted].” (ibid, 55) Again, this sort of promotion is not available to smaller content creators, certainly not individuals. McChesney and Herman identify two other tiers of entertainment corporation. There is “a second tier of approximately three dozen quite large media firms...that fill regional or niche markets within the global system... 
These second tier firms tend to have working agreements and/or joint ventures with one or more of the giants in the first tier and with each other; none attempts to ‘go it alone...’” (ibid, 53/54) The global corporations “dominat[e] the activities of the other, weaker competitors in the market.” (Serexhe, 1997, 301/302) Access to their marketing and distribution systems gives the global corporations a strong negotiating position in relation to smaller firms, although this may be mitigated partially by the fact that they need a constant stream of content to maximize the profit potential of their distribution systems. Finally, “there are thousands of relatively small national and local firms that provide services to the large firms or fill small niches, and their prosperity is dependent in part upon the choices of the large firms.” (Herman and McChesney, 1997, 54) These are “independent” producers: small film companies, regional publishers, community radio stations, et al. These companies can have a positive relationship with the larger corporations. As Julie Schwerin, president of InfoTech, a firm that tracks the multimedia industry, points out, “Having a big partner...greases the skids for raising more money to keep the [small] company growing...” (Carlson, 1996, 32) However, the independence of these companies depends upon how much of their revenue can be generated from their local audience; as Herman and McChesney point out, to the extent that they rely on the larger firms for revenue, their independence is compromised. One might expect that the major entertainment conglomerates would be in intense competition with each other, but this is not necessarily the case. For one thing, there is a pattern of overlapping ownership: “Seagram, for example, owner of Universal, also owns 15 percent of Time Warner and has other media equity holdings. TCI is a major shareholder in Time Warner and has holdings in numerous other media firms.
The Capital Group Companies’ mutual funds, valued at $250 billion, are among the very largest shareholders in TCI, News Corporation, Seagram, Time Warner, Viacom, Disney, Westinghouse and several other smaller media firms [footnotes omitted].” (Herman and McChesney, 1997, 56/57) As has also been mentioned, these conglomerates are increasingly “tied, either directly or by overlapping directorships, to the major manufacturing and financial powers.” (Drew, 1995, 73) Furthermore, “In establishing new ventures, media firms frequently participate in joint ventures with one or more of their rivals on specific media projects. Joint ventures are attractive because they reduce the capital requirements and risk of the participants and permit them to spread their resources more widely... The ten largest global media firms have, on average, joint ventures with five of the other nine giants. They each also average six joint ventures with second-tier media firms.” (Herman and McChesney, 1997, 56) An example might help illuminate this point: the increase in the budgets for Hollywood movies which are dominated by special effects means that they increasingly cost more than $100 million; if such films don’t take in at least $300 million at the box office, the studio could lose enough money to threaten its existence. So, for some films, studios agree to co-produce them. While this decreases their potential profit, it also decreases the amount of their investment and, therefore, the amount which they risk losing if the film doesn’t do well. Oligopolies, as a rule, foster only a selective form of competition; in many ways, there is agreement at the highest levels on how to run the system to the benefit of those companies large enough to be included in it. Joint ventures and overlapping ownership are two ways in which companies in an oligopoly work with each other for their mutual benefit.
Although well known for their traditional media holdings, the major entertainment conglomerates are becoming increasingly active in creating content for CD-ROMs and computer mediated communications systems like the World Wide Web. “Now, Fox and Universal have their own large interactive divisions -- as do Disney, LucasArts, Time Warner, Virgin, Paramount, Turner, MGM, etc. These companies compete neck-and-neck with other big exclusive software developers like Broderbund, Interplay, Electronic Arts, Accolade and others.” (Lewinski, 1997, 41) Not only do they produce content for their own streams in competition with new media producers, but these corporations also ally themselves with new media companies; America Online, for instance, “established a joint venture in Germany with Bertelsmann, the world’s third largest publishing group,” (Meissner, 1997, 16) while ZDF, the public German broadcasting service, “together with Microsoft and NBC, runs the most ambitious Internet news channel in Germany...with 19 editors who are on Microsoft’s payroll.” (ibid, 17) In fact, “Throughout the 1990s companies like Lucasfilms and Time-Warner began to explore alliances with the biggest players in the information technology industries, and computer companies courted broadcasters. In 1996 Microsoft began a joint venture with NBC to create MSNBC -- a traditional television network, delivered by cable and satellite, with an associated Web site... By 1996 all the major television networks had established Internet footholds.” (Friedman, 1997, 179/80) Computer companies are not the only ones with an interest in electronic communications with which the entertainment corporations are allying themselves. Phone companies with an eye towards delivering digital content are also looking for partners: “Several Bells, including Ameritech and SNET have hired former Hollywood executives to negotiate strategic alliances with film studios...
The joint synergy of studio-Bells makes the following possible: movies on demand, home shopping, interactive games, educational programs and travel assistance. The alliances are win-win -- studios receive extra distribution and the Bells develop competitive programming.” (Carlson, 1996, 37/38) This was made possible partially by government deregulation of phone services, allowing them to enter fields they were previously forbidden from entering, and partially from privatization of what were once public utilities. “Forty-four [Public Telecommunications Operator]s have made this shift since 1984, generating almost US$159 billion...” (Barkow, 1997, 80) The competition brought about by deregulation and privatization has forced phone companies to aggressively pursue avenues of revenue generation which were previously closed to them. Completing this picture are the cable companies, which, along with the telephone companies, are becoming increasingly interested in exploiting their capability of distributing digital communications. According to Baldwin, McVoy and Steinfield, “The cable system is vertically integrated. Large multiple system operators have investments in program networks, often shared with other MSOs. Some cable operators own television and film production subsidiaries as well, completing the vertical integration -- that is, retailer (systems operator), distributor (program network), and producer (film or television studio). [original emphasis]” (1996, 261) At the same time, Microsoft invested
$1 billion in cable company Comcast.1 (Reid, 1997, 125) Although still largely separate, these various media may ultimately merge, a process known as convergence. “For years, most of us have had three different sets of wires and cables entering our homes and offices: one for electricity, one for conversation or computer data, and one for news and entertainment... When all of these signals are digitized, it becomes possible to carry TV pictures on the telephone wires, computer data on the TV cable, or both of them on the electric utility’s meter-checking lines. That’s convergence.” (Cetron, 1997, 19) The mergers and alliances with computer, phone and cable companies in which first tier corporations are engaged are their way of ensuring that they can maintain the control vertical integration gives them over a completely digitally converged system. As Edmond Sanctis, senior vice president of NBC Digital Productions, explains, “The whole idea is to develop media franchises and creative properties, and then float them across any platform that is viable.” (Goldman, 1997, 42) Early in the new century, the first major event in the convergence of the old media companies typified by McChesney and Herman’s seven dominant transnational corporations and new media computer-based corporations took place when AOL took over Time Warner. Time Warner owned, among other entertainment or information companies, CNN, Time and People magazines and the Warner Brothers movie studio and TV network. AOL’s assets included its Internet service, which had about 20 million subscribers, Netscape, the second largest Web browser, and MovieFone, a telephone and online movie-booking service. (Milner, 2000, A1) The attraction of Time Warner to AOL would seem to be the production companies’ content, which could be cross-promoted to its online customers. (I shall look at this phenomenon in more detail below.)
However, AOL had a more immediate purpose for the takeover: “Time Warner fills [AOL’s] need for a high-speed network with its cable business which covers 20 per cent of the United States. AOL no longer has to plead with other cable companies for access to their systems and it has a way to stop its customers from leaving for high-speed providers such as @Home.” (Evans, 2000b, B13) The high speed pipes were necessary for what some analysts see as the next phase of the Internet: video on demand. Some suggested that the advantage for Time Warner was that it had “a treasure trove of archived material that it will now be able to remarket to a vastly expanded audience.” (MacDonald, 2000, A1) This can only be partially true, however; while some of its older material may be repackaged for the Internet, it’s hard to see how AOL’s 20 million subscribers could give Time Warner more viewers than its own CNN or WB networks. A different motivation for Time Warner emerged close to two weeks later, when it announced that it was taking over British music company EMI. “Of the treasure trove of content within the AOL Time Warner portfolio, music has the biggest business potential because it is already the most pervasive and accepted form of content on the Web today. There are thousands of Web sites that accept orders for music on-line and ship CDs to customers.” (Evans, 2000a, B5) An added bonus is that because of its cable holdings, AOL Time Warner would be able to remedy the problem of slow download times for music. “If AOL Time Warner can convince [its subscribers] to purchase [its] high-speed cable access to the Internet, it would give the company a large audience for on-line music purchases.” (ibid) The immediate import of the deal was that AOL Time Warner’s purchase of EMI meant that four corporations controlled 90% of the music sold in Canada.
(Bertin, 2000, B5) AOL’s joint venture with Bertelsmann (described above) was expected to be unaffected by its takeover of Time Warner, even though Time Warner and Bertelsmann were competitors. (Milner, 2000, A8) This is another example of the interlocking nature of first tier entertainment corporations. Some commentators believe that the AOL takeover of Time Warner signaled a fundamental shift in the economics of entertainment. One claimed that the deal “has created what industry watchers are calling the new model for the media industry -- both on line and off.” (Cribb, 2000, C1) This seems to me to be highly overstated: the takeover is an extension of the logic of vertical integration to digital communications corporations. I would tend to agree more with Robert Barnard, author and co-founder of d-Code, who said, “So what’s so new? Nothing I’ve seen or read so far tells me that AOL Time Warner is going to do anything differently other than being bigger. The iMac was new, Netscape was new, but this is just bigger.” (Potter, 2000, A20) The same logic suggests that other new and old media companies will have to combine in order to compete with AOL Time Warner. “Insiders expect the AOL-Time Warner deal will open the floodgates to a number of mergers, not just between media and entertainment companies, but between media, telephone, cable television and entertainment businesses as they move to combine their resources.” (Craig, 2000, B14) So, the entertainment industry at the beginning of the century was a dizzying complex of large players allying or merging with other large players in order to increase their profitability.

Three RBOCs are attempting to form partnerships in Hollywood. Cable and telephone companies are aligning with software designers and hardware manufacturers. Broadcast networks are ‘in play,’ with Hollywood studios, cable MSOs, and telephone companies all mentioned as prospective buyers. Most of the converging companies are also buying into or creating online services, a business strategy useful in its own right and as a stepping stone to integrated broadband networks. We can expect that in the end the new industry will thoroughly integrate the businesses of television and audio production, multimedia production, program distribution, database creation and distribution, and broadband networks to the home. (Baldwin, McVoy and Steinfield, 1996, 400/401)

This is the marketplace into which individual content producers who wish to distribute their work will be entering. It is sometimes argued that the innovations which large corporations introduce into the market also benefit small players. For instance, if a workable electronic cash system were developed by a major distributor of online information, individuals would also be able to use it for their benefit. However, as we are about to see, size does matter. Large corporations have economies of scale which are not available to individual content creators; furthermore, the corporations may have the power to restructure the Internet in ways which would be of great benefit to them, but at the cost of completely disenfranchising individuals. As I hope to show, the interests of the major entertainment conglomerates are in competition, for the most part, with the interests of individual content providers. Before we look at this, however, we must ask a basic question which will affect all of the players who hope to make money by putting original content on the World Wide Web.

What is Information Worth?

Before we can determine what information is worth, we must know what information is. Shannon and Weaver suggest that information is something we didn’t know before. (Fiske, 1982) The repetition of a fact may have value, but it is only information the first time we hear it. To this definition, I would like to add that the information in which I am primarily interested in this dissertation -- prose fiction -- is a deliberate human construction (unlike the myriad information from our environment which is constantly flooding our senses). Many commentators have argued that the economics of information is different from traditional economics. (To simplify the argument, we will look at information as a generic product; later in the chapter, we shall see how specific information complicates this theory.) To explore this difference, it is necessary to look at some of the basic tenets of traditional economics. The most fundamental of these is the issue of scarcity:

Scarcity means that we do not and cannot have enough income or wealth to satisfy our every desire. We are not referring to any measurable standard of wants, because when we deal with an individual’s desires, they are always relative to what is available at any moment. Indeed, this concept of relative scarcity in relation to our wants generates the reason for being for the subject we call economics. As long as we cannot get everything we want at a zero price, scarcity will always be with us. (Miller, 1988, 4)2

Economics is an attempt to find the most efficient means of distributing these scarce resources. One important contributor to the condition of scarcity is what can be called the perishability of goods. When you use something, it is gone. When you buy and eat food, you cannot bring that food back. Even goods which seem permanent (for example, buildings) deteriorate over time and must eventually be replaced. While some goods can be renewed (for instance, food can be replaced with a new year’s crop), far more cannot. Information is not like that: when you use it, it is still there to be used by somebody else. When you’ve read a book, for instance, even if you lend the book to another person, you can still hold the contents in your memory. Two or more people can watch a recorded video or listen to a taped song, and it will still be there for them (or others) to use at a later date. Digital information is considered by many to be the paradigmatic case: if I download an article from the World Wide Web, I have a copy, but the original is still there for anybody else to access; when I email a copy of that article to a friend, we both have copies; etc. Unlike any other good in the world, any physical good, information is not depleted through use, but can be said to accumulate. In oral societies, where there was no lasting record of information, the amount of information available to anybody was the total of the memory of every member of the tribe. Since the population of tribes was more or less stable (since the number of births would more or less offset the number of deaths), the amount of information in the world was relatively stable: the amount of information in the memory of every living human being. With the advent of cave paintings and markings on stone and wood, the amount of information in the world increased: now, it was the sum of all living human memory, plus all cave paintings and everything carved into sticks and stones.
Artificial storage systems increase the amount of available information in the world. Applying this idea to the present, we can say that the amount of information available in the world equals the sum of the content of all living human memory AND all books and magazines in existence AND all television shows and movies AND all recorded music AND every digital storage system AND other storage systems too numerous to elaborate upon here.

Information accumulates.3 This facet of information affects its value. In traditional economics, the price of a good is determined by the interaction between the number of units of the good which are available and the number of people who want the good; that is, between the supply of the good and the demand for it. The relationship between supply and demand can be summed up in two very simple rules. According to the law of demand, “More of a good will be bought the lower its price, other things equal.” or “Less of a good will be bought the higher its price, other things equal.” (ibid, 37) That is to say, when we go shopping, we compare the cost of a good against how much we want it; generally, the higher the cost, the less likely we are to buy it. According to the law of supply, “At higher prices, a larger quantity will generally be supplied than at lower prices, all other things held constant.” or “At lower prices, a smaller quantity will generally be supplied than at higher prices, all other things held constant.” (ibid, 48) That is, companies will tend to produce goods with higher prices in order to make the most profits. The point at which supply equals demand is known as the point of equilibrium. Here, the number of buyers of a good is the same as the number of units of the good which producers make available. This is also the point which determines the price of the good. (ibid, 55) Because of the way it accumulates, information cannot be considered a scarce commodity, but an abundant one, and, as O’Donnell observes, “The shift from an economics of scarcity to an economics of abundance becomes painfully relevant and threatens to change the landscape dramatically.” (1998, 134) One commonsense result of the interaction of the laws of supply and demand is that as the supply of a good increases, the price per unit must go down (this is sometimes referred to as “economies of scale”).
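The logic of supply, demand and equilibrium described above can be put into a toy calculation. The linear curves and numbers in the following sketch are hypothetical, chosen purely for illustration (they are not drawn from Miller or any other source cited in this chapter):

```python
# A toy model of the laws of supply and demand (hypothetical numbers).
# Law of demand: quantity bought falls as price rises  -> Qd = a - b * p
# Law of supply: quantity offered rises as price rises -> Qs = c + d * p
# Equilibrium: the single price at which Qd equals Qs.

def equilibrium_price(a, b, c, d):
    """Solve a - b*p = c + d*p for the price p."""
    return (a - c) / (b + d)

# A relatively scarce good: the equilibrium price is comparatively high.
scarce = equilibrium_price(a=100, b=2, c=10, d=1)    # 30.0

# Identical demand, but a far more abundant supply: the price falls.
abundant = equilibrium_price(a=100, b=2, c=70, d=1)  # 10.0

print(scarce, abundant)
```

As the supply term grows without bound while demand stays fixed, the price this model yields approaches zero -- which, in this simplified form, is the trajectory the chapter ascribes to information as it accumulates.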
The abundance of information is a corollary to Miller’s argument about scarcity: abundance drives the price of information ever closer to zero. This has been true for a long time, but it has been obscured by the fact that information had to be embodied in physical form. When you buy a book, for example, most of the money you pay goes to the people who produce and distribute the tangible artifact; very little of the price of the book is actually returned to its author. “Typical author royalty rates for hardbacks range from 10% to 15%, or $2.50-$3.75 per copy [for a book with a $25 cover price].” (Eberhard, 1999, unpaginated) A similar argument can be made for pre-recorded music. The actual information content of previous media was usually the least valuable component of the artifact. Digital information releases information from its reliance on a physical container. It is true that the computer networks through which such disembodied information flows amount to a vast physical system, and that, to be useful to a human being, such information must manifest itself on a very physical computer screen, or frequently be printed on quite physical paper. Unlike a book, however, where you buy the physical artifact with the information, with digital information, you buy a machine (a computer) which is disconnected from any specific content; you choose the information you want from the abundance of it in digital form. This severance of information from its physical container has made much clearer the reality that an abundance of information drives the value of information asymptotically towards zero in the traditional economic system.4 As it happens, for most of the history of computer networks, users have shared information with no expectation of monetary reward, so this issue didn’t come up.
Before we can properly discuss the current economics of the Internet, it is worth considering how this system developed and thrived without direct economic incentives.

The Gift Economy and Generalized Exchange of Public Goods

For much of its existence, the Internet was a non-commercial place to obtain information. The general impression, which persists among many people, was that “One of the keystones of the Net is free stuff.” (Zgodzinski, 1988, F10) This seems to fly in the face of the common belief that content providers would not create anything for the Internet unless they were financially rewarded for it. Another model had to be applied to the Internet: one was known as the “gift economy.” “The culture of the Internet is marked by a circle-of-gifts mentality, according to which people produce materials and contribute them to a common stock on which they draw themselves.” (O’Donnell, 1998, 96) Gift economies dominated ancient tribal cultures; although gift-giving certainly continues in modern societies (for instance, for weddings, anniversaries and birthdays), it does not have a central place in our economy. To better understand how gift giving may have been the basis of Internet culture, it is necessary to see how theories created to explain the behaviour in tribal cultures might apply to this new social grouping. To begin, we have to go beyond traditional concepts of selflessness. “Gift giving is often described by sociological theorists as a process of exchange through which individuals rationally pursue their self-interests... According to the exchange theorists...the generosity that we observe in gift giving is only an apparent altruism. In reality...giving to others is motivated by the expectation of some reward....” (Cheal, 1988, 7) Since the reward in a gift economy is, by definition, not economic, we must look elsewhere to understand what motivates people to participate in such exchanges. Yan claims that “It has been widely recognized that gift giving is one of the most important modes of social exchange in human societies.
The obligatory give-and-take maintains, strengthens and creates various social bonds...” (1996, 1) People who participate in gift economies, therefore, do so as a means of building and maintaining relationships to others. Rheingold argues that the ease of distributing digital information helps this process in the online world:

I have to keep my friends in mind and send them pointers instead of throwing my informational discards into the virtual scrap heap. It doesn’t take a great deal of energy to do that, since I sift that information anyway in order to find the knowledge I seek for my own purposes; it takes two keystrokes to delete the information, three to send it to someone else. And with scores of other people who have an eye out for my interests while they explore sectors of the information space that I normally wouldn’t frequent, I find that help I receive far outweighs the energy I expend helping others: a marriage of altruism and self-interest. (1993b, 68)

Implicit in this model of a gift economy is the concept of reciprocity. “Interpersonal dependence is everywhere the result of socially constructed ties between human agents. The contents of those ties are defined by the participants’ reciprocal expectations. It is these reciprocal expectations between persons that make social interaction possible, both in market exchange and in gift exchange.” (Cheal, 1998, 11) When we give a birthday gift to a friend, to take one example, most of us assume that we will receive a comparable gift when our birthday rolls around. When we put up a site on the Web, on the other hand, we do not expect everybody who visits the site to give us the URL to their site in return (in fact, many if not most of those visitors may not even have a site on the Web). Relationships between information providers and computer users are, therefore, for the most part, asymmetrical, although, as Rheingold pointed out above and for reasons we shall consider in further depth below, one could have a reasonable expectation of receiving more information from the Internet than one put on it. Traditionally, gifts have been physical objects, but there seems to be no reason why the theory cannot be stretched to accommodate digital information, which need not have a physical form. Perhaps more importantly, “Gift transactions almost always occur between individuals who possess the kind of reciprocal interpersonal knowledge that can only be acquired in face-to-face interaction.” (ibid, 174) Face to face interaction clearly need not take place in relationships conducted over computer networks, where the participants may never physically meet, or, indeed, have any personal contact whatsoever (as in the case of a user who downloads a Web page). This would seem to suggest that online information exchange does not fall under the gift economy model. 
Another traditional feature of gift economics is that, “To be given as a gift an object must be alienable, in the dual sense that the donor has the right to renounce ownership of it and that the recipient has the right to possess it as his or her own property.” (ibid, 10) By this theory, an object is not a gift if the giver can reclaim ownership and retake possession of it. As we have seen, though, information does not work this way: I can give it to others and still keep a copy for myself. Information is not alienable. The concept of the alienability of a gift has been challenged. “A new approach to the study of the gift gradually emerged in the 1980s, emphasizing the inalienability of objects from their owners.” (Yan, 1996, 10) In this view, although the gift-giver may give up possession of an object, it is imbued with his or her spirit, which can never be given up. This is, perhaps, closer to the spirit of online information exchange, although few people would think of information in this way. When considering the gift value of information, an important thing to remember is that the first users of computer networked communication were primarily university researchers. (Rheingold, 1993a) There is a culture of sharing information in the academic community; it is important for academics to get published in peer review journals, for instance, even though there is no financial reward to do so. To be sure, publishing articles has the potential to help academics advance their careers, particularly those who are attempting to get tenure. However, it is also true that, to the extent that academics see their role as expanding the base of human knowledge, freely flowing information has always been a major part of academic culture, a part which greatly informed the early culture of computer mediated communications. This was augmented by one of the first groups to take up CMC after it grew beyond the academy: former hippies.
(ibid) Many of these people felt that the new form of communication could help them spread their communitarian beliefs, and were attracted to the ARPANet and Internet because they thought that the free flow of information would further their utopian goals. These two cultures contributed to the development of the Internet as a place to exchange information at no cost; these beliefs would likely have continued to dominate had these two groups remained the majority of Net users. However, as the Net has expanded, especially with the popularity of the World Wide Web, the number of people who use it who belong to neither group, and, therefore, do not have allegiance to the belief in the free exchange of information, has grown substantially. Moreover, a computer user can download information from a Web site anonymously, without entering into any sort of relationship with the person who created the site. To be sure, personal relationships can develop between Web designers and the people who visit their pages, but we don’t know how often this occurs, or how strong such ties are. As it happens, personal relationships are not the only type of relationship which might benefit from gift-giving. Another “type of social reproduction does not necessarily involve intimate relations (although it may do), and is often conducted through forms of communal action. It consists of the reproduction of social, rather than personal, relations [note omitted].” (ibid, 90) This may better explain why information is often freely exchanged on the Internet, even among people who may never know each other; as Sproull and Kiesler point out, “open-access networks favor the free flow of information. 
Respondents seem to believe that sharing information enhances the overall electronic community and leads to a richer information environment.” (1993, 116) Unlike personal relationships, which exist between individuals, “...communal relations may involve very large numbers of people. Such ties are inevitably specialized in content and limited in emotional involvement. Communal relations involve actors who share specific interests and whose knowledge about each other may be limited to what is necessary in order to get things done.” (Cheal, 1998, 108) This could describe much of the information exchange on the Net, which is often described as a collection of communities of interest, particularly in newsgroups and other areas organized by subject matter. Although the Web is not organized around subject matter, it could be argued that people tend to go online to search for specific information, allowing them to form loose communities around specific pages or clusters of pages on any given topic; in the last chapter, I tried to show that just such a community was being formed around fiction writers. Rather than think of information exchange on the Internet as an exchange of gifts, which does not seem completely accurate, one writer refers to it as a “generalized exchange,” which “is both more generous and riskier than traditional gift exchange. It is more generous because an individual provides a benefit without the expectation of immediate reciprocation, but this is also a source of risk. There is the temptation to gather valuable information and advice without contributing anything back. If everyone succumbs to this temptation, however, everyone is worse off than they would have been otherwise: no one benefits from the valuable information that others might have. 
Thus, generalized exchange has the structure of a social dilemma -- individually reasonable behavior (gathering but not offering information) leads to collective disaster...” (Kollock, 1999, 222) Some argue that, if rationality suggests we take information without giving any in return, there must be other reasons why people contribute to general exchanges such as the Internet. For example, “...the process of providing support and information on the Net is a means of expressing one’s identity, particularly if technical expertise or supportive behaviour is perceived as an integral part of one’s self-identity. Helping others can increase self-esteem, respect from others, and status attainment.” (Wellman and Gulia, 1999, 177) Thus, writers who offer constructive criticism to each other may do so in order to show off their own knowledge of writing, or to make themselves look better to other members of the writing community. While these types of personal motivations undoubtedly play their part, I do not believe it is necessary to resort to them to resolve the dilemma of why people contribute to general exchanges. With a traditional exchange, one unit is given and another received. If you wanted five units of information from five different people, you would need to exchange information five times (which could require you to have five different units of information to exchange, since they might have different needs, although it might sometimes work out that you could offer each the same information). With generalized exchange, you enter your unit of information into the pool and can draw on the information which already exists in it; in a single transaction, you can obtain more units of information than you could with a traditional exchange. For this reason, Kollock’s suggestion that individuals who gather but do not offer information threaten the system is perhaps overstated. 
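The arithmetic behind this comparison can be sketched in a few lines of code (my own illustration, not drawn from any of the sources cited; the counts come from the hypothetical five-partner example above):

```python
# Toy comparison of the two exchange models discussed above.
# In a traditional one-to-one exchange, obtaining a unit of information
# from each of five people requires five separate transactions; in a
# generalized exchange, one contribution to a shared pool grants access
# to everything already in it.

def traditional_exchange(partners):
    """Each unit received requires its own give-and-take transaction."""
    transactions = partners      # one trade per partner
    units_received = partners
    return transactions, units_received

def generalized_exchange(pool_size):
    """One contribution to the pool; the whole pool becomes available."""
    transactions = 1
    units_received = pool_size   # everything already contributed by others
    return transactions, units_received

print(traditional_exchange(5))   # five transactions for five units
print(generalized_exchange(5))   # one transaction for five units
```

The sketch simply makes the ratio explicit: the generalized model delivers the same number of units for a fifth of the transactions, which is why the free-rider problem may be less corrosive than Kollock suggests.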
A rational person, knowing that if nobody contributes to the general pool of information, it will stagnate and die, and realizing that there are great benefits to its existence which outweigh the minimal effort any single person must take to keep it going, will reasonably decide to contribute. To be sure, some will try to calculate the minimum amount they need to put in to reap the rewards. It’s also true that some will not contribute. The question is, how tolerant can a general exchange system be of non-contributors? Suppose ten people read a collaborative short story on the Web and five are moved to contribute new material. For their one contributed unit of the story, they have each received four new units of the story. This may well be enough for them to feel that they have received fair value, even though others who read the story did not contribute. (In fact, the way the Web works, most people other than the creators of a page will not know how many people have accessed it, so they will likely not be aware of how many people are not contributing.) How tolerant different communities on the Web are of non-participant users of their information (referred to online as “lurkers”) is an interesting question which requires more study. Generalized exchange leads to the creation of public goods, the term “good” referring not to commodities but to boons. Kollock claimed that public goods were easier to create and maintain on digital communication networks than they were in the physical world. For one thing, “To the extent costs are lowered, the more likely it is that individuals will take part in the collective action.” (1999, 224) Because it’s easier for an individual to contribute information through an online generalized exchange, it’s less likely that the person will lurk. 
For another thing, “The fact that many of the public goods produced on the Internet consist of digital information means that the goods are purely nonrival -- one person’s use of the information in no way diminishes what is available for someone else.” (ibid, 225) Public goods in the physical world, such as the shared common lands which were spread throughout Europe until the 16th century, require much greater coordination because the resource can often be used up; the incentive for individuals to get as much as they can out of such public goods while giving back as little as possible is, therefore, greater than with digital public goods. Finally, “while the provision of many public goods requires the actions of groups...the nature of a digital network turns even a single individual’s contribution of information or advice into a public good.” (ibid) The common lands often required farmers to coordinate their efforts to maximize their benefit from the land; with digital information, people add to the store of ideas with a minimum of coordination. If there is a threat to the public good of the general exchange of information on the World Wide Web, it arises out of the fact that it has become a site for a tremendous amount of commercial activity. A large number of corporate Web sites have been created for the purpose of promoting products, which has inspired individuals to seek financial remuneration for their Web sites, potentially lessening the amount of work available in the common pool. Advertising is virtually ubiquitous. “Traffic suggests that half of all pages sent over the Web every day contain an ad.” (Wallich, 1999, 37) This creates a tension between two basically different systems: “It is the extended reproduction of...relationships that lies at the heart of a gift economy, just as it is the extended reproduction of financial capital which lies at the heart of a market economy. 
Between these two principles there is a fundamental opposition, as a result of which any attempt to combine them is likely to result in strain and conflict [note omitted].” (Cheal, 1988, 40) Commercialization does not mean an end to material being freely shared on the Internet; some people will continue to offer information at no financial cost. “...[D]espite the enormous changes associated with capitalist modernization, gift transactions continue to have a vital importance in social life.” (ibid, 19) Indeed, as we saw in Chapter Two, almost all of the writers surveyed for this dissertation put their information on the Web without expectation of making money (although many harboured vague hopes of doing so). However, what commercialization does is marginalize other forms of exchange. As commercial sites proliferate on the Web, it becomes harder and harder to find non-commercial sites, which become a smaller and smaller percentage of the whole. Commercialization does not affect different uses of digital communication networks in the same way: email, for example, remains dominated by the free exchange of information between individuals. Still, even though some form of generalized exchange may continue to exist on the Internet, many people will place pages on the World Wide Web in the hope of making (or visit them with the expectation of spending) money; in addition, as we have seen, major economic forces are at work to exploit the medium for their profit. So, we must go back to finding an economic model which would make this work. As it happens, one which takes into account the vanishingly small value of generic information has been developed for the Net: it is known as micropayments.

Micropayments

As commerce slowly began to develop on the Internet in the 1990s, the most common way of paying for goods and services was with a credit card. There was a practical lower limit on what could profitably be sold, however: $10. This was because, “It costs large national acquirers somewhere in the neighborhood of 19 to 20 cents to process a card transaction, according to analysts’ estimates. Thus, using a credit card to buy something on the ’Net for a nickel will be a money loser.” (Patricia Murphy, 1998, 50) The cost of using a credit card for a purchase under $10 was greater than the amount of money a merchant could make on the sale. As a result, there was no mechanism by which consumers could buy information at its true value. “Currently, most minor services [on the Internet] are provided free of charge because it is impossible to get Web consumers to pay for them.” (Chartier, 1999, 28) One method of dealing with this problem is to aggregate content. Collect enough information in one place, and you can charge more than the minimum $10 for it. This is common enough in the world of newspapers, magazines and articles collected into books. The problem with aggregating content on the Net (as, indeed, it is a problem with newspapers, magazines or articles collected into books) is that the consumer must pay for information he or she does not necessarily want. Online, where space is not nearly as costly as in print, the temptation for the aggregator is to increase the amount of information available at his or her site, allowing him or her to charge more; but, for most consumers, this means paying increasing amounts of money for information with a decreasing amount of overall usefulness. Moreover, it seems to go against one of the advantages of digital information: the ability of a reader to choose information. There is no reason inherent in the technology for a consumer to buy any information other than that which he or she specifically wants. 
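The break-even logic described above can be sketched in a few lines (an illustration of my own: the 20-cent figure is the analysts’ estimate quoted above, while the 1-cent micropayment fee is a hypothetical stand-in for a broker’s cost):

```python
# A minimal sketch of why a flat per-transaction fee sets a floor on
# viable prices. The 20-cent card-processing cost comes from the
# analysts' estimates quoted in the text; the 1-cent micropayment fee
# is a hypothetical assumption, not a documented rate.

CARD_FEE = 0.20   # approximate cost to process one credit card transaction
MICRO_FEE = 0.01  # assumed per-transaction fee under a micropayment broker

def net_revenue(price, fee):
    """What the merchant keeps on a single sale after the flat fee."""
    return price - fee

# A five-cent download loses money under card-processing fees...
print(net_revenue(0.05, CARD_FEE))   # negative: the sale is a money loser
# ...but clears a (tiny) margin under a micropayment fee.
print(net_revenue(0.05, MICRO_FEE))  # positive
```

Whatever the exact fee values, the structure of the calculation is the point: any fixed cost per transaction makes sales below that cost irrational, which is why content priced in cents required a cheaper settlement mechanism.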
Digital cash seemed to be a way out of this dilemma. In the mid-1990s, several schemes to create an electronic version of money were developed; Mondex, DigiCash, Cybercash and First Virtual, among other companies, vied to offer a form of currency which could be spent over digital communications networks. A typical digital cash set-up would go something like this: merchants with wares to sell and a presence on the then-emerging World Wide Web (or who were willing to set up on the WWW) would sign up with one of the digital cash companies. They would use software from the company to allow them to accept digital cash over the Internet. Consumers would sign up with the company and download software which would allow them to connect to the merchants who had previously made arrangements with the company. The consumer would then have to transfer money from a bank or (most frequently) through a credit card to his or her online account. (Some systems also allowed consumers to transfer this money to cards which could be used to purchase goods from vendors in the real world.) Only after all of these steps were taken could online transactions take place. (Godin, 1995) One of the major advantages of digital cash is that it automates the processing of orders, which no longer requires the shuffling of paper. “The cost of processing credit card transactions is high because the merchant has to ask the credit card issuer to verify the card holder’s ability to pay for each transaction. Micropayment schemes eliminate this costly step. 
The micropayment system broker - typically a bank - usually simply verifies that the encrypted serial number on an electronic token or purchase order is valid.” (Patch and Smalley, 1998, 72) This means that, “The system can handle financial transactions as little as a few cents...” (Harrison, 1999, 16) Some of the early experimenters with digital cash were able to do just that: “MilliCent supports charges as low as 1/10th of a cent; IBM Micro Payment: 1 cent; BT Array: 10 pence; and CyberCoin: 25 cents.” (ibid) Had this system worked, it would have allowed consumers to pay the market value for small amounts of information (a single newspaper article, for instance, or a short story). On the one hand, it would allow them to be more specific about what information they consumed, since what they bought need no longer be aggregated with information which they did not want. On the other hand, micropayments would encourage consumers to explore the possibility of purchasing information they didn’t have a prior interest in since “there is little or no [financial] risk” involved when paying such small amounts per article or story. (Balfour, 1998, 23) Moreover, micropayments enabled through digital cash systems would have given even the smallest producer (such as a fiction writer currently on the Web) a mechanism by which she or he could make some money. Patch and Smalley offer one example: “The Guitar Heroes Web site, based in St. Paul, Minn., offered songs that customers could play along with for 25 cents in the MilliCent trial and made $75 in test money in one month. 
Each day of the month Magic had 120 downloads at $2 to $5 each, earning about $450 in test money.” (1998, 72) The amount of money per transaction may not be large, but “millions of transactions worth even a few cents represent a very impressive flow of income.” (Mosley-Matchett, 1997, 10) Even a writer whose page only attracted hundreds or thousands of readers willing to pay a fraction of a cent to download a story had the potential to make more money than if she or he tried to sell it to a magazine. Unfortunately, there were several problems with digital cash. For one thing, setting up an account to sell products cost money: “Subscriber enrollment [in CommerceNet] costs $400 a year for U.S. companies and U.S. subsidiaries, and $800 a year for non-U.S. organizations. There is a one-time $250 initiation fee.” (Godin, 1995, 214) Individuals with a small number of works to sell might not make that much money in micropayments in six months or more. This likely discouraged many people who might have been interested in participating in this form of commerce. Another problem with proposed digital cash systems was that they required a lot of effort on the part of consumers to set up. “Typically, consumers must open an account with a micropayment system before using it and then download “wallet” software to use with their browsers. Depending on the system, customers can either run a tab that is paid with a credit card when a set dollar amount is reached or they can buy ‘funds’ to spend later.” (Machlis, 1998, 39) In order to minimize fraud, one system, First Virtual, required that users respond positively to an email message asking if they had authorized every single purchase. 
(Godin, 1995, 207) Consumers already comfortable with using credit cards to make purchases in the real world were more likely to transfer this purchasing behaviour to the online world, especially given the fact that most digital cash schemes required a credit card to be set up in the first place. Finally, there was the chicken-and-egg problem which plagues many new technologies: “...consumers didn’t want to download unproved e-commerce software without an attractive range of things they could buy. But most Web firms weren’t willing to invest in digital-cash servers and parcel up their sites into easily saleable chunks without a guaranteed audience of willing buyers.” (Wallich, 1999, 37) The fact that there were many competing firms exacerbated this problem, since consumers couldn’t be guaranteed that the digital cash account they set up today would allow them to buy the goods they wanted, or even be in service tomorrow. The results of these problems were predictable: the first efforts at creating a digital cash system failed. “First Virtual, which billed itself as the first Internet bank, has abandoned the business altogether; DigiCash...is in Chapter 11 reorganization, and its only telephone number leads to a message from the company’s ‘interim president’ saying he no longer listens to messages left there. Ostensible market leader CyberCash has stopped offering ‘cybercoin’ transactions in its U.S. software. In the U.S., at least, all the banks that once supported micropayments have taken their resources elsewhere.” (ibid, 37) The lack of acceptance of the first wave of digital cash illustrates an important principle in technological adoption: new technologies will not be able to compete with existing technologies unless they allow users to do something they could not do with the existing technology, or to do something they do with existing technology more easily. 
It is true that digital cash allows producers to divide information into increasingly smaller units, giving consumers greater control over what they can buy, usually at a smaller price. However, this advantage was outweighed by the fact that credit cards were much easier for consumers to use than digital cash, which required too steep a learning curve. As Amy Larsen pointed out, “ease-of-use may be the most important factor in determining if a new online payment method gains acceptance.” (1999, 46) Many producers recognize the advantages of micropayments, however, so, although they were “long ago cast by the side of the Infobahn as unrealistic technology, [they] are on the comeback trail.” (Kerstetter, 1999, N12) A second wave of companies is currently creating new digital cash schemes. As more people become comfortable with the idea of spending money over digital communications networks, they may be more willing to accept the idea of digital cash. Perhaps with this in mind, “Datamonitor, a London-based research firm, recently estimated that by 2002, micropayments could account for 12% of the total projected U.S. online purchases of $12.5 billion.” (Machlis, 1998, 39) There is also the possibility that if more people do more of their banking online, banks will develop their own workable micropayment schemes. In the meantime, companies and individuals continue to put information on the Web and continue to dream of making money from it. Since the option of digital cash hasn’t been open to most of them, they have had to fall back on more traditional methods of valuing information. The first step is to create a popular conception about one’s product in the mind of potential consumers. That is, to create a brand.

Branding

In traditional economic theory, we decide what commodities to buy based on an assessment of whether or not they will satisfy a given need. Thus, if we are hungry, we are willing to pay a lot for a banana, but are not likely to be willing to pay much for a Mont Blanc pen. The reason we can make such judgments is that the information about the product is separate from the consumption of the product. We can know what a banana and a pen are without having to purchase them. When information is the product, however, it is difficult to divorce assessment from use. To know if a specific newspaper article has information we need, we must read the article; to know if a specific film will give us pleasure, we must watch it. In the computer software realm, one method of dealing with this problem is known as shareware. In this model, creators give away their product for nothing, asking the people who make copies of it to pay a certain amount (sometimes set, other times whatever the user thinks is fair) if they have found it useful. Unfortunately, this is, at best, a haphazard way of making a living; most people exposed to shareware do not seem to pay for it. Some commentators have suggested that this is because of a basic flaw in human nature. Perhaps. I would suggest, however, that it points to a fundamental paradox in the nature of information as a commodity: one has to be exposed to information to be able to determine if it has value; but, that very exposure to information lessens its value. Some online payment systems take this into account. First Virtual, for instance, was considering allowing its users to look at information before deciding whether or not to pay for it, in a more formalized version of the shareware concept. 
However, “To ensure that customers do not abuse the privilege of trying before they buy, First Virtual may limit the number of times a consumer may evaluate information products without paying for them.” (Godin, 1995, 132) This system could not guarantee that information would never be used without payment (especially if users signed on to a variety of digital cash systems, each allowing free access to a specified amount of information), but it could stop some abuses. One way out of this paradox is branding. A product brand is “a set of expectations and associations that a given community has about a product, and attaching a brand to one’s content stream is a way of enabling satisfied consumers to get ‘more like that...’” (Agre, 1998, 92) You read The New York Times in the morning because, being familiar with the brand, you have a reasonable expectation that it will give you news which has value to you. In a similar fashion, you go to a film starring a particular actor because you have seen that actor’s films before and have a reasonable expectation that you will enjoy this one. (Film aficionados may go to a film because of a specific director, or even a specific special effects house.) Of course, that particular issue of the newspaper or that particular film may not satisfy your needs; however, sooner or later a consumer’s loyalty to a brand will fade away if the brand does not satisfy the person on a regular basis, so it is necessary for a producer to maintain product consistency. Branding is considered an important part of the marketing of an entertainment franchise by entertainment conglomerates. Branding is also a solution to the larger problem of determining the value of information in a time of abundance, since brands create a form of scarcity. 
“A successful branding program creates in the consumer’s mind a perception of singularity, that there is no other product on the market like ‘The Brand.’” (Diekmeyer, 1998, E2) There are millions of fiction books in the world. However, fewer than a dozen of them were written by Thomas Pynchon. Even the output of so prolific a writer as Isaac Asimov, whose works number in the hundreds, represents a very small fraction of all the books available. Readers will seek out these authors’ books, as opposed to those of less well-known writers, because they believe those books have qualities which the books of no other writers have. There are two types of brand which have very different effects and consequences. The first is direct branding, where a consumer keeps buying the same product from a producer because he or she expects it will continue to fill his or her needs. Thus, a person might pick up a Globe and Mail every day because he or she believes, based on past experience reading it, that it will deliver a certain level of international or business news. (On the other hand, a person might pick up the Toronto Sun in order to obtain high-quality sports reports -- branding occurs at all levels of perceived quality.) The other type of branding is associated branding. This occurs when a company attempts to associate its name with a product for which it is not necessarily known. When the Globe and Mail makes an information database available online, for instance, it can use its reputation as a source of print information to attract customers for its online venture. This type of branding is most commonly associated with film: when Disney releases a new cartoon, for example, a wide variety of products associated with the film also enter the marketplace. 
These may include: a soundtrack CD; videogames based on the film; action figures of characters in the film; TV specials on the making of the film; a book based on the film (as well as other books loosely based on the characters or situations in the film); tie-ins with restaurants or food and beverage manufacturers; mugs, bedspreads, keychains and other products which can carry images from the film; and so on. Direct branding can be used by individuals; when this happens, it is sometimes called personal branding. Associated branding, on the other hand, requires large expenditures in marketing, since this makes the brand attractive to creators of other products, making them more likely to want to associate themselves with the brand; it is, therefore, only open to the largest entertainment corporations. A small number of personal brands which originated on the Internet have migrated beyond its borders to become known in the larger culture. Matt Drudge’s The Drudge Report, for instance, has developed a reputation for delivering information which other news sources (including most traditional ones) will not report. This led him to a cross-over career in traditional media: as the anchor of a weekly television show on Fox News Channel and the anchor of a two-hour weekly radio show on the ABC network. (“ABC signs Drudge,” 1999, D14) However, a much larger number of associated brands have migrated from the real world to the Net. Most films, television shows and books released by major entertainment corporations now boast Web addresses. Many corporations consider a Web presence an important part of their larger promotional efforts. According to Lynda Keeler, vice president marketing of Columbia Tristar Interactive/Sony Pictures Entertainment, “First, SPE has properties, Wheel [of Fortune] and Jeopardy, that are big mass-market brands. They’re proven audience pleasers with an inherent interactive game play to them... 
It’s a natural for us to look for other ways to extend the brand... A percentage of the show’s fan base is online; plus other people online are looking for a destination for fun, which we hope to deliver.” (Cury, 1997b, 50) Paramount Digital Entertainment President David Wertheimer says much the same thing: “Our focus has been on leveraging the brands in the online world that Paramount has [created in the real world] and building online places for fans to congregate, and really look at how to build a business around networks and multimedia entertainment.” (Goldman Rohm, 1997, 116) For the most part, major entertainment producers have been content to put up Web pages with little original content; in fact, some are nothing more than blatant advertisements for the real world product. While this may satisfy fans of the original work, it does little to attract others. For this reason, developers of online material are starting to develop content associated with real world works which is, itself, original, offering an experience which cannot be obtained anywhere else. For example, “...HBO online will again venture into uncharted territory with a virtual reality companion piece to the upcoming HBO series From the Earth to the Moon, about NASA’s Apollo space program. The TV component, produced by Tom Hanks and airing in early ‘98, will consist of 13 one-hour episodes. Webheads inspired by the mini-series can get a pseudo-lunar experience of their own by tuning into the site and taking a VR trip to the moon and (with luck) back.” (Ivry, 1997, 28) The Web site for the television show 3rd Rock from the Sun features: trivia contests; chat rooms with stars and producers; behind-the-scenes video and audio; episode scripts, including material that didn’t air; and humorous features. (Goldman, 1997, 42) Associated branding has more effect on consumer choice than direct branding. 
With direct branding, there is a single product around which to build a reputation. With associated branding, any of a hundred products may gain an individual’s attention. You might see the film first. Or you might buy the book first. You might hear the single from the soundtrack on the radio first. You might see an image on a t-shirt. One real-world example should suffice: “Time-Warner produced the film Space Jam using a Warner Brothers cartoon character, of course. The film was plugged shamelessly in Time, Inc. magazines -- Sports Illustrated for Kids even ran a 64-page special issue devoted to it. The soundtrack was released on Warner Music, and included a roster of Warner Music artists in its track-list. That’s film, print and music...oh, of course: ads for the movie aired during basketball games on Time-Warner controlled TBS Sports TV and on CNN, along with ‘The Making of Space Jam’ specials.” (Spiegelman, 1998, 16) Any individual cultural artifact may lead you into the entire chain. Moreover, every additional product which carries a brand reinforces knowledge of all of the other products in the chain in the mind of the potential consumer. Some commentators are very wary of this process. “In spite of the utopian promises made by the promoters of the Net, I didn’t notice traditional media powers getting any weaker” writes John Seabrook.

On the contrary: Instead of distributing power to the edges of society, the Net offered the media megamachine a new way of consolidating its hold. The Net would not develop into a revolutionary new medium that replaced existing media -- the people who used that kind of rhetoric (like me, in my newbie days) were like fog machines. They obscured the truth. What was more likely to happen, it now seemed to me, was that the few advantages and innovations that the Net offered would be seized by the megamachine and used to further entrench itself into our daily lives. And with the growth of corporate Web sites, it appeared that one of those innovations was a new way of marketing off-line goods and content. Net dot marketing got into your head in the same way that, say, MTV got into your head -- it worked the brand and the desire to have it right into your cortex, like the mink oil I was forever massaging into my leather boots, to soften them. (1997, 241)

Or, as another person put it, even on the Web, “at the end of the day the big brands win, and the little brands lose.” (Kline, 1997, 65) Others believe that the Internet shifts power away from large brands. Esther Dyson, for instance, states, “I’m not saying everybody has the power to become Disney, but people have the power to suck a little power away from them. It does create a flatter landscape.” (Nee, 1998, 118) It is true that Matt Drudge takes a little attention away from the major mainstream news outlets. However, it is too early to know whether this can be duplicated by thousands of other Web sites run by individuals or, more specifically, if this type of success will come to writers of fiction. Another argument is that the proliferation of brands calls their effectiveness into question. According to Silicon Valley marketing specialist Regis McKenna, “‘Other’ owns the leading market share of personal computers, cookies, tires, jeans, beer and fast foods. Since 1984 American television viewers have been watching ‘other’ more often than the three major networks. Brand names do not hold the lock on consumers they once held.” (Davidow and Malone, 1992, 221) While there may be some validity to this, the truth is that most of the ‘others’ McKenna is referring to have their own brand (i.e., MTV and CNN, competitors to the three major networks). It seems a little premature, therefore, to say that “The more [brands] strive to please the masses, the more we see of the same -- everywhere, all the time -- the less appealing our brands become.” (Abramson, 1999, 58) In any case, while branding is an important step in being able to differentiate between different kinds of information delivered by computer mediated communications systems, it isn’t, by itself, an economic model: examples abound of real world information producers who have not been able to profit from their brand in the online world.
For instance, “Although the success of Wired magazine is truly remarkable, many of the subsequent spinoffs of Wired Ventures have not enjoyed such good fortune. Wired Digital, the branch of Wired Ventures Inc. online, has reported huge financial losses. Despite the fact that the magazine’s online equivalent, HotWired, has been critically acclaimed and is one of the busiest Web sites on the Internet, profits have remained elusive.” (Stewart Millar, 1998, 82) To understand why existing corporate information brands have had little success on the Web, we need to look at the effectiveness of traditional economic models in the online environment.

Traditional forms of income

“Until now, the biggest lie on the Internet hasn’t been about alien abductions. It’s been: ‘Don’t worry, the Web will make money.’” (“Pandesic advertisement,” 1998, 157) One approach to generating income from information online would be to transfer traditional models from other media. These include advertising and subscriptions. As the Pandesic ad quoted above suggests, these have largely failed. (Keep in mind, though, that Pandesic’s business is supplying shovels for those under the spell of the new Gold Rush, so it’s in the company’s interest to hold out the possibility that making money is now possible.) This section will look at these models.

The Advertising Model

Advertising on the World Wide Web has shown a large growth curve: “The Internet Advertising Bureau reported that Net ad revenues totaled $906 million in 1997, up 240 percent from the previous year.” (Danko, 1998, 49) Online ad revenue has the potential to be quite substantial: “The research firms of Jupiter Communications and Forrester Research have both projected that ad spending on the Web will approach US$5 billion by 2000. This bonanza will make up more than 90 percent of total revenues for content-providing Web sites...” (Madsen, 1996, 206) However, to put this in perspective, “Web ad spending in the second quarter of 1996 was about $43 million, up 347% over the fourth quarter of 1995, but a small fraction of the $60 billion spent on traditional advertising.” (Voight, 1996, 196) We are all familiar with advertising, from the pages of newspapers and magazines to the periodic interruptions of television shows. Advertising is adaptable, taking a different form in each medium: on the Web, advertising to date most often means banners. Banner advertisements usually appear at the top of a Web page (the first thing a user sees), often loading before the page’s content (the first thing a user can see).
Some have suggested that the banner is not an effective advertising space: “It’s a skimpy piece of acreage to work within -- smaller than a cereal box top, and limited graphically by the need for quick downloads. Sized at 480-by-55 pixels or smaller, the typical ad banner occupies less than 10 percent of a 640-by-480 screen display. As the computer standard has moved to a finer-grained 832-by-624 screen, the banner looks even smaller.” (Madsen, 1996, 208) This is compounded by the fact that some sites fill their first screen with advertising banners of different sizes, which means they compete visually for the user’s attention, but, perhaps more damaging, the user can simply scroll down (or link) to the content and ignore all of the advertising. Another drawback of banner ads is that they can be turned off. Web browsers allow computer users to suspend their ability to see graphics, a necessary function for those whose connection to the Internet is slow since it allows them to maximize the amount of information they can access while minimizing their connection costs. When the graphics function is disabled, all graphics, including advertisements, appear on the user’s screen as a generic graphic. “There’s some evidence that more experienced users turn browser graphics off, a trend that might make advertisers uncomfortable and cheer retailers of high-speed modems. Only 16 per cent of first-year subscribers who use the Internet daily say they frequently turn graphics off, but that rises to 32 per cent for people who have at least three years of experience.” (Solomon, 1998, 13) Moreover, for those who do want to access graphics on the Web, but not ads, programs have been created which remove ads from sites before Web pages are downloaded. Ad blocking software goes by names like WebWasher, InterMute and AtGuard.
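The screen-share figures in the Madsen quotation above are easy to verify with a little arithmetic; the short sketch below does nothing more than that (Python is used purely for illustration, and all numbers come from the quotation itself):

```python
# Verify the banner-to-screen ratios quoted above: a 480-by-55 pixel
# banner on 640-by-480 and on finer-grained 832-by-624 displays.
banner_area = 480 * 55  # 26,400 pixels

for width, height in ((640, 480), (832, 624)):
    share = banner_area / (width * height)
    print(f"{width}x{height}: {share:.1%} of the screen")
```

As the quotation claims, the banner covers under 10 percent of a 640-by-480 display (about 8.6 percent), and the share shrinks to roughly 5.1 percent on an 832-by-624 screen.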
“Many online advertisers dismiss the trend toward ad-blocking, noting that when faster connections are available, consumers will not be so annoyed about being forced to download cumbersome advertisement files. ‘Consumers understand the basic proposition that all the free things are enabled by advertising,’ says the chairman of the Internet Advertising Bureau.” (“And viewers fight back against Web ad overload,” 1999, unpaginated) This may have been true of older media; however, given that a substantial amount of the content on the Web has not been enabled by advertising (that is, it is available for free), and that many users of computer networks are members of online communities which still subscribe to the ethos that information should be free, it seems, at best, to be a dubious assertion. In any case, the basis of advertising is getting as large a number of people as possible to take in your information, on the assumption that a fraction of them will be motivated by the ad to buy your product. Despite the Web’s reputation as having the capability of personalizing advertising messages, “the advertisers with the throw weight to unleash the online economy are eager for audience consolidation. Old-fashioned economies of scale means delivering messages more cheaply and efficiently.” (Beato, 1997, 193) As a result, “Since more users equals more ad dollars, media outlets -- old and new -- strive to reach more eyeballs.” (Behar, 1998, 48) Some numbers may help put the Web in perspective in this regard: CNN has 69 million weekly viewers; ESPN, 53.5; TV Guide has 39.2 million weekly readers; Time, 25.2; Newsweek, 22.
(ibid) While these numbers may be a little misleading (for one thing, different media have different production costs, meaning they can become profitable with far different levels of auditorship), they are useful as a general basis of comparison: “The 120,000 hits a day that even popular Web sites such as Salon brag about are eclipsed at least twentyfold in typical TV channel-surfing downtime.” (Eisenberg, 1997, 68) As we saw in Chapter Two, writers on the Web are happy to measure the hits their pages get in the low thousands. This helps explain not only why the level of advertising on the Web is not nearly as great as that of established media, but why individual pages cannot command the amount of ad revenue that works in other media can. Furthermore, the method used to measure how many people see ads on Web sites, page hits, is in serious dispute. “The number of hits is not the same as the number of visitors or even visits to a given page. It is a measure of the number of files loaded for any given page or site and is thus less than desirable as a measure of impressions or actions.” (Whittle, 1997, 300) We have already seen some of the problems with using hits as a measurement of individual Web page viewers. Another is: if somebody keeps going back to a page, repeatedly downloading it (as can happen with the home page of a Web site with a lot of pages), each visit would count as a separate set of hits and be considered the experience of a separate viewer (did 1,000 different people access your page, or did 1 person access it 1,000 times?). This would lead to overestimations of how many people saw an ad. On the other hand, Web browsers have the ability to store Web pages in what is known as a cache. When a user clicks on a link, her or his computer sends commands to the server on which the page is stored to retrieve its contents. To speed this process up, those contents can then be stored on the user’s computer.
The next time she or he clicks on a link to the page, instead of calling its contents up from a distant computer, it simply calls the contents up from its own memory. Moreover, a single ad which appears on many different pages need only be cached once. In either situation, since the user no longer needs to request the ad from the server on which it is stored, this does not count as a “hit” (unless the cache is emptied by the user, or disabled before browsing). This can lead to underestimating how often some people have seen an ad. A proposed solution to the caching problem has been suggested: embed tags in advertisements which would count the number of people who view an ad, regardless of whether it resides in its original server or in their computer’s memory. (“Committee adopts standard for counting Web ad viewers...”, 1999) This is an important issue. “‘Real [advertising] budgets don’t get spent until you have some kind of accountability,’ says the president of the Advertising Research Foundation. ‘That’s where audience measurement is critical.’” (ibid) Without an accurate accounting of how many people see an advertisement, it is impossible to know how much to charge. Yet, as late as 1998, Media Metrix and Relevant Knowledge, two companies that provide advertisers with statistics on how people use the World Wide Web, could not agree on what the most viewed Web sites were. “Rich Lefurgy, head of an industry trade group, says: ‘It was very hard to understand why 10 of the top 25 sites rated by Media Metrix weren’t on Relevant Knowledge’s Top 25 list.’” (“Merger of Web measurement firms will smooth out differences,” 1998, unpaginated) This example underscores the unreliability of Web usage statistics. There is one further wrinkle in attempts at measuring Web page viewers by the number of hits pages get: some pages are not accessed by human beings.
These hits are “generated by ‘spiders’ and ‘crawlers’ -- the index services’ software engines that travel the Net cataloguing new sites.” (Bayers, 1996, 126) It’s hard to know how many hits are attributable to non-human sources (which will increasingly include bots -- personal programs which travel around Web sites looking for specific goods or services desired by their human owners), but to the extent that this happens, it leads to an overestimation of the human viewership of Web pages. The Web allows for other forms of audience measurement. Click-through, the amount of times people actually activate the link in a banner ad, is one alternative form. Advertisers argue that it isn’t cost-effective for them to take out an ad which “may cost as much as $10,000 per month, [when] only 3 to 13 out of every hundred people who notice the banner actually open it.” (Voight, 1996, 196) Unlike print, where an advertiser cannot know how many people act upon an ad, click-through is supposed to give an immediate indication of how many people are interested in a product. However, click-through may mislead advertisers as to the utility of their ads inasmuch as it measures the attractiveness of the ad, and not the interest it has generated in the computer user for the product. “I might see some cool cyber-ad with a zooming airplane and click. But it may be for an airline in Arizona that I’ll never use, and my clicking is insignificant data for the advertisers: news they can’t use.” (Israelson, 1998, C2) Furthermore, with traditional advertising, impressing the brand name on the auditor in the hope that if he or she is in the market for a product in that general category, he or she will remember the advertised product, is the goal. This effect is not measured by click-through. 
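The measurement pitfalls described above -- repeat visitors inflating hit counts, browser caches hiding them, and spiders padding them -- can be made concrete with a small tally. The request log below is entirely hypothetical; the requester names and cache behaviour are invented solely to illustrate how the same log yields two very different audience figures:

```python
# Entirely hypothetical request log: (requester, page, served_from_cache).
log = [
    ("alice",  "/home",   False),
    ("alice",  "/home",   False),  # the same reader reloading the page
    ("alice",  "/home",   False),
    ("bob",    "/home",   False),
    ("bob",    "/story1", True),   # served from the browser cache: no hit
    ("carol",  "/home",   True),   # cached copy, also invisible to the server
    ("spider", "/home",   False),  # an indexing robot, not a human reader
]

# A "hit" is any request that actually reaches the server.
hits = sum(1 for _, _, cached in log if not cached)

# What an advertiser actually wants counted: distinct human readers.
humans = {requester for requester, _, _ in log if requester != "spider"}

print(hits)         # 5 hits recorded by the server...
print(len(humans))  # ...for only 3 human readers
```

The server log shows five hits, yet only three human beings ever saw the pages; depending on which distortion dominates, raw hit counts can overstate or understate the real audience.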
Perhaps more importantly, paying for the number of people who click on advertisements, rather than simply viewing them, changes the nature of the relationship between advertiser and creator as it has existed in other media, to the detriment of some Web content providers: “The argument for [charging for traditional impressions rather than click-throughs] is straightforward: whose fault is it if an ad doesn’t pull people in? But smaller Web publishers can often be strong-armed, if only because they never counted on making any money in the first place. They often agree to by-the-click ad contracts -- generally with a bigger site, which takes the traffic and then turns around and charges someone else for the impressions.” (Anuff, 1998, 94) Since the number of people who click through an ad is substantially smaller than those who see it, Web page designers who can only command advertising for click-through are at a serious economic disadvantage. Given the limitations of banners, some are suggesting that advertising on Web sites needs to be more prominent.

Radical surgery awaits. Publishers must reinvent the banner in larger sizes, different shapes, and surprising locations (as Duracell has done with its batteries ripping through background screens). Get rid of the box, add audio and animations, incorporate useful Java apps, and devise more interaction with site content. Delay the ad’s appearance on the page, let it pop up later, or let a rollover reveal it as a hidden Easter egg. Give it continuous presence on the site through the use of frames. Create serial messaging from banner to banner. Make us focus on the ad exclusively for a few seconds (much as the Riddler and Word sites have done by interposing ads on splash screens). Above all, let the advertisement add value to the site experience. That equals entertainment. And revenue. (Madsen, 1996, 212)

By using features specific to the Web (especially interactivity), it is hoped that surfers will actually want to experience advertisements rather than avoid them. Since this is a relatively new field, it is hard to know where it will go, but we should be aware that a counter argument can be made: Web surfers who go to a site for specific branded content may resent having to negotiate complex advertising material which they weren’t looking for. Some suggest that advertisers go even further: “Unlike traditional media that broadcast a blaring brand identity, interactive technologies can -- and should -- mesh entertainment with service, support, and full-blown applications. Online advertising is more than business as usual.” (Freund, 1997b, 92) Or, as Esther Dyson puts it, “The challenge for advertisers is to make sure that their advertising messages are inextricable from the content.” (1995, 142) Some have suggested that this is necessary with an interactive medium like computer mediated communications since, “No longer can advertisers count on catching a passive audience unawares; they must now focus on ways to entice viewers to ‘tune in’ or ‘visit’ a Web site...” (Hindle, 1997, xi) This is complicated by the fact that the Internet has, for most of its history, been a non-commercial communications medium, resulting in the fact that “magazine readers expect blatant advertising but computer users don’t.” (Katz, 1994, 56) This combination of two usually clearly defined objects -- advertising and content -- would make it even more likely that users would experience the advertisements, since avoiding them might mean also missing the content which the user went to the site to obtain in the first place. Several examples of this kind of advertising already exist. One is referred to as a “microsite,” a corporate Web page which sponsors other kinds of content.
A microsite is linked to the site it sponsors, and returns the computer user to the sponsored site when she or he is finished with the microsite. “The e-zine Word helped Altoids breath mints build such a site, a targeted 15- to 20-page microsite that tries to blend the personality of the product with the idiosyncratic tone of Word. The Altoids microsite was codeveloped by Word technical and design staff and is connected to the zine with a banner. Word editor Marisa Bowe says she offers ideas to sponsors but stays away from creating any ad copy.” (Voight, 1996, 204) Bowe goes on to say that the line between advertising and content on her site is clear, although the point is to make them similar enough that the user doesn’t object too strenuously when moving between them. “Many [microsites] fall short of their full potential because they are not customized to mesh thematically with the infotainment sites on which they appear and thus remain outside the site’s essential experience. Content providers are in a unique position to customize these modules more effectively to their own content, for premium charges.” (Madsen, 1996, 216) There will always be a tension between the advertiser’s need to make the microsite as near to the main site as possible, and the content provider’s need to keep the line between content and ad clear. Another example of the blurring between advertising and content occurred on Lifetime TV online,

a spin-off of the popular female-oriented television station of the same name. The online publication offers to develop ‘content-related’ advertising for companies interested in trying out new promotional techniques. Sites like Say Cheese, which appears in Lifetime’s parenting section, are the result. Created in conjunction with Sears portrait studio, the site encourages users to vote for the cutest baby out of a selection of 10 new photographs displayed every month. The site’s content is co-produced by Sears and Lifetime, and Sears offers a free photo session at one of their portrait studios as a prize to participants. Brian Donlon, VP-new media for Lifetime TV online, says the page has been a wild success. Buoyed by positive feedback, Donlon says Lifetime is making plans for the insertion of various products into one of its online digital dramas. Users will be able to stop the unfolding action and find out all about the displayed items. Eventually, people will be able to order products right then and there. (Groves, 1997, 34)

Supporters of this kind of advertising compare it to forms of advertising from previous media. One argues that it is “much like sponsored programming of the 1940s or ‘50s, whether it was the quiz show, the ‘Texaco Star Theatre’ or the ‘Hallmark Hall of Fame...’” (Goldman Rohm, 1997, 120) However, the advertising did not intrude onto the actual programs to nearly the extent that is being suggested on the Web. “You already have those product placements in movies,” another person argues, “and everybody knows companies pay for those.” (Casey and Grierson, 1999, 29) Product placements in films are not supposed to break the forward momentum of the story, though; a better analogy to this form of online advertising would be if a film stopped dead for a 20-minute infomercial. Some argue that the blurring of content and advertising will inevitably undermine the credibility of the content (as it has, to some extent, with product placement in movies). According to Chris Barr, editor in chief of CNET, “Advertising must be clearly marked, or else you’re compromising the content. There may be short-term benefits, but in the long term you really hurt the publication.” (Voight, 1996, 200) Others argue that “the traditional publishing barrier between advertising and editorial could be eliminated in cyberspace without harming readers if the readers are offered information about how much revenue is derived by the publisher, or content provider, from each advertiser.” (Whittle, 1997, 116) I find it hard to see, though, how a short disclaimer at the bottom of a home page will adequately prepare readers for the fact that what they are about to experience contains advertising in the guise of editorial content. For non-fiction, the blurring of ads and content undermines the reader’s belief in the quality of the information.
For fiction, the experience of product placement in movies is that, when it becomes too blatant, it destroys the audience’s suspension of disbelief. Furthermore, as Aristotle pointed out millennia ago (and we saw in the previous chapter), a properly constructed narrative requires that each action follow from the preceding action by logic and necessity. (1987) The intrusive kind of advertising being discussed, unless very carefully handled, would likely break this flow. What theorists of Web advertising describe is a symbiotic relationship between content and advertising: the content gives legitimacy to the advertising, while the advertising can be both as entertaining and informative as the content (while paying for it). However, it may also be possible that, by accepting this trend in advertising, content creators are making themselves obsolete. After all, if advertising is as entertaining and informative as content, what need do advertisers have for other content, which will only compete with their sales message? Calvin Klein had a campaign for a fragrance called cKone which illustrates this point. In print and video advertising, characters are established, each of whom has an email address. If you write to one of the cKone people, “they’ll start writing back.” (Casey and Grierson, 1999, 29) The cKone campaign is a soap opera played out in email. It is subtle: the email correspondence doesn’t mention the product (although it is in the return address on all the email messages); the only way readers of the email know it is a promotion for perfume is from the original advertisement from which they got the email address. For our purposes, the important thing to note about the cKone campaign is that it employs fictional devices but it is not tied to specific fictional content on the Internet. If more advertisers take this direct approach, the amount of advertising money available to content providers will diminish.
There is one other Web strategy which could change the nature of advertising. Advertising generally has been described as “an inefficient medium for paying for its accompanying information.” (“The Place of the Internet in National and Global Information Infrastructure,” 1997, 351) This was eloquently explained by Marshall McLuhan in 1964: “Advertisers pay for space and time in paper and magazine, on radio and TV; that is, they buy a piece of the reader, listener, or viewer as definitely as if they hired our homes for a public meeting. They would gladly pay the reader, listener, or viewer directly for his time and attention if they knew how to do so.” (McLuhan, 1996, 168) Computer mediated communications media such as the Web give advertisers just this possibility. Cybergold, for instance, one of the failed forms of electronic cash, allowed “Subscribers [to] earn cash by visiting websites and reading ads, then spend their cyber cheques on MP3 files and other posted merchandise.” (Platt, 1999, 40) A viable electronic money system would make this type of advertising more likely. How it would affect content is an open question. Although, as has been mentioned, the amount of advertising dollars devoted to the Web as a whole is increasing, “The truth is that the companies bankrolling Web sites are, for the most part, seeing rivers of red ink.” (Larsen, 1999, 94) This means that, for most Web sites whose main product is information, the advertising model is failing to generate revenues which can cover their costs. This is due, in part, to the problems with online advertising which we have looked at, problems which make advertisers reluctant to fully embrace the medium.
However, there is a more fundamental structural problem with the Web which may make it impossible for the advertising model of revenue generation to succeed: there are “too many Web sites chasing too few ad dollars.” (“Web profits still elusive,” 1998, unpaginated) One writer suggested that “ad inventory exceeds the demand from advertisers by probably 10,000 percent.” (Kline, 1997, 65) Unlike the magazine market, where there are established profitable publications and the potential for a small number of new publications to succeed given enough time, the sheer number of Web sites means that “What...online publishers seem to be running up against is an accelerated business model, where the marketplace is flooded with equally unprofitable competitors.” (Eisembeis, 1998, 38) We will look at one of the few exceptions to this rule, Web portals, in Chapter Five. Given the super-abundance of content relative to the amount of advertising available, the law of supply and demand has led to a retrenchment in advertising rates: “The prices charged for every thousand page views delivered -- or CPMs -- dropped from $15 per thousand at the beginning of 1996 to less than a dollar per thousand by the end of the year.” (Bayers, 1996, 127) This may be part of a larger trend in advertising: “Madison Avenue is already suffering, having watched corporate advertising shrink from 60 percent of corporate promotional budgets to just 40 percent -- the difference having shifted to direct promotions.” (Davidow and Malone, 1992, 221) Ironically, the Web seems to be developing the need for advertising at just the moment when advertisers are moving away from advertising and putting their money in direct marketing campaigns such as junk mail and telephone solicitation.
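The scale of that rate collapse is easy to illustrate. In the sketch below, only the two CPM figures come from the Bayers quotation; the monthly page-view total is a hypothetical figure, constructed from the 120,000-hits-a-day number cited earlier for a popular site, over thirty days:

```python
def ad_revenue(page_views: int, cpm: float) -> float:
    """Dollar revenue for a given CPM (price charged per 1,000 page views)."""
    return page_views / 1000 * cpm

# Hypothetical: a "popular" site at 120,000 hits a day, for 30 days.
monthly_views = 120_000 * 30

print(ad_revenue(monthly_views, 15.00))  # at the early-1996 rate: 54000.0
print(ad_revenue(monthly_views, 1.00))   # at the end-of-1996 rate: 3600.0
```

At the same level of traffic, a site's monthly advertising income falls from $54,000 to $3,600, which suggests why even heavily visited sites found advertising revenue insufficient to cover their costs.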
In response to this trend, “most [Internet] analysts are predicting a wave of consolidations and failures this year.” (“Web profits still elusive,” 1998, unpaginated) This is classical economic theory: since supply of Web pages far outstrips the demand for advertising which can economically sustain them, the number of pages will have to be reduced. This would bode ill for small producers, since if advertisers are “best served by a Web in which 60 to 70 megasites receive the overwhelming majority of traffic, as was predicted at a recent industry conference, then that’s the sort of medium the Web will become. With these megasites offering daily, comprehensive, state-of-the-art content for free, it’s unlikely that smaller sites will have much luck charging users for programming.” (Beato, 1997, 193) It should be pointed out, however, that these analysts are looking primarily at corporate Web sites. The idea that the number of Web sites must decrease so that the remaining sites may become profitable does not take into account the fact that as millions of new computer users begin accessing the Web, many of them will want to put up Web sites of their own (and continue to be willing to do so without making money). All other things being equal, therefore, the glut of Web sites is likely to continue into the foreseeable future, resulting in pressure which will continue to depress advertising revenues.

The Subscription Model

The concept of subscriptions translates fairly easily into the online world. For physical magazines, a subscription requires a reader to pay a fixed amount of money for access to a set number of issues. In the online world, readers pay a set amount for access to a certain amount of information for a set period of time. This may be all of the information on a site, but there is no reason why different levels of subscription fees could not cover different degrees of access.
In addition, “The distinction between whether a ‘subscription’ means delivery by email or simply access to a restricted Web site is vanishing; most online publications offer either, at the user’s option.” (Dyson, 1998, 192) For the most part, subscriptions to online information have not succeeded. The experience of Slate, an online publication available through the Microsoft Network, is instructive. In January, 1998, Slate announced that it would start charging for access to the site, which, up to that point, had been accessible for free. “We don’t believe that the advertising-only approach is sustainable for us,” the publisher of the electronic magazine claimed. (“Slate tries subscription model,” 1998, unpaginated) At the time, The Wall Street Journal, The New York Times, The Economist and Business Week, among other publications, were experimenting with charging for subscriptions. (ibid) Slate established a subscription rate of $29.95 per year. “Nothing that I have seen in the past one-and-a-half years has dissuaded me from the notion that we need subscriptions to have a viable business model,” Slate’s publisher insisted soon after. “The longer you stay a free site, the harder it becomes to switch to paid. For us, it’s not a question of if, but when.” (“The increasing cost of surfing,” 1998, unpaginated) By April, Slate claimed a paid subscriber base of 20,000, although at the reduced rate of $19.95 a year. (“Pay-per-view Internet news becoming more common,” 1998, unpaginated) In February, 1999, Slate announced that it would drop its subscription fee and allow people to access it for free once again. According to a Microsoft spokesperson, dropping the subscription fee was “part of an aggressive company strategy to focus on development of a Web site.” (“Slate drops subscription fees,” 1999, unpaginated) It’s hard to know exactly what to make of this statement.
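The subscriber figures quoted above give a sense of the sums involved. The back-of-the-envelope tally below uses only numbers already cited in this section; the comparison it invites is mine, not Slate's:

```python
# Annual revenue implied by Slate's reported peak subscriber base.
subscribers = 20_000   # the April 1998 figure cited above
annual_fee = 19.95     # the reduced rate, in dollars per year

revenue = subscribers * annual_fee
print(f"${revenue:,.0f} per year")
```

Roughly $399,000 a year -- a modest sum to set against the editorial and technical overhead of a professionally staffed national magazine, which helps explain why the subscription base fell so far short of break-even.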
Some observers stated the case more directly: "Web-based publications such as Microsoft's Slate -- before the company gave up on paid subscriptions -- found themselves with only a small fraction of the subscribers they needed to break even (or to match their print competitors)." (Wallich, 1999, 37) Nor was Slate the only online publication to fail to make enough money from subscriptions. "Even such commercial communication giants as the Wall Street Journal (www.wsj.com) and sports network ESPN (www.SportsZone.com) have found it difficult to obtain a viable pool of subscribers." (Mosley-Matchett, 1997, 10) According to Esther Dyson, "Time Inc.'s Pathfinder, which once hoped to charge, is still free. And the New York Times has an interesting strategy that prices according to value to the customer: Its online version is now free (although you have to register for access). As of mid-1997, the Times still charged subscribers outside the United States and had 4,000 overseas at $35 a month (the price of home delivery of the paper edition in the United States). But in 1998, the rate for overseas subscribers was dropped in the interest of competitiveness and long-term international growth potential." (1998, 181/182) In addition, "USA Today had to cut the monthly subscription fee on its Web site from $15 to $13 and finally to nothing." (Rose, 1997, 221) It seems hard to argue with the conclusion that, "for the most part, subscriptions on the Internet have failed miserably." (Goldman Rohm, 1997, 118)

This may be because there are too few readers currently online for a large enough subscription base to develop; if this is the case, it is a temporary problem which will be relieved as more people get connected to computer mediated communications networks like the Web.
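The arithmetic behind these failures can be made concrete. The subscriber count and price below are Slate's reported figures; the annual operating cost is a purely hypothetical placeholder for comparison, not a figure from any of the sources cited here:

```python
# Back-of-the-envelope subscription economics for an online publication.
# Prices are kept in cents to avoid floating-point drift.

def annual_subscription_revenue(subscribers: int, price_cents: int) -> float:
    """Gross yearly subscription revenue, in dollars."""
    return subscribers * price_cents / 100

# Slate's reported figures: 20,000 subscribers at $19.95/year.
slate_revenue = annual_subscription_revenue(20_000, 1995)  # $399,000

# A hypothetical annual operating cost, chosen only for illustration:
hypothetical_cost = 1_000_000
shortfall = hypothetical_cost - slate_revenue  # $601,000 left uncovered

print(slate_revenue)  # 399000.0
print(shortfall)      # 601000.0
```

Even under these generous assumptions, subscriptions cover well under half of the publication's costs, which is consistent with Wallich's observation that such sites had "only a small fraction of the subscribers they needed to break even."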
On the other hand, there may be too many publications for any to gain enough readers to be economically self-sustaining through subscriptions, even if a large number of new computer users come online. Microsoft Chief Technical Officer Nathan Myhrvold predicted (ironically, in Slate) that "Web readers wouldn't pay online subscriptions until they became both addicted to the medium and bored by their free options. 'Imagine trying to sell subscriptions to HBO back in the 1950s,' he wrote. 'People clustered around their primitive sets to watch the damndest things (Milton Berle, for instance).'" (Romano, 1998, 62) Perhaps, but cable television was competing with a small number of free channels; all other things being equal, there may always be millions of free Web pages, many of them of high quality. In this case, it is by no means certain that enough people will come to accept subscriptions to support the economic viability of online publications.

It can be argued that if the largest corporations cannot use their well known brands to make money from subscription sales to their electronic publications, individual content creators don't have a chance. It is worth keeping in mind, however, that these publications have high overhead costs which have to be recouped; a much smaller publisher, if he or she can become well enough known to attract a lot of readers (a big if, to be sure), needs to charge fewer people to recover his or her costs and start making money.

* * *

There is no need to choose between advertising and subscriptions, of course; it is likely that the two systems will coexist. (Varian, 1997, 30) On the other hand, at least one prediction is that by 2005, online advertising revenue will grow to $8.9 billion, while subscription revenue will only grow to $360 million.
(Wolf and Sands, 1999, 113) Still, to the extent that different payment models will be available, different kinds of information will be subject to different pricing schemes: "[I]nformation that has a broad appeal will remain free. If a large enough user base is interested in it, it will be supported by advertising. Sole suppliers and niche suppliers of information will be able to sell their goods on a per piece basis or subscription basis." ("The New Economics," 1997, 418)

With respect to advertising and subscriptions, there are two foreseeable outcomes, neither of which bodes well for individual content producers. It is possible that the Web will continue to have too many pages for any to be sustainable by traditional economic methods. This may, in fact, be exacerbated by the fact that as more people buy computers and get connected to the Internet, we can expect the number of Web pages to continue to grow. If this happens, no content creator, large or small, will make an appreciable income. On the other hand, it is possible that a small number of sites will garner enough traffic to make them economically viable. If this happens, it is most likely that they will not be the work of individuals, but large corporate sites which are taking advantage of brands existing in the real world and economies of scale.

Perhaps there is another alternative: aggregation. Aggregation can work on the level of advertising or content. In terms of advertising, a single agency will place advertising on a variety of Web pages, charging a single rate for them as a package. "Plenty of small and medium-size companies without ad sales forces are generating online advertising income thanks to a growing list of 'ad-reach' networks. These networks -- from companies like DoubleClick, Adbot, and Softbank Interactive Marketing -- place ad banners on your site.
They even connect with the ad agency media buyer -- that all-important person who decides where client's ad banners will appear. In addition, the ad-reach networks deliver, rotate, and track ads; provide click-through rates to the advertiser and ad agency; and best of all, pay you for the privilege of using your 'inventory' or Web pages." (Carr, 1998, 55) Web Wide Media, a joint venture between BSkyB, the world's largest direct-to-home pay television operator, and OzEmail, Australia's largest Internet service provider, claims that, "By representing and profiling thousands of sites we are able to effectively deliver target audiences to advertisers from all over the world... With backing like this, you can be sure advertising on your site is in good hands." ("Web Wide Media advertisement," 1997, 97) Aggregation of advertising could be a boon for small content producers; aggregating their readers would make them more attractive to potential advertisers who might otherwise have no use for them. In addition, the aggregation companies would do all the promotional work and billing, two aspects of advertising which many small content producers are not capable of and/or interested in doing for themselves.

Aggregation of content happens when you combine the work of different people into a single site. This can be an electronic magazine, where all of the content is stored on a single server, or it can be a series of pages on a variety of servers linked together. The theoretical advantage of aggregated content is that you can charge a subscription rate that is more attractive to potential readers than if each person tried to sell his or her content individually. In addition, the more content you offer, the more attractive an aggregated site can be to those potential readers (although, as we have seen, it is also possible that they will balk at paying for a lot of content which they would not otherwise want).
There is a problem with both forms of aggregation. Aggregation is part of a process sometimes referred to as "reintermediation:" the return of intermediaries between information producers and consumers. It is possible that the aggregators will make money, since they take a percentage from all advertising sales or subscriptions. What is not at all clear is whether, once the money has been divided up between all of the content providers, there will be enough for any of them individually to sustain their work.

One final aspect of traditional methods of making money from the sale of information needs to be noted: while it seems true that, as Hakim Bey states, "The Net is gradually being enclosed by corporate capital," (I26, 12) nothing we have seen so far prevents small content providers from putting their own material on the Web. The danger from these corporate maneuvers is not that small producers will be entirely disenfranchised; it is that they will become irrelevant. "What seems probable, then, is a 'web within the web,'" write Herman and McChesney, "where the [conglomerate] media firm websites and some other fortunate content providers are assured recognition and audiences, and less well-endowed websites drift off to the margins... The relevant media analogy for the Internet, then, is not that of broadcasting, with its limited number of channels, but, rather, that of magazine or book publishing. Assuming no explicit state censorship, anyone can produce a publication, but the right to do so means little without distribution, resources, and publicity." (1997, 124/125) Or, as Ted Leonsis, chief programmer of America Online, puts it, "The big will get bigger and the small will get marginalized...
This isn't going to be a business where 380,000 Web sites are going to be important." (Rose, 1996, 295)

Their size, in terms of financial and human resources, gives multinational entertainment conglomerates considerable advantages over individual content producers. According to a 1996 Forrester Research report, the average commercial Web site cost $2 million per year (and was losing $1 million). (Herman and McChesney, 1997, 124) The big corporations can afford this: "'Brand building is being done today,' one media executive said of his firm's Internet activities in 1996, 'for reward in 10 years' time.'" (ibid) While small players come and go, a corporation which can maintain a brand on the Web for a decade will attract users because of its stability: people will come to rely on it because it is simply always there. In addition, conglomerates have the ability to produce tremendous amounts of material. "Will the bandwidth bonanza herald the death of the Web as a populist medium? An individual author might create 100 kilobytes of text and a few megabytes of rendered graphics in a day. Compare that with the amount pumped out by the armies of programmers and graphic artists at Microsoft or Time Warner. When the bandwidth logjam breaks, individually produced content will drown in the corporate flood. Turning up the bandwidth effectively turns down the volume on all the small sites that make the Internet what it is today." (Claburn, 1997, 158) The fear is that, when a user goes looking for entertainment, conglomerate products are all she or he will be able to see.

Some, like Wired magazine's Nicholas Negroponte, do not believe that this is a problem. "Companies like Time Inc., News Corp., and Bertelsmann keep getting bigger and bigger. People worry about control of the world's media being concentrated in so few hands.
But those who are concerned forget that, at the same time, there are more and more mom-and-pop information services doing just fine, thank you very much.” (1997, 208) Rawlins believes that once the first wave of corporate control has run its course, “small independents will start up again to service various niche markets.” (1996, 67) There is certainly validity to this: many of the authors of online material we saw in Chapter Two would not have been able to get much of an audience for their work if they hadn’t been able to put it on the Web. It can be argued that for somebody who is likely to photocopy only 20 or 30 copies of a work and distribute them to friends, or even somebody who can offset print 100 copies of a work and try for a little wider distribution, getting a few hundred or even a thousand or more readers on the Web is a great step forward. There are, however, a couple of reasons why this development is not as positive as it could be. For one thing, the rhetoric of the Web is that anybody with a modem and an account with an ISP can be the equal of a multinational conglomerate; some of the writers in the survey expressed this sentiment. Yet, it clearly isn’t so. 
"For all the hype about information wanting to be free, and the glorious cyberlibertarian future of the Net combined with the market, the oligopolies are moving in." (Wark, 1997, 33) A lot of people may be putting material online because of a romantic ideal of the Web which increasingly has little to do with its reality: "Far from demonstrating a revolution in patterns of social and political influence, empirical studies of computers and social change...usually show how powerful groups adapting computerized methods regain control." (Fernback, 1997, 47) On a practical level, if small content producers cannot attract larger audiences, they will not be able to share in the financial rewards if a workable system of advertising, subscriptions or micropayments is, ultimately, developed.

It is also true that information consumers will suffer, because the idea that they have a wide variety of choices will prove to be largely illusory. Moreover, because they will be discouraged from looking beyond corporate Web sites, users will not necessarily get the best information or entertainment. "When you vertically link things, you don't need to have the best in order to prevail. You only need to have something that's adequate because bundled generally beats better." (Brandt, 1998, 136) The promise of digital communication is one of increasing diversity of information and entertainment, where smaller and smaller niche interests can be served. However, this promise "is largely illusory if carried out within a commercial framework: the new channels tend to offer the same fare as the old, and instead of filling new niches they attempt to skim off some of the thinning cream in entertainment and sports." (Herman and McChesney, 1997, 195) Other means of generating revenue, means which would effectively exclude all but the largest producers of entertainment, are also being explored. It is to these efforts that we must now turn our attention.

Old Media for New

As we have seen, traditional models for making income have, for the most part, failed when applied to selling information over the new digital medium. As this reality has become increasingly obvious, one response from the business community has been to change the nature of the medium, to make it more like traditional media, media from which business knows how to profit.

"Here's how one technology executive described what's going on: 'Where we see this going is bringing more TV-like experiences to the Web. There'll be more sound, more graphics and more animation being employed. It's what advertisers and agencies have been waiting for to express themselves better.'

"So the future lies before us. The future of the Net is...Television!" (Gehl, 1998, 5)

Television, of course, works on a one-to-many model of communications, a model which, at first, seems antithetical to the Internet's many-to-many form. (I shall consider these models in further depth in the next chapter.) However, attempts have been made to change the working of the Internet, and, particularly, the World Wide Web, to make it more closely resemble a one-to-many medium.

Before we explore these efforts, it is useful to remember that there are two kinds of software: function-oriented and content-oriented. Content-oriented software includes games, music, still images, etc. Function-oriented software is the program on which content is displayed: image readers, music players, Web browsers, etc. This is an important distinction which is too often conflated in the popular press, and, therefore, in the public understanding of technology. It is important because the nature of function-oriented software largely determines what content-oriented software can be displayed on a computer, a lesson not lost on those who hope to shape the future of the Web.

The Push for Push

How people access information on the World Wide Web is fundamentally different from how they access it on broadcast media. "The Web is basically a 'pull' medium. Users decide what they want; they point their Netscape or Microsoft browsers at the relevant website; they then pull the designated pages back to their PCs." ("When shove comes to push," 1998, 14) On TV, by way of contrast, the networks "push" their programming according to their schedules, and viewers must accept what the networks offer when they offer it (at least, until the advent of VCRs). While push media are synchronous, limited to a small number of channels and generally require users to be passive, pull media are asynchronous, far less limited and require users to be active.
Push applied to the Internet would work as follows: “On-line users download and install software that has a push application. Then they choose which channels they want to receive and how often. Channels will come from content providers that include news organizations such as CNN and The New York Times, and sports and entertainment sites including CBS SportsLine and Daily Variety. Not all push technologies are compatible, but many content companies will make their information available in several formats.”

(Kramer, 1997, C25) The "channels"5 would periodically appear as pop-up boxes on computer users' screens. Much of the control over what information appeared on his or her screen would be taken out of the hands of the computer user: "With software now emerging, such as various 'webcasting' systems, Netscape's 'kiosk mode,' and Microsoft's ActiveX programming, content arrives in an unbroken, often uninterruptible stream once the user completes the initial link. Since these schemes aim to make the Web safe for advertising, it is reasonable to assume that users will not be encouraged to make other connections, but rather to keep the channel open and await instructions." (Moulthrop, 1997, 654) The channels would be free, offering a much larger number of information sources than television. Otherwise, the economics look eerily familiar: "The pushers would make their profits by requiring people to fill out brief questionnaires about themselves in exchange for free subscriptions; and by using these 'demographics' to sell their audience to advertisers. PointCast, for instance, charged advertisers up to $54,000 to run 30-second 'intermercials' within the content it pushed to its 1.2m (mostly affluent) subscribers." ("When shove comes to push," 1998, 14)

As with other such innovations, push technologies offered Web users some advantages. For one thing, push technology held out the possibility of "solving the problem of information overload." ("'Push' found to be too pushy," 1998, unpaginated) Rather than having to search the Web every time a computer user needed some information, she or he could cultivate a small number of trusted sources, who would deliver (hopefully useful) information directly to her or him. The literature on push technology assumed these trusted sources would be the established brands of existing media corporations.
In addition, push technology could save a computer user valuable time; once he or she had found a site which could be trusted, he or she could ask it to send regular updates to his or her desktop, which would "make repeated visits to Web sites unnecessary." (Kramer, 1997, C25)

For a brief period, push technology "was being hyped as the blockbuster application of the Internet." ("When shove comes to push," 1998, 14) Extravagant claims were made about push's potential: "The technology is expected to become so widespread that push-delivered advertising, transactions and subscriptions will account for a third of a projected $19-billion (U.S.) in annual worldwide revenues just three years from now, according to the Yankee Group, Boston-based market researchers." (Kramer, 1997, C25)

The technology didn't live up to the hype, though. Consumer reaction to push technologies was largely negative: "Push media's promises [were] often met with outright resentment from computer owners glad to be free of TV's broadcast model." (Boutin, 1998, 86) In fact, the reaction could be quite vehement. "The push model grabbed the attention of Internet publishers because it allows them to dispatch information without depending on users to visit their sites," one letter writer to Wired magazine commented. "Of course, you and I both know the real reason these Internet publishers aren't getting visitors -- their content sucks. Publishers aren't willing to accept that low traffic might be their problem. So what do they do? These oh-so-thoughtful publishers force themselves on us and ram their worthless information right down our pipelines." (Peterson, 1997, 32) Another person wrote: "The Web is a success because it provides information to users and doesn't pander to advertisers. Television is a vast wasteland of useless predigested mush because the people running it put commercial interests before those of the viewers.
If push media is going to follow the model of television, it's going to be a waste of time." (Freeman, 1997, 32) Virtual Reality pioneer Jaron Lanier summed up the attitude of existing computer users to push technologies when he stated that, "Push is not a technology, but a way of using technologies to make new media emulate old media. Push indicts the business minds of new media for failure of imagination. Push ultimately will mean television all over again, because that's the only business model our moribund investment sector seems able to fathom." (1997, 156) This opposition led to "the demise or reinvention of many of the start-ups hoping to make a success of push technology..." ("'Push' found to be too pushy," 1998, unpaginated) PointCast, one of the biggest names in push technology, is "retreating from the consumer market and concentrating more on such businesses as corporate banking, telecom, health and property." ("When shove comes to push," 1998, 14) Proponents of push technologies attempted to change the way computer users accessed online information through the nature of a specific type of software. Although they failed, other efforts continue.

Streaming Video and Multicasting

In order to get a video file, one has to download it to one's computer. This is time-consuming (large files downloaded by slow modems could take hours) and frequently frustrating (imagine spending all that time to download a video that, it turns out, you aren't interested in). A better way to download video would be to have it appear immediately on your desktop when called for, and unspool in real time. This would allow you to watch only as much as you needed to get the information or experience you wanted. This is the idea behind streaming video.
Rob Glaser, Chairman and CEO of RealAudio and RealVideo maker Progressive Networks, believed that they would turn "the Net into the next mass media." (Jones, 1997, 14) The advantage of the Net as a mass medium is, of course, that traditional economic models could then be applied to it, particularly the advertising model, giving major corporations a way of making money by selling their information assets through it. An optimistic view of streaming video was that it "could some day challenge the landed interests of the TV industry..." (Reid, 1997, 123) by giving small video producers an outlet for their work. At the same time, however, large economic interests have been involved with the technology: "...Microsoft, which already owned 10 percent of [streaming video company] VDOnet, added 10 percent of Progressive Networks and 100 percent of [streaming video company] VXtreme to its collection. It also announced agreements with several other vendors to support its Active Streaming Format. By assimilating the best technology from its competition and assuring that most player modules will support ASF, Microsoft changed the entire Web video landscape in a single week." (Avgerakis and Waring, 1997, 47) At the same time as the giant of the computer industry was buying up streaming video properties, existing television networks were beginning to see its potential for extending their reach: "Jeff Garrard, executive producer for CNN Interactive, says if CNN can reach people with information relevant to their work, via streaming media on the Web, it will have maintained its reach, even if those same people watch less CNN on TV at home." ("Broadcasters Target the Office Worker," 1998, unpaginated)

As we have seen, the World Wide Web is a pull medium where individuals ask for information from distant sites. Pulling streaming video off the Web has a fundamental problem, though: it can put a large load on a server and take up a lot of space on a network.
If 1,000 people each download a large video file at their convenience, it requires 1,000 different requests from the server and 1,000 different streams of information. One solution which has been tried is known as Internet Protocol (IP) Multicasting. "[I]f the server site were to use a multicasting protocol, it could send just one stream of packets into the ether -- and any number of users could tune into the signal, with no extra load on the server. Multicasting changes the rules of the road: it allows packets of information to be 'broadcast' to anyone who is 'listening,' rather than a single, specific computer. These packets aren't sent individually to each recipient; instead, only one stream is sent, but it is received at all destinations at (more or less) the same time." (Savets, 70) The main difference between streaming video and multicasting, a vitally important one, is a matter of time: streaming video is asynchronous, meaning that computer users can download information when they want; multicasting is synchronous, meaning that users must be online when a video stream is sent. More or less like television.

Many commentators have claimed that "a multicast-enabled network is the foundation for the next major evolutionary step in the life of the Internet." (Hovis, 1997, 24) Certainly, important players in the information industry are acting as though it will be. "MCI and Progressive Networks recently announced an ambitious mass-market hosting service called RealNetwork. MCI has placed Progressive splitters and multicast technology throughout its IP network and has signed up the likes of ESPN, ABC News, and Atlantic Records." (Avgerakis and Waring, 1997, 48) And, of course, there's always Microsoft: "UUNET, the sole access provider for Microsoft Network, is fully multi-cast enabled.
Microsoft plans to exploit IP multicast in Windows 98, in important new technologies like DirectX and DirectShow, and in products like NetMeeting and NetShow." (Doyle, 1997, 62)

Some people believe that streaming video is currently economically viable. "Web-based streaming-media technologies already make possible live telecasts to audiences as big as 50,000 people at a cost per viewer lower than cable. When you're paying to reach people, it pays to reach only the ones you want." (Browning and Reiss, 1998, 102) This belief is predicated on the idea that advertisers are willing to pay higher costs for smaller numbers of audience members if those audience members are part of a demographic segment of the population more likely to buy the advertisers' products. This is a disputed proposition. As a representative of America Online (AOL) argued, "We talk about Webcasting here a lot, and we think audio and multimedia is going to be a big part of what we do going forward. Yet, it is totally unproven. The biggest webcast ever reached 10 or 20 thousand people. If you're running your business on an advertising model where you're getting $60 per thousand, an audience of 20,000 is not going to cut it as a business model." (Geirland, 1997, 233) Without the revenues of a mass audience, these producers will not be able to produce television-quality shows; this may be to the advantage of smaller producers with much less overhead, but it may also mean that Webcasting will never be economically feasible.

As it happens, multicasting need not necessarily turn the Internet into a one-way medium; it has been used in trials of online videoconferencing, for instance. However, this type of interactivity (which is, after all, a defining feature of networked digital communications) only works with a small number of sites, which is the opposite of the mass medium envisioned by the entertainment corporations pushing the technology.
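The AOL representative's objection can be verified with simple cost-per-thousand arithmetic. The $60 CPM and the audience figures of 20,000 and 10 million are taken from (or extrapolated in the spirit of) the quotation above; the helper function itself is only an illustrative sketch:

```python
# Ad revenue under the standard CPM (cost-per-thousand-viewers) model.

def cpm_revenue(audience: int, cpm_dollars: float) -> float:
    """Revenue for one ad spot sold at the given cost-per-thousand rate."""
    return audience / 1000 * cpm_dollars

# The biggest webcast audience cited: roughly 20,000 viewers.
webcast_take = cpm_revenue(20_000, 60.0)       # $1,200 per spot

# For comparison, a broadcast-television-scale audience of 10 million:
broadcast_take = cpm_revenue(10_000_000, 60.0)  # $600,000 per spot

print(webcast_take)    # 1200.0
print(broadcast_take)  # 600000.0
```

At $1,200 per advertising spot, a webcast cannot begin to recoup production costs of any significance, which is precisely why it "is not going to cut it as a business model."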
"'The big players in content production -- the TV networks and Hollywood -- are used to a broadcast method that reaches tens of millions of users, and the Internet today simply cannot deliver data in this way,' said Martin Hall, co-chair of the IP Multicast Initiative (www.ipmulticast.com), a coalition of more than 65 major companies. 'In order to attract the creators of expensive and elaborate content, the Internet must change to deliver those users, and that's where IP Multicast comes in.'" (Hovis, 1997, 24) There can be no clearer statement of the proposition that where there is a conflict between a medium and the needs of major content producers, the medium must change. Some advocates are a little more circumspect, claiming that multicasting technology promises "the richness of television broadcasting coupled with the interactivity of the Web." (ibid) Of course, interactivity with a broadcast meant for tens of millions of other people necessarily would be limited to relatively trivial matters such as camera angles, replays, purchasing products, etc.

The irony is that streaming video may not be necessary. According to the famous law propounded by Gordon Moore, computer processing power doubles approximately every 18 months, a process which will continue for the foreseeable future. Thus, it is only a matter of time before computer networks will have the bandwidth to distribute full-motion video on an asynchronous basis. As Steinberg points out, "Advocates [of multicasting] are overlooking one detail: with the coming of the Gigabit Ethernet, network managers are trying to find ways to use all the bandwidth, not save it." (1997b, 100) Multicasting may turn out to be a permanent solution to a temporary problem.

WebTV

To this point, we have seen attempts to change the nature of the Web through the introduction of specific kinds of software.
One major drawback of these approaches for the corporations which are trying to impose the broadcast model on the Internet is that they are voluntary: computer users must sign up for push channels or invoke a streaming video program in order to use them. The users are still able to use all of the other features of the Web, if they so choose, ignoring push and streaming video. Other changes in the nature of the Web which are being pursued more directly curtail what a computer user can do with it. These changes are being introduced at the level of hardware.

WebTV is one of the technologies which is currently being introduced into people's homes. At its simplest, it is a box which sits on top of a normal television. The box contains a computer chip which allows it to process information delivered through the TV set. WebTV uses "a broadcasting technology called 'the vertical blanking interval' -- a space between TV signals that can be adapted for sending data -- to automatically integrate data from the World Wide Web into TV programs in progress." ("Oracle's plans for integrating Web with TV," 1997, unpaginated) What might a WebTV experience look like? "In the not-so-distant future, you'll see television programming on your screen, complete with interactive elements. Just imagine: Click here to (1) shoot Barney, (2) break Michael Flatley's kneecaps, or (3) download Martha Stewart's recipe for purple dinosaur stew." (Li-Ron, 1997, 48) A perhaps less dramatic vision of WebTV suggests that "a person viewing, say, a football game could interact with other viewers through a Web-based chat session appearing in one window on the screen." ("Oracle's plans for integrating Web with TV," 1997, unpaginated)

As with other attempts to integrate television-like features into a Web environment, some major economic players are involved in WebTV. "There's no stopping this runaway train.
All you really need to know is that Citizen Bill has already invested in the necessary infrastructure: First Microsoft bought Web TV; then the company poured a billion dollars into a cable business. Next stop: your desktop.” (Li-Ron, 1997, 48) And, as with the other attempts, there has been a certain amount of retrenchment as the kinks of the business model are worked out: “NetChannel Inc., which provides an Internet-via-TV service similar to Microsoft’s WebTV, plans to shut down its service this weekend, as it continues to talk with America Online about an acquisition... AOL lent the beleaguered company $5 million in November, and is said to be more interested in NetChannel’s technology, employees and expertise, than in the NetChannel service.” (“Netchannel likely to turn off its Internet-via-TV-service,” 1998, unpaginated)

There are two modes of WebTV, neither of which is without problems. With one, direct access to the Web is available on your television set. But Web surfing is an essentially active pursuit: you must go out and find the information you want. Television viewing, on the other hand, is a primarily passive pursuit: turn the set on, pick a channel and watch. Some critics of WebTV are afraid that it will ultimately change the nature of information on the Web: “Some sites are already feeding the push beast by altering the basic shape of information they send to consumers, essentially creating bit-size chunks for easier transmission to cybersavvy consumers.” (Li-Ron, 1997, 49) The other mode, as we have seen, is to allow a small amount of interactivity into traditional programming. “Using phrases such as ‘web shows rather than web sites’ and ‘choreographing your (Internet) experience,’ [executive producer of the Microsoft Network for Microsoft Canada, Inc.
Martin] Katz outlined a future where the Web could be ‘programmed,’ just like CBC or CBS, but with some pointing and clicking thrown in to juice up the particip-action factor.” (Zerbisias, 1997, D1) This diminished form of interaction has been referred to as “lazy interactivity.” Josh Bernoff, an analyst for Forrester Research, defines lazy interactivity as “interactivity you can do with a remote in one hand and a beer in the other.” (O’Harrow Jr., 1998, F12) Critics see this as a great diminution of what the Web can be: “Television isn’t expanding into the Net; it’s shrinking the Net to fit the cramped dimensions of the box.” (Kingwell, 1997, 93)

WebTV, although computer chip-based, does not offer many of the features of a personal computer. “You can’t download files and save them on a hard drive, for example. If you want to write a letter offline, there’s no word processor.” (Riga, 1998, C1) Perhaps the greatest drawback of WebTV, though, is that, “the way the technology is for the foreseeable future, Web TV does not allow users to create their own programming, or their own Web sites, as they now can with their computer. Instead, much like conventional TV now works, Web TV will set the schedule – and the agenda.” (Zerbisias, 1997, D1) As the Web is currently configured, each consumer of information is also a potential producer; as television is currently configured, only a small number of producers create work for a large number of consumers. Any attempt to introduce the television paradigm into the Web will necessarily reduce the role of the individual from producer/consumer to consumer. This distinction, so important to the writers surveyed in Chapter Two, will come up again.

Why would people currently on the Web accept such a reduced role? Odds are, most wouldn’t, but that doesn’t matter. WebTV was not designed for people who are already online. “Do you hanker to surf the Net via your TV?
I don’t, and I wonder how many of you can honestly say you do. A growing number of companies, however, are betting that many people will turn to settop boxes for Internet access. You and I are not their market, but for people who either can’t afford computers or are too intimidated to use them, these devices offer a viable alternative.” (Coursey, 1997, 63) The CEO of set-top box maker Curtis Mathes Holding Corp. bluntly stated that, “We’re hitting the TV viewer, not a computer person.” (“New set-top box challenges WebTV,” 1997, unpaginated) As I never tire of pointing out to my friends, 50 million people may currently use the Internet in North America, but that leaves 300 million who are not regularly online. WebTV offers these people a convenient means of getting online, and, since they have no experience with the World Wide Web, they have no allegiance to it, no idea of how WebTV is a reduction of it. As Mark Kingwell points out, “The troubling thing is that, under cover of the allegedly democratic character of wider access, the revolutionary interactive possibilities of a direct-communication medium are gradually being allowed to slip away.” (1997, 3)

One way to look at WebTV is as an example of a phenomenon called “convergence.” As we have seen, at its simplest, convergence is about the merging of the computer and the television into one appliance. (At its most complex, convergence is about the merging of the computer and all other appliances into one cybernetic system.) WebTV incorporates elements of computer networks into the television; streaming video, on the other hand, incorporates elements of television into computer networks. If a converged system succeeds, it will be because it offers users experiences they could not get with television or a computer on their own, in accordance with the theory that, to succeed, any new technology must offer new experiences or other benefits which outweigh those of old technologies.
However, there is a corollary to this theory: an existing technology will continue to thrive in the face of a challenge from a new technology to the extent that it can accentuate its own unique features. As long as a large enough group of individuals see value in putting their own information on the Web, or prefer to surf for themselves rather than have their choices limited to what television producers program into their shows, the original, computer-mediated Web will continue to exist. As one commentator put it, “I used to be on interactive television panels all the time and people would always ask, ‘What’s going to win, the PC or the TV?’ The question is nonsensical because there are certain types of applications you would never want to do on your TV, and there are certain types of entertainment you would never want to do on your PC. So, you have to assume that both forms will continue to live, grow and morph.” (Goldman Rohm, 1997, 116)

WebTV need not be a direct threat to the Internet, since individuals will still be able to choose between a direct or a televisually mediated experience of the Web. However, one thing it will do is allow current television networks to maintain their advertising base, and possibly expand it slightly as people who might otherwise have found their way onto the Web watch the new, somewhat interactive television offerings. This may slow the projected growth of online advertising, to the detriment of those who are trying to derive some of their income from it.

The main argument for such things as push and WebTV is economic: business can apply models from existing media to new media in order to profit from them. However, the effect is not just economic. Recall from the last chapter the difference between prescriptive and holistic technologies drawn by Ursula Franklin. As it is currently configured, the Internet is a holistic publishing medium.
All of these efforts, in addition to applying existing business models to the Internet, require the application of existing organizational models to the Internet. To a greater or lesser extent, this may turn digital communications networks into prescriptive technologies, to the detriment of individuals who are currently using them in holistic ways.

Bandwidth Issues

The Internet currently piggybacks on the phone system. As anybody who uses it privately (as opposed to from a school or business) knows, you must dial into an Internet service provider (ISP) through a phone line; the ISP usually connects to an access reseller which in turn connects to an Internet backbone (although a small number of ISPs connect directly to a backbone). The phone lines were not, of course, created to handle digital data, and especially not the volume of digital information required for higher-end uses such as video. As a result, there can be lags in the flow of information across the lines, which is relatively unimportant for email, somewhat unpleasant when downloading large Web sites and potentially deadly for video or online gaming.

To solve this problem, telephone companies are upgrading their equipment. However, they are not simply replicating the two-way telephone system; they are fundamentally changing the way information is delivered to the home. “In an attempt to find a way to offer video services over standard telephone lines, asymmetrical digital subscriber line (ADSL) technology has been developed. ADSL offers transmission speeds of up to 7 megabits per second from the central office to the subscriber, and up to 576 kilobits per second transmission speed from the home to the central office. This is enough to send two medium-quality video channels to a home.” (Baldwin, McVoy, Steinfield, 1996, 118) ADSL, once implemented, would allow about 13 times more information to enter your house than you could send out in a given time period.
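The asymmetry described here is simple arithmetic, and it is worth seeing the ratio worked out. As an illustrative check (not part of the original text), dividing the cited downstream figure by the upstream figure, with 7 Mbps treated as 7,000 kbps, gives a ratio of roughly 12, close to the "about 13 times" stated above:

```python
# Illustrative arithmetic only: downstream vs. upstream capacity for
# ADSL, using the figures cited from Baldwin, McVoy and Steinfield
# (1996): up to 7 Mbps (treated as 7,000 kbps) in, 576 kbps out.
def asymmetry_ratio(down_kbps: float, up_kbps: float) -> float:
    """How many times more data can enter the home than leave it."""
    return down_kbps / up_kbps

print(f"ADSL: {asymmetry_ratio(7000, 576):.1f}x more capacity in than out")
# prints "ADSL: 12.2x more capacity in than out"
```

The same function reproduces the cable-modem and satellite ratios the chapter cites: 800 to 3,000 kbps in against 33.6 kbps out yields roughly 23x to 89x, and 200 to 400 kbps in yields roughly 6x to 12x.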
The telephone companies were not the only ones to envisage an asymmetrical information pipeline. Digital information can also be transmitted to the home by cable modem; some systems allow 800 to 3000 kbps to come into the home, but only 33.6 kbps out (a difference of 23 to 89 times). It is also possible to transmit digital information over satellite dishes; in this case, you could get 200 to 400 kbps into the home, and the same 33.6 kbps out (a difference of 6 to 12 times). (Reinhardt, 1998, 83) Some of the developers of these systems insist that additional bandwidth leading out of the home will be added in the future (ibid); however, it seems unlikely that, once patterns of usage have become entrenched, these companies will want to jeopardize their dominance by offering more interactivity. Although the competing technologies are completely different, the vision of the executives of the various companies is remarkably similar. By the mid-1990s,

a major turf war had erupted with cablecos and telcos jockeying for position in what was thought by industry mavens to be the new frontier of the multibillion-dollar home-entertainment business, the so-called information highway. This was to be a high-bandwidth delivery system to homes, for what was envisaged in corporate boardrooms as digital, pay-per-view television with an interactive component. The interactivity would be confined mainly to games, and to searchable databases for news and various kinds of information including financial services and shopping. It was perceived from the start as a television-based rather than PC-based enterprise; almost no thought was given to adapting the innovations in interactivity in use on the Internet; indeed, the number of senior executives in either the cable or telephone industry who had Internet experience was vanishingly small. (Rowland, 1997, 313/314)

In the middle of the decade, “expensive trials of so-called video-on-demand [were conducted] in Britain, the United States, and Canada, and these received widespread journalistic coverage. Typically, a demographically-correct subdivision or urban neighbourhood would be rewired with high-capacity, two-way cable connected to a bank of computer servers. The computer databases would contain dozens of digitized movies along with other video entertainment, video games, news outlets and on-line shopping services.” (ibid, 314/315) Within two years, all of the trials had been wrapped up, none of them showing enough economic promise to be rolled out for larger groups of people.
Time Warner’s Full Service Network, for example, failed partially because competition between cable and phone companies decreased, lessening the push to open new markets, and partially because “people don’t want a lot of what’s being offered.” (“Postmortem on Time Warner’s Full Service Network,” 1997, unpaginated) This failure is perhaps understandable: those already online would not be attracted to the circumscribed world of new digital services, while those with no computer experience faced a fairly steep learning curve and daunting new machines for what must have seemed like little benefit. Or it may simply confirm that, “A number of pilot projects have shown that, with regard to the use of interactive television services, audience habits tend to change very slowly...” (Schroeder, 1997, 105)

Still, such services as video-on-demand and home shopping are very attractive to telephone and cable companies. As Unsworth points out, they “fit well into the current market system and require no alteration whatsoever in the role of the consumer.” (Unsworth, 1996, 242) Yet, as we have seen, on the Internet, every consumer of information also has the potential to be a producer of information. The underlying assumption of the passive information consumer in these scenarios is very different from the current reality of information consumption on the Internet, and opposed to the consumer actively producing content which was the attraction for so many of the writers in my survey.
“What is disturbing is how, in the general current discourse of these ‘user-centered’ trials, the users of the technologies are posited as consumers of the services that will be delivered to their homes, rather than as active and inquisitive citizens who might use the technologies for personal ‘empowerment’ or edification.” (Shade, 1997, 200)

* * *

Efforts to remake the Web in the image of television are being conducted by many of the largest names in corporate communications. “The giants are ‘pushing’ the types of sport and commercial news and entertainment that play well in broadcasting,” write Herman and McChesney.

CBS and Disney, for example, have developed major sports on-line services. The General-Electric-Microsoft joint venture MSNBC plans to join them... In 1997, MSNBC, CNN and News Corporation began ‘pushing’ 24-hour live video feeds over the Internet. News Corporation also launched its TV Guide Entertainment Network website, to capitalize on the firm’s widespread media properties... To complement its existing websites, in 1997 Time Warner established CityWeb, meant to replicate a TV network on the Internet with hundreds of local affiliates. Disney launched an online kids’ service in 1997 with basic and premium options and several different ‘channels’ targeting different youth demographic categories. After failing in its effort to launch German digital television, Bertelsmann announced that it would concentrate on developing a widespread digital TV presence through the Internet. [footnotes omitted] (1997, 125/126)

An advertisement for AT&T has a former infomercial producer proclaiming that, “The Web is a natural extension of television.” (“AT&T Web Site Services advertisement,” 1997, 45) These efforts to remake the Web in the image of television are driven by perceived economic necessity: by creating a medium in which mass audiences will come together for their Web programs, these corporations hope to be able to make money through traditional forms of advertising. The irony is that these corporations are trying to recreate the mass paradigm online at the same time as it is breaking down in existing media. The early history of television was dominated by three networks which vied for the largest audiences. By the 1980s, however, cable delivery of television signals allowed for the delivery of additional networks (CNN, HBO, et al), and deregulation in the mid-1980s allowed even more networks to develop (Fox, the WB, et al). “In 1978 three television networks -- ABC, CBS, and NBC -- captured 90 percent of the American prime-time television viewing audience. Over the following decade, that figure dipped to 64 percent... ‘There’s really no mass media left,’ an ad buyer told Forbes magazine in 1990.” (Shenk, 1997, 112) Tellingly, “A contemporary television blockbuster like Seinfeld draws only one-third the audience, as a percentage of total, that saw 1960s network hits like The Beverly Hillbillies.” (Rothenberg, 1998, 74) This plays havoc with the advertising model, which relied on reaching the largest possible number of viewers. “Indeed, fragmenting audiences are robbing entertainment companies of the mass scale that made their businesses so attractive in the first place. It’s extremely difficult to amortize higher costs over fewer customers.” (Stevens and Grover, 1998, 90) Attempts to recreate the Web along the lines of broadcast television may be self-defeating.
Adding new channels on the Web will only continue the process of fragmenting the audience, further undermining the mass advertising model on which television depends. This would greatly disadvantage those creating work specifically for the Web, but it would be less of a problem for those leveraging brands across a variety of media: “The formula for success is straightforward enough: Produce something for a fixed cost and exploit the hell out of it, selling it over and over in different markets, venues and formats.” (ibid, 88)

Only time will tell if any of these efforts to change the fundamental nature of the Web by changing the underlying software or hardware by which users get connected will succeed, either singly or in some combination. The failure of heavily hyped push technologies suggests that individual computer users can, by choosing which technologies they use and which they ignore, determine the future of the Web. However, this requires vigilance, since what industry cannot accomplish in one form, it simply finds different ways of accomplishing. The consequences of users losing sight of their long-term interests in order to cope with short-term problems are very serious. As Derrick de Kerckhove comments: “If business is left to guide the values of the Internet, the latter will be marked by the proliferation of real-time, full-bandwidth communications. We can only hope that the architecture of these communications remains sufficiently open to let everyone in on them.” (Rushkoff, 1998, 68)

The Attention Economy

“Does a place in cyberspace exist if no one visits it?” (Dyson, 1995, 142)

As we have seen, information is abundant and, therefore, almost valueless as a generic commodity. Attempts to make money from the sale of online information using traditional models have been, for the most part, unsuccessful. Some have suggested that qualities related to information may be where value lies. According to Esther Dyson, “The source of commercial value will be people’s attention, not the content that consumes that attention. There will be too much content, and not enough people with time for it all. That will change our attitudes to everything; it will bring back a new respect for people, for personal attention, for service, and for human interaction.” (Dyson, 1998, 175) The basic idea behind so-called “attention economics” is that, while information is abundant, the time we have to devote to consuming it is scarce. Dyson points out that the time we make available for such consumption is directly tied to our demand for information (ibid, 173); attention, therefore, should be used as a measure of value, since the more time we devote to something, the more valuable it can be assumed to be to us.

Advocates of attention economics argue that computer users will pay for customized service. This is not a simple assertion. Because it is being posited as the scarce commodity, the attention of the consumer is the product in this economic equation; this would seem to suggest that the attention of consumers is what artists will be competing for, not cash. By this logic, content creators should be paying computer users for their attention. Micropayments, for instance, would allow computer users to “‘earn’ as well as spend small sums” (Wallich, 1999, 37) by, for instance, clicking on advertisements on an artist’s Web site.
The problem with this is that most individual producers of content cannot afford to pay users of computer networks to come and look at their work, nor do they have access to advertising which would cover such a cost. In such an environment, Dyson suggests that “The likely best course for content providers is to exploit the situation, to distribute intellectual property free in order to sell services and relationships.” (1995, 137) Thus, content becomes the lure by which artists sell other things. This may seem an exotic solution, but, in fact, the model has existed for decades: television programs are given away for free, the real payment being the attention of television viewers to the advertising within the programs. “Precisely because it is scarce and unreplicable, this unreplicable kind of content is likely to command the highest rewards in the commercial world of the future.” (ibid, 141)

For some, attention economics seems to fit nicely with the artist’s agenda. “What do writers want when a book is published?” James J. O’Donnell asks. “Attention, acclaim, response, notoriety: they want the act of imposition to succeed in seizing the public stage, the stage that has been inaccessible until the act of publication occurs.” (1998, 12) While this is certainly one motivation, it is by no means the only one. After all, artists, like other people, have to eat. The time they devote to making a living by other means is time that is taken away from their craft. Given the choice, most artists would prefer making their living from their art so that they can devote all of their time to it.6

How might attention economics work on computer networks? Dyson suggests that some online content creators “‘will write highly successful works and then go out and make speeches.’ And what if they are shy?
‘Then they won’t make any money.’” (“Advice to Emily Dickinson: Speak Up!”, 1996, 16) Of course, many authors currently augment their writing income with speaking engagements; according to Dyson, this money will become an increasing percentage of their income as the amount of money to be made directly from writing decreases. Some people have applied traditional economic approaches to this issue: “People, I think, are going to be increasingly rewarded for their personal effort with processes and services rather than for simply owning the assets. It’s a kind of intellectual property which today is called ‘context.’” (“Opening the Gate,” 1997, 153) Dyson suggests that “players may simply try their hands at creative endeavors based on service, not content assets: filtering content, hosting online forums, rating others’ (free) content, custom programming, consulting, or performing.” (1995, 138) This raises the question, though: filtering or rating assumes pre-existing content to work with, but if people cannot make any income from producing such content, where would it come from?

There are other models by which artists might be able to make money through a relationship with their audience.
Musician Todd Rundgren, for instance, has experimented with “a project he hopes will radically change the way artists and musicians market their work” to the public: “‘The idea,’ says Rundgren, ‘is that, instead of, say, a record company buying an artist’s music and then selling that music to the public, anyone who wanted to could subscribe for a year.’ As a subscriber, you’d be let in on the creative process, receiving, via email or a Web page, any newly recorded tracks as they were happening – run-throughs, second or third takes, finished tracks.” (“I Want My...PatroNet?” 1997, 92) Subscribers would receive whatever work was created in the period for which they had paid, but that is almost a by-product of what was really being sold: access to the artist’s creative process.

Economics played a big part in Rundgren’s decision to take this route. “Although a popular producer, Rundgren isn’t an easy sell as a musician, and lately he’s had trouble getting record deals. He knows that every time a record label releases a new album it represents a gamble of about $300,000 in up-front costs for production, CD manufacturing and distribution. Very few of those gambles pay off, so Rundgren wants to remove some of those costs, thereby reducing a lot of the financial risk involved.” (Houpt, 1998, C9) This rationale is similar to that of the writers who hope to avoid the costs of producing and distributing books by publishing on the Web; it differs only in the revenue model on which it is based.

A different approach was taken by a writer named Dan Hurley, who billed himself as the “Five Minute Novelist.” Hurley started with a typewriter on a street corner, offering to write a story for passersby; he would ask them questions about their lives, and then create the story based on what they had told him.
Eventually, Hurley’s writing moved to the Net: “America Online has created a special area for Hurley -- officially launched in September -- and the novelist may not be back on street corners any time soon. ‘It seems like the online medium was made for the kind of work I do,’ Hurley beams.” (van Bakel, 1995, 90) Users of Hurley’s service email him details of their lives, which he turns into a story. It is important to note that the works themselves are not what makes what Hurley does unique, but the fact that he develops a personal relationship with each of his readers. It is also worth noting that, while Rundgren’s personal brand, developed out of his music in the real world, would make his subscription-based Web site attractive to people familiar with it, Hurley became much better known after he appeared on the Web than he would have been had he remained on his street corner. There is no single approach to garnering attention online.

The attention model may not be familiar to most people, but, as John Perry Barlow points out, it “was applied to much of what is now copyrighted until the late eighteenth century. Before the industrialization of creation, writers, composers, artists, and the like produced their products in the private service of patrons. Without objects to distribute in a mass market, creative people will return to a condition somewhat like this, except they will serve many patrons, rather than one.” (Barlow, 1996, 168) Or, as Dyson puts it, “Just as prominent patrons such as the Medicis sponsored artists during the Renaissance, corporations and the odd rich person will sponsor artists and entertainers in the new era. The Medicis presumably had the pleasure of seeing or listening to their beneficiaries and sharing access to them with their friends.
This won them renown and attention as well as a certain amount (we hope) of sheer pleasure at experiencing the art.” (1995, 142) Whether or not the Medicis got sheer pleasure out of experiencing the art produced by the artists they patronized, many people since have had that experience thanks to their largesse, just as many people will be able to purchase the music Rundgren will be able to produce because of the patrons whose money allowed him to create for a year.

Some commentators have gone so far as to suggest that attention is “the hard currency of cyberspace.” (Goldhaber, 1997, 182) Goldhaber points out that “...transactions in which money is involved may be growing in total number, but the total number of global attention transactions is growing even faster. By attention transactions I mean occasions when attention is paid to someone who can possibly make some use of having it, or is able to pass it on to someone else. People trade attention at work, at home, and in between, day in and day out. Anyone tied into the Web might make hundreds of such transactions a day, far more than the number of monetary transactions they are likely to be involved in.” (ibid, 188/190) There is the suggestion that attention will ultimately replace cash as the unit of exchange, in digital communications networks if not the real world.

This may strike many as absurd. After all, the money which we currently use as a token of exchange is actually worth something. Right? Increasingly, this is not the case. A dollar bill, for instance, is just a piece of paper; if it has any value beyond what one can ordinarily do with a piece of paper, it is because of the social convention that we have agreed that it has such a value. Some have argued that a dollar represents the labour which went into earning it. Thus, a dollar is worth one sixth of a fast food worker’s hour of labour (or one five hundredth of a lawyer’s).
There are many problems with this formulation (not the least of which is that money is often created without labour: when governments print more bills, for example, or when banks lend money while keeping only a 10% reserve in their vaults), but let us assume it is correct. Since the assignment of value based on labour is essentially a social convention, there is no reason to believe that it could not be supplanted by another social convention, such as the assignment of value based on attention. The tricky bit would be handling the transition from a labour economy to an attention economy.

Decisions in a democracy are made badly when they are primarily made by and for the benefit of a few stake-holders (land-owners or content providers). (Boyle, 1997, unpaginated)

The thinking of government in the advanced industrial states remains by and large stuck in the worn-out groove of apportioning scarce resources, whether in terms of bandwidth allocation or licensing of content delivery. This defensive posture is inherently weak. The assertive approach would be to do everything possible to optimize the connectedness of the nation. This translates, first of all, into encouraging and supporting -- financially, if need be -- cable, telco, and even hydro joint ventures; second, combining these initiatives with educational programs that put the power of creation and idea development in the hands of the people, rather than exclusively under the control of established developers and information providers; third, maximizing access to copyright-free public domain material. (de Kerckhove, 1997, 175)

Carl Malamud: “Technology in itself is no guarantee of freedom of speech.” (Ginsburg, 1997, 131)

Chapter Four: What Governments Can (And Cannot) Do

Introduction

In these neo-Conservative times, it is politically fashionable to deride government as “the problem” and call for cutting it to the bone, privatizing as many of its functions as possible. Those who call for drastic cuts to government programmes forget that government is an instrument of the people, created to do our bidding. Far from being an enemy, government is a vital means by which the collective goals of a people, goals which they could not reach through their individual efforts, can be accomplished. If the people find a particular government is not acting in their interests, they can change its policies by putting pressure on it, or simply vote for a different government at the next election; the answer is not to cut government back so severely that it cannot adequately maintain its many agreed-upon worthwhile and/or necessary functions. Those who call for the privatization of most government functions forget a simple fact about markets: as we saw in Chapter Three, their purpose is the efficient allocation of resources. Period. Markets are not instruments through which socially just societies can be created; they have nothing to do with morality. Governments are the proper instruments for the exercise of society’s moral will.

Governments have a number of tools which will, in one way or another, affect art and artists working in online digital media. One is legislation which attempts to directly control expression. Content control (which includes licensing regulations as well as outright censorship) is a stick used by governments to ensure that their nation’s culture is adequately portrayed in their media. Culture is a loaded term, and I don’t intend to get into a discussion of all its nuances here; what is important to note is that governments feel it necessary to promote their culture, however each specific government may define it.
“[T]he primary regulatory objective is to protect and promote cultural values.” (Johnstone, Johnstone and Handa, 1995, 113) Some governments already feel that their cultures are under siege by American cultural products:

Will Western-produced news releases and films promote attitudes and opinions contrary to, and incompatible with, their own cultural values and national policies? Will reliance on other countries impede the development of indigenous skills for educational and entertainment programming? Will the lure of Western commercialism undermine their local consumer industries and entice the movement of scarce funds abroad? Will they become unwilling receptors of propaganda warfare between the superpowers and victims of internal interference by other nations? The essential issue is one of uncertainty over whose ideas and ideals will be promoted to which audiences and for what purposes. (Janelle, 1991, 78)

Many governments feel that this problem will be exacerbated by the growth of digital communication networks. “While international in scope, the Net has been dominated so far by American voices and sites.” (Kinney, 1996, 143)

Governmental control of content takes two general forms. Quotas which make radio and television licenses dependent upon the amount of regional programming they carry are a form of positive control, in the sense that they require producer/distributors of works to act in a specific way. Laws against pornography or hate literature are forms of negative control, which require producer/distributors not to act in a specific way. This chapter will start with a discussion of negative control, focusing on the American government’s Communications Decency Act. (You will recall that state censorship was mentioned as a drawback to publishing on the Web by respondents to my 1996 survey. Although not mentioned by respondents to the 1998 survey, it nonetheless has serious potential effects on their ability to use the Web as a publishing medium, and is, therefore, relevant to the current study.)
The nature of digital networks militates against government control in a number of ways, however: thus, we shall have to consider the possibility that the international, boundaryless nature of the Internet makes control by local governments unfeasible. This will make up the next section of the chapter.

One other area in which government may be seen to have a legitimate role is negotiating the interests of various members of society, enforcing contracts between parties where necessary. Perhaps the most important example of this for creators is the creation and enforcement of copyright law. As we saw in Chapter Two, some of the people who put their fiction on the World Wide Web are concerned with ensuring they get proper credit and, if possible, financial compensation for their work. The particular problems digital media create in regard to copyright will be the subject of a section later in this chapter, where I shall argue that current developments in the law are to the detriment of individual content creators.

This will be followed by a discussion of a second major problem with any attempt by governments to regulate or otherwise control content on the Internet: the amorphous nature of digital media makes it unlike any existing medium. Three regulatory regimes have arisen to deal with existing media: broadcast, common carrier and First Amendment/free speech protection. Each regime creates different opportunities for positive control of a medium. Perhaps more important for our purposes, each regime favours different stakeholders in the medium; using one model will foreclose the possibility of people using the medium based on another model. A fourth model will be introduced which bypasses the shortcomings of attempts to understand the Internet using existing models, a model which will suggest that new thinking is required by governments intent on any form of regulation of this new medium.
Although many hoped to make money from their Web publishing efforts, none of the writers and only one publisher mentioned state support as a source. Nonetheless, many governments (including that of the United States) directly subsidize the work of artists through loan and grant programmes. For this reason, the chapter, which began with the stick of government regulation, ends with the carrot of government support. The purpose of government subsidization is to support the creation of worthwhile works of art which would not otherwise be supported by the marketplace. Such works are sometimes attacked for their lack of commercial viability, but those who attack them forget that this is part of the rationale of public support in the first place: if such works were commercially viable, they would not need government support. Inasmuch as society benefits from the widest range of works of art, if the marketplace will not support the creation of certain types of work, some other mechanism must be found. In the final section of this chapter, I will look at a few of the programmes in Canada which are intended to financially support the creation of digital artworks.

The Stick: Government Control Through Censorship

Government control over communication media is not new, of course.

Every communications advance in history has been seen by self-appointed moral guardians as something to be controlled and regulated. By 1558, a century after the invention of the printing press, a Papal index barred the works of more than 500 authors. In 1915, the same year that the D. W. Griffith film ‘Birth of a Nation’ changed the U.S. cultural landscape, the U.S. Supreme Court upheld the constitutionality of an Ohio state censorship board created two years earlier, thus exempting motion pictures from free speech protection on the grounds that their exhibition ‘is a business, pure and simple, originated and conducted for profit....’ (Human Rights Watch, 1996, unpaginated)

Many literary works which are now considered classics (everything from Women in Love to Huckleberry Finn) were banned from some jurisdictions because of their content (and the Papal index of forbidden works remained in force until 1966). Freedom for adults to read or view material intended specifically for adults was hard won. However, there seems to be a widespread unspoken assumption that electronic forms of speech should not enjoy the same protections that the printed and spoken word do. This seems to be the reasoning behind the ill-fated 1995 American Communications Decency Act. As an exemplar of the government tendency to attempt to control the media, this is a good place to start an investigation of state censorship.

The Communications Decency Act

Everybody has a favourite cause these days. Mine is smut. I’m for it. Now, owing to the way the laws are written...this is a free speech issue. But we know what’s really going on here. Dirty books are fun. (Lehrer, 1965)

In 1994, Senator James Exon, a Democrat from Nebraska, introduced the Communications Decency Act (CDA) in the Senate. Congress stopped sitting before the Senate had time to consider Exon’s bill, so it quietly died. However, Exon reintroduced the bill in the next sitting of the Senate the following year. The CDA amended the Communications Act of 1934 in order to accomplish two goals. The first was to make it a crime to use computer networks to harass another person. According to Exon, “Under my bill, those who use a telecommunications device such as a computer to make, transmit or otherwise make available obscene, lewd, indecent, filthy or harassing communications could be liable for a fine or imprisonment. That is the same language that covers use of the telephone in such a manner.” (undated, unpaginated) The second goal of the CDA was to protect minors from coming across sexually explicit content online. The CDA made it a crime for anybody who “knowingly within the United States or in foreign communications with the United States by means of telecommunications device makes or makes available any indecent comment, request, suggestion, proposal, image to any person under 18 years of age regardless of whether the maker of such communication placed the call or initiated the communication...” (S.314, 1995, 47 U.S.C. 223 (e)(1)) Anybody violating the Act would be liable for a fine of up to $100,000 and a maximum sentence of two years imprisonment. (ibid)

Proponents of the CDA argued that it was an extension of existing protections for minors into the online world. For example, Cathleen A. Cleaver, director of legal studies at the Family Research Council, stated that “We have long embraced laws that protect children from exploitation by adults. We prohibit adults from selling porn magazines or renting X-rated videos to children. We also require adult bookstores to distance themselves from schools and playgrounds.
Do these laws limit adults’ freedom? Of course they do. Are they reasonable and necessary anyway? Few would dispute it.” (1995, unpaginated) Exon himself claimed that “We based this on the law that has been in effect and been approved constitutional with regard to pornography on the telephones and pornography in the U.S. mail. We’re not out in no-man’s land. We’re running on the record of courts’ decisions that have said you can use community standards to protect especially kids on telephones and in the mails. We’re trying to expand that as best we can to the Internet.” (“Focus – Sex in Cyberspace,” 1995, unpaginated) Cleaver used an interesting analogy to support her position: “[W]e know that pedophiles traditionally stalk kids in playgrounds. Well, we know that computers are the child’s playground of the 1990s. That is where children play these days increasingly. So it is a really toxic mix to have these playgrounds be a place where children are fair game to pedophiles. It is very disturbing.” (McPhee, 1996, unpaginated) Declan McCullagh, a free speech advocate who posted this statement to the Web, was sarcastic about Cleaver’s position. To be sure, the suggestion that children are fair game to pedophiles on the Net is inflammatory rather than enlightening. However, Cleaver’s analogy should not be dismissed out of hand. When new technologies are introduced into a society, many people’s initial reaction is to compare them with existing technologies; this makes understanding and accepting them easier. Above, Exon claimed that the CDA was drawn from existing laws governing the telephone and the mail, making an explicit analogy between those media and the Internet. Cleaver compared pornography on the Internet to that found in adult bookstores. As we shall see, opponents of the CDA made different comparisons; in fact, the battle over the CDA can be seen, in part, as a duel between analogies for the Internet.
(This may be true of differences of opinion on the nature of the Internet more generally.) Moreover, spatial metaphors abound on the Internet: for instance, Blithe House Quarterly, one of the ezines explored in Chapter Two, has a picture of the floor plan of a building on its contents page, with each story assigned to a room. Because the two phenomena being compared in any analogy are not identical, an analogy necessarily distorts the nature of what is being discussed. Some analogies, in fact, conceal more than they reveal. The test of a good analogy is how closely the two phenomena being compared correspond, and how significant the differences between them are. Given all of this, there was (and is) merit in the analogy between the places children go online and playgrounds in the real world, and those who opposed the legislation to control pornography on the Internet would need to address this issue.

It is also important to note, before we visit the controversy that the law created, that proponents of the CDA were not necessarily raving anti-free speech zealots, as they were sometimes portrayed by anti-CDA activists. There is a broad consensus in North American society that minors should not be exposed to sexually explicit pictures or stories. While there may be debate about where the line defining minors should be drawn (are 16- or 17-year-olds knowledgeable enough to experience sexually explicit materials without harm?), it is generally accepted that pre-pubescent children are not yet sufficiently emotionally mature to deal with sexually mature subjects. With few exceptions, opponents of the CDA conceded this point. Thus, the CDA was an attempt to create a law which would accomplish a largely accepted social good; the only controversy was whether it was the best means to accomplish this goal. The CDA passed the Senate by a vote of 84 to 16 on July 14, 1995.
(Corcoran, unpaginated) “On June 30, 1995, Representatives Cox and Wyden introduced the Internet Freedom and Family Empowerment Act as an alternative to both the CDA and the Leahy study. The Act would prohibit content and financial regulation of computer based information service by the FCC. In addition, it eliminates any liability for subscribers, service providers or software developers who make a ‘good faith’ effort to restrict access to potentially ‘objectionable’ content.” (Evans and Stone, 1995, unpaginated) This Act was overwhelmingly passed by the House of Representatives. Negotiations between representatives of the two houses resulted in acceptance of Exon’s version of the bill. The CDA was attached to the Telecommunications Act of 1996; it was just a small part of a law whose major purpose was to change the telecommunications industry, allowing, for example, local and long distance telephone carriers to compete in each other’s jurisdictions. The Telecommunications Act of 1996 was signed by President Bill Clinton on February 8.

The passage of the CDA in Congress had little effect online. “[D]espite the new law, for the most part it was business as usual on the net, where a search under ‘XXX’ or ‘sex pictures’ produced quick cross-references to dozens of sites promising a variety of products and services.” (Reuters, 1995, unpaginated) Even the passage of the Telecommunications Act, which included the CDA, didn’t affect what was available online: “Pornographic sites still offer up obscene pictures and stories of incest and rape still wait to be read on the Internet bulletin board Usenet, where a new group was formed Thursday night -- alt.(expletive).the.communications.deceny.act.” (Associated Press, 1995, unpaginated) While the Internet community didn’t visibly change its online behaviour because of the Telecommunications Bill, reaction to the bill offline was immediate.
Minutes after Clinton signed the bill, “the American Civil Liberties Union (ACLU) filed suit challenging the law’s constitutionality. The CDA was on the books for one week and then was restrained by District Judge Ronald Buckwalter.” (“Communication Decency Act,” 1997, unpaginated) Nineteen groups, including the National Writers Union, the Journalism Education Association, Planned Parenthood Federation of America and Human Rights Watch, joined the suit, which was heard by a panel of three judges in Philadelphia. (Associated Press, 1996, unpaginated)

A second challenge to the Act was undertaken at the same time. On the day that the Telecommunications Act was signed, an inflammatory editorial was published in an online newspaper called The American Reporter. It read, in part, “But if I called you [Congress] a bunch of goddam motherfucking cocksucking cunt-eating blue-balled bastards with the morals of muggers and the intelligence of pond scum, that would be nothing compared to this indictment, to wit: you have sold the First Amendment, your birthright and that of your children. The Founders turn in their graves. You have spit on the grave of every warrior who fought under the Stars and Stripes.” (Russell, 1996, unpaginated) Strong stuff, not typical of The American Reporter. However, as the editor put it in an editorial published in the same issue, “This morning, we are publishing as our lead article a startling piece of commentary by a brave Texas judge, Steve Russell, who is risking his position and his stature in the community to join us in a fight against the erosion of the First Amendment.” (Shea, 1996, unpaginated) An attorney for the publication filed for an injunction against the CDA in New York, where a second panel of three judges was asked to rule on it.
At the same time as these suits were pursued, a variety of protests against the CDA were organized to raise public awareness of the problems some people and groups had with it. Protesters included the Community Breast Health Project, Surf Watch, Sonoma State University, the Abortion Rights Activist Page, Internet on Ramp, authors, computer programmers and graphics designers. (Associated Press, 1996, unpaginated) As one form of online protest, many sites on the World Wide Web (mostly, but not exclusively, pornographic) turned their backgrounds black and added links to Web pages containing arguments against the CDA. Some suggested that this action mostly preached to the converted, to little effect: “This collective act of protest was greeted, at best, with a yawn in Washington and, at worst, with a collective ‘Who cares if their web pages are black? The fools.’” (O’Donnell, 1996, unpaginated) However, the protests seemed to galvanize the online community, giving its offline protests more coherence and weight.

Those opposed to the CDA argued against it on a variety of grounds. The CDA outlawed the transmission of obscene material over the Internet. “The First Amendment protects sexually explicit material from government interference until it is defined as obscene under the Supreme Court’s guidelines for analysis in Miller v. California... Once characterized as obscenity, such material has no First Amendment protection. None of the Communications Decency Act prohibitions of obscene materials violates the First Amendment.” [note omitted] (Evans and Stone, 1995, unpaginated) However, since obscene material was already illegal, the CDA was unnecessary in this respect. In a similar vein, “The Supreme Court has also held that child pornography is not protected by the First Amendment. In New York v.
Ferber, the Court relied on the fact that child pornography is created by the exploitation of children, and that allowing traffic in child pornography provides economic incentive for such exploitation. The Court also found that such material possesses minimal value. Therefore, child pornography lies outside the protection of the First Amendment and can be prohibited.” (ibid) New laws in this area are only necessary when new media have aspects which make the applicability of existing laws unclear; in such cases, all the new law has to do is clarify how existing law will be applied to the new medium. Since the obscenity and child pornography rulings were not specific to a given medium of communications, they could be applied to the Internet, making the parts of the CDA which covered those issues redundant.

However, the CDA went much further, banning “lewd, indecent, filthy or harassing communications.” According to many critics, this was a completely different kettle of fish. “What is ‘indecent’ speech and what is its significance? In general, ‘indecent’ speech is nonobscene material that deals explicitly with sex or that uses profane language. The Supreme Court has repeatedly stated that such ‘indecency’ is Constitutionally protected. Further, the Court has stated that indecent speech cannot be banned altogether -- not even in broadcasting, the single communications medium in which the federal government traditionally has held broad powers of content control.” (Electronic Frontier Foundation, undated (a), unpaginated) Anti-CDA activists claimed that changing the definition of unallowable material from obscene, which was not Constitutionally protected, to indecent, which to that point had been, would have a chilling effect on speech online.
The legal test for obscenity involves three qualifications, the final one being that the work in question “taken as a whole, lacks serious literary, artistic, political, or scientific value.” (Evans and Stone, 1995, unpaginated) Thus, even with explicit sexual content, a work of serious value, whether a respected novel or an academic paper, cannot be considered obscene. However, because the legal test for indecency does not have this provision, those same works can be considered indecent. “Any discussion of Shakespeare or safe sex would not be allowable except in private areas, where someone can be paid for the task of rigidly screening participants.” (Oram, et al, 1995, unpaginated) Other examples of indecency “could include passages from John Updike or Erica Jong novels, certain rock lyrics, and Dr. Ruth Westheimer’s sexual-advice column.” (Electronic Frontier Foundation (a), unpaginated) Moreover, “As Human Rights Watch, a member group of the coalition [against the CDA], argued in an affidavit to the Supreme Court, the law’s prohibition of ‘indecent’ speech could be applied to its own human rights reporting that includes graphic accounts of rape and other forms of sexual abuse.” (Human Rights Watch, 1999, 31) The legality of one Web page devoted to its creator’s favourite paintings came under question:

As nearly as I can tell, most of [the paintings] would qualify as being indecent under the Communication Decency Act. Were the Communication Decency Act to be broadly enforced, it would be illegal to maintain these images on a server located in the United States... Most of the pictures at this page are pre-Raphaelite -- either painted by members of the pre-Raphaelite brotherhood itself, or by artists with similar inspirations. While it’s beyond the scope of this page to get into a detailed discussion of pre-Raphaelite art, I find it particularly significant that in their day, many of the pre-Raphaelite artists were decried as “indecent,” perhaps by people with the same narrow mindset as our contemporary politicians and law-makers. (Rimmer, 2000, unpaginated)

Perhaps most immediately relevant for our purposes is the fact that, as we saw, some of the stories written by the writers surveyed in Chapter Two and posted to the Web contained graphic descriptions of sexual acts or profane language. These stories would likely have been considered indecent, and therefore illegal, under the CDA. In Chapter Two, I tried to show how the graphic passages were not merely prurient, but part of the overall artistic intent of the writers. While this would be a defense against charges of obscenity in print, it was not a defense against charges of indecency under the CDA. Thus, many of the writers in the survey (the majority of whom, you will recall, were Americans) would have had to remove their work from the Internet or face criminal charges. It is also worth noting that, in quoting passages from such work, this dissertation would have been illegal under the CDA. While I am a Canadian, and the dissertation will be published on a server in Canada, if it were mirrored on a server in the US, the ISP would likely have been liable under the CDA.

If widely enforced, the CDA would have had the effect of limiting communication on the Internet to what would be acceptable for children. In doing so, the Act essentially criminalized speech on the Internet which would be acceptable in other media. For example, The American Reporter editorial quoted above was written specifically to test the limits of the CDA; the author and publisher assumed it was illegal under the Act. However, “Recently, the editorial, shortened for space but with the same raw language, was reprinted in the May issue of Harper’s magazine.
There is no possibility, however, that Harper’s publisher could face criminal sanction for distributing the commentary in print.” (Mendels, 1996, unpaginated) Analogies between online communication and existing communications forms abounded: “It’s as if the manager of a Barnes & Noble outlet could be sent to jail simply because children could wander the bookstore’s aisles and search for the racy passages in a Judith Krantz or Harold Robbins novel.” (Electronic Frontier Foundation, undated (a), unpaginated) The National Writers Union summed up this argument when it resolved that “Electronic communication should have no less protection than print or any other form of speech.” (1995, unpaginated)

Another important objection to the CDA was that it cast its net too wide. Defending the Act, former Attorney-General Edwin Meese et al. argued that

It is not possible to make anything more than a dent in the serious problem of computer pornography if Congress is willing to hold liable only those who place such material on the Internet while at the same time giving legal exemptions or defenses to service or access providers who profit from and are instrumental to the distribution of such material. The Justice Department normally targest [sic] the major offenders of laws. In obscenity cases prosecuted to date, it has targeted large companies which have been responsible for the nationwide distribution of obscenity and who have made large profits by violating federal laws. (1995, unpaginated)

The CDA could be interpreted to hold Internet Service Providers (ISPs) liable for the content on their servers. There are many reasons to object to this. The first is that most ISPs do not screen content; it simply flows through them. Owing to the nature of the medium, ISPs could be prosecuted for material which they couldn’t possibly know was going through their systems. As Mike Godwin stated, “Internet nodes and the systems that connect to them, for example, may carry [prohibited] images unwittingly, either through unencoded mail or through uninspected Usenet newsgroups. The store-and-forward nature of message distribution on these systems means that such traffic may exist on a system at some point in time even though it did not originate there, and even though it won’t ultimately end up there.” (Evans and Stone, 1995, unpaginated)

The CDA would thus seem to require ISPs to substantially change the nature of their business. Some commentators argued that this could have a potentially devastating effect on the industry:

The CDA as passed by the Senate would put the burden of censorship directly on the service providers. Under this burden, the risk of litigation would literally put a vast number of service providers out of business. The result of which would be fewer service providers who will then charge higher access fees based on the shrinking ‘supply’ of access to these services. Service providers will also be required to ‘insure’ themselves from the potential litigation. In addition, the service providers will be required to invest in new technology to ‘censor’ the content provided to their subscribers as well as the information passing through their systems. There is no doubt that these costs will be passed along to individual subscribers by the service providers. (ibid, unpaginated)

It was also argued that the volume of traffic which passes through the Internet would make it impossible for any ISP to monitor properly. We shall come back to this point later in the chapter.

Finally, it was pointed out that there were alternatives to government censorship of the Internet. One technical alternative for keeping minors away from adult content was filtering software. A typical filtering program, Surfwatch, “uses multiple approachs [sic], including keyword- and pattern matching algorithms; the company uses its blocked site list as a supplement to its core filtering technologies... “ (Godwin and Abelson, 1996, unpaginated) Most of the major commercial ISPs offered their own software for concerned parents:

Compuserve offers a kids’ version of WOW!, which lets parents screen their kids’ incoming e-mail, has no chat or shopping features, and restricts Web access to sites approved by WOW!’s staff. America Online provides filters that allow parents to restrict children to Kids Only areas that are supervised by adults, allows parents to block all chat rooms, selected chat rooms, instant messages (a sort of instant e-mail), and newsgroups. Prodigy lets users restrict children by limiting access to certain newsgroups, chat rooms, and the Web. Yahooligans! will permit access only to Internet areas rated “safe.” Microsoft Network’s service automatically restricts access to adult areas except to users who have submitted an electronic form requesting access; Microsoft then checks to see if the account is subscribed to someone over 18. [notes omitted] (Bernstein, 1996, unpaginated)

Legal precedent for American government regulation of speech requires “what the judiciary calls the ‘least restrictive means’ test for speech regulation.” (Electronic Frontier Foundation, undated (b), unpaginated) This means that, if there is a means of accomplishing the aim of government regulation without actually having the government put controls on speech, that means is preferable. Opponents of the CDA argued that, while imperfect, the Internet offered a variety of tools which parents could use to protect their children from indecent materials; if used, filtering mechanisms would protect minors without affecting speech which was legal for adults.1
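The keyword- and pattern-matching approach attributed to programs like Surfwatch above can be sketched in a few lines of code. Surfwatch's actual algorithms and site lists were proprietary; everything in this sketch (the domain name, the patterns, the `is_blocked` function) is illustrative, not a reconstruction of any real product.

```python
import re

# Illustrative blocklist and keyword patterns. A real filter of the era
# combined pattern matching with a vendor-maintained list of blocked sites,
# used here as a supplement to the core filtering step.
BLOCKED_SITES = {"adults-example.com"}  # hypothetical blocked domain
KEYWORD_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in (r"\bxxx\b", r"\bsex\s+pictures\b")]

def is_blocked(url: str, page_text: str = "") -> bool:
    """Return True if the URL or the page's text trips the filter."""
    # Crude host extraction: strip the scheme, keep up to the first slash.
    host = re.sub(r"^[a-z]+://", "", url, flags=re.IGNORECASE).split("/")[0]
    if host in BLOCKED_SITES:          # blocked-site-list supplement
        return True
    # Keyword/pattern matching against the URL and the page contents.
    return any(p.search(url) or p.search(page_text)
               for p in KEYWORD_PATTERNS)

print(is_blocked("http://adults-example.com/index.html"))            # True
print(is_blocked("http://example.org/essay.html", "a literary essay"))  # False
```

The sketch also makes the filters' well-known weakness concrete: keyword matching cannot tell a sexual-advice column or a breast-health resource from pornography, which is why groups such as the Community Breast Health Project joined the anti-CDA protests even as filtering was being offered as the less restrictive alternative.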

The courts found the anti-CDA arguments compelling: “...on June 11, 1996, a panel convened in Philadelphia, consisting of Chief Judge Dolores Sloviter and Judges Ronald Buckwalter and Stewart Dalzell, enjoined the enforcement of the CDA, finding the statute to be unconstitutional on its face. On June 13, 1996, a panel convened in New York, consisting of Chief Judge Jose Cabranes and Judges Leonard Sand and Denise Cote, entered a similar injunction.” [notes omitted] (Bernstein, 1996, unpaginated) The government appealed the ruling of the Philadelphia court to the Supreme Court. On June 26, 1997, in the case of Reno v. American Civil Liberties Union, the Supreme Court found that the CDA’s “‘indecent transmission’ and ‘patently offensive display’ provisions abridge ‘the freedom of speech’ protected by the First Amendment.” (Wisenberg, 1997, unpaginated) The Court largely agreed with the reasoning of those who opposed the CDA. On the issue of indecency, for instance, the Court stated that “Although the Government has an interest in protecting children from potentially harmful materials...the CDA pursues that interest by suppressing a large amount of speech that adults have a constitutional right to send and receive...” (ibid) On the issue of filters, the Court stated that “The CDA’s burden on adult speech is unacceptable if less restrictive alternatives would be at least as effective in achieving the Act’s legitimate purposes... The Government has not proved otherwise.” (ibid)2 Thus, by a margin of 7-2, the Supreme Court struck down the Communications Decency Act.

During its lifetime, debate about the CDA was highly polarized and quite vituperative. Judge Russell’s profane article in The American Reporter was not the only provocation. Online journalist Brock Meeks, writing for HotWired, claimed that retaining the indecency standard “is akin to ramming a hot poker up the ass of the Internet.” (1995, unpaginated) Another commentator called the CDA “...Exon’s pillaging of freedoms in the online world.” (Corcoran, unpaginated) Still another opponent of the bill posted the following to the Net:

The [German] purity crusade now found a focus in the “Act for the Protection of Youth Against Trashy and Smutty Literature,” a national censorship bill proposed to the Reichstag late in 1926. This Schmutz und Schund (Smut and Trash) bill, as it was dubbed, aroused fears in German literary and intellectual circles, but the Minister of the Interior soothed the apprehensive with assurances that it “threatens in no way the freedom of literature, [the] arts, or [the] sciences,” having been designed solely for the “protection of the younger generations.” It was aimed only at works which “undermine culture” and purvey “moral dirt,” he added, and had been devised “not by reactionaries, but by men holding liberal views...” On December 18, 1926, after a bitter debate, the Schmutz und Schund bill passed the Reichstag by a large majority. (Boyer, unpaginated)

The intention, of course, was to compare proponents of the CDA to those who paved the way for Nazi Germany.3

But what actually happened here? Congress enacted a law which would have curbed certain kinds of speech. The Supreme Court found it unconstitutional and struck it down. It is unreasonable to expect that every law Congress passes will be perfect; it is an imperfect institution populated by flawed human beings. That’s why there are three branches of government in the United States: they are meant to be a check on each other’s excesses. It seems to me that, tested though it was, the system worked: a bad law was not allowed to stand. Those who were most uncivil in their discussions of the CDA showed a lack of faith in the checks and balances which are supposed to be the great strength of the American system.

After the CDA

The Supreme Court’s decision striking down the Communications Decency Act did not end American government efforts to control the content of digital communications networks in the name of protecting children. In the House of Representatives, the Internet Freedom and Child Protection Act of 1997 was introduced in order “To amend the Communications Act of 1934 to restore freedom of speech to the Internet and to protect children from unsuitable online material.” (HR 774 IH, unpaginated) By combining “Internet freedom” with “child protection,” supporters of this bill hoped to make it clear that their efforts to keep material out of the hands of children would not interfere with the rights of adults to engage in protected speech (an important lesson of the defeat of the CDA).
This bill added an interesting twist to the debate by mandating that “An Internet access provider shall, at the time of entering an agreement with a customer for the provision of Internet access services, offer such customer, either for a fee or at no charge, screening software that is designed to permit the customer to limit access to material that is unsuitable for children.” (ibid) In addition, individual states have the ability to pass their own content control laws. “Legislation restricting speech on computer networks has been signed into law in Connecticut, Georgia, Maryland, Montana, Oklahoma, and Virginia; and additional legislation is pending and will very likely be signed into law in Alabama, California, Florida, Illinois, Maryland, New York, Oregon, , and Washington.” (National Writers Union, 1995, unpaginated) At the time the CDA was being debated, the American Civil Liberties Union claimed to be monitoring bills being proposed in 13 states. (undated, unpaginated) Moreover, the protection of children is not the only rationale behind legislative attempts to control the content of the Internet. One bill would have made it illegal to transmit information about the making of bombs (the bill, S. 735, and commentary on the bill (Center for Democracy and Technology, 1995) can be found on the Web). There have also been attempts to revive the Comstock Act, first enacted in 1873, which made it illegal to discuss any aspect of abortion, in order to outlaw speech on abortion on the Internet. (Schroeder, 1996, unpaginated) Since they were both efforts to ban speech which was protected by the First Amendment, and which could readily be found in other media, these laws would likely not have survived a court challenge had they been enacted.
Censorship in the Rest of the World

American government attempts to control online speech are noteworthy because the United States has a long history of supporting freedom of speech, and the country frequently holds itself up as a model for the rest of the world. However, there are many countries in which attempts at government control over digital communications networks are much more repressive than those in the United States. This section, which is not meant to be comprehensive, will look at some of the attempts to control speech on the Internet around the world. In 1996, German authorities asked CompuServe, an international Internet Service Provider, to stop carrying 200 newsgroups which public prosecutors in that country had deemed illegal; the company complied. Unfortunately, “Since CompuServe’s software did not initially make it possible to differentiate between German subscribers and others for access to newsgroups, CompuServe suspended access to a number of newsgroups to all its subscribers world-wide...” (European Union Action, 1999, unpaginated) While the prosecutors claimed to be targeting pornography, the effect of their action was to stop CompuServe clients from accessing information on a wide variety of subjects. According to Anna Eshoo, who was, at the time, a Democratic member of the House of Representatives from California, “Among the items that CompuServe is being forced to hide from its four million users are serious discussions about Internet censorship legislation pending in Congress, thoughtful postings about human rights and marriage, and a support group for gay and lesbian youth.” (1996, unpaginated) This effort was doomed for a variety of reasons. CompuServe clients outside Germany complained that they were no longer getting newsgroups which were perfectly legal in their countries.
Moreover, “CompuServe users still of course had access to the Internet and could therefore connect to other host computers that carried the forbidden newsgroups.” (Human Rights Watch, 1996, unpaginated) Eventually, CompuServe improved its software such that it could keep specific newsgroups from the citizens of specific countries, and blocked only Germans from accessing the newsgroups the German prosecutors had asked to be blocked. For their part, the prosecutors relented and allowed all but five of the newsgroups to be reinstated. You might assume that the lesson to be learned from this experience was that governments didn’t have as much power to control content on the Internet as they might like. In fact, representatives of the European Union, meeting to discuss how the EU should approach Internet regulation, came to a different conclusion: “This demonstrates that there is a need for co-operation between the authorities and Internet access providers in order to ensure that measures are effective and do not exceed what is required.” (European Union Action, 1999, unpaginated) As we have seen, many involved in the battle over the CDA argued that ISPs could not, for moral and practical reasons, be held responsible for the content on their servers which had been created by others. A discussion paper by and for members of the European Union suggests otherwise:

Because of the way in which Internet messages can be re-routed, control can really only occur at the entry and exit points to the Network (the server through which the user gains access or on the terminal used to read or download the information and the server on which the document is published)... [Therefore, if] the illegal content cannot be removed from the host server, for instance because the server is situated in a country where the authorities are not willing to co-operate, or because the content is not illegal in that country, an alternative might be to block access at the level of access providers. (ibid)

Nor are they alone. Singapore, for example, treats the Internet like a broadcast medium, licensing service providers on the condition that they do not carry material unacceptable to the government. (Human Rights Watch, 1996, unpaginated) In South Korea, “Local computer networks will be asked to prohibit access by local subscribers to banned sites, according to the Information and Communications Ethics Committee of the Data and Communications Ministry.” (ibid) As Human Rights Watch points out, “Censorship efforts in the U.S. and Germany lend support to those in China, Singapore, and Iran...” (ibid) Many governments use technical means to try to control what information their citizens can access. “Saudi Arabia, Yemen, and the United Arab Emirates impose censorship via proxy servers, devices that are interposed between the end-user and the Internet in order to filter and block specified content.” (Human Rights Watch, 1999, 1) To get around this, citizens of these countries can dial into servers in other countries which do not filter communications. However, international phone rates in these countries can be high enough to ensure that only the richest citizens will be able to pursue that option. Some countries have extended existing laws to the online world.
For example, “Internet regulations in Tunisia explicitly extend criminal penalties for defamation and false information to online speech.” (ibid, 3) Other countries, while they have not developed laws or regulations specific to the Internet, apply existing laws to it: “Jordan and Morocco...” for instance, “have laws that curb press freedom and those laws, such as the ones that prohibit defaming or disparaging the monarchy, narrow the boundaries of what can be expressed online.” (ibid) Finally, as this last example suggests, some countries attempt to control content on the Internet which is specifically political: “The governments of Tunisia, Bahrain, Iran and the United Arab Emirates are among those that block selected Web sites dealing with politics or human rights, thus preventing users in their respective countries from accessing them.” (ibid, 4) Attempts by governments to censor material on the Internet have two effects on writers. As we saw in Chapter Two, writers from countries other than the United States were underrepresented in my survey. A contributing factor may be that stricter censorship laws in other countries inhibit the posting of certain kinds of information. The other effect is that, despite some writers’ feeling that publishing online makes everyone in the world connected to the Internet part of their potential readership, the actual readership for their stories is much smaller, for reasons that have nothing to do with the technical aspects of the medium and everything to do with politics.

* * *

Dirty books today
Are bold and getting bolder
For smut, I’m glad to say,
Is in the mind of the beholder
When correctly viewed
Everything is lewd
I could tell you things about Peter Pan
And the Wizard of Oz, there’s a dirty old man! (Lehrer, 1965)

As we saw at the beginning of the chapter, governments will always try to control new media. Sometimes, enlightened legislators will pull back from such efforts; at other times, enlightened courts will strike down such efforts. With regard to the Internet, it has been argued that the international reach of the medium itself makes local and national government regulation difficult, if not impossible. During the battle over the Communications Decency Act in the US, for instance, Jerry Berman of the Center for Democracy and Technology argued, “I don’t know where Sen. Exon downloaded the materials that he found abhorrent, but if they’re downloaded from Sweden or they’re downloaded from Denmark, which looks exactly like any U.S. site, any law that he passes will not reach it.” (“Focus – Sex in Cyberspace,” 1995, unpaginated) Even a pro-CDA representative had to admit that, “the internet is global. How could we regulate pornography when foreign countries are producing 30% of it?” (The person went on to answer his own question: “Well, America has always been the policeman of the world. It has many foreign policies tools to enforce such a law.” (Gensler, 1997, unpaginated)) Setting aside the question of whether or not the United States has the right to enforce its morality on other nations, Gensler acknowledges an important point: how does an international communications system such as the Internet affect the ability of nation-states to control what information their citizens can access? This is the subject of the next section.

Problems with Government Regulation 1: Jurisdictional Disputes

In 1993, Paul Bernardo was charged with the sexual abuse and murder of Kristen French and Leslie Mahaffy. His accomplice, Karla Homolka, cut a deal with the Crown: in exchange for a reduced sentence, she agreed to testify against Bernardo. In a case of really bad planning, Bernardo’s trial did not take place until over a year and a half after Homolka’s. Realizing that if the details of the Homolka trial were made public, the jury pool for the Bernardo trial could be poisoned, Mr. Justice Francis Kovacs of the Ontario Court’s General Division placed a ban on the publication in Canada of any of the details revealed in the Homolka trial. The ban was to last until the Bernardo trial. At the time, I was learning about computer mediated communications networks, particularly the Internet. I had heard rumours that details of the Homolka trial could be found there. Curious about this possibility, I used Archie and anonymous ftp (the World Wide Web had yet to be given its convenient graphical interface) and found an American newspaper report of the trial on a computer at the University of Buffalo. The whole procedure took me approximately 30 seconds. (I had no interest in the trial itself, so I gave the file to a friend, who was outraged enough for the both of us.) The belief at the time was that many people had used the Internet to obtain the forbidden information, and that they had distributed it to many more people. “Despite the publication ban [on information on the trial of Karla Homolka], various Internet newsgroups posted details of the case... the information was freely circulated in the U.S. and found its way back north of the border electronically.” (Johnstone, Johnstone and Handa, 1995, 151) Having information about the trial was not, in itself, a crime (only publishing such information was).
However, people who got details of the crime over the Internet flouted the intent of the Court’s ruling, which was to ensure that enough people did not know such details so that an unbiased jury could be impaneled for Bernardo’s trial. Because it was so easily circumvented, the ruling came in for much scorn (as do traffic rules which are difficult to enforce), and held the entire justice system up to ridicule. Even if a legislative body could pass laws to control content on the Internet which would hold up in its country, it would still be faced with the problem of jurisdiction. The Internet is a communications system which spans the globe; since information flows more or less freely across borders, laws passed by individual nation-states can be easily circumvented. Worse, since laws passed in one country have no force in other countries, even if a national government can control what its citizens put on the Internet, it cannot control what the citizens of other nations put there. One way in which computer networks may undermine governments is that they allow individuals to act in defiance of laws, making those laws difficult to enforce. The Homolka trial experience is one example of this. Soon after the trial of Karla Homolka, a newsgroup containing information on it, alt.fan.karla-homolka, was set up on the Internet. “This newsgroup was started as something of a joke on June 14, 1993, by a University of Waterloo student called Justin Wells who, upon seeing a photograph of Mr. Bernardo’s estranged wife, decided ‘she’s a babe’ and that she needed a fan club. Soon, however, as the horror of the charges against the two became clear, the newsgroup took on a different tone.” (Kapica, 1995, A13) The newsgroup soon came to include “not only discussion of the case but also evidence presented at the trial in which Ms.
Homolka was convicted of two counts of manslaughter in the sex slayings...” (Gooderham, 1993, A5) Soon after the trial, it was reported that, “96 different articles have been posted on the Homolka newsgroup, including discussion of the Canadian and U.S. judicial systems, sordid rumours about the case and the text of an article published last week in The Washington Post.” (ibid) In order to comply with Judge Kovacs’ ban, some Internet Service Providers and universities blocked access to alt.fan.karla-homolka. This was sometimes regarded as an unwelcome attack on free speech. When, on legal advice, Mark Windrim, owner/operator of the MAGIC Bulletin Board Service, banned discussion of the Homolka and Bernardo trials online, he earned “a flood of hate E-mail” from subscribers. (Clark, 1994, B28) When the University of Toronto blocked access to newsgroups with information on the trial, the student newspaper The Varsity published a step-by-step guide on how to circumvent the block. According to then-Varsity editor Simona Chiose, “We just wanted to show that despite the university’s effort to censor the information, it can still be obtained.” (Memon, 1994, A8) For a number of reasons, attempts to block information on the Homolka trial were largely unsuccessful in stopping the banned information from circulating. Writing about the ban at the University of Toronto, for instance, Mary Gooderham pointed out that “the university brings in more than 4,200 other newsgroups, and some of those include the same information as the Homolka one.
A newsgroup called alt.journalism, for instance, includes the Washington Post article.” (1993, A5) She went on to state that commercial services made the information available: “CompuServe, one of the largest private on-line computer services, offers the Washington Post article to its subscribers...” (ibid) Furthermore, even if access to the newsgroup at one server was blocked, “any Netsurfer with a little wit could find a ‘mirror site’ -- a computer carrying the same newsgroup -- in the United States, where the publication ban is not in effect.” (Kapica, 1995, A13) Simply renaming the newsgroup would have gotten around those who were attempting to contain the information; in addition, “users who have had it blocked also have the option of receiving all of the information by electronic mail.” (Gooderham, 1993, A5) The result of these and other methods of getting around the ban was, according to the Ottawa Citizen, that “26 per cent of those polled knew prohibited details of the Teale [Bernardo]-Homolka trial...” (Wood, 1994) Coverage of the Homolka trial points out the difference between the Internet and traditional print media as disseminators of information. “A story on the case published this week in Newsweek magazine, titled ‘The Barbie-Ken Murders,’ which was not included in Canadian copies of Newsweek, appeared Sunday on the New York Times Special Features wire service.” (Gooderham, 1993, A5) Although the publishers of Newsweek voluntarily complied with the ban in their print publication, they could not control who could read their article when it was digitized.
An even starker example of the difference occurred with a publication which, given that it considers itself part of the vanguard of the digital revolution, should have known to be more careful: “...a single sentence in ‘Paul and Karla Hit the Net’ -- a 500-word article on Canadians tapping Internet for banned detail on the Karla Homolka trial -- triggered removal of 20,000 Wired mags from retail racks nationwide...distributors in Victoria and across the country scurried to slap stickers over the offending passage in each copy before returning the periodical to the shelves.” (Wood, 1994) Governments used to be able to control information in traditional print media because, in the worst case, they could seize physical copies of the information, punishing those who were distributing it. Because digital information has no physical form, it is much more difficult to contain, making rules about who can access it much harder to enforce. Although some governments may attempt to control digital information by controlling the physical infrastructure (i.e., intervening at the level of service providers), the example of the ban on information on the Homolka trial suggests that this may not be as simple as it has been for previous media (for instance, television). Another example of a national government being forced to come to terms with new media took place in Serbia, when the Milosevic government attempted to outlaw information which opposed its public line on dissident groups. According to one report, Milosevic was largely successful in controlling traditional media:

In cities now controlled by the opposition, more than 50 TV and radio stations have been closed by the Serbian police on the grounds that their licences were not in order, eliminating alternatives to the heavily controlled propaganda machine of state TV. And the weekly magazine Nin, considered the most reliable and most serious publication in Serbia, has a new editor-in-chief, Milivoj Glisic, and now embraces a more Serbian, nationalistic editorial policy. A third of the journalists have left in protest. (Perlez, 1997, A9)

According to Dusko Tomasevic, the Milosevic government was not able to shut down news of the regime which made its way onto World Wide Web sites on the Net: “‘The police told students to shut it down, but they cannot,’ Tomasevic says, subdued, matter-of-fact. ‘We have mirror sites now in Europe and North America, and if they shut down the Belgrade server we can directly modem the information overseas. To stop that they will need to shut down every telephone in Serbia – which is impossible.’” (Bennahum, 1997, 168) As Tomasevic claims, anybody with access to the technology can transmit forbidden information from their computer directly to a computer outside their country (and, presumably, outside the control of their nation’s government). Moreover, once the information has been transmitted, it is rapidly disseminated to computers in various nations throughout the world, rendering subsequent control of the source moot. One of the few remaining independent voices in the region, the radio station B92, had its signal repeatedly jammed before being completely shut down by the Milosevic regime. (Reuters, 1997, A11) In the past, this might have permanently silenced the radio station, but, as it happened, this was not the case:

On December 3 [1996], the Net briefly captured center stage in Belgrade when the Milosevic regime took Radio B92 off the air. B92, then Belgrade’s only radio station that wasn’t under state control, had for two weeks been broadcasting updates on the growing protests in the streets. When Milosevic unplugged B92, the broadcasts were rerouted via the Net using RealAudio. The Voice of America and the BBC also picked up the dispatches, resending them to Serbia via shortwave. Two days later, Milosevic allowed B92 to broadcast again, giving the opposition an important symbolic victory, and inspiring students to start calling their struggle ‘the Internet Revolution.’ (Bennahum, 1997, 168)

Because of their centralized nature, it was once possible for a government to physically seize television or radio transmitters which were used to disseminate information of which the government did not approve. The Internet, being decentralized and having innumerable points of entry (not only telephone lines, but cable, satellite and, perhaps in time, even power lines), is far more difficult to police in this way. As we saw with print, methods of controlling the medium of radio which once worked are made highly problematic by new digital media. Some countries are trying. China, for instance,

is in the midst of developing a large academic computing network to link more than one thousand educational institutions by the end of the century. There is only one twist to this network. Unlike American networks, with multiple electronic routes from campus to campus, all traffic in this Chinese network will have to run through Beijing’s Quinghua University. Poon Kee Ho thinks he knows why. The Chinese academic network will be technically unsound, but with a choke point at Quinghua University, government officials ‘can do what they want to monitor it or shut it down’... (Wresch, 1996, 147)

China seems, in fact, to want to return computer networks to the hub-and-spokes model of telephone connectivity in order to be able to exert control over them. Politically, there can be no doubt that the Chinese government has the will to carry this out: as the Tiananmen Square massacre indicates, it is more than willing to defy international opinion to achieve internal political ends. Moreover, the example of Singapore, which has extensive international economic ties despite having repressive laws on Internet use, suggests that governments can attempt to control information flow through computer networks with few repercussions to international relations. (Gibson, 1993) That having been said, it must be pointed out that the technology works against such centralized control. For one thing, the volume of information passed through China’s system is likely to be huge, with perhaps millions of messages a day. The computing power necessary to monitor such output is mind-boggling. Moreover, how to sift legitimate from illegitimate forms of communications is a logistical nightmare. Simple programmes can be written which will look for certain words (i.e., “democracy”) and let the people running the system know in which messages such words occur; but a huge bureaucracy would have to be created to sort through the flagged material to determine what was innocent communication and what was politically unacceptable. Even if such a system could be set up, Ho’s fear that the Chinese government will shut down the country’s connection to the Internet is probably misplaced. China is currently attempting to increase its educational and corporate connections with the outside world as part of a larger process of helping it to function within the modern world economy. To the extent that the technology is an important part of this process, shutting down China’s Internet connection completely would seriously damage these efforts.
In this way, unwanted information is somewhat protected by the nation’s need for certain kinds of information; or, as William Wresch eloquently puts it: “Links to the world are innocent. The highway doesn’t know if it is carrying salvation or slaughter.” (1996, 158) As information transfers become increasingly global, as well as increasingly important for international business, the possibility of shutting down local information networks for political reasons becomes increasingly remote. Even where laws are difficult to enforce, some argue that they still have value. Taylor refers to symbolic legislation, whose purpose is “more ideological than instrumental.” (in press) One major characteristic of symbolic legislation is that “it should espouse a particular social message irrespective of the law’s likely ability to enforce that message.” (ibid) In this way, it can be argued that the ban on information on the Homolka trial, to take one example, should have stayed on the books, even after it became clear that it was difficult to enforce, in order for the government to be seen to be upholding the ideal of fair trials for the accused. (Ideological purposes need not be benign, however; the Milosevic government of Serbia may outlaw some forms of communication in order to seem to be maintaining control over information in its country even if there are holes in such a ban.) The danger of symbolic legislation is that it can bring a government into disrepute by making its power to rule a subject of ridicule. This happened in Quebec when local computer company Micro-Bytes Logiciels ran afoul of the Office de la langue française, the government agency charged with protecting the French language in the province, resulting in the company removing most of its home page from the World Wide Web.
(“Language Rules,” Montreal Gazette, 1997) According to the OLF, the company’s English-only Web site was a violation of Section 52 of Quebec’s French Language Charter, which reads: “Catalogues, brochures, fliers, commercial directories and all other publications of the same type must be produced in French.” (Beaudoin, 1997, B5) Reaction to the move was largely negative, the following excerpt from a column in a Montreal alternative weekly being typical: “Where the rest of the world sees a multimedia free-for-all that transcends language, nationality and border, those, um, marvelously iconoclastic individuals at the OLF see printed brochures and neon signs over shoe stores. In NDG. It’s all print advertising to them, and as far as they’re concerned the people who put those print ads on a network that spans the entire globe only intended them to be read by Quebecers.” (Scowen, 1997, 6) The issue was even brought before the federal government when Liberal Member of Parliament Clifford Lincoln scoffed, “The Internet doesn’t belong to Quebec. This isn’t a television channel or a radio station. It’s a totally different entity...” (Contenta, 1997, A2) Beaudoin is not unaware of this: “It has been argued that Quebec cannot exercise jurisdiction over companies located outside its borders that put advertising on the Internet. This is true. But this is no reason, in our view, to abdicate our responsibility to protect Quebec consumers to the extent that we are able.” (1997, B5) This is a clear statement of the symbolic nature of the law.4 Two qualifications must be made here. The first is that the French language press seemed more favourably disposed to the ruling on Micro-Bytes Logiciels than the English language press, likely because they have more sympathy for the government’s goal of protecting the French language. The other is that the French language laws are frequently a source of ridicule in Montreal’s English language press. 
This underscores the point, though, that the more a government’s laws are ridiculed, the more its legitimacy tends to be undermined. It should also be pointed out that jurisdictional disputes are not limited to nation-states; smaller, more localized governments may also attempt to pass laws which affect the flow of information over digital networks despite how difficult they may be to enforce. Examples of Internet-related challenges to the authority of governments are multiplying. An American company wanting to sell home pregnancy kits over the Internet faces a problem because they cannot legally be sold in Canada. (“E-Commerce Problems,” Ottawa Citizen, 1998) In December, 1996, two organizations dedicated to protecting the French language in Europe sued the Georgia Institute of Technology’s branch in Lorraine because its Web site contained only English (they recently settled out of court). (“English-only Approved for Georgia Tech Web Site in France,” 1998) “During the 1991 attempted coup in Russia...programmers used their computers to keep in touch with the rest of the world, even though the insurgents controlled the centralized radio, television, and newspaper facilities. Messages traveled from Moscow to Vladivostock, to Berkeley, to London, and back, while the technologically illiterate old-timers were powerless to stop them. In the old days, it was easy for Moscow to prescribe what the entire country thought simply by controlling the central broadcasting stations. Not anymore.” (Rawlins, 1996, 80) As a result of these, and other events, scenarios illustrating the uncontrollable nature of electronic communications networks such as the Internet abound. Nicholas Negroponte, to cite one example, wrote,

If my server is in the British West Indies, are those the laws that apply to, say, my banking? The EU has implied that the answer is yes, while the US remains silent on this matter. What happens if I log in from San Antonio, sell some of my bits to a person in France, and accept digital cash from Germany, which I deposit in Japan? Today, the government of Texas believes I should be paying state taxes, as the transaction would take place (at the start) over wires crossing its jurisdiction. Yikes. As we see, the mind-set of taxes is rooted in concepts like atoms and place. With both of those more or less missing, the basics of taxation will have to change. (1998, 210)

This particular scenario may be overstated. As economist Paul Krugman points out, the movement of people is far from free, and as long as people are physically rooted to one area in one country, it remains possible to make them submit to paying taxes. (Kevin Kelly, 1998, 146) Other scenarios point out the difficulty of local attempts to regulate an international communications system: “Questions that once had clearcut answers are now blurring into meaninglessness. Who should be involved in a computer chase? Who has jurisdiction? If you invade someone’s computer, is that burglary or trespass? Where should the search warrant be issued? And what for? What happens if someone living in country A commits a crime in countries B and C using computers in countries D, E, and F?” (Rawlins, 1996, 83) Or again: “To censor Internet filth at its origins, we would have to enlist the Joint Chiefs of Staff, who could start by invading Sweden and Holland.” (Noam, 1998, 19) One obvious solution to this problem is for national governments to enter into agreements with each other to regulate international digital communications networks. The likelihood of the nations of the world, with their wildly disparate cultures, agreeing on policies for policing Internet content seems remote; even if it were possible, methods of circumventing such policies make their enforcement by no means certain. Finally, there would be a hidden cost to such efforts. “In order to combat communicative acts that are defined by one state as illegal, nations are being compelled to coordinate their laws, putting their vaunted ‘sovereignty’ in question.” (Poster, 1995, unpaginated) Ironically, attempts to protect national sovereignty by controlling Internet content may, in this way, lead to its being undermined.
Another problem with any sort of regulation of Internet content is, according to many observers, that the freedom to publish on the Internet is what makes it such a dynamic source of information. “To impose local norms on media that are inherently unlocal is to cripple the media themselves... With enough sufficiently different local norms brought into play, the network will be permitted to transmit nothing but mush.” (Noam, 1998, 46) The irony here is that efforts to “save” the Internet (from, say, purveyors of pornography) may reduce it to something not worth saving. Some commentators suggest that computer networks will make nation-states obsolete. Don Tapscott, for instance, writes: “There is evidence that the I-Way will result in geopolitical disintermediation, undermining the role of everything in the middle, including the nation-state. That is, broadband networks may accelerate polarization of activity toward both the global and the local...” (1996, 310) I am not suggesting this. As I argued previously, governments are instruments of the collective will of citizens and, as such, will continue to serve a legitimate purpose for the foreseeable future. In particular, we should expect them to continue to attempt to regulate communications media, including the Internet. However, any government attempt to regulate digital communication networks, as they are currently configured, must take into account the ways in which the technology constrains government action.

Copyright

As we have seen, many of the writers surveyed in Chapter Two are concerned about whether or not traditional notions of copyright will apply to digital communications media. Some are worried that if they cannot exert copyright protections for their writing, they will not be able to make money from it once mechanisms for revenue generation are perfected. Others are concerned that, given the ease with which digital works can be copied and modified, they will not be able to control where or how their work appears without strong copyright protections. On the other hand, those pushing loudest for stringent application of copyright to digital media are the transnational entertainment corporations that hope to reap vast profits from the Internet. The tension between these two interests, as well as other issues arising out of the application of traditional notions of copyright to the emerging medium of digital communication networks, is the subject of this section.

A Note About Terms: Are Expressions Property?

Before beginning a discussion of copyright, it is worth noting that a central term used in most such discussions is “intellectual property.” This term is largely a metaphor, comparing the right somebody has in intellectual expression to the right they may have in owning a car, a house, or any other physical commodity. I find the term misleading: it invites the extension of concepts from the physical world to the purely ideational world regardless of whether or not they actually fit our experience of the ideational world. The obvious difference between the two is tangibility: property rights have traditionally been exerted over physical objects.
Expressions of ideas, by way of contrast, may be embodied in a physical object (a book, say, or a videotape), but their essence is not physical; this is made clear by digital media, where messages are transported over vast physical networks but themselves take the form of impulses of light. The most important aspect of the rights inherent in owning physical property is that its owner has absolute control over what is done with it. Thus, the owner of a car determines who can drive it; the owner of a house determines who can live in it and what activities are permissible within its boundaries, and so on. This is necessary because there is only the one object; if a dispute arises in which it becomes necessary to determine who has the right to decide what will be done with the object, the concept of who “owns” it, whose property it is, is invoked. When it comes to the expression of ideas, we have seen (and will have cause to consider again) that this is not the case. When somebody buys information, the person who created it can retain a copy. Moreover, information proliferates: whether emailed to a thousand people on a list or talked about around an office water cooler, information is soon distributed beyond its creator’s control. In the last chapter, we saw some proposed technical solutions to this problem in the digital world; in this chapter, we will look at a legal solution. It is unclear whether any of these solutions will work (indeed, some people believe they will be fruitless and might as well be abandoned); in fact, as we shall shortly see, extending such power too far may be to the detriment of society as a whole. I would suggest that part of the reason control of ideas and the expression of ideas is so difficult is that the metaphor of property which is the basis of such attempts at control does not apply to information.
For this reason, although many of the thinkers I quote in this section use the term “intellectual property,” I do not.

What Is Copyright?

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself but the moment it is divulged, it forces itself into the possession of everyone, and the receiver cannot dispossess himself of it.... He who receives an idea from me, receives instructions himself without lessening mine as he who lights his taper at mine, receives light without darkening me. That ideas should be spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature... (Thomas Jefferson, quoted in Samuelson, 1998, unpaginated)

Our commonsense understanding of the way literature works suggests that authors should be rewarded for their work. In fact, it took a long time for such an understanding to be developed, and, even now, long after regulatory regimes were developed in order to protect the interests of authors, their aims and effects are poorly understood. As described in Chapter One, in the 16th century Gutenberg’s press spread throughout Europe; the books which were printed on it tended to be ancient texts, largely the Bible, but also many of the philosophical works of the Greeks. Because these authors were long dead, compensating them was not seen to be important. However, the publishers of these works had a serious economic stake in them: it was not uncommon for a publisher to go to the trouble and expense of developing a volume, only to see it exactly reproduced at lower cost by another publisher. A group of publishers banded together and successfully lobbied the British government for protection, which it granted them; in 1557, the Stationers’ Company, as this group was called, was given exclusive control over all printing and book selling in England. (Gutstein, 1999, 129) As publishing grew, individual authors were encouraged to write new works. However, because the Stationers’ Company had a monopoly on publishing, it was able to dictate the terms under which British authors would be compensated for their efforts. Most often, the author was given a lump sum payment for a work, and had no legal recourse to any other payment, even if the book went on to become a bestseller which made a lot of money for the publisher. Note this economic imbalance, which favoured the interests of publishers over authors and was supported by statute: it will appear again in modern times. The Stationers’ monopoly on publishing lasted for 150 years, until the passage of the Statute of Anne in 1709.
(ibid, 130) The rights enshrined in this Statute were to be reproduced in the American Constitution some 80 or so years later. According to this latter document, copyright is intended to “promote the progress of science and the useful arts,” by giving the creator of a work, for a limited period of time, the right to control the dissemination of, and thereby profit from, that work. Several things are noteworthy about this formulation of copyright. The most obvious is that publishers are not mentioned; the most important economic stake in an original work is now identified as the author’s. (In fact, an exception has arisen: “...generally the copyright in a work is owned by the individual who creates the work, except for full-time employees working within the scope of their employment and copyrights which are assigned in writing.” (Brinson and Radcliffe, 1991, unpaginated) However, in the cases in which we are most interested, particularly individuals who put their work on a Web page, this exception does not apply.) Perhaps more important to note about this way of looking at copyright is the idea that it is granted to artists and other creators by society for society’s benefit. “Copyright -- the right of a creator to impose a monopoly on the distribution of his or her work -- was originally conceived [in Canada] as a privilege bestowed by Parliament on authors to encourage the creation of new ideas, which society needed to continue its development.” (Gutstein, 1999, 3) This privilege is, in fact, severely circumscribed. For example, “Under copyright law, only an author’s particular expression of an idea, and not the idea itself, is protectible.” (Jassin, 1998, 6) Thus, you cannot copyright the idea that the sun is shining in the sky, because this would make it illegal for any other author to write about this very common observation.
However, if you are Samuel Beckett, you can copyright the unique expression which opens the novel Murphy: “The sun shone, having no alternative, on the nothing new.” (1976, 24) The advantage to society of limiting the benefits creators may obtain from their work is most obvious in the sciences, where ideas in a given book or paper are debated in subsequent publications. As debates in the sciences advance through this type of give and take, the boundaries of human knowledge expand (as do the technologies which arise out of this knowledge). If scientists were allowed too much control over their creations, scientific debate would be stifled, and the public interest would suffer. In a similar, though less well understood, process, the arts are advanced by building on existing work: “...new works borrow liberally from a common store of facts, information, and knowledge that exists in the public domain in reference books, libraries, schools, government documents, and the news media, as well as in society’s stories, myths and public talks.” (Gutstein, 1999, 138) Existing information is the common property of humanity from which new works of art are forged. Some go so far as to suggest that creativity, far from being the domain of the lone artist as suggested by modern myths of creation, is necessarily “a collective process. No one has totally original ideas: ideas are always built on the earlier contributions of others. Furthermore, contributions to culture -- which makes ideas possible -- are not just intellectual but also practical and material, including the rearing of families and construction of buildings. Intellectual property is theft, sometimes in part from an individual creator but always from society as a whole.” (Stutz, undated (b), unpaginated) Consider Shakespeare.
Most of his plays are based on historical characters and incidents or previously existing stories; had these stories been protected by strict copyright laws, many of the greatest plays in the English language might never have been written. Or consider it from the opposite point of view. Had Shakespeare’s heirs held a copyright to his work to this day, we might never have had Burton and Taylor’s The Taming of the Shrew or 10 Things I Hate About You, McKellen’s Richard III or Welles’ Chimes at Midnight, West Side Story or some of the greatest films of Olivier or Branagh. In this way, any artist is a nexus of existing and future stories. Copyright law is intended to find a balance between the needs of the artist in the present and the debt he or she owes the past and the future. Because it is so little commented upon, the public interest in copyright cannot be stressed enough. As Richard Stallman comments, “Progress in music means new and varied music -- a public good, not a private one. Copyright holders may benefit from copyright law, but that is not its purpose.” (1993, 48) Digital technologies pose a unique challenge to existing copyright regimes.

Does Copyright Apply to Digital Media?

The grant of an exclusive right to a creative work is “the creation of society -- at odds with the inherent free nature of disclosed ideas -- and is not to be freely given.” (Thomas Jefferson, quoted in Gutstein, 1999, 160)

Copyright was originally meant to apply to literary works, to books, magazines and other print forms of communication. As new media developed, copyright was applied to them, so that it is now possible to copyright photographs, television shows and movies.5 The temptation to apply the existing regime to new media as they are created is now exerting itself over digital media. However, in some ways digital media distort our existing ideas of copyright when it is applied to them. To begin with, copyright has traditionally been applied to a work only when it has taken on a “fixed form.” “The point at which this [copyright] franchise was imposed,” writes John Perry Barlow, “was that moment when the ‘word became flesh’ by departing the mind of its originator and entering some physical object, whether book or widget. The subsequent arrival of other commercial media besides books didn’t alter the legal importance of this moment. Law protected the expression and, with few (and recent) exceptions, to express was to make physical.” (1996, 149/150) This is a necessary practicality: when a dispute over the ownership of a work takes place in a court of law, it is impossible to prove that somebody had the expression of an idea in their head at a given date; tangible proof of the existence of the work is necessary to prove who thought it up first. This is the reason many authors register their writing with appropriate guilds or government agencies before they circulate it in an attempt to sell it. Digital media do not have this fixed quality. When you access a World Wide Web page, for instance, all of the elements of the page are stored in temporary memory on your computer. You can set the amount of memory devoted to this task to a minimum, in which case each new page will erase elements of the page(s) which preceded it.
Even if you set the amount of memory devoted to this task relatively high, you can periodically clean out your cache to free up memory. Meanwhile, what you actually see on your screen is a temporary arrangement of pixels which is constantly changing. The point is that at no point is a Web page “fixed” on your computer. Some might argue that the page is fixed on the server on which it resides. Experience with the Web would suggest otherwise: pages are constantly being updated, moved from one server to another (with a concomitant change in URL), or taken off the Internet when, for one reason or another, the sponsor of the page can no longer maintain it. The Web is in a constant state of flux. Furthermore, digital information relies on specific hardware and software to be readable. Already, much of the information from the early days of computing has been lost because the machines which were in use at the time no longer exist. (Contrast this with print media such as the papyrus scroll, copies of which have existed for thousands of years.) If a copyright dispute involving information created on early computers were to be brought to court today, it would be virtually impossible to prove that such information existed, even if paper tape or other stored copies were available. Finally, the interactive nature of digital media, where each computer user develops his or her own experience by the choices they make, means that there is no single fixed work. “An especially relevant I-way question is whether transitory combinations of data, such as the results of a database search conducted at the direction of the user, are sufficiently fixed for copyright... A related problem is raised by the I-way’s interactive capacity. We are witnessing the birth of ‘you program it’ interactive entertainment systems.
If a user programs a selection of programming that suits her tastes, one wonders if the user has a copyright in that selection of programming.” (Johnstone, Johnstone and Handa, 1995, 174/175) Digital media clearly do not pass the test of fixity required to be copyrightable. Despite this, legislatures have attempted to pass laws which would make digital information copyrightable. According to the World Intellectual Property Organization, “Some copyright laws provide that computer programs are to be protected as literary works. [note omitted]” (undated, unpaginated) This offends our commonsense idea of what a literary work is. More importantly, though, it does not deal with the problem of the essentially unfixed nature of digital media. What, after all, is a computer program? It is a set of instructions to a machine. Copyright was never intended to include such things. “Copyright does not protect ideas, processes, procedures, systems or methods, only a specific embodiment of such things. (A book on embroidery could receive copyright but the process of embroidery could not.) Similarly, copyright cannot protect useful objects or inventions. If an object has an intrinsically utilitarian function, it cannot receive copyright.” (Nichols, 1988, 40) Traditionally, legal protection for processes was covered by patent law, not copyright. However, as Nichols points out, the distinction between the two legal regimes has been blurred by the courts:

The Software Act began the erosion of a basic distinction between copyright and patent by suggesting that useful objects were eligible for copyright. In judicial cases such as Diamond v Diehr (1981), the court held that ‘when a claim containing a mathematical formula implements or applies that formula in a structure or process which, when considered as a whole, is performing a function which the patent laws were designed to protect (for example, transforming or reducing an article to a different state of things), then the claim satisfies the requirements of [the copyright law].’ This finding ran against the grain of the long-standing White-Smith Music Publishing Co v Apollo Co decision of 1908 where the Supreme Court ruled that a player piano roll was ineligible for the copyright protection accorded to the sheet music it duplicated. The roll was considered part of a machine rather than the expression of an idea. The distinction was formulated according to the code of the visible: a copyrightable text must be visually perceptible to the human eye and must ‘give to every person seeing it the idea created by the original.’ (ibid)

The analogy of a computer program to a player piano seems apt, since both are basically sets of instructions for a machine. The 1981 court decision uses some tortured logic in order to essentially overturn the previous court’s decision. A different approach taken by American courts is to ignore the software altogether and concentrate, instead, on the outward manifestations of programs for digital games. “Referring to requirements that copyright is for ‘original works of authorship fixed in any tangible medium’, Federal District Courts have found that creativity directed to the end of presenting a video display constitutes recognisable authorship and ‘fixation’ occurs in the repetition of specific aspects of the visual scenes from one playing of a game to the next.” (ibid, 42) Thus, if the same characters, backgrounds and situations arise from one playing to the next, the courts decided that they were “fixed” for purposes of copyright. In such cases, the courts didn’t require the deposit of the algorithms behind the program in order to copyright it, just a videotape of one playing of the game. (ibid) Again, this seems to be stretching the idea of copyright to fit it over something which clearly shouldn’t get its protection. There are other problems with the application of copyright to digital media, problems especially pernicious because the measures involved appear, at first sight, to be protecting copyrights. For example, “Several technological solutions may help control intellectual property on the Web. One popular research scheme involves ‘software envelopes’ or ‘cryptolopes’ that contain encrypted versions of the material to be displayed. These envelopes are designed so that the user is automatically billed for viewing the contents of the envelope when it is ‘opened.’ Many people believe that this is the ultimate solution to IP control; support for cryptolope systems of this sort is built into proposed revisions in the intellectual property law.
[footnotes omitted]” (Varian, 1997, 33/34) There are practical problems with this system. Perhaps most important is that encryption by itself is unlikely to stop the unpaid-for distribution of proprietary material: “When books are electronic, even if they are encrypted, at some point they must be decrypted for the user to read. At that point, they can be copied. Perfectly.” (Rawlins, 1996, 58) A different form of protection is offered by digital watermarks. “The practice of watermarking documents dates back to the Middle Ages, when Italian papermakers marked their unique pieces of paper to prevent others from falsely claiming craftsmanship. Today, watermarks are still used to identify stationery and stock from bank checks. Like its analog analogue, digital watermarking carries information about the source along with the content.” (Wiggins, 1997, 41) Digital watermarks can be visible to the document user, but that makes them relatively easy to erase. Digital watermarks can also be woven into documents in ways which make them difficult for users to detect and alter. Watermarks do not, in themselves, prevent copying, but they do make it possible for creators with sufficient resources to track illegitimate use of their material. In conjunction with something like encryption, watermarking can be an important tool for those who wish to enforce their control over their material. Even more powerful tools for controlling digital material are being developed.
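The principle of an invisible watermark can be illustrated with a toy sketch. This is a least-significant-bit scheme of my own invention for illustration only, not the method of any actual watermarking product: an author identifier is hidden in the lowest bit of successive bytes of a document's data.

```python
# Toy digital watermark (illustrative only, not a real product's scheme):
# hide an author ID in the least significant bit of each byte of the data.

def embed_watermark(data: bytes, mark_bits) -> bytes:
    """Overwrite the low bit of successive bytes with the watermark bits."""
    out = bytearray(data)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_watermark(data: bytes, length: int):
    """Read the low bit of the first `length` bytes back out."""
    return [b & 1 for b in data[:length]]

document = bytes(range(65, 85))        # stand-in for image or audio data
author_id = [1, 0, 1, 1, 0, 1, 0, 0]   # eight bits identifying the creator

marked = embed_watermark(document, author_id)
assert extract_watermark(marked, 8) == author_id
```

In practice such marks are hidden in media with perceptual slack, such as images or audio, so that the small changes are imperceptible to a viewer or listener; the principle is simply that identifying information travels with the content wherever it is copied.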

Thingmaker has several options for protecting a file. The most stringent is locking a file to a given server, so if anyone tries to run it off a different server, it won’t work. If you want to allow re-use, you can lock the file to prevent editing or allow only specific attributes to be edited. For example, if you wanted an animation to point to a particular Web site when clicked, that particular feature could be fixed. ‘It’s not just about stopping people from stealing your content, it’s how you control the sharing of your content,’ says Steve Barlow, chief technical officer at Parable. (ibid, 43)

At first blush, this may seem an ideal way to compensate creators. However, that is only one side of the copyright coin: the other, you will recall, is that every member of society should have the widest possible access to the greatest possible amount of information. This is embodied in two related exceptions to copyright laws: first sale and fair use. According to the first sale doctrine, once I have bought a book, magazine or other publication, I have the right to do with it what I will. I can lend it to a friend after I have read it. I can give it away as a gift. I can quote passages from it to people I know. The creator of the work has no right to compensation for any of these uses. The most important manifestation of the first sale doctrine is lending libraries. They buy a copy of a variety of publications, then lend them to the public at little or no charge. This may seem unfair to creators, but, in fact, there is no evidence to support the assertion that they lose significant income to libraries or other lenders; rather, lending gives people who could not afford a publication the opportunity to read it. In any case, first sale was seen as a necessary means by which the interests of society could be represented, since it helped in the wide dissemination of ideas.
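The kind of server-locking described in the Thingmaker passage above can be sketched, purely hypothetically, as a gatekeeping check before a file is served. The names here (ALLOWED_HOST, serve_file) are illustrative assumptions, not Parable's actual interface:

```python
# Hypothetical sketch of "locking a file to a given server."
# Illustrative names throughout; not an actual Thingmaker API.

ALLOWED_HOST = "www.example.com"  # the single server the file is locked to

def serve_file(requesting_host: str, payload: bytes) -> bytes:
    """Return the file's contents only when served from the authorized host."""
    if requesting_host != ALLOWED_HOST:
        raise PermissionError("this file is locked to another server")
    return payload

animation = b"(animation data pointing to the author's chosen Web site)"

# The authorized server can serve the file; any other host cannot.
assert serve_file("www.example.com", animation) == animation
```

A real system would enforce the check cryptographically rather than with a simple string comparison, but the effect is the same: the creator, not the purchaser, decides where and whether the work runs, which is precisely what puts such tools in tension with first sale.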
The fair use doctrine allows excerpts from existing works to be used in new works. Fair use is most frequently invoked in academic works and journalistic articles; it is seen as crucial for the development of scientific and other ideas. Fair use is what allows me to build the arguments in this dissertation by quoting from existing sources: I not only quote from the arguments of others to either refute or support them, but I quote facts in support of my own original arguments. With tightly controlled copyrights applied to digital media, it becomes possible to track such quotation and to charge the quoter for each use of even the smallest portion of somebody else’s work. The fear is that this ability will impede the development of knowledge, since many authors will not be able to afford to pay for all of the material which they reference (I know I wouldn’t). Society would suffer as a result. It cannot be stressed enough that first sale and fair use are not, as some contend, simply artifacts of the analog age, allowances made to the fact that control of physical forms of expression such as books was necessarily imperfect. They are necessary balances to ensure that society’s interest in the widespread dissemination of ideas is maintained. Thus, we should be wary of statements like, “First Sale and Fair Use doctrines served us reasonably well in an industrial age economy. They simply will not extrapolate to the emerging world of the Net.” (Heterick, Jr., 1997, 20) While this may be true, given the direction large copyright holders are pushing the development of online technology, the conclusion to which most people jump -- that copyright should continue without first sale and fair use provisions -- is not. This would strengthen the role of the copyright holder at the expense of society as a whole, to the detriment of the development of the sciences and useful arts.
“The existing law ensures producers of artistic material the right to profit from their creative works, but it does not allow a creator to control who looks at the material or prevent it from being lent or circulated to others.” (NYT Editorial Staff, unpaginated) Yet, extensions of copyright law, combined with the unique properties of digital media, may do just that. A different conclusion seems more reasonable: “It is not sufficient to simply modify copyright laws and treaties to ‘include’ the new technologies, for the new technologies work in an altogether different manner and do different things than print media.” (Solomon, 1999, 125) Thus, copyright is not the appropriate mechanism for regulating digital media; some other regime should be developed which finds the right balance between the legitimate interests of information creators and society.

Who Benefits Most From Copyright?

In the 1990s, freelance writers (those who worked on an article-by-article basis without a contract) in the United States and Canada found that their work was turning up in the darndest places: online databases. In many cases, the corporate owners of the magazine or newspaper for which they wrote took their work and published it online without compensating -- or even notifying -- the authors. The result was inevitable. In the United States, “In 1993, ten freelance writers sued the New York Times and other publishers over the unauthorized publication of their work through online computer services.” (Brinson and Radcliffe, 1991, unpaginated) In Canada, “In September 1996, [freelance writer Heather] Robertson launched a landmark $100-million class-action lawsuit against Thomson on behalf of any freelance writers, artists, and photographers who had sold works to the company and wanted to retain control over their electronic-publishing rights.
She sought $50 million in compensation, $50 million in punitive damages, and an injunction preventing unauthorized inclusion of freelancer’s works on electronic databases.” (Gutstein, 1999, 125) The traditional freelance writer’s contract used to go something like this: the writer agreed to allow the publication a window in which it got exclusive rights to publish a work (six months was not uncommon). The contract specified where the work was to be published. After the initial period of publication, the writer could then sell the work to another publication (often at a reduced rate since it wasn’t the first publication of the work). By publishing the work of freelance writers in electronic databases without having previously agreed with the writer to do so, the media corporations seemed to have been breaking the terms of this contract. The reactions of the courts to these challenges have been mixed. In the United States, “A federal judge in Manhattan has ruled against freelance journalists who argued that publishers should not be allowed to reproduce their work on CD ROMs or in electronic databases without their permission and without paying them beyond what they were paid for the original material. At issue was whether or not electronic reproduction of that sort is essentially equivalent to archival versions of print media on microfilm, which are a publisher’s right under the Copyright Act of 1976.” (“Freelancers Lose to Publishers Over Electronic Reproduction,” 1997, unpaginated) Common sense would suggest that reproduction in an electronic database is a form of republishing material, not merely archiving, especially since the corporations expected to make a lot of money from their databases, but the court seemed to disagree. (At least, so far: the decision is being appealed.)
In Canada, by way of contrast, in 1999 “a judge in Ontario Court General Division gave her permission [to Robertson] to launch the class-action suit.” (Gutstein, 1999, 125)6 Whether or not these lawsuits are ultimately successful, the publishers may already have won the war. Contracts with freelance writers now give publishers the right to republish material in digital form without further compensating the writer (in one case, that of Thomson Corporation, the new contract covers the right to reproduce material not only in existing digital media, but in any medium which may, in the future, be developed). This is outrageous: creators of information are paid penurious rates while the distributors of the information reap substantial benefits, benefits which arguably used to accrue to the writer, whose market for reselling articles dries up because the first publisher now has the right to keep the material in circulation in perpetuity. What was the writers’ response? “[G]iven their low average annual income and dependence on a small number of publishers, most freelancers caved in to the unfair demands of the publishers and signed these new contracts.” (ibid, 128) Thus, we have returned to the condition of the Stationers’ Company, which, because of a fundamental inequality of economic power, was able to exert its will over writers, dictating the terms under which it would publish their work, to its tremendous economic benefit. When we think of copyright protection, we usually think of it in reference to a lone writer slaving away in solitude, perfecting her or his work of prose. Such writers are the lifeblood of the information society, to be sure. However, major corporations are increasingly using their economic power to undermine the individual author’s rights in a work.
In this situation, “Such a discourse [of the romantic creator] is being used cynically to protect existing information monopolies.” (Boyle, 1992, unpaginated) At the same time as entertainment conglomerates are asking for, and largely receiving, rights from authors and other information creators, they are also demanding increasing protection for their copyrights from governments. In the United States, for instance, the length of time a copyright could be held by a corporation was, until recently, 75 years. (In the original copyright laws, by way of contrast, the length of protection was only 14 years.) However, this was not enough for some corporations: “Disney’s crown jewels are its stable of film classics, which it repackages and reissues anew for each generation of young people. Disney was a key supporter of the 1998 Sonny Bono Copyright Term Extension Act, which extended copyright protection for an additional twenty years. Given that copyright protection for Mickey Mouse was due to expire in 2004, this law provided Disney with a bonanza of perhaps $1 billion.” (Gutstein, 1999, 134) One of the rationales for extending copyright to 95 years was “to provide for at least the first generation of an author’s heirs, and...since people are living longer, a longer period of protection is needed.” (ibid, 160) However, it’s hard to justify this based on the original intention of copyright law: recall that the purpose was to give creators an economic incentive to create. Nowhere does it state that the heirs of creators have any right to be financially rewarded for their parents’ work; certainly, nobody argued that benefits for one’s heirs were an important spur for individuals to create original work. In this way, the larger public good is not served by extending copyright for their benefit. Moreover, the public store of ideas is diminished by the extension of copyright. I was born in 1960. If I live 60 years, I will die in 2020.
At no time, in the course of my life, will Mickey Mouse be in the public domain; so, even though the character has become a cultural icon, I will not be able to use it in my work. Far from encouraging creativity, the copyright extension will curtail the creation of new work, to the detriment of society. “‘If I could stop this bill by giving perpetual copyright to Mickey Mouse, I’d do it - not that they deserve it: Disney doesn’t pay royalties for Pocahontas and Snow White, so why shouldn’t Mickey Mouse go into the public domain?’ says Dennis Karjala, a law professor at Arizona State University. ‘I’m more worried about the vast run of the rest of American culture that is being tied up. There will be no additions to the public domain for 20 years if this passes. We’ll have another 20 flat years where everyone has to work with what is already in the public domain. The existing cultural base on which current authors can build simply can’t grow,’ he adds.” (Chaddock, 1998, unpaginated) According to filmmaker John Greyson, whose film Uncut partially deals with copyright issues, a similar problem occurred in Canada. “Sheila Copps famously rewrote Canada’s copyright law and said it was all in the name of artists. In fact, it was all in the name of corporations who treat art as property. It’s really property law -- it’s not law that recognizes the actual process of creation.” (Burnett, 1998, 16) These corporations are increasingly taking their concerns to international trade fora, particularly the World Intellectual Property Organization (WIPO) and the General Agreement on Tariffs and Trade (GATT), in order to gain increasing protection for their copyrights. These protections often go well beyond those they were receiving from national copyright laws.
Article 7 of the World Intellectual Property Organization Copyright Treaty, for example, “defines the right of creators to receive royalties whenever their copyrighted works are reproduced, directly or indirectly, ‘whether permanent or temporary, in any manner or form.’ By expanding the right of reproduction to include indirect reproduction, the article prohibits the creation of temporary copies, unless authorized.” (Gutstein, 1999, 148) This could be interpreted to mean that when you download a Web page to the cache in your computer’s memory so that you can look at it on your screen, you must pay the Web page designer or you will be violating his or her copyright. The requirement that a work must be in a fixed form to be eligible for copyright has been eliminated, although the implications of doing so (can I now copyright an expression of an idea I hold in my head?) have been ignored. Another section of the WIPO Treaty, Article 10, “creates a new and additional copyright for any work that is simply made available to the public over the Internet. The article goes so far as to state that ‘any communication to the public’ has to be authorized. Even if someone puts a picture, article, or other work on their Web site, in other words, they have to do more than get the permission of the person who created the work. They have to pay royalties for making the work available to the public -- even if no one picks it up.” (ibid, 150) One of the rationales for copyright was to give creators financial rewards for their efforts; anything which diminished their financial returns could be seen as a violation of their copyright. With this Article, copyright holders can demand control over their work whether or not its use by others affects their income. This could have a deleterious effect on the fair use provisions of copyright. Moreover, these international agreements take precedence over national laws.
“In addition, the [WIPO] legislation would curtail the authority of [European Union] nations to enact or maintain fair or private-use privileges in their national laws.” (Samuelson, 1998, 102) Thus, at the same time as they are pushing for extensions of their own copyright interests, corporations are attempting to limit the protections national governments can give to the public interest in copyrights. When considering GATT’s role in the international movement to apply strict copyright rules to digital communications networks, it is useful to remember that GATT is a body attempting to govern international trade; its concern for culture is minimal at best. “On the international level we have seen the use of the GATT to turn intellectual property violations into trade violations, thus codifying a particular vision of intellectual property and sanctifying it with the label of ‘The Market.’ [note omitted]” (Boyle, 1997, unpaginated) Furthermore, while WIPO is a separate entity whose goal is to “modernize and render more efficient the administration of the Unions established in the fields of the protection of industrial property and the protection of literary and artistic works...” (WIPO, 1993, unpaginated) the organization supplies information on copyright to the World Trade Organization. (WIPO, 1995, unpaginated) As one commentator put it, “In a global economy of ideas, free speech is free trade, and vice versa.” (Browning, “Africa 1: Hollywood 0,” Wired, V5 N3, March 1997, 188) “Is copyright an aspect of culture or industrial development?” Gutstein pertinently asks.
“Industry Canada’s job is to promote the latter, and it was Manley’s agency and not Sheila Copps’s heritage department that headed the Canadian delegation to the 1996 World Intellectual Property Organization in Geneva, where Internet copyright treaties were approved.” (1999, 78) If copyright is seen solely as an issue of trade, then governments will do well by the corporations within their borders to ensure that they get the maximum economic reward for their works. On the other hand, if copyright is seen as a means of developing local culture, then the rights of the holders of copyright to benefit from their work have to be balanced against the right of society to have the widest possible set of ideas disseminated to the widest possible number of citizens. The economic model of copyright does not accommodate the greater public good. This explains why my focus has been largely on American copyright history. Most countries have different histories when it comes to copyright, but, owing to hard negotiations in international fora, they are becoming “harmonized” (read: the same). Thus, what in the US is called “fair use” is in Canada called “fair dealing,” but the concept is essentially the same. Until the Sonny Bono extension, our terms of protection (life plus 50 years for individuals, 75 years for corporations) were the same. The concept of work for hire is the same. And so on. (Canadian Intellectual Property Office, 1998, unpaginated) Because the multinational corporations pushing for stricter copyright laws internationally are based in the United States, harmonization is frequently code for Americanization. To use an obvious example: most countries will now have to extend their copyright protections to 95 years, or risk the wrath of the artists in their countries whose works are not as well protected as those of the Americans. For this reason, it is important for everybody concerned about the issue to understand American copyright.
To be sure, some individual creators will benefit from the extension of copyrights, especially if they can successfully use the Internet to distribute their work themselves. However, many more will lose: partly because raw material which used to be available to them for their work will be owned and controlled by corporations; partly because corporations have far more resources than individuals to bring to bear to enforce stricter copyright laws; partly because corporations will always have a huge advantage in negotiating contracts. Entertainment conglomerates, not individual creators, benefit the most from current trends in copyright legislation. As director Greyson put it: “It’s pretty urgent in this digital age to grapple with what [copyright] law says versus what law does. Copyright law always pretends to be on the side of protecting artists and often does just the opposite...” (Burnett, 1998, 16)

Alternatives to Copyright

I am not an advocate for frequent changes in laws and constitutions. But laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy... (Thomas Jefferson, quoted in IITF Working Group on Intellectual Property Rights, 1995, unpaginated)

Given the control which major corporations have over the publishing industry, it should come as no surprise that “only a very few individuals make enough money from royalties to live on. Most of the rewards from intellectual property go to a few big companies.” (Martin, 1995, unpaginated) Many academics teach to subsidize their writing, for which they get little or no remuneration. All but the most successful writers have to have a day job or other source of income in order to survive while they write. This would seem to undermine one of the basic rationales for copyright law: that new work will only be created if authors are adequately compensated for their efforts. “Actually, most creators and innovators are motivated by their own intrinsic interest, not by rewards. There is a large body of evidence showing, contrary to popular opinion, that rewards actually reduce the quality of work. [note omitted]” (ibid)7 It has been noted that this applies directly to the Internet: “So far, the idea of open access to these materials hasn’t slowed down the onslaught of new information flowing into the Net. But will information suppliers rebel against the status quo at some point?” (Rose, 1993, 112) It depends which information suppliers one is talking about, of course. A large part of the information flowing onto the Net which Rose talks about is coming from individuals, most of whom do so despite little financial reward; on the other hand, corporate information suppliers are, as we have seen, trying to change the status quo to their advantage. Given the poor compensation for creation, coupled with the domination by major corporations in the entertainment area, individuals seem poorly served by copyright. 
One might be tempted to put up material without a copyright notice, but this “permits proprietary modifications.” (“What Is Copyleft?” undated, unpaginated) Anybody can take your uncopyrighted work, make a few changes, and take a copyright on it for themselves. Thus, not only is the information commons not protected by this approach, but it all but guarantees that the original creator will not be given any compensation for his or her work! A different way of dealing with this dilemma, one which first developed in the computer programming community, is known as “copyleft” or “counter-copyright.” One of the earliest proponents of copyleft was the Free Software Foundation. The rationale for the original copyleft was that “The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software -- to make sure the software is free for all its users.” (Free Software Foundation, 1998, unpaginated) The term “free” in this context does not necessarily mean without financial cost, but, rather, distributed without barriers. Copyleft is not a replacement for copyright; instead, it modifies traditional copyright protections. “Copyleft contains the normal copyright statement, asserting ownership and identification of the author. However, it then gives away some of the other rights implicit in the normal copyright: it says that not only are you free to redistribute this work, but you are also free to change the work. However, you cannot claim to have written the original work, nor can you claim that these changes were created by someone else.
Finally, all derivative works must also be placed under these terms.” (Stutz, undated (a), unpaginated) Copyleft goes in precisely the opposite direction from copyright, where fanatical control of distribution and derivative works (new works based on a copyrighted work) is the norm. While originally intended for application to computer programs, some people now argue that “any work of any nature that can be copyrighted can be copylefted with the GNU GPL.” (ibid) The advantage of copyleft in computer programming should be obvious: anybody can take a program and improve upon it (fixing bugs, adding new features, etc.). In fact, it has been argued that superior software is created this way. (Thompson, 1999) Applied to artistic works, copyleft would better reflect the balance between the interests of society and the individual creator than the existing copyright regime: “As an alternative to the exclusivity of copyright, the counter-copyright invites others to use and build upon a creative work. By encouraging the widespread dissemination of such works, the counter-copyright campaign fosters a rich public domain.” (Berkman Center for Internet & Society, 1999, unpaginated) Copyleft may seem to unfairly tip the balance away from creators, since there seems to be no mechanism to compensate them for their work. In fact, anybody who creates a copylefted document can charge for it, including people who create derivative works based on existing copylefted documents. (Free Software Foundation, 1998, unpaginated) What they relinquish is their power to control what somebody does with a work after they purchase it. (In fact, copyleft may simply return the right of first sale, which is being undermined by digital distribution of information, to the position of importance it held when copyright was applied to analog media.)
In some ways, this could undermine the ability of creators to charge for their work, since somebody who buys a copylefted work is legally entitled to send it to a million of her or his friends. Worse, they may create a derivative work which is more popular than the original, selling more copies (possibly at a cheaper price). This surely undermines a creator’s ability to be compensated for his or her work! However, the balance of advantages and disadvantages in copylefting may still be of greater benefit to individual writers than the balance in copyrighting their material. On a practical level, illegitimate copying of a work is encouraged by the ease with which copies can be made and disseminated by digital communication networks, as well as the lack of perceived financial cost associated with such copying. For the most part, I buy books rather than copy them whole from copies obtained from libraries because the cost of copying them isn’t that much less than buying them outright. However, the cost of downloading articles off the Web is substantially less than buying the print journals, magazines or newspapers in which they appear. It becomes necessary, therefore, for content creators to “make legitimate access to your material as easy as possible at a price that will not encourage pilferage.” (Strong, 1994, unpaginated) Recall from the last chapter that some people believe that the ease of copying digital materials will lead to a different method of compensating creators, where they are paid for secondary services rather than the primary works. Copyright maximizes the compensation for its holders by giving them a mechanism by which they can charge for the largest number of copies of a work. In this other economic system, by way of contrast, the work becomes an advertisement for the artist’s services; in this case, widest distribution of the work, whether compensated for or not, is to the artist’s benefit. Copyleft achieves this.
As Esther Dyson, a proponent of this alternate form of economics, notes, “The issue isn’t that [traditional] intellectual property laws should (or will) disappear; rather, they will simply become less important in the scheme of things” as new economic models develop. (undated, unpaginated) There is precedent for this view. The musical group The Grateful Dead used to encourage taping of their concerts; a section in front of the stage was usually kept free so that those who wanted to tape concerts could get a clear sound. Moreover, the band let it be known that tapes of their concerts could be copied and circulated among their fans; a trade in these tapes (which cannot properly be called “bootlegs” since the band approved of them) flourished. Despite this, the Grateful Dead prospered. “Enough of the people who copy and listen to Grateful Dead tapes end up paying for hats, T-shirts and performance tickets. The ancillary market is the market.” (ibid) A different way of looking at compensating creators would be that their greatest advantage is the timeliness of their work. “Anyone who has original material (or first publication rights) that can be metered out over the networks on a regular basis could demand any payment she might want from those to whom the information was first distributed. So the price of getting right up to the faucet for a periodically-published information stream could be set high enough to reward the author and reflect the value to those downstream from the recipients.” (Johnson, 1994, unpaginated) For “information” providers, timeliness is obviously important; by the time somebody has copied a copylefted article, new information could be on the originating site. For artists, it could mean serialization; a writer, for example, could periodically put a new chapter of a novel or a new short story on the Internet.
Copies of previous chapters distributed through copyleft then become promotions for the most recent work. Individual artists are in a difficult position with regard to copyright: they can either keep the copyright for themselves, but be required to find a means of distributing their work themselves; or they can try to get their work distributed by a major corporation, in which case they are increasingly expected to give up more and more of their rights. Moreover, the increasing benefits corporations are being given under new copyright regimes are having the destructive effect of narrowing public access to, and uses of, the intellectual commons. Copyleft, combined with the new information economics, may be a way of solving these problems. If copyleft ultimately proves incapable of solving the current problems with copyright, some further creative thinking on this subject must be undertaken. If the primary goal of copyright is to ensure the social good of the widest dissemination of the largest amount of information, the way copyright law is currently developing will need to be rethought.

Problems with Government Regulation 2: Limited Instruments

In the twentieth century, governments have developed three regulatory approaches to communications media: common carrier legislation; broadcast legislation; and minimal to no legislation (often referred to as “First Amendment” or “free speech” protection). Which of these three approaches is applied to a medium is determined by the characteristics of the medium. The minimalist legislative approach, for instance, is traditionally applied to print publishing, since there are a relatively large number of publishers, the cost of entry into publishing is relatively small (compared to, say, developing a television network), paper to publish on is not scarce, etc. To understand which regulatory instrument to apply to digital communication networks, it is necessary to understand the models of different types of media. One model is “one-to-one” communications. In this model, a single person uses the medium to communicate with another single person (see Fig. 4.1). The two most important examples of one-to-one communications are the telephone system and the post office. One-to-one media are regulated under common carrier legislation, which has two fundamental requirements: 1) the company which runs the medium must carry any message which a user of the medium wants to send over it; and 2) because the company cannot control the content sent over the medium, it cannot be held legally liable for that content. Thus, the phone company must carry obscene or harassing phone calls as well as its legal traffic, but, while the caller is legally responsible for them, the phone company is not. Opposed to this is the “one-to-many” model of communications. With this model, a small number of producers use the medium to reach a large number of consumers (see Fig 4.2).
It is rare for the number of producers to actually equal one; the important aspect of this model is that the number of producers is exceedingly small relative to the number of consumers. The one-to-many model applies to what are known as “broadcast” media, particularly radio and television. The number of signals which could be carried on the portion of the spectrum available to radio, the first broadcast medium, was severely limited, causing the American government to divide the spectrum up and allocate parts of it to those who held licenses for the purpose of broadcasting over them (a regime which was quickly taken up by the governments of other countries).

Figure 4.1: The One-to-One Communication Model. Example: a personal conversation, whether face to face or over a telephone.

The fact that the government licenses radio and television stations gives it a tremendous power over those media which it does not have over one-to-one media. Governments can, and do, require broadcast media to fulfill certain obligations in return for their licenses; if they do not meet these obligations, governments can take their licenses away and give them to somebody else (although this is more a theoretical possibility than a reality, given that virtually no North American broadcaster has ever lost a license because it hadn’t complied with its licensing obligations). For example, for over a decade, the American government had a policy known as the Fairness Doctrine, which required licensed television stations to give equal time to people or corporations who had been maligned in their newscasts. (To get around this obligation, many news organizations started to avoid the most controversial subjects.) In Canada, broadcasters must agree to air a certain percentage of programmes produced by, written by and/or starring Canadians during prime time hours as a condition of their license. There are many differences between one-to-one and one-to-many media, some less obvious than others. With one-to-one media, the consumer is also a producer (a telephone caller, for instance, is an information producer when talking and an information consumer when listening); with one-to-many media, producers and consumers are clearly separate. For this reason, it is sometimes suggested that one-to-one media are more participatory than one-to-many media, which are more passively consumed. According to Mitch Kapor, for instance, “Users may have indirect, or limited control over when, what, why, and from whom they get information and to whom they send it. That’s the broadcast model today, and it seems to breed consumerism, passivity, crassness, and mediocrity.
Or, users may have decentralized, distributed, direct control over when, what, why, and with whom they exchange information. That’s the Internet model today, and it seems to breed critical thinking, activism, democracy, and quality.” (Poster, 1995, unpaginated) Another notable difference is that one-to-one communication -- with the possible exception of telephone solicitation -- usually takes place at the convenience of the communicators, while one-to-many communication usually takes place at the convenience of the broadcaster (at least, until the creation of tape recorders and VCRs, although many, perhaps most, of us still watch or listen to programs when and as they are broadcast).

Figure 4.2: The One-to-Many Communication Model. Examples: broadcast radio and television.

While it may seem that the choice of regulatory regime is a matter of determining which model a medium most closely resembles and regulating accordingly, it isn’t that simple. As we shall see in Chapter Six, in its earliest days, radio held out the possibility of one-to-one conversation as well as broadcasting; in fact, the decision by the government to regulate it as a broadcast medium was an important step in closing off other possibilities for it. There are other examples. In the early days of the telephone, for instance, some companies experimented with broadcasting over the medium; a large number of listeners would dial into a specific number at the same time and hear a symphony or a lecture. Digital communications media complicate this picture. It is possible to have one-to-one communications over the Internet: when you send an email to a single person, for instance, or when you use telephony software to talk over the Internet to somebody else. However, as mentioned in the last chapter, it is also possible to use the Internet for one-to-many communications, as with the attempts to popularize Web-TV. The Internet also supports a third model: “many-to-many” communications (see Fig 4.3). As the name suggests, here a lot of people alternate between consuming and producing information (it should be kept in mind that the many of many-to-many communications is much smaller than the many of one-to-many -- the difference between, say, dozens or sometimes even hundreds and 10 or 20 million). Internet Relay Chat, where many individuals converse with each other in real time, and mailing lists, where many individuals converse asynchronously, are examples of many-to-many communications in digital media; a conference call is a form of many-to-many communications in an older, more established medium.
Taking all of this into account, it should be obvious that “Internet communication is not a single medium sharing common time, distribution, and sensory characteristics, but a collection of media that differ in these variables.” (December, 1996, 26) Digital communications plays havoc with our traditional media categories. Consider streaming video. On the one hand, there are a small number of producers relative to the potential number of consumers for streaming video, a condition of one-to-many media. On the other hand, there is no set schedule of programming; a user can go to a Web page with streaming video and watch it at any time of the day or night (any day or night that it is on the Web, that is), which goes against the conventions of one-to-many media. What is the most appropriate form of regulation?

Figure 4.3: The Many-to-Many Communication Model. Examples: Internet Relay Chat or a telephone conference call.

This is only the beginning. In a windowed computing environment, it is possible to have all of these forms of communication running at the same time, with the computer user moving back and forth from one to the other. In addition, some computer software allows the user to employ different media forms at the same time: in IRC, for instance, while communicating with others in a group, a user can also be sending messages to an individual. Thus, at any given point in time, a networked computer user may be involved in a variety of different types of communication. Digital communications networks can contain anything which can be digitized, which, in effect, is every kind of communication (audio, video, text) in every configuration (one-to-one, one-to-many, many-to-many). For this reason, I am suggesting that we consider it a “variable-to-variable” form of communications (see Fig 4.4). It should be clear, given all of this, that existing regulatory structures are inappropriate for this emerging medium. To attempt to apply one set of regulations to the Internet would favour that particular communication model over the others, with the possibility that it would limit the full development of the Internet, in all its possible configurations. As Mosco puts it, “Old regulatory approaches based on distinct technologies and discrete services and industries do not work in an era of integrated technologies, services and markets.” (1989, 99) This was partially affirmed by the position taken by the Canadian Radio-Television and Telecommunications Commission, Canada’s media regulatory body. In 1998, it conducted hearings on whether it should regulate the Internet using the tool available to it: the Broadcasting Act. In 1999, it released its findings: that applying the Broadcasting Act to the Internet would not be appropriate.
One of the CRTC’s arguments was that the content of much of the information available on the Internet is (as it has been since the medium was created) text. “The Commission notes that...much of the content available by way of the Internet, Canadian or otherwise, currently consists predominantly of alphanumeric text and is therefore excluded from the definition of ‘program’. This type of content, therefore, falls outside the scope of the Broadcasting Act.” (CRTC, 1999, p35) The Commission recognized that email and chat, textual forms of communication which lie at the heart of the Internet, should not be regulated by the means at its disposal. Even video transmissions over the Internet cannot be assumed to be broadcast for purposes of regulation.

The Commission considers...that some Internet services involve a high degree of ‘customizable’ content. This allows end-users to have an individual one-on-one experience through the creation of their own uniquely tailored content. In the Commission’s view, this content, created by the end-user, would not be transmitted for reception by the public. The Commission therefore considers that content that is ‘customizable’ to a significant degree does not properly fall within the definition of ‘broadcasting’ set out in the Broadcasting Act. (ibid, p45)

A hypertext story, or, more to the point, an online video game or a hypermedia presentation, cannot, by definition, be a broadcast, even when millions of people experience it, because no two individuals will have exactly the same experience of it. In making this claim, the CRTC was careful to note that the interactive elements of a work must be substantive for it to be exempted from the Broadcasting Act:

...the ability to select, for example, camera angles or background lighting would not by itself remove programs transmitted by means of the Internet from the definition of ‘broadcasting’. The Commission notes that digital television can be expected to allow this more limited degree of customization. In these circumstances, where the experience of end-users with the program in question would be similar, if not the same, there is nonetheless a transmission of the program for reception by the public, and, therefore, such content would be ‘broadcasting’. (ibid, p46)

The point at which interactivity stops being superficial and starts affecting the nature of the mediated experience is, of course, highly debatable. Recall that, in Chapter Two, I introduced Brenda Laurel’s concept of “agency” to describe the effectiveness of works of hypertext fiction; it might be a good guideline for this debate. However, as agency is largely subjective, it doesn’t necessarily make the line definitively clear. This will be an important distinction to keep in mind as governments grapple with how best to regulate digital media. Although generally applauded by those in various industries involved in new media, the decision did come in for some criticism. “How courageous!” one person wrote.

How forward-thinking! The single bureaucratic entity with the legal mandate to regulate harmful content on the Internet, concluding -- for itself -- that it will do nothing of the sort. Neo-Nazis, violent pornographers and pedophiles rejoice! You no longer need ‘worry,’ to use Ms Bertrand’s own word, about a pesky government agency sticking its nose into your business. You now have licence to do what you do best! (“CRTC’s Internet decision: dumb or dumber?”, 1999, 10)

In fact, the CRTC’s decision did not condone or “give license” to those who commit crimes using the Internet. As the CRTC itself argued, “The Commission acknowledges the expressions of concern about the dissemination of offensive and potentially illegal content over the Internet. It also acknowledges the views of the majority of parties who argued that Canadian laws of general application, coupled with self-regulatory initiatives, would be more appropriate for dealing with this type of content over the Internet than either the Broadcasting Act or Telecommunications Act.” (1999, p121)

Figure 4.4 The Variable-to-Variable Communication Model. Example: the Internet. Note that the consumer-producer at the centre of the model may be engaged in activities associated with any or all of the other three models at the same time.

Another argument against the use of the Broadcasting Act to regulate digital communications networks has to do with a fundamental difference between them and the media for which the legislation was created: bandwidth. The rationale for regulating radio was that scarce radio frequencies required an efficient means of allocation, and the government was the only organization capable of doing this. But the scarcity argument does not apply to the Internet. “Because there are a limited number of radio and television channels available, the government, which assigns them to broadcasters, is thought to have the right to monitor content for indecency. This is, of course, the wrong analogy for the Internet. Bandwidth on the Net is unlimited, and the government’s permission is not required to attach a server to it in the same way as a radio or television station.” (Wallace and Mangan, 1996, 175) Those like the anonymous author who disagreed with the CRTC decision assume that the broadcast regulations can be transferred to digital communications (as they were transferred wholesale from radio to television); however, the new medium is so different from the old media that such a transfer cannot be taken for granted. Those who believe that the Internet should be regulated as a broadcaster must articulate not only a logical rationale for doing so, but also a plan for how it could reasonably be made to work.

There are also problems with applying common carrier legislation to the Internet (most likely to the Internet Service Providers who are most people’s access point to it). The obvious one is that there are some applications which are one-to-many (so-called “net broadcasts,” for example). As broadcasts, these would seem to fall under legislation such as the Broadcasting Act.

However, there is a more fundamental problem which has to do with the nature of common carriers themselves. Common carriers have had to be regulated because they were monopolies. Telephone companies, to use the obvious example, were given areas in which they could operate without fear of competition. Moreover, economists suggest that this must necessarily be the case. To be a common carrier, you must be able to carry every message somebody is willing to send through your system -- exactly the same service which every other common carrier will, by definition, offer. Branding notwithstanding, because the services must be identical, the only way for common carriers to compete is on price. This tends to drive the price down, driving all but the most competitive out of the market; as the number of competitors dwindles, the market moves back towards a monopoly. (Besides, it is not unreasonable to suggest that a given area does not need several companies providing exactly the same service.) Thus, what seems to be a thriving ISP market (industry consolidation which is seeing smaller numbers of larger ISPs notwithstanding) will, if treated as a common carrier industry, be reduced to local monopolies. (It is also worth noting that, despite the theory, telephone service, the best-known common carrier monopoly, has been increasingly opened to competition since the late 1980s, most particularly in the American Telecommunications Act of 1996.)

Given all of this, the default position of not regulating the Internet seems to be the one many governments are taking.
“It may well turn out that the Net will be regulation-proof.” (Heterick, Jr., 1997, 20) However, this is a negative approach to the subject which will satisfy few people. Rightly so. If governments are to decide not to regulate the Internet, they should have a positive rationale for it, something more than, “It won’t be easy.” Such arguments (i.e., that the Internet is the most robust communications medium currently in existence, with easy entry to a wide variety of information producers in a large number of formats, and therefore requires little legislative interference) do exist. Moreover, legal situations are arising which require government attention; if a government decides to do nothing about them, that government has a responsibility to justify its decision.

One such issue is that of responsibility for material posted on the Internet. Suppose a post appears in a newsgroup which libels a famous person. Who does the person sue to seek redress? The person who originally posted the message? The Internet Service Provider (ISP) from which the person posted the message? The ISP from which the person received the message? The systems operator (sysop) for either ISP? For both ISPs? How about the administrator for the portion of the backbone along which the packets of the message traveled? This issue has been partially resolved by decisions in American courts. Sidney Blumenthal, an advisor to President Bill Clinton, sued online political commentator Matt Drudge for comments Drudge had made about him. Blumenthal also sued America Online, which carried the column in which the comments were made. U.S. District Judge Paul L. Friedman ruled that AOL and other Internet services, unlike traditional publishers, could not be sued in civil courts for content they received from others:

In recognition of the speed with which information may be disseminated and the near impossibility of regulating information content, Congress decided not to treat providers of interactive computer services like other information providers such as newspapers, magazines or television and radio stations, all of which may be held liable for publishing or distributing obscene or defamatory material written or prepared by others. (“Online Providers Not Responsible for Content from Others,” 1998, unpaginated)

In Zeran v. America Online, another case which was resolved soon afterwards, the United States Court of Appeals for the Fourth Circuit concurred with the lower court, claiming that federal law “plainly immunizes computer service providers like AOL from liability for information that originates with third parties.” (“ISPs Not Liable for Actions of Subscribers,” 1998, unpaginated) ISPs had been arguing for years that the volume of information which flowed through their systems, accounting for hundreds of thousands if not millions of messages every day, made it impossible for them to monitor all of the information they carried. Furthermore, the speed with which messages can travel through the Internet meant that even if an ISP could locate questionable material, by the time it had deleted it, the material could have already found a variety of other places, on its server as well as others, to continue to exist. The court rulings seemed to recognize these problems.

You might wonder why ISPs didn’t just accept being legislated as common carriers. Most of the smaller ISPs, whose main business is access to the wider Internet, would fit comfortably in that category. However, the largest ISPs also offer original content; America Online, for instance, offers its subscribers exclusive online chats with famous people, as well as exclusive discussion fora. And, as we saw last chapter, the addition of Time Warner to its roster gives it control of a large amount of content which has yet to be seen online. Proprietary content is necessary for the largest ISPs to create recognizable brands; unlike traditional common carriers, this allows them to compete on services as well as price.
Moreover, as we have seen, some ISPs, as a response to public concerns about child access to pornography on the Internet, seek to limit the amount of adult material on their servers in order to brand themselves as “child friendly.” America Online, for example, “bills itself as a family oriented service -- it’s trying to attract a broad base of customers and wants to maintain [an] atmosphere acceptable to Middle America...” (Powell, Premiere Issue, 44) Thus, while arguing that controlling the material which flows through them is impossible, most ISPs have powerful incentives to maintain as much control of such material as possible. The recent court rulings accept this contradiction.

However, what is good for ISPs is not necessarily good for all of the stakeholders in this new medium. These rulings give ISPs the protection of common carrier rules without the obligation to carry, without prejudice, any material anybody wants to put on them. This gives ISPs which develop their own content a clear and obvious conflict of interest with the millions of their subscribers who might want to upload their content to their page on the service. It also gives them the de facto right to censor anything on their service, which could have a chilling effect on some forms of speech. The best outcome, from the perspective of individual Internet users, would be for whatever they create to be protected by the First Amendment while the ISPs are regulated as common carriers, which would require them to carry everything the individuals created without prejudice. This would require the ISPs to separate their content creation from their connectivity services, possibly by divesting themselves of one or the other of the functions.
Given the rapid growth of ISPs into the content creation sector, either within their own companies or by cooperating or merging with existing content creating companies, this already seems highly unlikely, and, as the consolidation process continues, it will only become more difficult.

There is actually precedent for this. The Telecommunications Act of 1996 allowed Regional Bell Operating Companies (the “Baby Bells” created by the break-up of AT&T), which prior to that time had been limited to the common carrier status of telephone companies, to provide “electronic publishing” services through affiliated or jointly operated companies. This could lead to a conflict of interest: the RBOC would be tempted to give preferential rates to communications which it had created. To deal with this potential problem, the Act contained two pages of limitations which were meant to ensure that the RBOC and its affiliate were completely separate. Among other things, it was hoped that this would ensure that “A Bell operating company under common ownership or control with a separated affiliate or electronic publishing joint venture shall provide network access and interconnections for basic telephone service to electronic publishers at just and reasonable rates that are...not higher on a per-unit basis than those charged for such services to any other electronic publisher or any separated affiliate engaged in electronic publishing.” (Neuman, et al, 1998, 37/38) A similar initiative might be feasible to deal with the conflict ISPs have as content providers.

Given the limitations of current legislation in regard to the Internet, the courts are not the best place to decide these issues. Ultimately, legislatures will have to develop new regulatory regimes (even if they are mostly hands-off) to take into account the problems posed by this new medium of communication.
In particular, they must stop using analogies to existing media, which, as I have shown, are inadequate, and start developing new regulatory strategies to cope with a radically different communications medium. There is one additional difficulty to this, however: the changing nature of digital communications may require the issue of government regulation to be regularly revisited. An example may help to clarify this point. In the last chapter, I explored attempts to turn the Internet from a “pull” medium, where individuals went to information and got it for themselves, to a “push” medium, where information is sent to Internet users at the convenience of its creator. This move

could indeed undermine the claim that online censorship is unconstitutional. Precedent holds that indecency can be restricted in media that are pervasive and intrusive: ‘indecent material presented over the airwaves confronts the citizen,’ the [Supreme] Court said in Pacifica, the 1978 ‘seven dirty words’ case. Meanwhile, CDA [Communications Decency Act] plaintiffs have relied heavily on characterizing the Net as a pull medium. So did the lower court that struck down the law, stating, ‘Communications over the Internet do not ‘invade’ an individual’s home or appear on one’s computer unbidden.’ Not yet. But the day when the Internet is as intrusive as TV or radio may not be far off. Have push media’s marketing-savvy boosters thought about its consequences for free speech? (Shapiro, 1997, 109)

Thus, as the Internet evolves, would-be government regulators will have to take into account its changing nature.

The Carrot: State Support

At the CRTC hearings into whether or not the Canadian government regulatory body should regulate the Internet, a number of participants argued that the government should in no way be involved, allowing the Internet to develop on its own. However, many other participants “favoured some form of support for the production and distribution of new media content, although the majority of these participants clearly preferred an incentive-based approach to one involving regulation.” (CRTC, 1999, p66) Participants in the hearings had a number of suggestions for how government could support the emerging digital media sector. “Among these were direct funding programs targeted specifically at Canadian new media content, various tax incentives to support the new media industry, content-specific industry development initiatives, and activities to stimulate consumer demand for new media content.” (ibid)

In fact, some governments in Canada have incentive programs in place or are planning them. In its May 1998 budget, for instance, the Ontario government committed $10 million to the Interactive Digital Media Small Business Growth Fund. “Its purpose,” according to the government Web site, “is to invest in strategic initiatives and activities that will spur the growth of, and increase the number of jobs in, small IDM firms and the overall IDM industry in Ontario.” (“Interactive Digital Media Small Business Growth Fund,” 1999, unpaginated) Job creation is further emphasized in the IDM Growth Fund’s list of objectives: “encourage market-led job creation and growth; facilitate industry coordination, alliances and partnerships; coordinate marketing and promotion; increase investment promotion; increase export opportunities; and encourage innovation.” (ibid) An important objective of the IDM Growth Fund is to encourage small businesses to find larger partners who will hopefully help them expand.
For this reason, individuals and individual companies are not eligible. (ibid) The IDM Growth Fund is one example of a trend in government funding: to see the development of digital media as an engine of the economy. Another example is the federal Cultural Industries Development Fund, which “targets entrepreneurs working in book and magazine publishing, sound recording, film and video production and multimedia. The Fund is designed to foster the growth and prosperity of small cultural businesses by providing unconventional and flexible financing to address the challenges and opportunities specific to this creative and fast-moving sector.” (Business Development Bank of Canada, 1999, unpaginated) Note that the purpose of this Fund is to support business (as, perhaps, should be expected from funding by the Business Development Bank of Canada); what gets produced by these companies is irrelevant as long as they employ enough people.

In addition, there is the Multimedia Experimentation Support Fund, which “offers pre-startup support for entrepreneurs with multimedia projects.” (Canada Economic Development for Quebec Regions, 1999, unpaginated) The reference to entrepreneurs, as opposed to artists, makes sense when we find that the fund is “run by the CESAM Multimedia Consortium and receives financial support from the Government of Canada, through Canada Economic Development.” (ibid) Unlike the other programmes, however, individuals are eligible to apply for the Multimedia Experimentation Support Fund.

One major exception to this trend is the Canada Council, the federal government’s main arts funding agency.
The Council offers Creative Development Grants, which “pay for expenses related to a program of work that advances individual creative expression and growth as a practitioner,” and Production Grants, which “pay the direct costs of production of an independent media artwork.” (Canada Council, 1999c, unpaginated) The emphasis for these programmes is on the work of individual artists: “The Canada Council considers independent productions to be those over which directors/artists maintain complete creative and editorial control. Only the director/artist who initiates and maintains complete creative and editorial control over the work may apply.” (ibid) All projects funded by the Council are chosen by peer assessment committees that “select recipients on the basis of artistic or scholarly merit and against the criteria for each program or prize.” (Canada Council, 1999a, unpaginated)

These are two grants programmes which existed previously, to which new media have been added. The production grants, for instance, “support the direct costs of production of a specific film, video, new media or audio project.” (Canada Council, 1999c, unpaginated) Another example of an existing programme which has added new media is the Media Arts Presentation, Distribution and Development Program, which “offers annual assistance to Canadian non-profit, artist-run media arts distribution organizations. Organizations must demonstrate a serious commitment to the distribution needs and interests of Canadian artists producing independent film, video, new media and audio artworks, by making their work accessible to the public and providing them with a financial return from the sale, rental and licensing of their work.” (Canada Council, 1999d, unpaginated)

One problem with adding new media to existing programmes is that new media producers will be competing with old media producers for funding.
The makeup of peer juries becomes crucial in this case: if nobody on a jury has knowledge of or experience with new media, the likelihood that such projects will be funded is lessened. As we have seen, one of the main advantages of the World Wide Web over traditional media is that every individual is a potential information producer. By adding new media to existing funding programmes, the Canadian and Ontario governments have, I fear, revealed an inability to deal with the unique aspects of this new medium. Worse, by focusing on the importance of digital media as centres of economic growth and job creation (which they may well be), governments are directing resources towards corporations which might better be directed at individuals.

The Canada Council also has programmes which are specifically aimed at funding new media projects. For example, it gives financial support to help pay the production costs of an artist’s first independent Media Artworks. What are Media Artworks? The Council defines them as “works that use multimedia, computers, or communications or information technologies for creative expression.” (Canada Council, 1999b, unpaginated) These may include: “creative Web sites, CD-ROM/multimedia productions, installations or performances using multimedia, interactive technology, networks and telecommunications.” (ibid)

As we have seen, government attempts at controlling content on the Internet are fraught with problems. A better strategy for promoting regional cultures, one which is likely to be more effective, would be to fund the widest possible range of local content providers; while this may include corporate content providers, the focus should be on individuals. Thousands of Canadians developing Web pages on whatever subjects interest them are likely to say more about what it means to be Canadian than a television network creating a featureless work to sell on international markets.
“Against the enormously growing trend toward the universalization and standardization of aesthetic expression, particularly in the expanding telematic nets, the only strategies and tactics that will be of help are those that will strengthen local forms of expression and differentiation of artistic action, that will create vigourously heterogenous energy fields with individual and specific intentions, operations, and access in going beyond the limits that we term mediatization.” (Zielinski, 1997, 281/282)

To be sure, there are problems with direct financial support of individual artists. Existing peer review methods of allocating government funds for the arts, for instance, may be difficult to apply since, this being a new medium, there aren’t a lot of people qualified to judge the merit of potential projects. Even if there are enough qualified people to make up juries for new media works, they may not be representative of the broader public: “Most would also be critical of the culture of dependency that arises when an arts group is primarily accountable to a funding body rather than to its audience. For many, too, much of the British public service tradition of broadcasting and the arts is little more than a disguise for the narrow and exclusive interests of the London-based intellectual wing of the ruling class.” (Mulgan, 1989, 244) In Canada, there has also been a critique of publicly funded art which decried it as elitist and unrepresentative of the interests of most citizens. Moreover, as art forms mature, juries tend to become conservative in their decisions about who to fund: they often continue to promote the work of artists who have already been successful rather than the work of lesser known artists. Direct support for individual artists may prove not to be the best method of supporting Web creators.
The development of online colonies of artists, the nodes around which strong community ties between artists can be built, is another possible route to take. Perhaps public money would be well spent on training programmes for individual artists. Likely, a combination of approaches will be necessary. The important thing is that governments must start taking seriously the idea that every individual consumer on the Web is also a potential producer of information, and consider ways of supporting them. To continue to subsidize corporations in traditional ways would be to seriously cripple the potential of the Internet as a personal medium of communication. Let a thousand Web pages bloom!

The formerly stable system -- the axis with writer at one end, editor, publisher, and bookseller in the middle, and reader at the other -- is slowly being bent into a pretzel. What the writer writes, how he writes and gets edited, printed, and sold, and then read -- all of the old assumptions are under siege. (Birkerts, 1994, 5)

The Great Internet Gold Rush rivals even the legendary Tulip Mania of 1634-37 as a cultural/economic mass psychosis. At least tulips exist. Cyberspace is imaginary. (Robinson, 1998, A11)

Chapter Five: Other Stakeholders in Publishing and New Media

In Chapter One, I suggested that online publishing occurred at the intersection of two technologies: digital communication networks and the printing press. Implicit in much of the discussion since then is a tension between what is made possible by the two media which arise out of these technologies. Most of the writers surveyed in Chapter Two, for example, described their online publishing activities in reference to traditional publishing: they published online either as a supplement to their print publishing efforts or because their print publishing efforts had largely gone unrewarded. In Chapter Four, we saw that governments grappling with how best to regulate the World Wide Web often use analogies to previous media, including print media, as a guide for their efforts.

In Chapter One, I showed that the path a work of fiction takes between writer and reader differs substantially in print and online (Figures 1.1 and 1.2). This may have given the mistaken impression that the two systems are separate. As we saw in Chapter Two, though, there is some traffic between print and online publishing venues, and we can assume that this will increase as use of the Internet grows. A more accurate way of looking at the two publishing systems is that they are really different aspects of a single phenomenon. This is illustrated in Figure 5.1. One of the things made clear by Figure 5.1 is how online publishing eliminates many of the people and organizations involved in the traditional process of getting writing from writer to reader -- an effect known as disintermediation. We have already seen that many individuals who publish their work online do so in order to bypass the difficult hurdles set up by print publishers.
In this chapter, I will look at the possible effects of this disintermediation on three groups of stakeholders in print publishing: publishers, designers and bookstores. All three stakeholder groups exist on the left side of Figure 5.1, the side representing traditional print publishing, but are not represented on the right side, which represents online publishing.

As we saw in Chapter Two, many of the writers surveyed claimed that one of the disadvantages of publishing on the Web was that there was so much competition that potential readers would not be able to find their work. This chapter will continue with a look at some of the mechanisms which have been created to deal with this problem. I will show that search engines, one of the first such mechanisms, may not be reliable as they become the basis of portal sites whose main goal is to keep users from leaving their content. This will be followed by a discussion of the lack of critical writing in print media about online writing, which is spawning various filtering mechanisms on the Web. Finally, I will look at perhaps the second most important stakeholder group after writers: readers. There is little information on people who read material, especially fiction, online. However, using a study of buyers of print books, I hope to suggest some ways in which readers may approach work published online.

Figure 5.1 The Current Publishing System. The multiple routes a story can take from the writer to the reader in a world which contains both analogue and digital media. Combines Figures 1.1 and 1.2.

Publishers

Starting in the 1960s, abating somewhat in the 1980s and accelerating in the 1990s, the publishing industry went through a period of great change. Small publishers merged. Large publishers bought out small publishers. Entertainment conglomerates bought publishers of all sizes in order to diversify their interests and give them access to another medium for their vertically integrated product chains. This process is often referred to as “corporate consolidation.” It is a process which has continued to the present day:

A notable deal [in 1997] involved Pearson PLC, which bought Putnam Berkley for $336 million from MCA, the media group controlled by Seagram, and thereby made Pearson’s Penguin communication subsidiary the second largest English-language trade-book publisher in the world. Reed Elsevier was particularly intent upon restructuring with a view to specializing in a limited number of markets. To this end it bought Tolley Publishing from Thomson Corp. at the end of January and promptly followed this up by selling to Random House for approximately $20 million the trade-book division of Reed Books, which included such long-established imprints as William Heinemann, Secker & Warburg, and Methuen. (Curwen, 1999, unpaginated)

More recently, in 1999, “News Corp. agreed...to acquire the Hearst Book Group from the Hearst Corp. for a price estimated at nearly $180 million. The Hearst book units, William Morrow and Avon Books, will be integrated into News’s HarperCollins publishing subsidiary and will form the nation’s second largest trade publisher with worldwide revenues of more than $900 million.” (Milliot, 1999, unpaginated) At the same time, Bertelsmann purchased Random House. (Getlin, 1998, M6) Matters have reached the point where the $80-billion-a-year publishing industry has been described as one where “The whales are eating the whales.” (Carvajal, undated, unpaginated) The result has been the domination of the book publishing industry by a small number of companies. “Soon Random House will move from Third Avenue to a new corporate headquarters to be erected on Broadway by its current owner,” wrote Jason Epstein in The New York Review of Books,

an international media conglomerate which embraces several well-known publishing imprints—including, in addition to Random House and Knopf, Doubleday, Bantam, Pantheon, Dell, Crown, and Ballantine, as well as a number of British imprints. General book publishing in the United States is currently dominated by five empires. Two are based in Germany—Bertelsmann, which owns the Random House group, and Holtzbrinck, which owns Henry Holt, St. Martin’s, and Farrar, Straus and Giroux. Longmans, Pearson, based in London, owns the Viking, Penguin, Putnam, Dutton group, and Rupert Murdoch’s News Corporation owns HarperCollins and William Morrow. Simon and Schuster, Scribner, and Pocket Books belong to Viacom, which owns Paramount Pictures among other media properties. (Epstein, 2000, 4)

In fact, the top 8 American publishing houses are owned by entertainment conglomerates. (Tabbi, 1997, 770) All of the individual publishing imprints remain. Thus, in book publishing, as in many other industries, consumers may believe that they are getting products from a wide variety of sources when, in fact, the number of producers is quite small (although in some cases, imprints which are part of larger companies do have some autonomy). Some argue that increased concentration in the publishing industry is not necessarily a bad thing. “Size and creativity are not necessarily incompatible,” argued Gordon Graham, then Chairman of Butterworths. (Moskin, 1989, 52) However, others have argued that large publishers tend to be conservative in the books they are willing to put out. At the best of times, “Paper publishing is a risky business. The economics of printing forces publishers to produce titles in large printings. Because per-copy costs drop sharply with volume, small print runs are not profitable. Large print runs, however, mean that more capital is tied up in paper for as long as the copies take to sell -- if they ever do. So less capital is available to buy new titles or promote current ones. In the meantime, the costs of warehousing, security, and insurance pile up.” (Rawlins, 1996, 60/61) For this reason, small publishers enter and leave the business with great frequency. To take advantage of economies of scale, most major publishers are now looking for books which will appeal to large audiences. “Many observers believe that much of the pressure to find the next [literary] blockbuster comes from conglomerates that now control American TV, movie and literary companies... 
Driven by the bottom line, book companies are chasing a mass audience, just like studios.” (Getlin, 1998, M6) This has driven up the prices paid for books which are expected to sell many copies: the memoir of rock singer Grace Slick, Somebody to Love?, was reported to have been given a $1 million advance, while Christopher Reeve was given $3 million for his autobiography. (Quinn and Baker, 1999, unpaginated) Another trend in publishing, one enabled by the ownership of publishing houses by entertainment conglomerates, is to tie books in with work in other media. This means more than publishing novelized versions of films; books are being chosen specifically for their ability to be translated into other media. According to Tabbi, publishing houses “began to concentrate on sure-fire popular titles, not in order to subsidize serious work (the old rationale), but to provide material for the film and entertainment conglomerates with which most publishers are now affiliated. [note omitted]” (1997, 746) Aware of this, authors are adopting strategies to exploit it: “Today...savvy agents sell options on novels and non-fiction tales to [film] studios first, creating an industry buzz. Then they leverage huge advances out of publishers.” (Getlin, 1998, M6) The results of this willingness to pay increasing amounts in search of a popular hit are inevitable: “...book publishers are cutting back on the number of titles they release. Simon & Schuster’s trade division published 650 titles in 1996 but will publish only 550 this year.” (Stevens and Grover, 1998, 93) The major publishers are not putting as much money into books which offer less likelihood of large financial return, first novels by unknown writers, for instance. “‘New York [the centre of American publishing] is no longer backing mid-level books,’ [professor of creative writing at the University of Oregon Jon] Franklin told the New York Times.
‘A lot of quality books are not given a chance...’” (Link, 1998, 7) In 1989, Ohad Zmora, publisher of Zmora-Bitan Publishers, anticipated “a danger in the growing homogeneity of book lists” as a result of these trends. (Moskin, 1989, 27) Writers who are no longer being published in print must look for alternative places to have their work distributed. Some will turn to smaller presses. As was made clear in Chapter Two, many will consider publishing on the World Wide Web.

According to a report called “The Rest of Us,” there are 53,000 small and independent publishers in the United States, whose sales totaled $14.3 billion in 1997. (Kinsella, 1999, unpaginated) These publishers are a distinct group within the industry: they tend to publish far fewer books a year; they have smaller profit margins (leading to a more precarious existence); they do not have the advertising budgets of the conglomerate-owned publishers; and they often have a regional, rather than national or international, scope. Independent publishing houses will often put out books by authors who cannot get a contract with a major publisher (although with smaller print runs, less publicity and, for many of the smallest, far less distribution). For the most part, independent and conglomerate-owned publishers have different stakes in the migration of publishing to the online world. One area in which this is clear is the double-edged sword of opportunity and threat. Consider the possibility of publishing books online. This has the advantage of cutting out printing and distribution costs, a substantial saving. “A piece by Ken Auletta in the New Yorker pegged the cost of printing, binding and distribution at about 18% of the list price of a hardback – $4.50 for a $25 hardback. While this is probably about right for many publishers, even this number is high for a large publisher with reasonable efficiencies. 
In actuality, a well-run publishing company can expect to pay about 10%-15% of a hardback’s list price for manufacture and distribution – $2.50-$3.75 for that $25 hardback.” (Eberhard, 1999, unpaginated) Include the 55% discount which goes to the retailer (ibid), and the economic advantage seems clear. Despite this, online publishing has not been widely accepted by the mainstream: “Despite a range of initiatives in new media and considerable rhetoric, U.S. publishers have yet truly to embrace the potential of digital publishing. Attempts are sporadic, and lack a cohesive, clear strategy, business model, management structure or decision-making process.” (Abraham and Lichtenberg, 1999, unpaginated) This may be understandable for the larger publishing houses, which may not be willing to jeopardize their current profitability for an uncertain future (a cost/benefit analysis which echoes that of writers contemplating publishing online). However, smaller publishers could benefit substantially, and not only financially. The large publishers already have national, or even international, promotion and distribution, so the Internet holds little new for them in this area; the small publishing houses, on the other hand, could benefit greatly from the potential international audience which being online could bring them. While there is the carrot of increased profile and profits, the Internet also holds out the stick of increased competition. As we have seen, individual writers are, in effect, becoming their own publishers by putting their writing directly onto the Web. 
It is not uncommon to hear stories like this one: “Late last summer, [professor of creative writing at the University of Oregon Jon] Franklin put up Bylines, a pay-per-read site, on the World Wide Web to sell his own out-of-print books and make available other original work previously ignored by print publishers.” (Link, 1998, 7) One somewhat controversial report “claims that consumer magazines have lost 61 million readers in the past year, and it blames the Web for the loss of many of them.” (Bennett, 1999, unpaginated) As we have seen, the issue of quality keeps readers away from some self-published work on the Web: “The elimination of the publisher’s filtering/editing function is all too obvious in some of these endeavors...” (Solomon, 1989, 109) However, the fact that online distribution greatly decreases the cost of publishing, a saving which can be passed on to the reader, makes Web-based works more attractive to potential readers. Large publishers, especially those which are part of an entertainment conglomerate, are in a strong position to weather this competition, whether or not they ultimately employ the Internet themselves. These publishers have large enough profit margins that they can lose readers and still make money. In addition, where they are part of a conglomerate, the parent company can cover their losses while they adjust to the new publishing environment, and they have sources of revenue other than publishing (for instance, tie-ins with other media). Because their profit margins are much smaller and they do not have other sources of funds to fall back on, the independent publishers are in a much worse position to handle serious competition from the Web; if they lose many readers, they will quickly become unprofitable. Competition from the Web may already be hurting alternative newspaper publishers.

The functions of the alternative press are being usurped by thousands of independent on-line publications springing up all over the Web. The costs involved in setting up a Web page are minimal and the potential audience is immense, making the imperative of ambivalent alliances considerably less urgent. The gay community can get its message out without having to share space with punk rockers, and the alternative arts scene can represent itself without appearing next to advertisements for phone sex. In effect, the emergence of alternative news media on the Internet has made publications like The Village Voice and Hour all but irrelevant. (Friedman, 1997, 177)

Content is not the only way in which the alternative weeklies are being challenged by the Web. According to Richard Karpel, executive director of the Association of Alternative Newspapers, “alternative weeklies’ revenues from personal ads have slowed in the past two years, possibly as a result of competition from singles sites on the World Wide Web.” (Walljasper, 1997, 94) As with books, most mainstream newspapers are parts of chains, many of which are part of entertainment conglomerates, putting them in a much better position than independents to weather the competition developing on the Web. The inevitable conclusion is that small presses have much to lose if they do not embrace the Web, and even more to gain if they do. “‘I’ve said this before, but now I say it with even greater assurance of its truth: Any independent publisher that does not have some plan to make use of the Internet as a publishing medium, and not just a marketing medium, will likely not be around five years from now,’ said Aron Trauring, publisher of Maxima New Media and the Jewish Heritage Online Magazine.” (Link, 1998, 7) One aspect of the Web which publishers large and small are embracing is its usefulness as a means of promotion. For example, posting a first chapter online in a strategic place, such as the Alt.books.mysteries newsgroup, is often “10 times more effective than an ad in the subway,” claimed Greg Voynow, director of online marketing at Time Warner trade publishing. (Martin, 1999, unpaginated) Mystery novels seem especially well suited to promotion through interactivity: “Every month, Avon’s site...features a short mystery -- without the ending. Visitors to the site are invited to solve the crime and write their own conclusion or simply continue the story with more clues. 
Entries are sent via e-mail; the best one earns its writer three free Avon mysteries weeks in advance of their publication dates.” (ibid) While publishers are at the forefront of these efforts, individual writers may also develop initiatives which help promote their work.

Consider the case of author Lisa Scottoline. Months before her April HarperCollins release, Mistaken Identity, hit the shelves, Scottoline posted the legal thriller’s first chapter on her Web site and invited the entire world to take a crack at editing it. The established author’s unprecedented request for input received hundreds of responses and caught the press’s attention, earning Mistaken Identity valuable early exposure. More important, many of Scottoline’s online editors no doubt felt a sense of involvement with her book -- and it doesn’t take Hercule Poirot to see the sales potential of that. (ibid)

Allowing potential readers to take on writing or editorial functions radically changes the relationships between writer and reader and between reader and text. Mari Florence, publisher at Really Great Books, takes this even further. She

envisions a future in which some mysteries would span several media. For instance, a metropolitan newspaper could run a serialized mystery story every day and invite readers to go to one or more online sites for clues to solving the crime. The readers could determine the story’s outcome with the help of the online clues. Ultimately, the entire story could be published as a book -- which would, of course, be marketed and sold online. (ibid)

Aside from new forms of promotion, the Internet holds the potential solution to one of the most serious problems with the traditional publishing industry: the policy of returns. Publishers print a specific number of copies of a book based on their estimation of how many the public will buy. Bookstores ask for a certain number of copies of each book based on their estimate of how many copies they will sell. If these estimates are low, the bookstore will ask for additional copies; if the number of requested copies exceeds the original print run of the book, the publisher may decide on a second printing. If, however, the estimates are too high, and the bookstore cannot sell the number of books it has ordered, it can return the unsold books to the publisher, which must refund what the bookstore paid for them. These books are sometimes sold to remainder stores (the ones which frequently have a banner proclaiming “Big Book Sale” in their front window), which sell them to the public at deep discounts. Most often, however, they are simply destroyed. (While this destruction is extremely wasteful, it is absolutely necessary for the industry; if all of the books which were returned were to be put on the market at huge discounts, the ability of publishers to sell new books at the full price would be seriously undermined.) Returned books are a large part of the publishing industry. “According to statistics released by the Association of American Publishers, 1998 return rates dropped 5.1% from the previous year, to produce a new overall average return rate of 31.6%.” (Quinn and Baker, 1999, unpaginated) As it happens, the return rate did not drop because more people were reading books, but because, “publishers have dramatically scaled back first printings.” (ibid) Still, the important point to be made is that approximately one in three books which are published and sent to bookstores are subsequently returned and, for the most part, destroyed. 
When you take the bestsellers, like Danielle Steele or Stephen King, authors who can expect to sell out print runs of millions, out of the equation, the return rate for books may be closer to 50%. This represents a substantial unrecoverable cost to publishers, not only because of the cost of printing the books which are ultimately not sold, but also because of the cost of shipping and warehousing them. Online bookselling holds the potential of decreasing the rate of books returned to publishers, offering them potentially spectacular financial savings.
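The arithmetic quoted in this section can be pulled together into a rough sketch. The percentages come from the cited sources (an 18% manufacturing and distribution share and a 55% retailer discount from Eberhard; a 31.6% average return rate from the Association of American Publishers); the model itself is my own illustrative simplification, not an industry-standard formula, and the print-run figure is purely hypothetical.

```python
# Back-of-envelope hardback economics, using the figures quoted above.
LIST_PRICE = 25.00      # Auletta/Eberhard's example hardback
PRINT_SHARE = 0.18      # printing, binding and distribution (Auletta's estimate)
RETAIL_DISCOUNT = 0.55  # share of the list price kept by the retailer
RETURN_RATE = 0.316     # 1998 average return rate (AAP figures)

def publisher_take(copies_shipped: int) -> float:
    """Revenue left to the publisher after the retail discount, assuming
    returned copies are refunded in full and their manufacturing cost
    is unrecoverable (a deliberate simplification)."""
    sold = copies_shipped * (1 - RETURN_RATE)
    revenue = sold * LIST_PRICE * (1 - RETAIL_DISCOUNT)
    manufacturing = copies_shipped * LIST_PRICE * PRINT_SHARE
    return revenue - manufacturing

per_copy_print_cost = LIST_PRICE * PRINT_SHARE  # the $4.50 figure Auletta cites
print(round(per_copy_print_cost, 2))
print(round(publisher_take(10_000), 2))  # hypothetical 10,000-copy print run
```

On these assumptions, roughly a third of every print run is pure manufacturing loss, which makes plain why eliminating printing and returns altogether, as online distribution promises, is so attractive to publishers.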

Production and Design

Even before the advent of the World Wide Web, computers had a substantial effect on the production of books. As explained in Chapter One, since Gutenberg, each character on a printed page was represented by a metal block called type. To create a line of writing, a human being had to take a metal block for each letter and put it into a row. These rows were then stacked one on top of the other and placed in a wooden frame (and, ultimately, covered with ink and pressed onto a page). This was known as the setting of type, or typesetting. The only major advance in the process in the 500 years before the introduction of the computer occurred in the 19th century, when a mechanical device automated the selection and placement of type. This process changed in the 1970s, when optical typesetting machines, known as phototypesetters, were introduced into newspapers and publishing houses. With these machines, a type font was placed on a transparent plastic disc, which spun in a drum. An operator typed in the words which were to be set; a laser chose the letters from the spinning disc. The characters were flashed onto a chemically treated piece of paper, which had to be processed with chemicals, much as photographs are, in order to bring out the characters. This process was sometimes referred to as “cold type,” although it really didn’t resemble the “hot type” of the metal blocks in any appreciable way. Among the differences between the two systems: typesetting machines automatically justified lines, while justification with hot type had traditionally required the judgment of the setter; and using paper tape or another storage system allowed the setter to reset a block of type, making changes as necessary, with relative ease, while, with metal type, changes required the physical resetting of a line, since the medium had no storage capability. 
Cold type setting machines changed the nature of typesetting: what was once a skilled profession became little more than glorified typing. Desktop publishing systems, which became widespread in the 1980s, further changed the nature of print production. With these, words which had been entered into a computer could be set in type on the computer, where pages could be designed and from which they could be printed out. While this originally had to be accomplished by an exchange of computer discs, digital communications networks further streamlined the process: “As almost anyone in contact with the publishing world knows, electronic transmission of documents has become the norm in the ‘developed’ world. Authors e-mail manuscripts to publishers; publishers e-mail to editors who e-mail back corrected versions, and final copy is often e-mailed to the printers.” (Mermelstein, 1999, unpaginated) Because it was no longer necessary to key the words into the typesetting machine, a critical source of error was eliminated from the printing process. Furthermore, changing from regular type to italics, or changing the size or style of the type, became trivially easy. Producing professional-looking type was now a simple skill which anybody could acquire. As a result: “Typesetters and mechanical artists have vanished...” (Crawford, 1998, 27) In the 1980s, British typesetters, through a very strong union, entered into contracts with newspapers which required that they type into computers all material which would appear in the newspaper -- that is, everything they would have had to set in the old system -- even though it had already been typed into the computer by the writers. While this maintained employment levels for typesetters, it was an absurd make-work project which clearly showed how unnecessary the profession of typesetting had become.

A similar change occurred in the profession of design. 
With metal type, the typesetter was, by necessity, also the designer of the page (an example of Ursula Franklin’s holistic technology type). Phototypesetting systems essentially split the process into two distinct functions: the design of pages and typesetting. Designers would get the elements (graphics and photographs prepared for print – which required a new profession – in addition to type) and decide how to put them together. Members of another new profession – paste-up artists – would then take all the elements and put them on a board as per the instructions of the designer. (A photograph would be taken of this page, which would then be used to make a plate which, when inked, would be used to imprint the image of the page on paper.) This breaking down of the production process into discrete steps mastered by different individuals is a good example of how a technology changes from holistic to prescriptive. The introduction of desktop publishing had a more ambiguous effect on the profession of page design. Using programmes such as PageMaker or QuarkXPress, it became possible to take the type on the computer and place it directly onto the finished page. With digital scanning systems (and, increasingly, digital photography), all graphic elements could also be placed on the page on the computer screen. In fact, with some of the most advanced systems, the finished page is put directly onto the film from which the printing plates are made, the type never having been printed into hard copy (although more often the intermediate step of printing the finished page occurs). Paste-up artists, like typesetters before them, are no longer necessary. This is a great boon for individuals and small publishers “because the computer encourages the democratic feeling among its users that they can serve as their own designers. Anyone can experiment with type size or style when the computer provides the fonts and drops them into place at the writer’s request. 
Anyone can create and insert his or her own illustrations with the help of automated drawer programs. The new technology thus merges the role of writer and typographer...” (Bolter, 1991, 66) In larger publishing houses, part of the role of page designers has been given to editors, who can now develop the look of a page as well as working on the words. However, most professional organizations maintain a separate design department, partially because many editors balked at the increase in their workload, and partially out of recognition that “professional” design requires specific skills which are not likely to be found in generalists. Desktop publishing and online distribution have given individuals and small organizations powerful design and distribution tools. To the extent that they will be doing design themselves, their work will compete with that of professional designers. It is not clear, however, that they will displace the professionals; after all, larger organizations will likely always need their services. Moreover, the explosion of Web pages has led to a huge increase in jobs for Web designers. Since the design principles and tools are largely similar, it is not unreasonable to believe that many of the designers who can no longer find work in print will be able to find work in online publishing. At present, then, designers seem to have the best of both worlds: not only is there a lot of work for them in traditional print publishing, but new work is opening up in designing online publications, corporate Web pages and the like. Looking at Figure 5.1, which shows the current state of publishing, with traditional and Web systems existing side by side, we can see that designers could get work from both. Thus, today’s page designers could be expected to support Web publishing, since it expands their economic opportunities. This seems to be part of a transitional period, however. 
As publishing migrates to the Web, many commentators expect that fewer books will be published in traditional form (i.e., on paper). Thus, many of the design jobs on the left side of Figure 5.1, the side representing traditional publishing, will disappear in time. The question facing page designers will then be: is the number of jobs newly available online large enough to offset the number of jobs which will be lost in traditional publishing? The answer to this question will determine whether or not they support the new technology; all of a sudden, their stake in it is not so certain. Let us take this scenario one step further. The much-touted advantage of the Web is that it gives individuals the power to create and disseminate their own work. Imagine a Web swamped by hundreds of millions of personal pages, all of which have been designed by their individual creators. Just as reading and writing were necessary for citizens to participate in society in the 20th century, visual literacy will be necessary for citizens to participate in society in the 21st century, and will have to be taught in school alongside traditional literacy.1 In such an environment, “designer” as a separate category of employment will largely, although not entirely, disappear, since what were once specialized skills will become skills most people need to have. (The reason it won’t entirely disappear is that some people will be better at design than others, and some people and organizations will always be willing to pay for such superior skills.) In such a case, it will be in the interest of designers to oppose the use of the Web for publishing, since their livelihood will be decimated. Note that this is exactly the same technology, used in exactly the same way, as it was when designers first supported it. What has changed are the social structures around the technology. 
Of course, it is reasonable for a stakeholder group to act on clear short-term goals rather than nebulous, iffy long-term problems (even if, ironically, the technology which they support in the short term has the potential to destroy the group in the long term). For our purposes, it is important to note that the interest a stakeholder group has in a technology can change as the social structures around the technology change, even if the salient features and uses of the technology itself have not changed. This may seem speculative, but there is precedent for this process in publishing. Before Gutenberg, written knowledge was kept alive in scriptoria, places where manuscripts were literally copied by hand. The scriptoria were often monasteries, but commercial scriptoria also existed. When mechanical type was perfected by Gutenberg, the need for hand-copied books disappeared; within 100 years, the scriptoria had all but vanished with it. The monks presumably returned to spiritual pursuits. Many of the commercial scriptoria workers, however, simply transferred the design skills they had learned in that profession to the new profession of typesetter/printer (Katz, 1995 and Lehmann-Haupt, 1957). The copyists of that age must have gone through a process of assessing their interests similar to the one modern designers will go through. There is one further complication to this picture. Web surfers have the ability to control the fonts, type size and display of graphics on their computer screens; when text is saved and printed out, the computer user can add graphics and change fonts, type styles and so on. In essence, a second level of design occurs with material delivered online, a level at which the reader becomes the designer. Of course, this never happened with print books, since the reader bought the books in a fixed form. 
As we have seen previously, for a variety of reasons, most people do not read large amounts of text off a computer screen; they print it out before reading it. When a reader prints out material received by digital distribution, he or she effectively becomes a printer, with control of all of the design parameters. In the long term, this could have a devastating effect on the printing profession. It also suggests that paper producers will have to face changes in paper consumption over the long term, shifting from newsprint and other professional paper types to the standard 8 1/2 by 11 sheets which are used in the home.

Retail Booksellers

Like other stakeholders, retail bookselling outlets are not a homogeneous group. There are two distinct groups within this category: chains and independents. The chain stores (Chapters and Indigo in Canada) are usually housed in very big buildings, with a large number of books on a wide variety of subjects. Independent bookstores, by way of contrast, are usually small and often specialize in a specific type or genre of book. The two types of store are in direct competition for readers: “Today, the 800 or so superstores [in the United States] owned by Barnes & Noble, Books-A-Million, Borders and Crown control almost 50% of all bookstore sales. As a result, hundreds of independent bookstores have closed.” (Mann, 1997, 15) In Canada, Chapters owns 54 megastores and 210 mall stores. (Ross, 1999a, C1) As the chains expand the number of their stores, independents are closing shop. Independent bookstores in Canada which have closed in the last five years include: Albert Britnell Bookshop (Toronto), Sandpiper Books (Calgary), Bollums Books Ltd. (Vancouver), Printed Passage (Kingston, Ont.), Edwards Books & Art (Toronto) (Gibbon and Stueck, 1999, B1); Mary Scorer Books and Heaven Book and Art Cafe (Winnipeg), Food for Thought and Books Canada (Ottawa) (Ross, 1999b, C1); The Book Cellar and Ulysses Travel Books & Maps (Toronto) (Stoffman, 1999a, C6); and C. Johnson Bookseller (London, Ont.) (Strauss, 1999, B1). Most recently, Duthie’s Books in British Columbia was forced into a restructuring which was expected to lead it to close all but one of its regional chain of stores. (Dafoe, 1999, D4) A similar trend can be seen in the United States. In San Francisco, “Crown closed three years ago, as did Cottage Bookshop in Larkspur around the same time, and Books Revisited went belly up two years ago.” (High, 1999, unpaginated) Other closings include A New Leaf and New Albion Bookshop. 
(ibid) Since 1991, membership in the American Booksellers Association, which is largely made up of independents, has fallen from 5,200 to 3,300. (Gaudin, 1999, unpaginated) Reflecting on this trend, Celia Duthie, current owner of Duthie’s Books, commented, “Fifth Avenue in New York used to be such a fabulous place, lined with bookstores like Scribners, wonderful places. Now they are all gone and you’ve got a Barnes and Noble every three blocks. If that’s not fascism, I don’t know what is.” (Dafoe, 1999, D4) A large part of the chains’ advantage over independent bookstores is economic; because one buyer can purchase books for all the stores in the chain, they get volume discounts which they can pass on to consumers. According to Ross, the chain bookstores “squeeze publishers for discounts [they] can pass onto consumers, and keep the cash flowing through.” (1999b, C1) This is especially true of the bestsellers which make up much of the chain stores’ sales. In order to make their discounts as deep as possible, the chains work on slender profit margins: “Analysts estimate that Chapters’ net income for 1999 will be in the $10-million range -- not a high percentage on $500-million in sales.” (ibid) Independent retail outlets, by way of contrast, do not have access to the economies of scale available to the chains, and cannot discount books sufficiently to match the chains’ prices. Moreover, because of their size, the chain stores (especially the megastores) can stock far more books than any independent. A reader looking for a specific book might go to an independent specializing in that subject, but most readers just browsing will tend to go where there are the most books to consider. This has an effect on what books are available. Since independent bookstores owe their allegiance to a specific region, they often support the work of local writers. 
For instance, Bill Duthie, founder of Duthie’s Books, was described as “a pioneer who agreed to stock and sell B.C.-written books, and acted as unofficial author’s agent and salesman when required.” (Stueck, 1999, B4) Chain bookstores, whose buying decisions are made from a central office, cannot have the same level of commitment to local writers. (In fact, to the extent that they need national or international bestsellers to maintain their sales volume, they actively exclude small presses and local writers.) Concentration at the retail level goes hand in glove with concentration at the manufacturing level. As we saw above, publishers are looking increasingly for bestsellers from authors with reputations; booksellers exploit the blockbuster phenomenon by pushing books by authors with reputations (partially by giving them favourable positions on the floor, arranging signings, et al, but also simply by stocking far more copies of such books). This has changed what books are sold: “Between 1986 and 1996 the share of all books sold represented by the thirty top best sellers nearly doubled as retail concentration increased. But within roughly the same period 63 percent of the one hundred best-selling titles were written by a mere six writers — Tom Clancy, John Grisham, Stephen King, Dean Koontz, Michael Crichton, and Danielle Steele — a much greater concentration than in the past and a mixed blessing to publishers who sacrifice much of their normal profit, and often incur losses, to keep powerful authors like these.” (Epstein, 2000, 9) Because bookstores are looking for quick profits, they are returning unsold books to publishers at an increasing pace, even books with a substantial reputation for quality:

The Los Angeles Times, in its year-end list of the best books of 1999, called Morgan [a biography of J. P. Morgan written by Jean Strouse] ‘a riveting detective story and a masterpiece.’ Morgan was short-listed by The New York Times Book Review, The New Yorker, Time, Business Week, and other general-interest publications as one of the best works of nonfiction of 1999. But when these lists appeared nine months after Morgan was published, fewer than one thousand copies were on hand in the 528 superstores of the Barnes and Noble chain which, together with Borders Books, the second-largest chain, dominates the retail book trade. With Christmas a month away Barnes and Noble had apparently decided that in a year when millions of Americans were obsessed with the stock market, Morgan was nevertheless an unlikely Christmas gift. On the day the New York Times list appeared, copies of Morgan were no longer on display in Barnes and Noble’s branch four blocks north of the Random House building on Third Avenue. It was Strouse’s literary agent who visited the Third Avenue store that day, noticed the omission, and called it to the attention of the store manager, who ordered fifty copies. Thereafter, the chain as a whole restocked Morgan. (ibid, 5)

The rapid turnover of books in stores is likely to have two effects. In the short run, it will drive readers to the Web to find books that are no longer being stocked by the chains, even if they came out recently and had good reputations. Coupled with the way publishers are cutting back on the number of books they are offering, the other likely effect will be to drive authors to self-publish, perhaps some on the Web. In the past, publishers would nurture books in the hope that they would become profitable in the long term. Many twentieth century writers whose work was not immediately successful became respected as masters of their craft because their publishers kept their books on their backlists, including Samuel Beckett and William Faulkner. The current rate of turnover means that books which aren’t immediately profitable are allowed to quickly go out of print. “In 1999 some 90,000 books—many worthless, many others valuable—went out of print, according to the vice chairman of Barnes and Noble.” (ibid) As we saw in Chapter Two, a couple of the surveyed writers put work on the Web that had been allowed to go out of print; this practice may become more common. Online booksellers complicate the retail book market: “Canada’s independent booksellers are in survival mode, struggling to meet competition from Chapters’ wildly successful megastores and from new book sources on the Internet...” (Stoffman, 1999a, C6) The largest online booksellers have access to economies of scale similar to those of the retail chains, so they can afford similar discounts. Thus, although online booksellers are in direct competition with bricks and mortar book chains generally, the independent bookstores are the first casualties of the competition. Online booksellers have many advantages over physical booksellers.

John Gambrill enjoys browsing at his local bookstore, but when he wanted a specific tome for his son last Christmas, he was forced to search further afield. Even then, Nomads of Niger was out of stock at the larger bookstores in Vancouver, too. Finally, the retired engineer turned to an Internet bookseller and, almost immediately, the photo essay book turned up. (Strauss, 1999, B1)

Online booksellers can have far more books in their catalogues than physical stores, even megastores, can have in stock; this makes the Net a good place to look for something obscure. The books need never become unavailable because copies are not physically present in a bookstore. In addition, the Web is a good place to look for books by foreign publishers; by accessing their sites, readers have the opportunity to find books which are unlikely to be stocked by any physical retail outlet. Then, there is the convenience of shopping online: “It saves me a trip to the store, [and] it’s here the next day...” (Strauss, 1999, B9) Another advantage online booksellers have over physical retailers lies in the strange return policy of the publishing industry. As we have seen, “Many more books are published than there is retail space for, and few of us buy books anyway. So a publisher lets retailers return unsold copies to increase the chances that they can afford to carry new titles. Occasionally as much as half of a mass market fiction print run of half a million copies is returned and destroyed. With the ever-rising tide of new books, the average newsstand display time for a title is now around a month.” (Rawlins, 1996, 61) Because online retailers do not have to stock as many books (since they can order them directly from the publisher as order requests come in from readers), they do not have to tie up as much capital in buying books which ultimately will not be sold. Estimates vary, but some believe that online book sales will become quite substantial quite soon. “According to Cambridge, Mass.-based Forrester Research Inc., on-line book sales in the United States will reach $3-billion (U.S.) in 2003 -- 18 per cent of the retail market.
In Canada, Forrester estimates sales of about $200-million.” (Ebner, 1999, B9) According to Chapters Internet President Rick Segal, “on-line sales will generate as much as 15 per cent of Chapters’ sales within five years...” (ibid) A large part of the reason traditional booksellers are moving online is the competition from native online booksellers, which has been taking sales away from physical stores, to the point where it threatens the existence of some of them. One prediction is that by the year 2005, half of books sold will be sold online. (Wolf and Sand, 1999, 112) The dominant player in online book sales is Amazon.com, which was established in 1995. Estimates suggest that it currently has 10 million customers. (Stone, 1999, unpaginated) Given the international nature of the Internet, it should come as no surprise that the company’s customers come from 150 different countries. (Coffman, 1999, unpaginated) The number of books available from the online retailer is staggering: “Amazon offers a selection of over 3 million titles, including all 1.5 million English language titles currently in print, as well as everything listed in Books Out of Print. That’s more than 17 times the 175,000 to 200,000 titles available at your local Barnes and Noble or Borders and way more than the 20,000 to 40,000 titles you might expect at a typical neighborhood bookstore.” (ibid) The online bookseller’s sales can be impressive: “In the last 3 months of 1998 alone, Amazon shipped some 7.5 million books, CDs, and videos -- enough to fill a bookshelf 101 miles long.” (ibid) To the extent that the books it sells are not available at physical stores, Amazon.com could be increasing the readership for books. However, to the extent that Amazon.com sells bestsellers or other books which are available from (or can be ordered through) traditional booksellers, the company is taking sales away from them.
Online bookselling offers many advantages over in-person stores. One is purchasing convenience:

Amazon opens shop everywhere an Internet connection exists. Increasingly that means right on your desk at work and in your study at home and anywhere in the world you live. Best of all, it’s convenient. It waits patiently for you to come use it whenever you want, whether it’s 2:00 p.m. in New York, 11:30 at night in Bangkok, or 4:00 a.m. Sunday morning in Mexicali. You don’t have to do anything special to use it. You don’t have to get in your car, or find a parking space, or put up with surly, underpaid clerks. Just fire up your computer and click. (ibid) Another advantage pioneered by Amazon.com is payment convenience: “With the site’s 1-Click feature, once you’ve registered (which includes providing your credit-card number), Amazon recognizes you each time you visit and lets you order with a single click of the mouse. No lengthy shipping addresses to type in. No passwords to recall. For frequent shoppers, 1-Click saves time.” (Turner, 1999, unpaginated) Another technique Amazon.com pioneered was its affiliate program, through which other Web sites agree to link to it. With this program, any Web site which refers to a book can have a link to Amazon.com, where Web surfers can then buy the book. The creators of the originating page get a percentage of any sales which arise out of their referrals to Amazon.com. Any Web site could, in theory, be affiliated with Amazon.com: a nurse’s association could connect to it through listings of medical texts; chess sites through books on chess; clothiers through fashion books and magazines; and so on. The affiliate model “is now a proven way for booksellers to cooperate with other retailers or information purveyors to grow sales for everybody.” (Shatzkin, 1999, unpaginated) If it becomes successful, this model could be a boon for smaller producers and booksellers.
“An intelligent and informed specialist in business books, for example, might persuade some appropriate Web sites that it is a better choice for an affiliate relationship than Amazon. That is a new opportunity for independent booksellers.” (ibid) However, to the extent that Amazon.com has already established itself as the site to affiliate with, this must be considered an uphill battle. The theory by which many people judge Internet companies is that, because several intermediaries can be eliminated from the sales chain, they are in a better position than bricks and mortar stores to make money. Yet, despite some impressive numbers, Amazon.com, like many online companies, is losing money. “In the quarter ended March 31, Amazon.com counted net sales of $293.6 million, a 236% increase from the same quarter last year, before music and video sales were added to the site. It posted a loss of $61 million, compared with a $10.36 million loss a year earlier.” (Deck, 1999, unpaginated) The losses Amazon.com incurs on an annual basis are even greater: “Amazon ran a net loss of $75 million last year, and another $200 million could go out the door this year.” (Koselka, 1999, unpaginated) Nor is Amazon.com unique in this regard. “...a Publishers Weekly analysis of four e-retailers [including Amazon.com] that report their results show that online book sales rose 322% in 1998 to $687.1 million.” (Mutter, 1999, unpaginated) Yet, “The four e-retailers reported a combined net loss of $227.5 million in 1998, a 370% increase over 1997. Losses at bn.com increased 511% in 1998, while Amazon had a 384% increase. Fatbrain was the only service to report a faster increase in sales (256%) than losses (209%). Borders.com reported a 75% increase in its net loss last year, to $10.5 million. The four online bookstores had accumulated losses of $275.9 million for 1997 and 1998.” (ibid) There are many possible reasons for this.
One is the deep discounting of books, which means that the company sells many of its books at or close to a loss. Volume sales wouldn’t help, because the more books the company sold, the greater its potential losses. Moreover, in order to compete, other booksellers, including physical stores, would have to lower their prices to similar levels: “Amazon.com triggered an online bookselling price war last week when it announced that it would sell all New York Times bestsellers at a 50% discount... Within 24 hours, Barnesandnoble.com, Borders.com and Booksamillion.com all instituted bestseller discounts of at least 50%.” (Zeitchik, 1999a, unpaginated) While good for readers in the short term, this kind of discounting threatens the financial viability of many book retailers. Another possible reason for the losses of online booksellers is the amount they spend on advertising: Amazon.com “invested 22% of its total revenues on sales and marketing in 1998, while bn.com spent more money on its sales and marketing efforts, $70.4 million, than it earned in total revenues.” (Mutter, 1999, unpaginated) The theory is that losses today will pay off in the establishment of the biggest readership base tomorrow. As Candice Carpenter, head of iVillage explains, “This is a land grab. You want to put your stakes in the most valuable property you can as fast as you can because it’s not going to be there tomorrow.” (Fox, 1999, unpaginated) Despite not making a profit, Amazon.com is heavily supported by the market. In a single year, the price of Amazon.com stock went from $11 per share to $105. (Streitfeld, 1998, unpaginated) In early 1999, the company, which has never made a profit, was valued at $23 billion.
(Fox, 1999, unpaginated) In addition, it is diversifying its product base, with the purchase of Exchange, which runs bibliofind.com, which “has a database of more than nine million hard-to-find and rare books, and lists thousands of independent dealers and retailers,” and Musicfile.com, which “has more than three million items of hard-to-find recordings and music memorabilia,” (“Amazon buys into rare books, music,” 1999, unpaginated) and investments in Drugstore.com, an online seller of medications and consumer medical supplies, and Homegrocer.com, an online seller of groceries, among other companies. (Stone, 1999, unpaginated) The upshot of all of this financial wheeling and dealing is that Amazon.com is likely to be able to weather losses from the online sales of books for a long time. On the one hand, it will continue to have large cash infusions as long as the market is willing to support it while it continues to lose money. On the other hand, as bookselling becomes a smaller part of its overall business, the losses it incurs when it sells books become less important. This is bad news for independent booksellers, who cannot afford substantial losses. Even if they band together, they are not likely to be able to afford the discounts necessary to compete with the major booksellers. They are likely to find themselves shut out of substantial online sales, which will increasingly compete with them. In the long run, losses are not sustainable even for Internet companies. According to Stone, “no one doubts that there will be some consolidation of the crowded field [of online booksellers] in the months ahead.” (1999, unpaginated) In the physical world, this often means merging failing companies and combining their assets. However, to the extent that online booksellers have the same fundamental asset (a catalog of published books), mergers do not seem to offer any advantages. Some companies, then, will simply disappear.
Cutthroat competition between online booksellers is good for consumers in the short term since it drives prices down. However, when the smoke clears and a small number of booksellers are left, we can expect prices to rise, not only to stem future losses, but also to pay for past losses. Bookstores have a couple of different strategies for coping with online competition. Most chain megastores have a coffee and pastry outlet in their store (each Chapters, for instance, has a Starbucks coffee counter (Gibbon and Stueck, 1999, B2)). Chapters and Indigo offer “plush customer seating,” encouraging their customers to read books or magazines in comfort, as well as hosting events such as “the occasional string quartet.” (Southworth, 1999, B9) Independent store Bolen Books “holds singles evenings a few times a year as a way to draw shoppers, who get a 10-per-cent discount that night... A gift registry was launched recently that has been popular for Father’s Day.” (Strauss, 1999, B9) Duthie’s Books has been involved in “sponsoring readings, lectures and events such as Bard on the Beach, Vancouver’s summer Shakespeare Festival.” (Stueck, 1999, B4) These, and other efforts, are attempts to make going to a bookstore a social event. Segal claims that, “People go to a bookstore much like they go to movies or they go out to eat. It’s a destination...” (Ebner, 1999, B9) You will recall that existing media structures continue to flourish in the face of competition from new media only to the extent that they can establish advantages over the new media. Bookstores are trying to use their physicality – something no online bookseller will ever be able to duplicate – to their advantage.
However, the closure of so many independent bookstores suggests that the price at which books are sold will always be a critical factor in the success of book retailers, no matter what other advantages they offer, another factor which puts the independents at a distinct disadvantage. Another possible response to the challenge of online booksellers currently being explored by physical bookstores is sometimes referred to as on-demand printing. “Xerox’s idea is to set up sophisticated printer/binder and packaging units at various strategic points, just as many cities are now well-equipped with photocopying centers, usually at walking distance in busy areas. The copy would be downloaded into the unit from an on-line source and a finished, bound, and packaged book, brochure, pamphlet, or ‘special edition’ would come out at the other end.” (de Kerckhove, 1997, 114) The Xerox DocuTech 135 is touted as being able to print and bind a paperback book in a minute, a book which “looks identical to its offset brethren.” (Stoffman, 1999b, J10) Printing on-demand could solve many of the problems of the current physical distribution system. Returns could be minimized, for instance, since it would no longer be necessary to have a lot of copies of a large number of books in the store. Stores need never run out of copies of a book, since they could download as many as they need from their online source. Books need never go out of print, since storing digital copies takes far less space than physical copies (which would change the retail market in another way: “In the U.S., Lightning Press, owned by the wholesaler Ingram’s, has made a specialty of such [on-demand] reprints, undercutting second-hand bookstores as a source of out-of-print volumes.” (ibid)). Readers would benefit, since, “The market for regional and specialized artists should also expand. Today, these artists get limited shelf space because of the high risk of returns and warehousing costs.
Once their work is digitally stored, their books or records are effectively always in stock in any store with an on-demand system.” (McNish, 1994, B6) In effect, any bookstore could offer its customers any book available over the Internet; bookstores would have to compete, then, on price and such intangibles as quality of service, convenient location, et al. Many bookstores expect to be the strategic points at which printing on-demand will take place. The problem with this scenario is that there is no reason to believe that bookstores would have to be such central points. Relatively inexpensive printing equipment already exists; when the cost of binding equipment comes down enough, any corner store will be able to do on-demand printing. For that matter, as bandwidth increases and the price of hardware and software comes down, there seems to be no reason why individual readers will not be able to download, print and bind their own books. “As for the idea of bookstores printing books, I have to wonder why anyone thinks that there would be bookstores if that [personalized books without large print runs] were even to happen... In any event, if authors can reach their readers directly, then both bookstores and publishers will become obsolete.” (Crawford, 1998, 27) The final method by which bookstores hope to compete with online booksellers is to go online themselves. The Canadian Booksellers Association “has set up its own Web site, cbabook.com, billing it as Canada’s biggest bookstore. The site still needs fine-tuning, but members like [owner of Melfort Books and past president of the 1,300-member CBA] Ms. [GailMarie] Anderson are thinking of developing an on-line presence for their store, which could hook into the group’s site.” (Strauss, 1999, B9) A chain of linked pages could give independent booksellers a substantial online presence.
Moreover, a system whereby any store in the online chain could order books from any other store in the chain might help to counteract the advantage megastores have in their ability to stock a greater number of individual volumes. “‘The Net is probably the single most valuable access channel for independent booksellers,’ [Indigo founder and CEO] Ms. [Heather] Reisman said, noting they can expand their sales on-line...” (Ebner, 1999, B9) In another venture, Canadian independent booksellers have joined with Southam publishing to sell books online. “Consumers who visit the site will be able to order books from approximately 100 booksellers that currently have the capability to fulfill orders over the Internet, said Sheryl McKean, president of the Canadian Booksellers Association. Other CBA members who don’t yet have that capability will be able to list information about author tours, readings and other events on the site.” (Renzetti, 1998, C2) In the United States, “A group of 900 independent booksellers is banding together to jump into the e-commerce fray that made a Goliath out of Amazon.com and battered small-town Davids.” (Gaudin, 1999, unpaginated) But, of course, the major chains are also going online. Moreover, as we have seen, individual writers are bypassing the whole system by publishing directly on the Web. While online sales may seem like a good defense against online booksellers, and, as a consequence, may allow some booksellers to continue to exist as corporations, they may, at the same time, be undermining their ability to maintain their physical stores.
“One has to wonder, ironically, how long they, including Chapters’ own Chapters.com, will permit the existence of even Chapters’ own bookstores, with their more costly real-estate locations, staff that must do more than pick and pack, and the breadth of stock that customers expect them to have on hand.” (Greaves, 1999, D6) As the costs associated with physical stores increase, and as more people get comfortable with buying products online (and secure and effective payment systems develop), we can expect more book sales to migrate to the Web. The question is, how much will be left for physical bookstores?

Portals as Gatekeepers

Traditional media can be considered a response to the vast amount of information being produced and stored by human beings. Editors of newspapers decide which articles to run; book publishers decide which books to put out; record companies decide which artists to sign; and so on. In traditional media, organizations which serve as information filters are sometimes referred to as gatekeepers. Some think that gatekeepers assure the quality of work in a medium by not allowing “bad” work through their gate, but this is only partially true: a tabloid newspaper is a gatekeeper in exactly the same way as a “quality” newspaper; the publisher of romance fiction is a gatekeeper in exactly the same way as a publisher of serious fiction. Gatekeepers are not concerned with the difference between high and low culture, only with whether information contains the esthetic qualities which satisfy their requirements. Gatekeeping organizations establish brand identities; information consumers seek out these organizations because of the brand, and return to them as long as the information they supply fulfills consumers’ needs. According to Levinson, “The problem with the gatekeeper -- whether unavoidable in the case of mass media or optional in the case of online publication -- is that it cuts off the flow of ideas before the intended recipients, the readers, have a chance to select them.” (1997, 134) While this is certainly true, many people seem willing to accept this situation in order not to have to go through the process of searching through vast amounts of information themselves. The choice seems to be between an imperfect gatekeeping process which will give people some, but not all, of the information they could want and searching through a huge amount of information without any guarantee that they will find anything of value at all.
As we have seen, traditional media are attempting to position themselves as gatekeepers in the electronic world, using it to sell the information which they produce. However, the World Wide Web has also spawned native gatekeepers. These are known as portal sites. A portal site is a home page (defined here as the first page to come up on your screen when you connect to the Web through a browser2) which opens out onto a relatively self-contained site. One of the problems with attempts to make the Web a commercial medium is that a surfer can move out of a site with a click of her or his mouse. Portal sites, rather than being doorways into the larger Web, are actually meant to keep people within a single site. For instance, “[Search engine] Excite capitalized on that flaw -- the overwhelming volume of information -- by creating shortcuts for its users, logging the most popular searches, and organizing them into corresponding channels (lifestyle, sports, shopping, etc.). The hope was that people would stick around longer as they burrowed through the channels, eyeballing ads on the way.” (Robischon, 1998, 41) By expanding its content in this way, Excite did increase how long people stayed at the site: users viewed an average 39.5 pages in 1998, up from 29.3 two years earlier. (ibid) This meant a potentially substantial increase in advertising revenue, since each of these pages would carry ads. Some estimates suggest that portals will be big business: “...by 2003, portals are expected to grab 20% of all Web traffic and $3.2 billion in Web advertising dollars.” (“Analysts Foresee ‘Portal Melee’,” 6) Not surprisingly, given this trend, traditional media are investing in existing portal sites: “Snap is part-owned by NBC; and GO Network is jointly run by Infoseek Corp. and the Walt Disney Company” (Robischon, 1999, 80), or creating their own: “Knight-Ridder Inc. is preparing to transform its U.S.
newspaper Web sites into a national network of regional Internet portals, hoping to target advertising to people who use the sites as their first stop in cyberspace.” (Associated Press, 1999, B5) The Washington Post and The New York Times both intend to create portal sites. (ibid) Although many Internet portals began as stand-alone sites (for example, Compuserve), many search engines are currently morphing into portal sites. Search engines were once simply a single page which allowed computer users to search a database of Web links. Given the great need for tools with which to find information on the Web, it is understandable that “Search engines are the most visited sites on the Web, handling millions of requests a day, and can thus charge a premium for advertising banners.” (Rowland, 1997, 325) In a medium where revenues can be scarce, search engines are an important exception. “Search sites alone accounted for 35 percent of the $335.5 million total online advertising revenue in the fourth quarter of 1997...” (Robischon, 1998, 40) As the Web grows, the importance of search engines grows with it: “...Yahoo!’s numbers are impressive: while the average old-media audience size remains relatively flat, Yahoo!’s users have more than doubled in the past year.” (Behar, 1998, 48) Since they were already getting a large amount of traffic for their search engines, the companies which ran them decided to add content, more or less as we have seen with Excite. The strategy seems to have worked. In 1997, 4 million daily pageviews, mostly for its search engine, were logged by Excite; a year later, less than 40% of Excite’s 40 million pageviews used the search button (the rest used its channels). (Robischon, 1998, 41) While much of the material on search engines turned portal sites is original, a lot of it is also paid for by the owners of the pages being linked to.
“Most on-line consumers are probably unaware that the most valuable real estate on a [search engine guide] usually goes to the highest bidder rather than the best content provider.” (ibid) This is a serious problem for those who can only afford to be listed on the search engine’s database (which is usually free). “Surely being linked to the Yahoo home page -- as are Visa, Reuters, GIST TV, Travelocity and even Apartments.com -- offers a distinct advantage over competing services, subtly stashed, albeit alphabetically, several levels within the Yahoo bowels.” (Weiss, 1999, 30) Moreover, this practice corrupts the concept of the gatekeeper: readers choose publications because they trust the editors of the publication to give them the best possible information in the category to which the publication belongs. As Robischon points out, allowing sites to buy space on a search engine’s home page “not only reduces the variety of choices available to web readers, but creates an environment where profitability trumps editorial quality.” (1998, 41) There is a fundamental conflict between offering a neutral search service and paid content; “A powerful and credible classification system for the Internet must be divorced from preferred providers -- heck, they never even should have dated.” (Weiss, 1999, 30) This conflict could undermine the faith people put in the results of searches done through search engines which have become portals, although there is no evidence that this has yet happened.3 One other serious potential source of conflict arises out of the fact that traditional media corporations are buying into portal sites. “On My Yahoo!, for example, users interested in technology will find material from just two sources: the Reuters/Wired Digital News Service and Ziff-Davis Inc., publisher of Yahoo! Internet Life, PC Week, and other computer trade magazines.
What’s not disclosed is that Softbank Corp., the Japanese company that bought Ziff-Davis in 1996, also owns a 29.3 percent stake in Yahoo!” (ibid, 44) Whereas other portals merely favour paid content over unpaid content, Yahoo! seems to have taken this one step further by eliminating unpaid content entirely from some categories of its database. Were this kind of conflict of interest better known, it would completely undermine Yahoo!’s reputation as a neutral gatekeeper. Some see portal sites as a betrayal of the populist vision of the Web.

Those who envisioned a ‘networked nation’ of individuals -- reaching out to one another and forming online communities -- have sometimes been dismayed at the new portals and all-inclusive online environments that seek to be ‘sticky’ websites drawing ‘captive’ eyeballs. Those sites build on convenience (your homepage, your stock portfolio, your free e-mail) to bring everyone to one-stop-shopping locations -- and keep them there. But the creators’ vision was that the Web would encourage connections among diverse sites and collaboration among distributed communities, not draw a growing mass audience into ever fewer high-traffic sites. (Johnson, 1999, 86) For our purposes, it is important to note that individual Web page creators will likely have to ally themselves with a Web portal, losing control of some aspects of the presentation of their work (in particular, the placement of advertising), if they do not want to risk being shut out of the traffic which portal sites will generate. Ironically, although many individuals stated that they put their writing on the Web in order to avoid the gatekeepers of traditional media, this means they will have to negotiate publication with new, online gatekeepers.

Critics and Other Filters

Bookstores and online booksellers are useful if you know what you want to purchase. However, there is a question readers must answer before they get to this stage: how, from all of the material available, do they choose what they want to read in the first place? Even search engines do not answer this question, since readers have to know what they are looking for before they can conduct a proper search. Mechanisms must be developed which allow a reader to filter the vast amount of information available to her or him and find the material which is most useful. Some people think that filtering mechanisms will be the most important aspect of the Internet: “It is this plethora of content that will make context the scarce resource. Consumers will pay serious money for anything that helps them sift and sort and gather the morsels that satisfy their fickle media hungers. The future belongs to neither the conduit or content players, but those who control the filtering, searching and sense-making tools we will rely on to navigate through the expanses of cyberspace.” (Saffo, 1994, 74/75) A critical establishment has grown up around traditional media. This means reviewers for various mass media (radio, newspapers, television and magazines), but it also includes gatekeepers whose opinions are important enough to be reported in the press (the film executive, for instance, whose comments on trends in the industry get a lot of attention from the media, affecting what filmgoers look forward to viewing), authors of books on the media (including academics), et al. The critical establishment contains the tastemakers upon whom we rely to help us find works of art which we have a good chance of enjoying. Why do we put such faith in the critical establishment? 
According to Bourdieu, they have accumulated “symbolic capital.” As opposed to economic or political capital, “For the author, the critic, the art dealer, the publisher or the theatre manager, the only legitimate accumulation consists in making a name for oneself, a known, recognized name, a capital of consecration implying a power to consecrate objects (with a trademark or signature) or persons (through publication, exhibition, etc.) and therefore to give value, and to appropriate the profits from this operation.” (1986, 132) In a sense, critics create a brand for themselves; by comparing their opinions over time to our own experience of a work (as well as to the opinions of other critics, or information from other sources), we learn whom to trust and whom not to. Unlike a product brand, which sells the product associated with the brand image, the accumulation of symbolic capital allows whoever has enough of it to help sell cultural artifacts created by others to the public. “It is all too obvious that critics also collaborate with the art trader in the effort of consecration which makes the reputation and, at least in the long term, the monetary value of works.” (ibid, 135) There is a symbiotic relationship between critics and the medium on which they report. Still, the relationship is not as airtight as Bourdieu claims; people do go against the critical consensus. Some Jim Carrey films, for instance, have taken substantial critical drubbings yet drawn huge audiences; the critically acclaimed television series Homicide: Life on the Street, on the other hand, was never a popular success. As a general principle, however, it is clear that the critical establishment has great power to sway public opinion and help create an appetite for specific works of art. Some have suggested that new media will inevitably develop their own forms of critical filters. 
“With the explosive growth in electronic information, a whole new profession may develop -- people who find things -- perhaps they’ll be called ferrets. For those who want to rummage for themselves, there may be another new profession -- people who organize things. Maybe they’ll be called mapmakers. And everyone will need people who select things, distinguishing the good from the bad; perhaps they’ll be called filters.” (Rawlins, 1996, 55) Sometimes, descriptions of such filters can be quite fanciful: “Even if physical books were to vanish, I can’t help but believe that the craving for quality would create a new job title for the Internet -- something like Chief Digitrol (an abbreviation of digital controller, or perhaps a reference to the little troll under the bridge whose eager fingers would always be correcting, correcting, correcting the endless flow of virtuality).” (Crawford, 1998, 27) Since publishing on the Internet can be trivially easy, criticism will come from a greater variety of sources. “...if using the Internet continues to be analogous to drinking from a fire hydrant,” one writer suggests, “we will probably also begin to see library Web pages providing critical reviews of the literature prepared by librarians and their faculty counterparts in the fields and disciplines of academe. There is a crying need for such sifting mechanisms, and the library can address this need in a strategic way with the assistance of others in the academy.” (“Jim Williams: Librarians in the Cyberage,” 1998, 15) All of these positions essentially help people manage their time by directing their attention to the most important information -- a service which, as Esther Dyson claimed in Chapter Three, people would be willing to pay a lot of money for. It is also possible for individual readers to become critics. 
Amazon.com, for instance, runs reviews written by readers of some of the books it carries; this is in addition to reviews it carries by professional critics. Some people will always prefer to listen to professionals, assuming they have greater knowledge than laypeople. However, as Bourdieu rightly points out, being a critic is largely about accumulating enough power, enough symbolic capital, to have your voice carry weight with the public. Some readers may be fed up with mainstream literary criticism and want to read other critical voices, voices which speak more directly to them. It may be possible that a new set of critical standard-bearers will assert a new form of online authority, much as Bourdieu’s critical establishment has authority over traditional media: “Inevitably, some contributors will become more respected than others, either through superior communication skills, greater trustworthiness, or higher social standing. They will then implicitly set standards of acceptability for various things -- just as Variety sets standards for many movies and the New York Times Book Review sets standards for many books.” (Rawlins, 1996, 78) The difference is that, as long as the Internet remains accessible as a means of distributing the work of individuals, voices which dissent from the critical consensus will always have a forum. The expansion of critical voices is important. It offers the potential for increasing the public’s awareness of non-mainstream publications, since, “as every author who has been published in a small press knows (including the present author for some prior texts), such publications rarely find their way into reviews in the major organs, or onto the shelves of big bookstores. 
The business of publishing does not work that way.” (Levinson, 1997, 125) This will help readers to the extent that it enables them to find books which fulfill their needs but which existing channels of criticism would not have brought to their attention. This is particularly true of online publications. “Currently, no established trade review publication will review the new [electronic] book format. The standard line from publications like Library Journal and Independent Publisher has been that the magazines have a responsibility to serve the bookseller, and e-books are not sold through bookstores. This closed-door policy has been a frustration for both e-book publishers and authors.” (Link, 1998, 19) Trade publications are not the only ones; few newspapers or other mainstream publications devote space to reviewing electronic publications, and certainly not with any regularity. Until such time as print publications are willing to review online publications, the online world will have to do so itself. One must also be mindful of the danger that certain online arrangements can undermine the credibility of some criticism. As we have seen, online booksellers try to get a variety of Web sites to affiliate with them.

To take one example, nytimes.com offers a link to barnesandnoble.com next to its online book reviews, and The New York Times gets a piece of the action if anyone buys a book via that route. Because it’s The Times, we can be fairly sure that reviews aren’t skewed to help sales. But it has to be noted that the Times now has a financial interest in that book being reviewed that it didn’t have before. And it’s not coincidental that, as the Online Journalism Review recently reported, while most of the newspaper’s past articles are available online for just one year and can only be retrieved by paying a fee, the Times has made 19 years of book reviews available for free (with the Barnes & Noble ‘buy option,’ of course). (Effron, 1999, 56)

Of course, not every reviewing organization will be as conscientious as the Times; some may spike negative reviews, or choose books they know are more likely to be positively reviewed, or soften harsh reviews, all so as not to jeopardize their potential profits. Readers will never know such a bias exists, since it occurs in the editorial process before critical writing is published. (Perhaps it ultimately isn’t that important, since readers will stop seeking out critics who they believe are not giving them the best advice -- critics whose cultural capital is diminished by the practice -- something which might work to the advantage of smaller, independent critical voices.) Digital technologies also spawn their own filtering mechanisms. One is the web ring. As we saw in Chapter Two, a series of pages on a common theme are linked together; each page must carry an icon announcing the name of the ring to which it belongs, with buttons which will lead to the next page in the ring, the last page in the ring, a random page in the ring and lists of the next five pages, the previous five pages and all of the pages in the ring. “The trend is rapidly gaining momentum -- in January [1997], webring.com, a directory for Web rings, listed about 1,000 rings. By September, it listed 18,000, encompassing some 200,000 Web sites. Webring.com estimates that its number of ‘hits’ is going up at a rate of 22% per quarter.” (“Ringing in a New Web Strategy,” 1997, unpaginated) The advantage of Web rings to surfers is that if they chance upon a page on a subject which interests them, they can use the various facets of ring software to find other pages on the same subject. 
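The navigation the ring software provides -- next, previous, random, the next five -- amounts to treating the member pages as a circular list that wraps around at the end. A minimal sketch in Python of that structure (the class and the site names are invented for illustration; this is not a description of webring.com's actual software):

```python
import random

class WebRing:
    """A toy model of a web ring: member pages kept in a fixed circular order."""

    def __init__(self, name, sites):
        self.name = name
        self.sites = list(sites)  # ordered member pages

    def _index(self, site):
        return self.sites.index(site)

    def next(self, site):
        # The ring wraps around: the page after the last is the first.
        return self.sites[(self._index(site) + 1) % len(self.sites)]

    def previous(self, site):
        return self.sites[(self._index(site) - 1) % len(self.sites)]

    def random(self, site):
        # Jump to any other page in the ring.
        return random.choice([s for s in self.sites if s != site])

    def next_five(self, site):
        # The "next five pages" list, wrapping around the ring as needed.
        i = self._index(site)
        return [self.sites[(i + k) % len(self.sites)] for k in range(1, 6)]

ring = WebRing("hypertext-fiction",
               ["alpha.example", "beta.example", "gamma.example",
                "delta.example", "epsilon.example", "zeta.example"])
print(ring.next("zeta.example"))        # wraps to "alpha.example"
print(ring.next_five("alpha.example"))
```

The point of the circular structure is that a surfer can start from any member page and, by repeatedly pressing "next," eventually visit every page in the ring.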
In fact, some software allows readers to, in a sense, become their own critics: “They remember what you have bought and will send you nice little e-mails notifying you when new books have been published in your subject areas or by your favorite authors, or when the next book in the series you are reading has become available.” (Coffman, 1999, unpaginated) These kinds of filtering programs are simple, but they can be effective:

This whole column started from my experience while buying John Hagel III and Marc Singer’s new book, Net Worth, from Amazon. I bought that book because Amazon remembered about my earlier purchase of the [Nicholas] Negroponte book [Being Digital] and assumed that I would like Hagel and Singer’s book too. So Jeff Bezos and crew sent me a little message the next time I signed on to Amazon after Net Worth’s publication, telling me that it was available. (Chuck, 1999, unpaginated)

As it happens, there are much more sophisticated pieces of software which track not only your personal preferences, as indicated by your purchases, but compare them with a database of the preferences of others. These programs use mathematical algorithms to predict what will interest you; if a large number of the people who have bought the same three books as you have bought a fourth, the computer reasons, you likely will be interested in buying the fourth as well. Not only that, but such analyses can be made across media: if a large number of people who have bought the same three books as you have bought the same CD of music, you likely will be interested in buying the CD as well. (Of course, online sellers with a variety of goods, such as Amazon.com, are more likely to take advantage of this type of software.)

There is one disadvantage to using Web rings or the more sophisticated types of filtering software: because they are based on previous preferences, there is very little possibility for serendipitous discovery. When reading a newspaper’s review columns, it is possible to find a book not directly related to one’s interests which is nonetheless valuable. A common method which graduate students use to find texts for research (once the card or online catalogues have been exhausted, of course) is to look at books clustered in the stacks around a book which is relevant to the subject. (One other facet of serendipity is that general publications with articles on a wide variety of subjects allow readers to stumble upon articles they would not have thought they would be interested in, and, therefore, would not have sought out; if people can order information online by the article, this will further limit serendipity on the Net.) 
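The purchase-overlap reasoning described above -- if many people who bought the same books as you also bought a fourth, recommend the fourth -- is a simple form of what is now called collaborative filtering. A minimal sketch in Python, with invented customers, purchases and scoring scheme (one plausible approach, not a description of Amazon.com's actual algorithm):

```python
from collections import Counter

def recommend(target, purchases, min_overlap=2):
    """Recommend items bought by customers whose purchases overlap the target's.

    purchases: dict mapping customer name -> set of items bought.
    Candidate items are scored by the total overlap of the customers who bought them.
    """
    mine = purchases[target]
    scores = Counter()
    for customer, items in purchases.items():
        if customer == target:
            continue
        overlap = len(mine & items)
        if overlap >= min_overlap:  # only count sufficiently similar buyers
            for item in items - mine:
                scores[item] += overlap
    return [item for item, _ in scores.most_common()]

# Invented example: two buyers share titles with the target reader.
purchases = {
    "you":   {"Being Digital", "Net Worth", "Interface Culture"},
    "ann":   {"Being Digital", "Net Worth", "Interface Culture", "Release 2.0"},
    "bob":   {"Being Digital", "Net Worth", "Release 2.0"},
    "carol": {"Snow Crash"},
}
print(recommend("you", purchases))  # → ['Release 2.0']
```

The cross-media case the text mentions needs no change to the logic: the item sets simply mix books and CDs, and the same overlap scoring applies.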
Readers can adopt strategies for randomizing the results of applying their filters (assuming they value serendipity, rather than thinking of information in strictly utilitarian terms), but it may be that losing serendipitous findings is a price which has to be paid for being able to find useful information in the vast collection of information on the Net. All of this assumes a fixed text; hypertext adds one additional complication to the question of criticism. A critic will traditionally bolster her or his argument about the worth of a text with specific examples taken from the text. The reader can then compare his or her experience of the text to what the critic has described (among other things, this helps the reader determine if the critic’s opinion was valid, an assessment which is applied to the critic’s subsequent writing). Since a given reader’s experience of a hypertext may be completely different from a critic’s, the criticism may not adequately describe the reader’s experience. Criticism of hypertext or hypermedia is not likely to be definitive; potential readers will likely have to seek out a variety of critical voices in order to get a full sense of what to expect from a work.

Readers

To a large extent, the end users of a technology do not figure in narratives about technological development (Pinch and Bijker (1987) and Schwartz Cowan (1987) being notable exceptions). In some ways, this is understandable: much of the development of technology comes from research laboratories, and the choice of problems to work on and the assumptions researchers bring to the creation of new technologies are vitally important to their early stages of development. However, many technologies (especially computer-based technologies) go through successive stages of development, introduction into society and further development; the feedback model of technological development which I introduced in Chapter One is a good way of understanding this process. This means that end users, although always a recognizable stakeholder in any technology, now have to be recognized as having a direct influence on the direction a technology develops. In the present case, for instance, we have seen how computer users rejected “push” technologies, forcing the companies which wanted to use them to reassess their strategies for the World Wide Web. In fact, the interests of the stakeholders who are users of the technologies under study -- readers -- have been mentioned throughout this dissertation. For example, we have already touched on the fact that, “According to the National Writers Union, ‘income-producing self-publishing on the Net could be a great boon not only for freelance writers, but for readers as well. The ability to earn a living from online distribution of one’s work will encourage a wider range of writers to produce a wider range of materials for a wider range of audiences.’” (Godin, 1995, 189) The creation of new forms of filtering mechanisms, to use another example, benefits readers, who would otherwise have great difficulty finding useful information among the vast amounts of information on the Internet. 
And, of course, one of the central arguments in this dissertation is that the Internet has the capacity to allow every reader to be a writer, perhaps in a philosophical sense (as with hypertext), but certainly in a practical sense. I would like to add a few additional points about readers. In 1995, the Canadian Book Publishers’ Council, the Association of Canadian Publishers, the Canadian Booksellers Association and the Writers’ Union of Canada commissioned a three-part study called “Who Buys Books?” which contained some useful information. First, a couple of caveats: people who buy books are not always readers (some people buy books as gifts for others, for instance, or buy books to keep up an appearance of being well read), so the figures are only approximate. In addition, according to the first study, “Frequent buyers, representing 30% of the total market, account for 70% of all books sold. 61% of frequent buyers are female.” (Hushion, 1995, 2) This is, of course, the exact opposite of the Internet, where the majority of users continue to be men. (This disparity suggests that, as big as they are, online book sales won’t really take off until there are as many women on the Net as men.) In any case, for these two reasons, what is found in the study can only be suggestive, and should not be considered the final word on the subject. The first study divided buyers into three categories: frequent, occasional and infrequent book buyers. For frequent book buyers, who “are most likely to buy fiction,” (Hushion, 1995, 3) 63% said they bought their last book from a specific store because it was “conveniently located.” (Market Facts of Canada, 1995, unpaginated) This suggests that online booksellers have a tremendous advantage over retailers, since ordering from home, as we have seen, is more convenient for many people than going out to a store to buy a book. 
Thirty-five per cent of frequent buyers said that the last book they bought was at a store which did not specialize in books (a card shop, for instance, or a cigar store) because it had lower prices. (Market Facts of Canada, 1995b, unpaginated) In addition, of those frequent buyers who do not buy at bookstores, “52% said they did not because ‘Books are priced higher there.’” (Market Facts of Canada, 1995c, unpaginated) This is supported by an observation from the third study, an overview of studies of book buying habits in the United States: “Excluding remainders, used books, and book club books, close to half of all books sold fell into the [US]$3 to $7.99 price ranges, with an additional 18% under $3.” (Hunt, 1995, 4) This explains why price wars are so important to retail and online booksellers, and suggests that price competition will remain central. It also suggests that if online booksellers can establish a reputation for offering the lowest prices, they will gain a powerful psychological advantage over physical retailers. On the other hand, individual writers, without the overheads of even the smallest publishers, may be able to make a living selling their work for even less. The second study, which divides book buyers into two groups, regular and irregular, offers some hope for publishers: “Book buyers do not regard other entertainment media as competition with reading. It is more a matter of what mood they are in and what need they want to satisfy. Reading is solitary, for long trips, for bedtime. Music is something to enjoy while doing other things (including reading). Films are thought to be more of a group social activity.” (Environics Research Group, 1995, unpaginated) This suggests that new media will not take time away from book reading. This is supported by the further observation that “Almost all book buyers and over half of non-book buyers say they often buy books to receive as gifts. 
Many of the non-book buyers say they receive enough books as gifts that they seldom have to buy any for themselves.” (ibid) I have yet to hear of anybody who gives an online publication to another person as a gift; we can assume that this will always give physical books an advantage over online publications (although, if consumers can download, print and bind the books themselves, this may not necessarily bode well for bookstores). The survey offers some evidence to contradict this, however. “The most common obstacle to purchase” of a book, it claims, “was a lack of time to read [original emphasis].” (ibid) Thus, while book buyers recognize that different media serve different purposes in their lives, they also seem to be supporting the suggestion that media consumption is a zero-sum game, and that time devoted to reading books is decreasing. It is also worth noting that another important reason people give for not buying books is that they occupy too much space, a problem which is alleviated by the online delivery of digital texts. One other facet of book buying mentioned in the second Who Buys Books? survey is worth noting: “People prefer to shop at chain (62%), rather than smaller stores (15%), used book stores (10%), campus book stores (5%), corner stores (4%) or bargain bins (4%).” (ibid) This indicates rather strongly the precarious existence of independent bookstores. It stands to reason: as mentioned earlier, independents rarely have the economies of scale of the chains, and therefore cannot discount books as deeply, so book buyers, who, as we have just seen, are sensitive to price, are more likely to shop at the chains. The fact that used bookstores, which offer substantial discounts, are not high on the preference list of book buyers is an anomaly which can partially be explained by the fact that new books are a fetish commodity; used books, no matter how good their condition, are considered less worth having. 
Buying a used book as a gift (as we have seen, an important reason people buy books) is considered socially unacceptable, and the person receiving such a gift would feel slighted. More research on the relationship between book readers and electronic text delivery is needed. It should be clear, however, that readers are a stakeholder group which will have an important effect on the direction of this technology.

Like other sub-systems, it [technology] can be geared to certain social ends. The choice, determination and implementation of these ends and technological ventures which are linked to them are matters of general policy. Technology cannot escape the process of value judgment resulting from political struggles and orientations of society. (Hetman, 1977, 4)

If our thinking centres on the effect of technology on society, then we will tend to pose questions like, ‘How can society best adapt to changing technology?’ We will take technological change as a given, as an independent factor, and think through our social actions as a range of (more or less) passive responses. If, alternatively, we focus on the effect of society on technology, then technology ceases to be an independent factor. Our technology becomes, like our economy or our political system, an aspect of the way we live socially... It even becomes something whose changes we might think of consciously shaping -- though we must warn right at the beginning that to say technology is socially shaped is not to say that it can necessarily be altered easily. (Winner, 1985, 2/3)

...broad synthesizing descriptions of on-line culture overstate both the Internet’s homogeneity and its independence from off-line contexts. (Kendall, 1999, 68)

We are supported as scholars and faculty in great measure by the public purse, and, unlike most arts and humanities, the justification for the money is largely that our activities inform public action. More important, we know much that is vital to national decisions and ought, as citizens, to contribute our knowledge -- both detailed social facts and general social perspectives -- to public discourse. (Fischer, 1990, 50)

Chapter 6: Conclusion: Web Fiction Writers in Society

The Story So Far...

The word made digital means different things to different people. As we have seen, individuals who write fiction and make it available to the public are a diverse group: some are in their teens, others are in their sixties; they are somewhat geographically dispersed; their education levels vary; and so on. Given this diversity, it comes as no surprise that the range of stories which they produce is also diverse: from genre works (such as science fiction and fantasy) to serious literary fiction. It is also understandable that they use the World Wide Web to distribute their fiction for a variety of reasons: while many are simply looking to expand their readership, others either make or would like to make money for their efforts; while some (mostly hypertext fiction authors) are excited by what they see as the possibility of working in a new artistic medium, the rules for which have yet to be established, others (mostly collaborative fiction writers) see their work as a game played with other writers. The writers feel that there are some disadvantages to publishing online, such as the fact that without the editorial process of traditional print venues, online publishing is seen as “vanity publishing” (read: inferior), and, therefore, readers do not look to the Web for quality fiction. However, the writers obviously believe that the advantages outweigh the disadvantages. These include: not having to please book publishers or magazine editors in order to get published; the ease of online distribution, as opposed to self-published work in print, which is usually hand-distributed; the perception that online publishing is less expensive than self-publishing in print; and so on. Had the writers been the only social group interested in using the Web, the story would have ended there, with an uncertain but hopeful conclusion. However, other groups also have an interest in the medium. 
We saw that transnational entertainment corporations would like to make the Web another revenue stream for their branded, cross-media properties. In a sense, this means that the corporations and individuals are competing for the attention of Web users. Both groups face common problems with using the Web for profit: as the amount of information as a generic commodity increases, the value of any single piece of information approaches zero. The greater the amount of information available, the smaller the potential audience for any single bit of it; as we saw, audiences fragmented in this way make both traditional subscription and advertising models unworkable on the Web. I suggested that individuals, with lower overhead costs, need less revenue to make a profit; on the other hand, the branded content of the corporations may prove to be more attractive to potential audiences. Some corporate strategies are focused on trying to remake the two-way, interactive medium of the Web into a mass medium with minimal interactivity, since the companies have effective revenue models for mass media; “push” media have, to date, failed in this regard, but streaming video, Web TV and asymmetrical forms of distribution such as ADSL may yet change the fundamental nature of the Web, to the advantage of major corporate information producers and the detriment of individual producers. Between these two groups stands government, with its general mandate to mediate conflicts between various interests in society, and its specific mandate to promote national cultures. The Web poses many difficulties for governments which would act in this area: given its international scope, the effectiveness of national regulation is called into question; and because it combines media which have previously been discrete, the choice of which regulatory regime to apply is a matter of intense debate. 
As we shall see below, there is a danger that if governments attempt to apply regulations created for an existing medium to the Web, they will effectively reshape it, to the advantage, again, of entertainment conglomerates and the detriment of individual producers. Other specific government actions may affect the interests of various stakeholders. For example, while many writers expressed concern about unauthorized copying of their work (something made much easier by digital communications systems), initiatives on copyright pursued in national and international fora and courts have, for the most part, protected the interests of corporations, not individuals. In addition, as we saw, efforts at censoring online material would effectively silence the voices of some individual writers. I concluded by suggesting that the best way for national governments to pursue the promotion of their national culture would be to support the work of individual artists. An overview of some existing Canadian policies -- which focused on digital art as either part of an overall industrial strategy or something which could be added onto existing arts support programmes -- suggests that they are insufficient to achieve cultural policy goals through the Web. Finally, I considered some of the other groups which could be said to have a stake in online publishing. The fact that print publishers are moving away from mid-list books in search of the next blockbuster, for example, means that many writers who might once have expected to be published in print will be no longer. This trend in publishing is exacerbated by trends in bookselling, most notably the heavy promotion of blockbusters and the increasing speed with which books are returned to publishers, which gives less immediately popular work less opportunity to find an audience. 
Some of the writers who are thus disenfranchised may decide to publish on the Web. This would increase the competition for readers among writers, but it could also increase the legitimacy of the Web as a publishing medium, resulting in more readers going there specifically to find fiction. Then, there is the reader. In their survey responses, few of the writers could give a clear indication of who read their work, which suggests that people are not currently using the Web to access fiction. However, the survey examined in the last chapter showed that price was an important consideration for readers, which could have important implications for Web publishing, given the cost savings made possible by disintermediating much of the publishing industry through this new technology's ability to distribute text.

Taken as a whole, in this dissertation I have tried to show that individual writers are not isolated in their efforts to use the World Wide Web to distribute their work; in fact, they are part of a much larger web of individuals and organizations influencing the shape and uses of this new technology. I would like to explore some of the implications of this in the balance of this work.

Media Theories Revisited

In the first chapter, I suggested that neither technological determinism nor social constructivism was sufficient in itself to explain all of the aspects of the relationship between society and technology, and suggested that both were parts of a larger process of technological change which has been called “mutual shaping.” In Chapter Five, I looked at the stake of page designers over time, and showed that it could change not because the technology changed, but because the social structures around the technology changed. Neither determinism nor constructivism alone would be able to explain this phenomenon.

This isolated situation is not the only argument which favours mutual shaping. In Chapter Three, different forms of Web technology were explored, including “push” technology, WebTV and asymmetrical signal distribution. My argument was that these forms of the technology would change what individuals could do on the Web; in the worst case, they would no longer be able to create and upload their own work. In such a case, the formation of communities of individual creators on the Web would be next to impossible. Notice that this is a deterministic, rather than a constructivist, argument. I would not be able to make this case if I looked at the stakeholder groups alone. Why have no other constructivists encountered this problem? I would suggest that it is because one of the original aspects of the current study is that it looks at a technology which is still highly contested, whereas previous constructivist studies were of technologies which had already achieved a high degree of stability. If a technology is stabilized, then what happens after its widespread dissemination into society is moot, since its social effects are, in a sense, an inevitable consequence of its adoption.
If, on the other hand, you are looking at technological change from the inside, that is, while the form an artifact takes is still being contested, the effects of the various forms the technology can take matter. Nobody can predict the future. Nonetheless, it is possible to anticipate some of the social effects of a technology before it is introduced into society. Before stability occurs, individuals and society have choices; in weighing those choices, we must consider the foreseeable outcomes for the stakeholders involved. We must combine constructivist and determinist considerations to decide on personal technological use as well as political policy.

One of the potential pitfalls of social constructivist research is the temptation to write the history of a technology with the knowledge of the form in which it finally stabilized. This can lead to linear histories that simplify the conflict over the shape of the technology and lend its stable form a kind of inevitability. One of the advantages of studying a technology which hasn't achieved stability is that the diversity of relevant social groups, their visions of what the technology should be and the conflicts between them become apparent in all their messy, human contingency. One of the disadvantages, however, is that, because there is no clear guide to which groups will be relevant to the stability of the technology, the researcher must cast a wide net when defining which groups may be relevant. This is most obvious in Chapter Five, which contains my best guess as to which stakeholder groups will be involved in the emergence of the Internet as a publishing medium. The involvement of some of the groups may be decisive; the involvement of other groups may be irrelevant. We will not be able to say definitively which groups are which until the technology has stabilized.
My approach in this dissertation has been to identify relevant social groups, their stake in the technology, and how their view of the technology might affect the stakes other groups have in it. This appears in statements of the general form: “[STAKEHOLDER A] wants [TECHNOLOGY X] to develop in accord with [INTEREST a], but [STAKEHOLDER B] would be affected because of [INTEREST b].” Thus: writers [A] would like to use the World Wide Web [X] to distribute their work in order to reach more readers and perhaps make money [a], but this would mean that traditional print publishers [B] could lose much of their existing market [b]. Notice that this is not a predictive statement; I am not suggesting that the interests of one or the other stakeholder will ultimately determine the direction of the development of the technology. The main advantage of stating the interests of various stakeholders this way is that it foregrounds the contested nature of the technology by making clear how the visions of a pair of stakeholders differ. One aspect of the general form of the statement is that it is commutative: it would work just as well (although the meaning would be somewhat different) if Stakeholder B's interest were stated first and Stakeholder A's second. Another aspect of this approach is that it can also be used to describe conflicts over technology where closure has been achieved, even where the technology has failed to succeed. For example: entertainment conglomerates [A] tried to use “push” technologies on the Internet [X] because they thought they could make money from them [a], but individual users [B] did not accept push technologies because they preferred to search the Web for information they wanted rather than have information they may or may not have wanted thrust upon them [b]. Of course, this kind of statement is a simplification of a complex reality.
However, it is useful for summing up the relationship between the interests of a pair of stakeholders in a given technology. This kind of statement does have an inherent problem: it makes it appear as if technology is determined by the outcome of a single conflict between two stakeholders. We may come to the conclusion that the creation of technology involves a tug of war, with each side pulling in a different direction, the outcome of which is determined by the relative social and/or economic power of the stakeholders. Yet, throughout this study, I have stressed that the stakeholder groups in publishing on the World Wide Web are numerous, each with its own understanding of how the technology should develop and what it should be used for: its own technological frame. Rather than a single line, a better graphic representation of this situation would be a vector geometry graph. In vector geometry, several lines of varying lengths and directions appear in a single graph; summing the lines requires taking into account not only their lengths but also their directions. This is analogous to the present situation, where a large number of stakeholder groups are pushing for the development of the technology along a variety of lines. To understand the larger picture of technological development, therefore, it is necessary to look at it as a series of statements about the competing interests of different stakeholders. Thus, to what has already been written in this section, we would have to add that publishers, whether individual or corporate, [A] who use the World Wide Web [X] to distribute their work [a] may cause a decline in the use of printing presses [B], which stand to lose substantial work and, therefore, revenue [b]. We could also add that online booksellers [A] selling through the World Wide Web [X] hope to reap substantial profits [a], which seems to be adding to the financial burdens [b] of real world independent bookstores [B].
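The vector analogy can be made concrete with a brief numerical sketch. The following Python fragment is a toy illustration only: the stakeholder names, "pull" strengths and directions are hypothetical labels chosen for demonstration, not measurements of actual social or economic power. It simply shows that a resultant depends on directions as well as lengths, so no single pairwise tug of war determines the outcome.

```python
import math

# Hypothetical stakeholder "pulls": (name, strength, direction in degrees).
# These values are illustrative assumptions, not data from the study.
stakeholders = [
    ("entertainment conglomerates", 5.0, 0.0),   # toward a mass-media Web
    ("individual writers", 2.0, 180.0),          # toward an open, two-way Web
    ("governments", 3.0, 90.0),                  # toward a regulated Web
]

def resultant(vectors):
    """Sum vectors by components; return (magnitude, direction in degrees)."""
    x = sum(s * math.cos(math.radians(d)) for _, s, d in vectors)
    y = sum(s * math.sin(math.radians(d)) for _, s, d in vectors)
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

magnitude, direction = resultant(stakeholders)
print(f"resultant pull: {magnitude:.2f} units at {direction:.1f} degrees")
```

The point of the sketch is that changing any one entry, or adding a new stakeholder, shifts both the strength and the direction of the resultant; the shape the technology takes is the sum of all the pulls, not the winner of any single contest.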
And so on. (A summation of the major relationships between stakeholders developed in this dissertation is provided in Chart 6.1.)

Another aspect of social constructivist theory which requires revisiting is the assignment of actors to stakeholder groups: at some level, this must always be an arbitrary process. When I began the current study, for instance, the subject I thought I would be looking at was “fiction writers who put their work on the World Wide Web.” It soon became apparent, however, that this was not a homogeneous group; at the very least, it consisted of sub-groups of people who put their work on their own Web pages, people whose work appeared on the pages of electronic magazines and people who wrote hypertext fiction. These groups do not have the same set of interests in the new technology. Most of the individual Web page creators, for instance, have also had their fiction published in print; they see the Web as one more venue for their work. Hypertext authors, on the other hand, could not create their works in other media (early print experiments in hyperfiction notwithstanding); for them, computers are not a convenience, but a necessity. In a similar way, we can see that individual Web page creators require different skills (coding in HTML, uploading material to the Web, etc.) than writers whose work is published in an ezine (who need know no more than how to email their work to the zine editors, who are then responsible for designing the pages and putting them on the Web). All three sub-groups are united by a common goal (publishing fiction on the Web), yet have different investments in the technology (or, as Bijker might put it, define the technology in three different ways, seeing, in essence, different technologies). Nor does the process stop there.
People who put fiction on their own Web pages can be divided into two groups: those who are mostly interested in developing a readership and those who are mostly interested in finding a way to make money from their work. It is in the best interests of the former group to keep the Internet as open to individual contributions as possible; it may be in the best interests of the latter group to accept some form of corporate control if they can use the economic models the corporations come up with for their own profit. Moreover, some people would like both more readers and increased revenue, goals which, with some forms of the technology, may not be entirely incompatible. Thus, there are at least three different sub-sub-groups within this sub-group, each with its own interests which lead it to define the technology in different ways. We could go further. Within the sub-sub-group which wants to use the Web to profit from their fiction writing, there are those who hope to be able to do it within the existing technology (i.e., by putting a chapter of a novel on the Web and asking those who would like the rest of the novel to pay with their credit card) and those who feel that a more sophisticated economic model is necessary. The former group is less likely to go along with corporate economic schemes than the latter. My survey of fiction writers on the Web effectively stops there, but I suspect that, if we had enough information, we could keep subdividing stakeholder groups into smaller and smaller units until we inevitably reached the level of the individual. Of course, research on all of the 80 million or so individuals currently on the Internet would tax the resources of even the most well-endowed research institutions!
My purpose in pointing this out is not to invalidate the stakeholder model, but to stress that we must always probe how stakeholder groups are defined to ensure that statements about their interests have some validity.

The Use of Description

As the reader will have noted, this dissertation is primarily descriptive, explaining what people are doing, the reasons they give for doing it, who they are, and so on. The proper relationship between description and theory is a matter best left to the individual researcher; as Becker said, “The appropriate ratio of description to interpretation is a real problem every describer of the social world has to solve.” (1998, 79) However, the truth is that the academy values theory over description; theory is felt to be the proper path to a deeper understanding of real world phenomena. It is necessary, therefore, for me to justify my largely descriptive approach.

I would like to start by observing that computer mediated communications is a new phenomenon in communications history with unique features. While the former point is obvious, the latter is not, since most of the people who write about the Internet apply existing theoretical constructs to it. If the Internet were simply an extension of an existing communications medium, this would be unproblematic. However, as implied throughout my dissertation, the Internet is developing into an extension of all existing communications media (see the variable-to-variable model I develop in Chapter Four).

Applying existing theory to this new medium, as I argued in Chapter One, will necessarily distort our understanding of the medium by accentuating some of its features and downplaying others. Moreover, one of the main problems with the way governments approach the Internet is that they are trying to fit it into existing communication models, leading to attempts at regulation which are doomed to fail (as I showed in Chapter Four).

Before we can go too deeply into theories about how the Internet functions, then, we need some empirical evidence about what the Internet actually is. In the absence of this, researchers are like the blind men trying to describe an elephant; each may have a perfectly workable theory about their small part of it, but none are able to grasp the whole. This results in the Internet becoming a Rorschach test for researchers: if you look hard enough, you will likely be able to find some area of online communications to support your theory. Great for the academy, perhaps; not so useful for people who are trying to understand the medium (including businesspeople and legislators).

In addition, as mentioned at various points in the dissertation, the World Wide Web is in a constant state of flux:

The replicability of CMC field research is difficult, if not impossible, for two main reasons. On a technological level, the Internet is permanently changing its configuration and supporting technology. The underlying networking protocols cannot guarantee the same conditions when replicating experiments simply because each time the path of information communication is unique; thus, the time delay and consequences connected with it are different. On a communication level, the difficulties in replication come from the creative aspect of language use. (Sudweeks and Simoff, 1999, 38)

For this reason, researchers on digital technologies have to be especially careful to provide detailed descriptions of their subjects; without such descriptions, it soon becomes impossible to determine whether their theories accurately explain the phenomenon under study.

The combination of the newness of the medium and its ephemerality suggests another reason that current studies should be heavily descriptive: they should be seen as the foundation on which theoretical constructs can, in the future, be built. “A definitive history of the Internet of our times is decades from being written. The various perspectives being written now are the basis on which we build this history.” (Costigan, 1999, xx) Becker made the same point in a different way: “I worked my way through graduate school playing piano in taverns and strip joints in Chicago. Should ethnomusicologists study what every tavern piano player (the kind I was) plays in all the joints on all the streets in all the world's cities? No one would have thought it worthwhile to do that around 1900, when a definitive study could have been done, say, of the origins of ragtime. But wouldn't it be wonderful if they had?” (1998, 74)

A related reason for doing descriptive work is that it can upset the cozy assumptions which develop around a subject. “What does all this description do for us? Perhaps not the only thing, but a very important one, is that it helps us get around conventional thinking. A major obstacle to proper description and analysis of social phenomena is that we think we know most of the answers already.” (ibid, 83) For instance, many people believe that fiction on the World Wide Web is dominated by science fiction and fantasy, the literature of the young, technically literate people demographers tell us make up the majority of Netizens. In this dissertation, I devoted many pages to describing the wide variety of stories actually available. I could simply have said that the general impression was wrong, but I believe the evidence I have gathered is a much more eloquent argument.

Resolving Conflicts

To this point, we have looked at the various stakeholders and shown that their perceived interests often conflict. Theory, as well as our own experience, suggests that technological artifacts do achieve a form of stability. Perhaps the most important question open to us is: how do we go from an initial state of conflict to a state of stability? One answer is enrolment.

“This describes the process by which a social group propagates its variant of solution by drawing in other groups to support its sociotechnical ensemble. More than in the other configurations, the success of an innovation will here depend upon the formation of a new constituency -- a set of relevant social groups that adopts the technological frame [note omitted].” (Bijker, 1995, 278) To enroll somebody in your stakeholder group, you must convince him or her to align his or her interests with your own. The main tool a stakeholder wields to enroll members of other groups is rhetoric; using a variety of arguments, the stakeholder must convince members of other stakeholder groups that it is in their interest to adopt the technology which the original stakeholder wants.

Thus, when corporations felt it was in their interest to develop “push” technologies, they had to find a way to enroll Internet users, to get them to adopt the technology. The corporations promoted the technology as a means of diminishing information overload: simply sign up with a push service, tell it what you are interested in, and you will get the information you want delivered to your desktop. No more frustration surfing the Web for hours to find the one piece of information you need. The Internet users who did sign up for these services had been enrolled in the corporate framework for understanding the technology. However, the majority of Internet users did not sign up. They valued their ability to search for information themselves, and resented the intrusive nature of the technology. You could say that the technological frame through which they perceived the Internet could not be enrolled into the frame of the corporations. So, in order to further their goals, the corporations needed to find a new stakeholder group which could be enrolled into their technological frame.
Bruno Latour suggests that one way to enroll people in your frame is to invent new goals (1987, 114) and use them to develop new stakeholder groups (ibid, 115). This process is beginning to happen around Web TV. Before its invention, people would watch television (more or less) passively. Web TV creates a whole new experience by making television interactive. A general dissatisfaction with television notwithstanding, there is little reason to believe that anybody actually wants or needs interactive television. Nonetheless, if the creators of Web TV can convince enough people to enroll in their technological frame, a whole new set of stakeholders will emerge: interactive television watchers (as opposed to more passive television watchers). If this happens, it will be because the switch in technological frame from passive to (somewhat more) active television viewer is not as great as the switch from active to passive computer user (which was necessary for the success of push).

Latour also argues that it is not sufficient to enroll people into your technological frame, since people are by nature unpredictable, and may use the technological artifact you have created in ways that you had not expected or intended. It becomes necessary, then, “to control their behaviour in order to make their actions predictable. [original emphasis]” (ibid, 108) Rhetoric is not necessarily the best way to accomplish this, since, once people have an artifact in their hands, they no longer have to listen to what its creator says about it. (How many people read the manual? Really.) The best way to control the behaviour of those who take up a technology, of course, is to design the artifact in such a way that it constrains the actions of its users, limiting them to using it the way the dominant stakeholder group intended.
As we shall see, this occurred when radios for public consumption were designed without transmitters; it was in the interest of the corporations which benefited from the development of commercial radio. This is also true of some of the more extreme methods of changing the Internet.

The resolution of conflicts between stakeholders has many dimensions. A stakeholder group that contains a small number of individuals is more likely to develop a united strategy for advancing its interests than a stakeholder group with a large number of individuals, whose interests are not likely to be as homogeneous, and whose efforts, therefore, are likely to be fragmented. In the present case, for instance, there are two broadly defined groups whose interests are largely in opposition: large entertainment conglomerates and individual Web page creators. The entertainment conglomerates have a common goal (to maximize profit for their shareholders), and, given a common corporate culture, can be assumed to come up with similar methods of achieving this end with a given technology. As we have just seen, individuals have a much greater range of interests, and can be expected to follow different paths to achieve their various goals, some of which will be aligned with the interests of the corporations.

There is precedent for this type of analysis. As we shall see, the fate of radio was contested in a manner similar to that of the Internet. Two broad stakeholder groups were in conflict over the way the technology should develop: emerging entertainment corporations which felt that there was profit to be had in the new medium, and public rights groups (representing unions, religious organizations, educational organizations and the like) which felt that the airwaves should be employed for the larger public good.
As McChesney (1993) showed in his exhaustive work on the subject, those who wanted the airwaves to remain publicly accessible felt that there were several different ways of assuring this, thus pulling the movement in a variety of directions. Had they presented a united front against the corporate interests, there is no guarantee they would have prevailed, of course; however, being fragmented made it more difficult for the group to achieve its goals.

One caveat to this general rule must be noted. The failure of push technologies shows that small, relatively homogeneous groups do not always prevail. As it happened, the ethos of sharing information united the much larger community of computer network users against push technologies. So, while it is generally easier for a small number of stakeholders to agree on a strategy to advance their interests than for a large number to do so, a simple comparison of the size of stakeholder groups is not a sufficient way of determining the relative effectiveness of their strategies.

In Chapter Three, I argued that entertainment conglomerates continued to develop technical methods of turning the Internet into a form of television after the demise of push technologies because digital television offered them the best hope of imposing a workable economic model on the Internet. In the current discussion, we can begin to see a complementary reason for them to continue to pursue this line of research. Approximately 50 million North Americans use the Internet; however, over 340 million people live in the United States and Canada. Thus, there are somewhere in the neighbourhood of 290 million people who do not use the Internet, a huge potential market. Many of these people are too poor to be able to afford computers and monthly Internet access fees, of course.
Many of these people have little experience with the technology, and are uncertain how it will benefit them. By making the Internet more like television (by, for instance, adding a set-top box which allows Web access through a person’s TV set), entertainment conglomerates hope to make it comfortable for people who do not currently use it by making it analogous to a technology which they already use. This is in line with existing notions of how technology is adopted throughout society:

Early adopters are assumed to be different from late adopters in their willingness to try new things or make changes in their lifestyles. Previous research into the diffusion of new technologies shows that early adopters are different from early and late majority adopters of a technology in their appreciation of the technical aspects of innovation... This makes late adopters much more susceptible to the influence of brand-style marketing, where advertisers attempt to create positive personality or images with which consumers will associate their product [note omitted]. (McQuivey, 1997, 7)

In particular, late adopters are more likely to use borrowed expectations, knowledge of how existing media are structured and used, to describe new, less familiar media (ibid, 5). To some extent, their existing expectations determine how they use the emerging technology, rather than the new possibilities for communication which it creates. In this way, the entertainment corporations hope to bypass current users entirely and sell their vision of the Internet (which remains in their control) to a stakeholder group who, because they have little experience of it, have no emotional or intellectual investment in keeping the Internet open as a medium for individual, two-way communication. Again, this is not dissimilar from the efforts of early radio entrepreneurs to enlist individual listeners to their cause by arguing that government regulation of the airwaves (which would be largely to the entrepreneurs' benefit) would benefit listeners because they would be able to hear clear signals. Most listeners were not interested in transmitting, and therefore did not even know what the “convenience” of a regulated system would cause them to lose.

This suggests another general rule. A technological artifact makes a long journey from a gleam in an engineer's eye to something which is diffused throughout society: it may start as an academic's theory; prototypes must be designed, built, tested and then redesigned, rebuilt and retested, and so on; once a workable model has been achieved, it must be mass produced; distribution networks must be opened; and demand for the artifact must be generated. Access to the early parts of this production stream gives stakeholders an advantage over those who only have access to the later parts. Those who have access to the laboratories have the power to create the artifact, whereas those who have access to artifacts only after they are distributed have only the power to accept or reject them.
The failure of push technologies suggests that acceptance or rejection can be a strong power; however, as we saw in Chapter Three, those who fund the research, and, therefore, control the research agenda, simply keep developing technologies which advance their interests.

One of the most important aspects of control of the earliest parts of the development stream of new technologies is the ability to direct research funds and efforts by defining problems with existing technologies. According to Hughes, “A salient is a protrusion in a geometric figure, a line of battle, or an expanding weather front. Reverse salients are components in the system that have fallen behind or are out of phase with the others.” (1987, 73) Poorly designed motor systems became a reverse salient in automotive design when public environmental consciousness grew to the point that there were protests against the pollution caused by car exhausts. Research efforts tend to cluster around solving the problems posed by reverse salients, since they affect all manufacturers in an industry. Thus, all automobile manufacturers had to devote research resources to improving the designs of their engine systems to lessen polluting exhaust. Reverse salients are most often seen as technical problems; however, as Hughes points out, “the defining and solving of critical problems is a voluntary action.” (ibid, 74) We can go further and suggest that how reverse salients are defined and solved depends on the interests of the stakeholder doing the defining and solving, and that they can be negotiated between different stakeholders. Defining and solving reverse salients is a social act.

We can see this in the current study. The traditional phone system, on which the Internet has piggy-backed for most of its existence, gives equal bandwidth in both directions.
As online applications such as streaming video become more bandwidth intensive, capacity becomes increasingly strained. Equal bandwidth into and out of the home is considered the reverse salient in this situation by major corporations since, in their vision of a digital future dominated by video-on-demand, many people will not use much of the bandwidth out of the home, which will be wasted. However, for individuals who wish to upload their own video to a server, equal bandwidth in and out of the home is a necessity; for them, the reverse salient is the lack of bandwidth on the system adequate to everybody's needs. How one defines the problem determines the solutions one will seek: as we saw, some corporate researchers are looking at asymmetrical digital networks to solve the reverse salient as they see it; for individuals, increasing network capacity would be the solution to the reverse salient as it affects them.

In terms of who controls the research agenda, computer software is unique. With most hardware, huge research and development laboratories are required to create technological advances, limiting those who can develop new artifacts to those with a large amount of money; a new computer program, by way of contrast, can be created by a couple of people in their basement. The history of computer programming contains many stories of people who created a small program for their own benefit which was then taken up by the general computer-user community; in fact, some argue that this is the only way truly original computer software is developed (see, for example, Rushkoff, 1999). In this way, individuals have access to early parts of the production process in digital communication.

One other general rule which can be stated is that economics is coming to play an increasingly important role in stabilizing the definition of technological artifacts.
As previously mentioned, various stakeholder groups use rhetoric to convince other stakeholders that their vision of a technology is correct, and to enlist members of other stakeholder groups into their group. “We need others to help us transform a claim into a matter of fact. The first and easiest way to find people who will immediately believe the statement, invest in the project, or buy the prototype is to tailor the object in such a way that it caters to these people’s explicit interests. [original emphasis]” (Latour, 1987, 108) Whereas debates over such issues may once have happened in scientific journals or at learned conferences, rhetorical persuasion favouring specific formations of technology now largely takes place through advertising. Thus, most people are learning about, say, Web TV from the television and print advertisements paid for by the corporations which are pushing the technology. This gives stakeholder groups with the financial resources to push their agendas a tremendous advantage over those without such resources.

Moreover, the rhetoric of the corporate vision of digital technology, when it appears in newspaper and magazine articles, is delivered by “experts,” technology researchers or pundits who are assumed to have knowledge which is not available to “non-experts” (that is, members of the general public). “[I]t might be argued that...unorganized, uncoordinated members of the public, lacking in the advice of experts, are not in a strong position to forcefully express their views.” (Elliott and Elliott, 1977, 21) In essence, formal research is privileged over individual experience; many people who might otherwise be satisfied with a technology will feel the need to obtain a new technology simply because experts tell them they will benefit from its use. With previous technologies, this may have been enough to ensure widespread adoption, closing debates about the definition of the artifact.
However, the Internet gives individuals and representatives of stakeholder groups concerned with how technology affects individuals a powerful means of presenting their case: chat rooms, personal email, etc. If they choose to use it, stakeholders who disagree with the corporate agenda have an important organizing tool with which to enlist others to their viewpoint about the technology. (This is another argument for the corporations to attempt to enlist people who are not currently on the Internet to their vision of what it can be: such people have no access to the rhetoric of individual stakeholders who have a different vision of the technology, except on those rare occasions when such arguments make it into traditional media.)

These three ideas (small, homogeneous stakeholder groups have an advantage over larger, more heterogeneous groups; stakeholders with access to early stages of technological development have an advantage over those who only have access to later stages; and stakeholders who can afford advertising have an advantage in disseminating their rhetoric over those who cannot) are mutually reinforcing. Groups with enough money for advertising also tend to be those who fund, or at least have access to, the research of large laboratories (and the funds to develop the discoveries of the laboratories into viable products). Since these things require the accumulation of large amounts of capital, there are few corporations which can accomplish them. This may just be a case of pointing out the obvious: that the wealthy have means of advancing their interests which are not available to the rest of us.
“The private citizen is greatly disadvantaged financially by comparison with private companies, public corporations, trade unions and, as in planning questions, the state itself.” (Williams, 1977, 34) As Bijker rightly points out, “explanations in terms of power so easily result in begging what seem to be the most interesting questions.” (1995, 11) The obviousness of this truth does not necessarily make it unworthy of telling. What I have tried to do here is show some of the actual mechanisms by which wealth shapes public debate and, ultimately, the nature of technological change.

Bias?

There is a problem at the heart of social constructivism which needs to be addressed. Hands refers to it as the reflexivity problem: “If scientists make decisions on the basis of their individual or group interests, then that should also be the case for the social scientist who studies science.” (1998, 716) Applying their standards to their own work, social scientists do not have a privileged vantage point from which they can trace unbiased histories of technological development. “If sociologists can really find out what is going on out there in the world of science (that it is socially determined), then it means that they have the power to discover (not just construct) the nature of the objects in their domain (the social actions and beliefs of scientists), but this is precisely ability [sic] that they deny to the scientists they study.” (ibid, 717)

One method of dealing with this problem is to accept that all findings, including those of the social scientist, are relative, but nonetheless offer invaluable insights. “The argument is that there are no supralocal standards of rationality, truth, or anything else; there are only local and context-dependent standards of valuation. But while these standards are local, they are relevant, important, and binding on those agents in that particular local context; no universal standards does not mean no standards.” (ibid, 718) This study, for instance, is limited by the nature of the analytical tools which existed when I wrote it, by the availability of information to me, and so on. Moreover, the rapidity with which technology changes guarantees that the situation as I have written about it will not exist in exactly the same way when others read what I have written. Despite this, I obviously feel there is value in describing the state of technological development at this given moment in time.
A second, somewhat less academic way of dealing with this problem is for the author of a work to admit his or her biases, giving the reader the opportunity to assess how they may have affected the work. In this section, I would like to address this issue.

I would like to start by pointing out that, while historical studies of technological development are necessary to help tease out theoretical structures, the enterprise of academic research must not stop there. What is the purpose of developing theoretical frameworks for understanding real world phenomena? I believe that it is to then take those theories and apply them to the world as it currently exists. Theory which exists for its own sake, with little relevance to (or, indeed, decreasing reference to) the real world, is sterile: an enjoyable parlour game for academics, perhaps, but of little value outside the academy. As the famous Marxist dictum has it, the point is not to understand the world, but to change it.

Given this, to the list of stakeholder groups in a new technology, we must add academics. Many will be uncomfortable with this role. However, for me, this is a most compelling argument for studying technologies which have yet to be fixed in the mind of the general public: the possibility of having a positive effect on the development of a technology, a possibility which is all but extinguished once the technology has achieved stability.

It should be clear from the dissertation, then, that I give primacy to the interests of individual writers who are currently using the Web as a means of personal expression. In the figures in Chapter One which show the various stakeholders in old and new publishing media, the writer is at the top of the chain; this reflects my belief that in all media (including collaborative media such as filmmaking), the writer who originates the material is the most important creative figure.
Moreover, by examining the interests of writers first and at much greater length than other stakeholders (in Chapter Two), the interests of the other stakeholders are implicitly (when not explicitly) compared to and seen in the light of the interests of writers. If I had started the dissertation with a similar consideration of the interests of corporations, I would likely have ended up with a similar set of stakeholder relationships, but with a different emphasis which probably would have led to a different conclusion.

There are two reasons for this approach. The obvious one is that I am a writer who is currently using the Internet as a means of distributing my own work. The not so obvious one is that I am somewhat naive, and I would like to believe the rhetoric of individual empowerment which surrounds the Internet; I really would like to believe that it can remain a personal communications medium, despite the powerful economic forces which would like to turn it into a digital version of television.

Unfortunately, outside of the business press, there is very little debate about the direction the technology is heading. The broad public is not discussing this issue, which will arguably have a great effect on society in the 21st century. This is a problem for social constructivism: how to account for stakeholders, people whose lives will be deeply affected by a technology, who do not have a say in its development. Since they are not active, their interests tend to be ignored. And yet...

Four or five years ago, I was sitting with a friend in a café on Yonge Street, watching the people pass by.
I told my friend that most of those people were living in a world that no longer existed, by which I meant that they still thought in terms of a society structured around industrial technologies, but our society was already beginning the transition to digital technologies, which would open up the possibility of completely new social structures, many of which would inevitably be instituted. Like the monks busy at work in the scriptoria after the invention of the printing press, we are all living in a world that few of us have truly grasped. The people whose voices are the weakest in this debate have the most to gain (or lose) by its outcome.

To date, debate about the future course of digital communications has been largely missing from public discourse. One does not have to subscribe to paranoid fantasies to recognize that such a debate is unlikely to be carried on in traditional media, since the same corporations which dominate traditional media are heavily invested in specific formations of digitally networked communications. “One cannot expect that those who are sponsoring the development of a new technology will indulge in listing its undesirable social consequences, since an inherent feature of the promotion process is to minimise these consequences and to argue that they can be technologically overcome.” (Hetman, 1977, 7) My hope in writing this dissertation is that it will contribute to a public discussion on these issues. The reader can decide whether this approach is valid, and how it affects what has been written.

This focus on the use of technology by individual people has led me to believe that one important quality a technology can have is that it enables individual user autonomy more than existing technologies. What this means is the subject of the next section.

Recommendations About Individual User Autonomy

Autonomy is, of course, the ability of an individual to choose how to act. A World Wide Web which maximizes individual autonomy would allow surfers to browse wherever they wanted to go, and find whatever information they needed or choose whatever experiences they wanted; further, it would allow individuals to communicate with each other, as well as upload materials to the Web in all available formats, from plain text to full audio and video.

The way some technologies diminish individual autonomy is obvious. Push technology, for instance, interferes with the individual’s ability to determine his or her experience of the Web since it requires users to accept messages on their screen which are determined by others, at times convenient to those others. Web TV decreases individual autonomy by making it impossible for individuals to produce their own material or upload it to the Web.

Some technological interference with individual autonomy is much more subtle. Internet Service Providers which promote their own material on their home pages interfere with individual access to information by making some information harder for a Web surfer to find. Some people may find the convenience of having all their needs served by America Online, Compuserve or other such companies (especially people with children who want to ensure that they surf in safe areas of the Internet) is worth giving up some autonomy. Those who are not so motivated, however, should choose ISPs which are not also content producers. A similar subtle problem occurs with search engines which sell preferred spots at the top of user searches; this interferes with the ability of individuals to find exactly what they are looking for. Again, some people may prefer to have search engines funded in this way rather than with more direct advertising or, horror of horrors, having to pay themselves.
Those who don’t should organize to demand that search engines publish their policies on selling positions at the top of searches in a prominent place on their home page, and should avoid using search engines with policies which make it harder for them to find, without commercial bias, exactly the information for which they are looking.

Other technologies are on the horizon, and it is possible (indeed, it is likely) that there will be yet other technologies in the future which we do not see coming in the present. There is a general approach which individuals can take to determine if these technologies advance their interests. Confronted with a new technology, an individual should ask the following questions: What does the technology allow me to do? What does the technology make it harder (or impossible) for me to do? As we have seen, the answers to these questions are not always obvious; technologies which are touted as having great benefits may, in fact, have serious drawbacks for individuals. In any case, the balance between the answers to these two questions will determine whether an individual should accept or resist a new technology.

As digital networks have become more “user friendly,” the number of people who use them has increased dramatically; these people are increasingly not programmers, and, therefore, do not share a common view of the technology. Those who would like to see the Internet continue as a two-way medium of communication, offering a maximum of user autonomy, must enroll those who do not see their stake in the technology in these terms. In practical terms, this means presenting arguments favouring this use of the technology in appropriate chat rooms and other online fora.
Of equal, if not greater, importance is discussing these issues with individuals as they sign up for online services and begin their experience of the online digital world; enrolling newbies is a crucial means by which individual stakeholders will increase the number of people who resist technologies which lessen their autonomy. I believe this also means that pundits of the digital age must step up their efforts to use traditional media to inform the general public about these issues. The battle for the allegiance of people who are not currently Internet users is likely to be an important determining factor in the shaping of digital communications networks.

While user autonomy is an obviously important principle for individuals, it also has some policy implications which governments should consider. All too often, government regulatory agencies are “captured” by the industry which they are supposed to regulate (for an example of how this worked in Canadian television, see Hardin, 1985). In such situations, the public good becomes conflated with corporate benefit. As I have argued in this dissertation, however, what is good for individual members of the public conflicts with what is good for the major entertainment-producing corporations involved in the Internet. Governments which are serious about their rhetoric of the Internet empowering individuals must take the interests of individuals seriously. Thus, they should embrace the concept of maintaining individual user autonomy on digital communications networks in their regulatory and other deliberations.

To be at all effective, any regulation must be international. On the face of it, regional regulation of an international communications network seems impossible; international cooperation is necessary.
As regulation of the international telephone system shows, it is possible for governments to find common ground in the regulation of communications networks which transcend their borders. There are also international treaties governing transborder transmission of radio and television signals. In order to best serve the interests of their citizens, governments should negotiate agreements which are in accord with the principles of individual autonomy stated above.

Unfortunately, negotiations such as those of the World Trade Organization and the World Intellectual Property Organization are driven by the interests of transnational corporations, not individuals; because of this, governments cede much of their power to govern within their borders to distant bureaucracies which have no stake in local conditions. This is inherently undemocratic. For our present purposes, it is important to note that it also undermines the potential of the medium as a two-way communications system benefitting individuals. As a guide to their negotiations, governments must insist that any international treaties which deal with digital communications networks be based on the principles of user autonomy outlined above.

Whether the World Wide Web remains a means of two-way communications with the potential to build communities or becomes predominantly corporate-driven and commercial is a matter of design, not default. The principles by which we determine the public good will be an important factor in determining the shape technology takes.

Radio in the United States: A Cautionary Tale

It should be clear, from the discussion in Chapter Three, that, although they are having difficulty finding a way to make money from supplying information over digital networks, powerful economic players continue to look for ways to do so. This could profoundly change the nature of the Web, perhaps transforming it into something its current users would not recognize. If this seems farfetched, we only have to look at the history of radio in the United States, which in many ways parallels the current situation with the Web, to see how such a transformation was accomplished in the past.

In the beginning of broadcast radio in the first and second decades of the 20th century, many of the people who transmitted signals were “amateurs who didn’t care much about radio’s profit-making potential. They got involved with wireless because they were fascinated by the new technology. The amateurs were hackers, basically -- hobbyists, tinkerers, and techno-fetishists who huddled in their garages, attics, basements and woodsheds to experience the wondrous possibilities of the latest communications miracle.” (Lappin, 1995, 177) As Lappin suggests, this is directly analogous to the early days of computers, when enthusiasts would build their own machines from kits, and hackers were always on the lookout for a more elegant way of doing things. Others have suggested that, “Radio began as a distributed, many-to-many, bottom-up medium, much like the early days of the World Wide Web...” (Johnson, 1997, 147) Although this second analogy isn’t as exact as the first, since those using the Web as a two-way communications medium are, for the most part, not interested in tinkering with the hardware, it is worth keeping in mind that the earliest users of radio could be producers as well as consumers of programming. Until well into the 1920s,

the new broadcasters were a colorful group. Several distinct categories emerged. First, there were the big manufacturing interests like Westinghouse, GE and RCA. Then there were department stores like Gimbels and Wanamakers which operated stations for self-promotion. Some hotels had stations. There were stations in laundries, chicken farms and a stockyard. In 1922, eleven American newspapers held broadcast licenses, mainly, one suspects, out of self-defense, just as newspapers today were quick to populate the World Wide Web. Churches and universities operated stations. And there were many so-called ego stations operated by wealthy individuals in the spirit of noblesse oblige, or just for the hell of it. (Rowland, 1997, 155)

As with the Web, anybody could transmit as well as receive messages: “When radio was invented, the expectation was that there would be as many transmitters as there were receivers. In the early 1920s, what is now called ‘Ham radio’ -- i.e., amateur radio -- [was] the dominant mode of interacting with radio.” (“Web (vs) TV,” 1997, 37) Radio stations did have to be licensed, but “the U.S. Radio Act of 1912 placed no restrictions on ownership of a license beyond American citizenship...” (Rowland, 1997, 155) so anybody who applied for one received it. Throughout this period, major economic powers were curious about the potential profitability of the new medium.

...almost all research emphasizes the manner in which radio communication was dominated by a handful of enormous corporations, most notably RCA, which was established in 1919 under the auspices of the U. S. government. RCA was partially owned by General Electric (GE) and Westinghouse. By the early 1920s the radio industry -- indeed, the entire communications industry -- had been carefully divided through patent agreements among the large firms. RCA and Westinghouse each launched a handful of radio broadcasting stations in the early and mid-1920s, although the scholarship tends to emphasize the American Telephone and Telegraph (AT&T) Company’s WEAF of New York because it was the first station to regularly sell airtime to commercial interests as a means of making itself self-sufficient. (McChesney, 1993, 5)

As we have seen, this mirrors the current situation with the Web. As Rick Salutin observes, “Technological changes in communications are always targets for the rich and mighty -- who want to own them to increase their own profits and power.” (1997, A10)

There was a problem, though: it was not immediately apparent how money could be made from this new medium. Advertising was not seriously considered because it was felt the public would not stand for it. WEAF thought it could get around this antipathy by selling blocks of airtime to sponsors rather than individual commercials; in advertising for its new concept, “toll broadcasting,” AT&T claimed that, “The American Telephone and Telegraph Company will provide no program of its own, but provide the channels through which anyone with whom it makes a contract can send out their own programs.” (Lappin, 1995, 221) In essence, the company pioneered what have come to be known as infomercials. However, as McChesney notes, “AT&T’s ability to sell its airtime was undermined by the willingness of the other stations, including those owned by RCA and Westinghouse, to give time away for free [note omitted].” (1993, 15) As we have seen, information available for free on the Internet likewise undermines the ability of producers to charge for their content.

Other economic models were considered during this period. The British model (where the government levied a tax on radio components to pay for the BBC) was suggested by some, but rejected by powerful forces (political as well as economic) as an unnecessary government intrusion into private speech and enterprise. Some suggested that wealthy patrons sponsor radio, as they had libraries and educational institutions, but this didn’t go very far. For a long time, this problem seemed insoluble.
“Thus a single question appears over and over on the pages of Radio Broadcast magazine throughout the first half of the 1920s: Who will pay for radio broadcasting?” (Lappin, 1995, 219) As much of Chapter Three demonstrated, this is directly analogous to the current situation with the Web: “The Big Question is exactly the same: Where will the money come from to pay for content?” (Rowland, 1997, 322)

WEAF, which pioneered one form of advertising, added a new, and what would prove to be vital, wrinkle to broadcasting when “AT&T realized that it could offer toll broadcasters access to an even larger listening audience (not to mention some impressive production economies of scale) by linking a few radio stations together with phone wires. AT&T called this innovation ‘chain broadcasting,’ and it was first tried successfully in the summer of 1923, when programming that originated from WEAF in New York was simultaneously broadcast by WJAR in Providence, Rhode Island, and WMAF in South Dartmouth, Massachusetts.” (Lappin, 1995, 222) By the mid-1920s, then, a successful economic model (commercial network broadcasting) was in existence; it only required broadcasters to overcome their squeamishness about advertising to exploit it. Oh, and to do something about all those non-commercial broadcasters clogging what would turn out to be immensely profitable airwaves.

There was a problem with the unregulated airwaves: signals from various stations would overlap or otherwise interfere with each other. This was annoying for listeners. Perhaps more importantly, it made advertising much harder, since radio stations could not guarantee listeners to their frequency would, in fact, hear their station from one block to the next. “When [radio] companies began to test the commercial potential of radio broadcasting, they increasingly clashed over the use of frequencies.
Despite numerous industry meetings and considerable government prodding, commercial radio stations could not reach an agreement on how to allocate spectrum, assign channels, and generally police compliance in order to minimize interference.” (Mosco, 1989, 187) No less a personage than then Commerce Department Secretary Herbert Hoover argued that legislation was necessary to make it “possible to clear up the chaos of interference and howls in radio reception [note omitted].” (McChesney, 1993, 17/18) Executives in the radio companies readily agreed.

The rhetoric of chaos is echoed in some current writing about the World Wide Web. One commentator rather apocalyptically claimed that

Throughout history, in times of war, society is chaotic. People are doing what they can to make sense of the environment, and rarely is there any sense of structural order. Then, when the war is over and the dust settles, when the custodial forces take over or are restored to power, the communal structure returns and a sense of order reigns. Ironically, this paradigm mirrors the current situation with the Internet and society. While not exactly a war zone, the Web is a chaotic environment, with people trying to make sense of information where no one organization is in power and everyone is attempting to take their advantage where they can. (“Opening the Gate: Increasing Content of the Internet,” 1997, 151)

The problems with the Web are quite different from those of radio: pages are created and abandoned on an almost minute-by-minute basis; there are so many pages that it is increasingly difficult for one to find precisely what one wants; different kinds of software make pages inaccessible to some users; there is no central authority which controls entry; and so on. One writer suggested that, “If the DNS [domain name system] discord continues, the stage could be set for a reprise of the spectrum wars of the 1920s and ‘30s...” (Simpson, 1997, 92) Regardless of the differences between the two media, the rhetoric is eerily similar; the cover of a prominent computer magazine proclaims: “Reinventing the Web: XML and DHTML will bring order to the chaos.” (1998)

In the case of radio, the rhetoric of a communication system in chaos was used as a pretext for government regulation. “In 1924, the niggling problem of what to do about remaining two-way radio in the hands of citizens -- amateur radio -- was dealt with by an international convention on the allocation of radio spectrum. Amateurs were henceforth to be denied access to radio frequencies below two hundred meters, which meant, in terms of contemporary engineering knowledge, they no longer had access to the only part of the spectrum usable for long-distance radio communication.” (Rowland, 1997, 148) This was only the first step in the process in the United States, though, since the majority of licenses for radio transmitters were not, at the time, in the hands of commercial broadcasters. More direct action was needed.

In 1927, Congress created the Federal Radio Commission to deal with the new medium. The Radio Act which established the FRC divided the radio spectrum into “clear channels” which could only be used by one station at one high frequency, and other channels at lower frequencies which had to be shared by more than one station.
The little-known FRC General Order 40 of August 1928 mandated a reallocation of the frequencies on the radio spectrum. General Order 40 signaled the end of radio as a meaningful two-way communications medium and the beginning of corporate hegemony over the medium. “[O]f the first twenty-five stations set aside for clear channels by the FRC, twenty-three had been licensed to broadcasters affiliated with NBC.” (McChesney, 1993, 20) Non-profit broadcasters, including religious groups, unions, educational groups and individuals, were given frequencies with lower power, which they had to share, sometimes being able to broadcast only a few hours a day. This made it increasingly difficult for such stations to maintain any sort of financial stability, causing many to eventually fail. Thus, “without having to actually turn down the license renewal applications of very many broadcasters, there were 100 fewer stations on the air within a year of the implementation of General Order 40,” (ibid, 26) the vast majority of which were non-commercial.

The FRC made this decision on the basis of a vague phrase in its mandate, which required that broadcasters be licensed according to “public interest, convenience or necessity.” The FRC reasoned that only the most financially stable broadcasters would best serve the public interest (a form of logic which continues to be applied to television to this day), and based its licensing decisions accordingly. There was a problem with this reasoning, though: in some cases, non-commercial stations had better equipment and more capital than commercial stations, which were only just beginning to realize the potential economic benefits of radio. So, a second rationale for licensing grew out of General Order 40: that radio stations should not be proponents of “propaganda,” that is, a single point of view.
Thus, labor, religious and educational groups were labeled propagandists and excluded from meaningful participation in broadcasting. Corporations, whose only motive was profit, were not. The effects of these FRC decisions should come as no surprise.

Following the implementation of General Order 40, U. S. broadcasting rapidly crystallized as a system dominated by two nationwide chains supported by commercial advertising. Whereas NBC had twenty-eight affiliates and CBS had sixteen for a combined 6.4 percent of the broadcast stations in 1927, they combined to account for 30 percent of the stations within four years. This, alone, understates their emergence, as all but three of the forty clear channels were soon owned or affiliated with one of the two networks and approximately half of the remaining 70 percent of the stations were low-power independent broadcasters operating with limited hours on shared frequencies. (ibid, 29)

By 1934, James Rorty would write: “For all practical purposes radio in America is owned by big business, administered by big business, and censored by big business.” (ibid, 93) One final facet of this transition should be mentioned: although the technology to both transmit and receive signals existed, radio sets sold to the public were primarily receivers without the capacity to transmit. “Hardware practically flew off dealers’ shelves as sales of radio receivers jumped sixfold, from $60 million in 1922 to $358 million in 1924 [my emphasis].” (Lappin, 1995, 219) Levinson argues that this occurred because the cost of radio production was far greater than that of reception: “People could easily afford to have radio receivers in their homes, but not radio transmitters.” (1997, 118) While this may have been temporarily true, improvements in transmitting equipment would eventually have brought the price down to the point where it would have been practical for individuals to send as well as receive radio signals, if the technology had been conceived as a two-way medium. Implicit in the sale of radio receivers without transmitting capabilities is the idea that most users of the medium would be consumers of radio signals, but not producers. Moreover, as generations became acclimated to using radio as a one-way medium, it became difficult for them even to conceive of the possibility that it could have developed any other way. Directing the way a medium is used by directing the development of the hardware is, as we have seen, one strategy for harnessing the economic potential of the World Wide Web. Not long after the consolidation of corporate control of radio, ideologically driven rhetoric developed which naturalized this control.
It was argued that the educators, who were among the true innovators of radio in its earliest phases and who battled (as it happened, ineffectually) against corporate control of the medium, were out of touch with radio audiences. “‘People do not want to be educated,’ [NBC President] Merlin Aylesworth fulminated in 1934, adding, ‘they want entertainment.’” (McChesney, 1993, 117) We can begin to see such a rhetoric developing around the Web. “‘Content’ is a fighting word these days. Virtually every new-media pundit will tell you that content is king, though they’re hard-pressed to define what it means or how it works.” (Thompson, 1998, 59) Although it might not be intuitively clear what content on the Web means, some are beginning to argue that, “In the end, it may be entertainment -- not content, not community, not shopping, not any of the other ideas that have had their three hours in the sun -- will be king.” (Larsen, 1999, 95) This concept plays directly into the hands of corporations whose strength is the creation of entertainment, to the detriment of smaller players who might want to use the system to distribute other kinds of information (including individuals who might want to use it as a medium of personal communication). Even more damaging was the concept, espoused by Broadcast magazine, that commercialization of the airwaves was inevitable because “progress cannot be stopped.” (McChesney, 1993, 70) This is an early expression of a theory of media development which has come to be known as “technological determinism.” The basic idea behind technological determinism is that technology has an existence of its own independent of human will, that it develops according to its own internal logic and that, once introduced into society, it changes social structures and relationships. This rhetoric of unstoppable progress can be found throughout current popular literature on the Internet.
An important aspect of the rhetoric of the unstoppability of technology is that it masks the social battles which determine how technologies develop, and, in particular, takes attention away from the self-interest of those who most benefit from certain manifestations of technology and who, therefore, push the technology to develop in specific ways for their own ends. It is important to note that, as even this cursory outline of the early history of the medium indicates, there was no inevitability to the emergence of commercial radio broadcasting. Various social actors, each with its own stake in the future of radio, vied to determine how it would develop. One implication of the rhetoric of the unstoppability of technological progress is that any government attempt to shape the nature of a communications medium is futile, practically unnatural. In the case of radio, “Suddenly, the right of the government to regulate broadcasting, which had been accepted, if not demanded, by the commercial broadcasting industry in the years before 1934, was being questioned, primarily due to its being a threat to the First Amendment rights of broadcasters and the general communication requirements of a democratic society.” (ibid, 240) This point cannot be stressed enough: American commercial broadcasters only started arguing against government intervention in radio after government regulation had already established a profitable basis for their enterprise by effectively marginalizing non-profit competitors.
As Brock Meeks put it: “[T]he folks on the commerce end, while they buck and spit about all the regulation, in a lot of cases they demand it, because they want to make sure the rules of the road are there, that have to be followed, they need level playing fields, and blah blah, woof woof.” (1997, 285) There is currently a very strident movement to ensure that national governments do not regulate the Internet, much of which uses the rhetoric of inevitability. Mark Stahlman refers to any effort to regulate the Internet as an “assault.” (1994, 86) A typical sentiment is that, “‘We built [the Internet] to be Russian-proof,’ [former head of the government agency which developed the Internet] Craig Fields told the New York Times, ‘but it turned out to be regulator-proof.’” (Rosenzweig, 1997, 23) In Chapter Four, I argued that the Internet had features which would make regulation by governments difficult; however, I did not argue that it would make regulation impossible, nor did I argue that all attempts at regulation were illegitimate (as some cyberlibertarians do). Having come this far, we must recognize that there are points at which the analogy can be accused of being inapt. For example, the American experience of broadcasting regulation was unique in the world: where the American government opted for a wholly private broadcasting system, most other governments chose a wholly public system (thus, the creation of the British and Canadian Broadcasting Systems), or a mixed system with an integral public component. Moreover, while the FCC’s mandate was to regulate the smooth running of the broadcast marketplace, other national regulatory bodies, such as the Canadian Radio-Television and Telecommunications Commission, were given a broader mandate to promote national culture.
If the Internet were a solely American phenomenon, the analogy to the early days of radio would be more exact; since the Internet is an international phenomenon, governments with very different histories of technological development will be helping to shape its future. Two trends in international communications tend to shore up the analogy. The first is that, in most developed nations, the public broadcasting sector is diminishing in importance, some would say becoming increasingly irrelevant. This is due to a complex set of factors, including increasing competition from private broadcasters (especially as new technologies such as cable television and the Internet help us approach a 500-channel universe, and go beyond it), which erodes public broadcasters’ audiences; and government cutbacks to the operating budgets of public broadcasters, which force them to compete with private broadcasters for audience-attracting programs (such as sports) and for advertising. As a result, in Canada, the CBC is a shadow of its former self, and no longer has the privileged position in people’s lives that it had in the first few decades after its creation. The other, as we have seen, is the trend towards international agreements on trade, intellectual property, etc. Agreements which affect the arts are largely being driven by entertainment conglomerates based in the United States. To the extent that they bind signatories to a particular view of economic life, they can be seen as replicating the American system throughout the world. This suggests that the American experience of broadcasting may be highly relevant to the developing international digital communications system. Still, you never wade in the same river twice. To date, the rhetoric of chaos has not resulted in meaningful legislation to control the Web. Moreover, the number of people currently on the Internet dwarfs the number of people who were involved in the early days of radio.
They arguably have a stronger potential lobby group in political capitals. Perhaps most importantly, there is a fundamental difference between the two media: radio was based on the frequencies available in the atmosphere, which were severely limited. One justification for government regulation of radio was this scarcity of frequencies on which to transmit signals. Computer mediated communication systems such as the Internet not only do not have this problem with bandwidth currently, but, as we have seen, are expected to develop an even greater capacity in the future. These and other factors may make the way the Web develops substantially different from the way radio developed. Transforming computer mediated communications networks into television or some other more passive medium is a much greater challenge than changing radio from a two-way to a one-way medium, since the two media are so different. Perhaps it simply will prove impossible to accomplish. The experience of radio does suggest that a transformation of a medium from an active to a passive mode requires the confluence of three factors: 1) a corporate sector which sees the possibility of profit in the emerging medium; 2) the physical transformation of the medium which removes the possibility of meaningful interactivity; and 3) legislators sympathetic to the desires of the corporate sector who are, therefore, willing to regulate the medium to the benefit of major corporations. To a greater or lesser extent, I believe I have shown that these three factors currently exist in the development of the Internet. How they play out against the expectations of the individuals using the system should be fascinating to watch.

* * *

One of the major criticisms of technological determinism is that it breeds passivity. If technology is an entity with its own life outside human control, then there is nothing human beings can do to shape it. We can only wait and watch it shape us.
Oddly, pure social constructivism is also a recipe for passivity. If we allow that every definition of a possible technology is equally valid, we have no way of rationally choosing between them (although those of us who are direct stakeholders will, nonetheless, believe that our view of a technology is the “correct” one). In particular, public policy makers would be tempted to allow various stakeholder groups to fight it out among themselves, and support whatever technology emerged. Social constructivism can breed passivity in another way. Let us assume that the form technological development takes is always a matter of negotiation. In that case, it doesn’t matter whether or not a technology becomes fixed, achieves stability/closure, since we can always open up the debate and change the form into something which works better (by whatever criteria we choose to define a workable technology). In fact, our experience of the world tells us that this is not so. Technologies do stabilize. As Langdon Winner points out, “By far the greatest latitude of choice exists the very first time a particular instrument, system, or technique is introduced. Because choices tend to become strongly fixed in material equipment, economic investment, and social habit, the original flexibility vanishes for all practical purposes once the initial commitments are made.” (1985, 30) As technological systems become more complex, they become more expensive, and the corporations which develop them become less inclined to make significant changes to them. Moreover, as individuals structure their lives around the use of a given technology, they become less inclined to introduce new technologies into their lives which would require them to substantially restructure the way they live. This has an important ramification: it is imperative to generate public discussion and debate about a new technology as early as possible in its development/deployment.
At the point where a technology has been stabilized in its corporate form and private use, public debate about its value becomes largely moot (at least, until new technologies challenge the position of the existing technology, reopening the debate). Despite the possibility of passivity in the face of technological change, I maintain that we, individual citizens, have choices which are non-trivial. If we choose to allow the Web to develop in one direction, it will subsequently reorder society in line with what the technology allows people to do, to the advantage of some and the disadvantage of others. If we choose to allow the Web to develop in a different direction, different social structures will emerge which advantage different groups. We have choices. “The important thing is to ensure that it [technical change] operates to the maximum extent possible in the public interest; to this end there can be no relaxation.” (Williams, 1977, 35)

Notes

Chapter One Notes

1) I am indebted to Peter Roosen-Runge for this insight.

Chapter Two Notes

1) Another problem with the argument that computers are more environmentally friendly than paper is that the machines themselves contain harmful materials. “Thinking of computers as a disposable product is...bad news for the earth, environmentalists point out. That’s because of all the hazardous materials that computers contain, including lead, mercury, cadmium and chromium.” ("Computer pollution,” 2000, M1) As old computers are thrown out to make way for the latest versions, many find their way into landfills, where these chemicals are ultimately released into the environment.

2) Since the mailing list is not commercial, and is presumably welcomed by the ezine’s readers, there is no justification for services such as AOL to block such messages; however, they have put in place filtering mechanisms which cannot differentiate between spam and legitimate communications. If something like this had been undertaken by a government, it would be called censorship, and a public outcry would likely ensue. This type of “private” or “corporate” censorship, however, is met with public apathy. This is not the only example of private censorship I came across in the course of this study. The editorial guidelines of 1st Chapter stated that it would not accept pornographic material of any kind (defined as “any material where [sex or sexuality] is the main focus, or is not presented in good taste”), material that was “grossly offensive to the online community, including blatant expressions of bigotry, prejudice, racism, hatred or profanity,” material that promoted or provided instructional information about illegal activities or material that defamed any person or group. (Schlau, 1998, unpaginated) These guidelines, it was stated, were “In accordance with GEOCITIES content requirements.” (ibid) Presumably, anybody who was caught publishing material which did not adhere to the commercial service’s guidelines (and many of the stories which we have seen were published online would not) would be in violation of their contract with the company, and would lose their account. Online, apparently, contract law supersedes constitutional or charter freedom of speech guarantees.

3) Despite this, books such as James Redfield’s The Celestine Prophecy, which became an international bestseller after it was picked up by a publisher, continued to be self-published right up to the end of the 20th century. (Boushka, 1999, unpaginated)

4) I made these two threads up off the top of my head. I suspect, however, that the reader attempted to find connections between the initial image and the two which followed. As I said, human beings are meaning-generating machines.

5) If anything, collaborative writing on the Web supports the theory of secondary orality proposed by Walter Ong. (1994) Following Marshall McLuhan, Ong claimed that digital communication networks recreated in modern cultures the social conditions of tribal, oral societies. Collaborative writing on digital networks is, in many ways, analogous to oral storytelling: individuals contribute details which help shape the overall narrative, which does not belong to any single contributor. Oral stories, like digital works, can be ephemeral, and need to be written down/printed out to achieve a more permanent form. There are differences as well, of course: oral stories require physical co-presence to be communicated effectively while digital stories do not. Still, the similarities are worth pursuing further.

6) The reason most often given for this exclusion is that individual producers do not measure up to “professional standards.” This takes two distinct forms. On the one hand, esthetic standards are invoked: thus, photocopied zines are not distributed in most bookstores because they aren’t as “esthetically pleasing” as glossy magazines. On the other hand, technical standards are invoked: 8mm or 16mm films are not shown in theatres equipped to show 35mm or higher films. While there is validity to both types of standards, we must recognize that they are often used to exclude a wide variety of voices, leading to a homogenization of the cultural artifacts which are widely distributed.

Chapter Three Notes

1) On the other hand, one writer suggests not putting too much store in Microsoft’s buying into a cable company since, “After all, [Microsoft Chairman Bill] Gates also has a stake in a satellite company and has been working with the Telcos for years.” (Steinberg, 1997, 80) Another writer claims that of the three facets of the online industry (content, software and access), only “Microsoft is working in all three areas by providing Internet access and content through The Microsoft Network, as well as Web browser and server software.” (Savets, 1997, 95) The strategy of Microsoft, the largest supplier of computer operating systems in the world and one of the biggest financial winners in the computer industry, seems to be to ally itself with companies involved in every trend in digital communications so that it will be positioned to capitalize no matter what the future of computer mediated communications is. “Between 1994 and 1996 Microsoft spent $1.5 billion to purchase or invest in forty-seven companies.” (Herman and McChesney, 1997, 126/127) Microsoft’s reserves of cash and short-term securities are estimated at between $18 billion and $21.8 billion (Bank, 1999, B8), giving the company a huge amount of money to invest. For this reason, Microsoft’s name will come up often in this chapter.

2) Critics of traditional economics have argued that pegging economics to human desire is unsustainable and will necessarily lead to the despoliation of the planet, since human desires, by definition, have no limits, while the planet has very definite limits. See, for instance, Matsu, 1997 or Lasn, 1997.

3) This quality of information is not celebrated by every commentator. Some see us suffering from conditions of “Information Glut” or “Data Smog” (Shenk, 1997), unable to find the information we need in the vast stores of information which exist. However, most commentators believe the advantages of access to increased amounts of information necessarily outweigh the disadvantages.

4) If information were infinite, the cost of information would be zero. However, since information is finite, though increasingly vast, we can only say that its value approaches zero without ever achieving it, the condition of an asymptotic curve. It isn’t necessary for the value of information to actually reach zero, though, for us to treat it as effectively zero.

5) The term “channels,” borrowed, of course, from television to describe streams of online content, is used to make the new medium more comfortable for new users. It may also help new users, who can be assumed to be unfamiliar with the technology, more readily accept attempts to limit online media to the form of television. As with many metaphors applied to new media, the channels metaphor relies on a restricted view of the medium (in this case, the Web) which benefits specific interests.

6) As we shall see in the next chapter, the mistaken idea that artists -- writers, in particular -- are solely motivated by a love of craft has contributed to their low status on the economic totem pole.

Chapter Four Notes

1) Grousing about filtering systems on The MacNeil/Lehrer News Hour, Senator Exon claimed, “We didn’t hear much about that until the Exon Decency Bill was widely considered and debated.” ("Focus -- Sex in Cyberspace,” 1995, unpaginated) Perhaps. But if that were true, all that would mean is that the CDA actually had the beneficial effect of spurring the development and dissemination of useful online tools.

2) Given the overwhelming arguments in favour of striking down the CDA on the grounds that it violated the First Amendment’s guarantee of free speech, the Supreme Court decided it was not necessary to consider a second line of argument, based on the companion guarantee of free assembly: “those who object to [the CDA] are generally trying to defend the right to assemble a like-minded group...” (Johnson, unpaginated) In effect, it was being argued that the Internet was a place where communities formed. Thus, the ACLU argued that the Internet is analogous to a physical place where people gather, and that government had no right to interfere with them there: “We should stop thinking about most of these issues as if they involve the sending of a message from party A to party B (or to parties C through Z), and instead fully absorb the fact that most communications on the net amount to the joint creation of a new shared space allowing the assembly of like-minded individuals.” (ibid) It would have been fascinating to see the Court grapple with this interpretation...

3) Incivility in the debate about the CDA was not limited to those who opposed the legislation, of course. In a radio discussion of the CDA, one of its proponents claimed that “the same ACLU that says pornography is appropriate for children and embraces pedophilia kinds of information for children says that prayer in school will destroy America.” (McPhee, 1996, unpaginated) This is a gross distortion of the ACLU’s position.

4) Critics of government control of content on the Internet often portray their opponents as Neanderthals who have no experience of the medium they are attempting to regulate. However, statements like Beaudoin’s do crop up from time to time, suggesting that politicians do have some understanding of the nature of the Internet. For example, Singapore’s minister of information and the arts “told Parliament that ‘Censorship can no longer be 100 percent effective, but even if it is only 20 percent effective, we should still not stop censoring....We cannot screen every bit of information that comes down the information highway, but we can make it illegal and costly for mass distributors of objectionable material to operate in Singapore.’” [note omitted] (Human Rights Watch, 1996, unpaginated) This suggests that some attempts to censor the Internet are made, not out of ignorance, but out of moral conviction. I find something endearingly Quixotic about this.

5) Applying copyright to a collaborative medium such as filmmaking was, at best, a quick fix since, the auteur theory notwithstanding, no single person can be considered a “creator” of a film. By default, the copyright for non-independent films is given neither to the writer nor the director of the film, but to the studio that produced it. It is argued that since the studio takes the financial risk in making the film, it should get the lion’s share of the economic benefits which come from the work. Perhaps. The point is, a regime which recognized the relative contributions of creators in a collaborative medium would likely divide the rewards of working in the medium in a substantially different way, likely to the benefit of some creators who are now, for the most part, poorly compensated for their work.

6) It can be argued that these sorts of legal battles are a consequence of seeing the rights of creators as property. In many jurisdictions in Europe, creators are considered to have a “moral” right in their work, regardless of who owns it. “Moral rights, more commonly found in civil law jurisdictions, protect the right of the author to be associated with the work, remain anonymous, not have the work mutilated or distorted, and not have the work associated with any product, service, cause or institution.” (Johnstone, Johnstone and Handa, 1995, 172) The courts following such a regime might have found that articles could not be reproduced in databases without the creators’ permission, regardless of the issue of compensation.

7) This is not to say, however, that the rewards of creation should be considered purely personal. A society which compensates artists poorly in the belief that they would create in any case for other, personal reasons takes unconscionable advantage of the artist, and does itself a disservice inasmuch as artists who require adequate compensation may not be able to create works which may have been of benefit to society. Look at it this way: some garage mechanics truly love working with cars, but nobody would argue that they should be inadequately compensated for their work because of it.

Chapter Five Notes

1) I am indebted to Ella Chmielewska for insights on this subject.

2) Most people understand a home page to be a personal page created by an individual to promote his or her interests. While this definition of a home page sometimes coincides with the one being used in this chapter, it isn’t necessarily the case. A lot of Web surfers do not have personal pages, but everybody has to have a page come up when they first log on to the Web.

3) One search engine has even forgone any pretense to neutrality. “With GoTo.com, founder Bill Gross and CEO Jeffrey Brewer claim they’ve created the ‘first-ever market-driven search directory.’ That is, those who bid the most on a given word or search term come out on top in GoTo’s search results.” ("Will GoTo go?", 1998, 80) This would seriously disadvantage small information providers who could not pay for the premium space, as well as the people who could use their information if they could find it.

APPENDIX A

Surveys

1. Fiction Writers

To Whom It May Concern,

I am a PhD student in the Communications program at McGill University. My main interest is in how artists are using emerging media, in this case, the Internet. To help me in my research, I would greatly appreciate it if you would take a few minutes to fill out the attached questionnaire and return it to me. This is a purely academic questionnaire; the results will not be used for commercial purposes. If you would like to see the results of this research, please let me know. I may be part of a project at McGill to put Doctoral dissertations on the Web, in which case I will be happy to forward you the URL. If this does not work out, I will email you the chapter which is relevant to what you are doing. ALSO: I must apologize, in advance, if any of the questions are answered on a page on your Web site; since the survey is going to a large number of people, I had to make it as broad and general as possible. If you have any questions about the questionnaire or my research in general, please feel free to send me an email message with your queries. Otherwise, I look forward to receiving your response to this questionnaire.

Ira Nayman

QUESTIONS

1) What is your writing background?

2) Has your writing been published in traditional print media (books, magazines, journals, etc.)? 2a) If so, where?

3) Where did you get the idea to publish your writing on the WWW? (Ie: friends, saw other Web pages with writing on them, book or magazine, other) 3a) Why did you decide to publish your writing on the WWW? 3b) In your experience, what, if any, are the advantages of the WWW over traditional print media? 3c) What, if any, are the disadvantages?

4) What sort of feedback has your fiction gotten? 4a) What is your sense of the people who read your page? 4b) If possible, could you supply me with a (small, please) sampling of the responses to your fiction on the Web? Literature at Lightspeed – page 466

5) Where do you access the Internet from? (Ie: home, school, work, some combination of the three, other) 5a) How else do you use your computer? (Ie: word processing, game playing, other)

6) If I feel the need to clarify anything you have said, would you be willing to answer one or two follow-up questions?

2. Zine writers

To Whom It May Concern,

I am a PhD student in the Communications program at McGill University. My main interest is in how artists are using emerging media, in this case, the Internet. To help me in my research, I would greatly appreciate it if you would take a few minutes to fill out the attached questionnaire and return it to me. This is a purely academic questionnaire; the results will not be used for commercial purposes. If you would like to see the results of this research, please let me know. I may be part of a project at McGill to put Doctoral dissertations on the Web, in which case I will be happy to forward you the URL. If this does not work out, I will email you the chapter which is relevant to what you are doing. ALSO: I must apologize, in advance, if any of the questions are answered on a page on your Web site; since the survey is going to a large number of people, I had to make it as broad and general as possible. If you have any questions about the questionnaire or my research in general, please feel free to send me an email message with your queries. Otherwise, please respond within the next two weeks (that is, by [DATE]). I look forward to hearing what you have to say.

Ira Nayman

QUESTIONS

1) What is your writing background?

2) Has your writing been published in traditional print media (books, magazines, journals, etc.)? 2a) If so, where?

3) Where did you get the idea to publish your writing on the WWW? (Ie: friends, saw other Web pages with writing on them, book or magazine, other) 3a) Why did you decide to publish your writing on the WWW? 3b) In your experience, what, if any, are the advantages of the WWW over traditional print media? 3c) What, if any, are the disadvantages? Literature at Lightspeed – page 467

4) How did you find out about the e-zine? 4a) Have you published fiction on your own Web page as well as in an ezine? 4b) If so, what are the advantages of publishing in both places? 4c) What are the disadvantages? 4d) If not, what are the advantages of not publishing on your own page? 4e) What are the disadvantages? 4f) Has your fiction been published in more than one ezine? 4g) If so, could you briefly outline where and when?

5) What sort of feedback has your fiction gotten? 5a) What is your sense of the people who read your page?

6) Where do you access the Internet from? (Ie: home, school, work, some combination of the three, other) 6a) How else do you use your computer? (Ie: word processing, game playing, other)

7) If I feel the need to clarify anything you have said, would you be willing to answer one or two follow-up questions?

3. Zine editors

To Whom It May Concern,

I am a PhD student in the Communications program at McGill University. My main interest is in how artists are using emerging media, in this case, the Internet. To help me in my dissertation research, I would greatly appreciate it if you would take a few minutes to fill out the attached questionnaire and return it to me. This is a purely academic questionnaire; the results will not be used for commercial purposes. If you would like to see the results of this research, please let me know. I may be part of a project at McGill to put Doctoral dissertations on the Web, in which case I will be happy to forward you the URL. If this does not work out, I will email you the chapter which is relevant to what you are doing. ALSO: I must apologize, in advance, if any of the questions are answered on a page on your Web site; since the survey is going to a large number of people, I had to make it as broad and general as possible. If you have any questions about the questionnaire or my research in general, please feel free to send me an email message with your queries. Otherwise, please respond within the next two weeks (that is, by [DATE]). I look forward to hearing what you have to say.

Ira Nayman

QUESTIONS

1) What is your publishing background?

2) Have you worked in traditional print media (books, magazines, journals, etc.)? 2a) If so, where, and in what capacity?

3) How long has your electronic publication been in existence? 3a) Could you give a brief history of your publication (how it started, how it has developed, grown, etc.)? 3b) How often do you publish? (Is time even a factor any more, when you can add pieces of writing at any time?) 3c) How have you tried to publicize the publication?

4) Do you pay writers? 4a) If so, how much? 4b) If not, do you plan to? 4c) Do you compensate your writers in any other way?

5) What is your submission policy? 5a) Specifically, what sort of material are you looking for? 5b) In what format must it be submitted? (Ie: do you accept paper submissions?)

6) What criteria do you use to decide what fiction to accept for publication? 6a) How many people typically edit a story before it is published? 6b) How much of your work with others at the ezine is conducted online?

7) Why did you decide to publish your electronic magazine on the WWW? 7a) In your experience, what, if any, are the advantages of the WWW over traditional print media? 7b) What, if any, are the disadvantages? 7c) Are there any other differences worth mentioning?

8) What sort of feedback has your publication gotten? 8a) What is your sense of the people who read your publication? 8b) If possible, could you supply me with a (small, please) sampling of the responses to your fiction on the Web?

9) Are you planning on making money from your electronic magazine? 9a) If so, how (subscriptions, advertising, other)?

10) If I feel the need to clarify anything you say, would you be willing to answer one or two follow-up questions?

4. Hypertext writers

To Whom It May Concern,

I am a PhD student in the Communications program at McGill University. My main interest is in how artists are using emerging media, in this case, the Internet. To help me in my research, I would greatly appreciate it if you would take a few moments to fill out the attached questionnaire and return it to me. This is a purely academic questionnaire; the results will not be used for commercial purposes. If you would like to see the results of this research, please let me know. I may be part of a project at McGill to put Doctoral dissertations on the Web, in which case I will be happy to forward you the URL. If this does not work out, I will email you the chapter which is relevant to what you are doing. ALSO: I must apologize, in advance, if any of the questions are answered on a page on your Web site; since the survey is going to a large number of people, I had to make it as broad and general as possible. If you have any questions about the questionnaire or my research in general, please feel free to send me an email message with your queries. Otherwise, I look forward to receiving your response to my questionnaire.

Ira Nayman

QUESTIONS

1) What is your writing background?

2) Have you written traditional prose fiction? 2a) Has your writing been published in traditional print media (books, magazines, journals, etc.)? 2b) If so, where?

3) Where did you get the idea to publish your writing on the WWW? (Ie: friends, saw other Web pages with writing on them, book or magazine, course, other) 3a) Why did you decide to publish your writing on the WWW? 3b) In your experience, what, if any, are the advantages of the WWW over other media (ie: CD-ROM)? 3c) What, if any, are the disadvantages?

4) Why did you decide to write in hypertext? 4a) In your experience, what, if any, are the advantages of hypertext over traditional prose? 4b) What, if any, are the disadvantages? 4c) Do you also write linear fiction? 4d) If so, what, in your experience, are the main differences between the two?

5) Do you start with the structure of your work, or start writing chunks and then develop the structure? 5a) How do you decide which chunks to hyperlink? 5b) How do you decide when a work is complete? 5c) What other considerations guide you in your creative process?

6) Have you used a hypertext authoring tool other than HTML (Ie: Hypercard or Storyspace?) 6a) If so, how do you feel HTML compares to the other system? 6b) What do you find works well about HTML? 6c) What aspects of HTML do you find don't work well?

7) As a hypertext author, do you feel you lose some control over your work? 7a) If so, does this affect how you go about creating hypertext stories?

8) What sort of feedback has your fiction gotten? 8a) What is your sense of the people who read your work? 8b) If possible, could you supply me with a (small, please) sampling of the responses to your fiction on the Web?

9) Where do you access the Internet from? (Ie: home, school, work, some combination of the three, other) 9a) How else do you use your computer? (Ie: word processing, game playing, other)

10) Can you recommend any other WWW sites which contain hypertext fiction?

11) If I feel the need to clarify anything you have said, would you be willing to answer one or two follow-up questions?

5. Collaborative Fiction Writers

To Whom It May Concern,

I am a PhD student in the Communications program at McGill University. My main interest is in how artists are using emerging media, in this case, the Internet. To help me in my research, I would greatly appreciate it if you would take a few minutes to fill out the attached questionnaire and return it to me. This is a purely academic questionnaire; the results will not be used for commercial purposes. If you would like to see the results of this research, please let me know. I may be part of a project at McGill to put Doctoral dissertations on the Web, in which case I will be happy to forward you the URL. If this does not work out, I will email you the chapter which is relevant to what you are doing. ALSO: I must apologize, in advance, if any of the questions are answered on a page on your Web site; since the survey is going to a large number of people, I had to make it as broad and general as possible. If you have any questions about the questionnaire or my research in general, please feel free to send me an email message with your queries. Otherwise, please respond within the next two weeks (that is, by [DATE]). I look forward to hearing what you have to say.

Ira Nayman

QUESTIONS

1) What is your writing background?

2) Have you written traditional prose fiction? 2a) Has your writing been published in traditional print media (books, magazines, journals, etc.)? 2b) If so, where?

3) Where did you get the idea to publish your writing on the WWW? (Ie: friends, saw other Web pages with writing on them, book or magazine, course, other) 3a) Why did you decide to put your writing on the WWW? 3b) In your experience, what, if any, are the advantages of the WWW over other media (ie: CD-ROM)? 3c) What, if any, are the disadvantages?

4) Why did you decide to add to a collaborative work? 4a) In your experience, what, if any, are the advantages of collaborative fiction over traditional prose? 4b) What, if any, are the disadvantages? 4c) Do you also write traditional fiction? 4d) If so, what, in your experience, are the main differences between the two?

5) As a collaborative author, do you feel you lose some control over your work? 5a) If so, does this affect how you write?

6) Where do you access the Internet from? (Ie: home, school, work, some combination of the three, other) 6a) How else do you use your computer? (Ie: word processing, game playing, other)

7) If I feel the need to clarify anything you have said, would you be willing to answer one or two follow-up questions?

Sources Cited

“ABC signs Drudge.” The Toronto Star (7 July 1999).

Abdulrazzak, Hassan [[email protected]] “Q & A.” Personal email to Ira Nayman [[email protected]]. 15 August 1998.

“About DargonZine.” [http://www1.shore.net/~dargon/about02.shtml]. June 1998.

Abramson, Ruth. “The Uncooling of Brands.” Adbusters (V6 N4, Winter 1999).

Abraham, Jeff and James Lichtenberg. “Publishers and the new media: All dressed up...but still waiting to go.” Publishers Weekly (V246 I16, 19 April 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Adams, Jill [[email protected]]. “Editor’s Page” [http://www.web-show.com/barcelona/review/eng/ed.htm]. Barcelona Review.

Adams, Linda [[email protected]]. “Re: Linda Adams’ Home Page (r2b11).” Personal email to Ira Nayman [[email protected]]. 5 September 1998.

Adams, Mike [[email protected]]. “A Daughter’s Duty.” [http://www.dargonzine.org/dz115s2.htm]. DargonZine [http://www.dargonzine.org/]. V11 N5, 27 June 1998.

“Advice to Emily Dickinson: Speak Up!” New York Times Magazine (7 July 1996). Quoted in Educom Review (V31 N5, September/October 1996).

Agre, Philip E. “Designing Genres for New Media: Social, Economic, and Political Contexts,” Cybersociety 2.0: Revisiting Computer-Mediated Communication and Community. Steven G. Jones, ed. Thousand Oaks, California: SAGE Publications, 1998.

Allan, Annemarie [[email protected]]. “questionnaire.” Personal email to Ira Nayman [[email protected]]. 24 August 1998.

Alt, Frances Fasano [[email protected]] “Fw: Panda (r2b8).” Personal email to Ira Nayman [[email protected]]. 17 August 1998.

Alvarez, Aldo [[email protected]]. “Guidelines” [http://www.blithe.com/bhq1.1/guidelines1.1.html]. Blithe House Quarterly.

“Amazon buys into rare books, music.” Publishers Weekly (V246 I18, 3 May 1999). Proquest Database [http://proquest.umi.com/pdqweb].

American Civil Liberties Union. “STOP STATE LEGISLATORS FROM CENSORING ONLINE CONTENT!” [http://www.eff.org/pub/Censorship/Exon_bill/Foreign_and_local/state_censorship_aclu.article].

“Analysts Foresee ‘Portal Melee.’” Educom Review (V33 N6, November/December 1998). Reprinted from Business Week (7 September 1998).

“And viewers fight back against Web ad overload.” Los Angeles Times (2 March 1999). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 3 March 1999.

Annechino, Rachelle [[email protected]]. “Take the A Train” [http://www.12gauge.com/issue7/rachelle.html]. The 12 Gauge Review [http://www.12gauge.com/frames/index.html].

Anuff, Joey. “The Web Isn’t a Sitcom...It’s a comedy club.” Wired (V6 N1, January 1998).

Anwar, Shan (Siddhartha) [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 29 June 1998a. ______. “Water Buffaloes” [http://www.explode.com/ink/water.shtml]. Blast [http://www.explode.com/ink/index.shtml]. 1998b.

Anya [[email protected]]. “RE: Black Wind (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Archer, Bert. “King, Grisham experiments are hits.” Globe and Mail. 25 March 2000.

Ardito, Stephanie C. “The alternative press: Newsweeklies and zines.” Database (V22 I3, June/July 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Aristotle. Poetics. Translated by Richard Janko. Indianapolis, Indiana: Hackett Publishing, 1987.

Associated Press. “Porn sites remain despite Communications Decency Act.” 1995. [http://www.nando.net/newsroom/nt/209pornsites.html] ______. “Telecom bill provision spurs cyberspace protest.” 1996. [http://www.nando.net/newsroom/nt/208cyprotest.html]. ______. “E-Books to Come Singing Down the Wire.” 23 October 1998. Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 25 October 1998. ______. “Knight-Ridder plans Net portals.” Toronto Star (24 June 1999).

“AT&T Web Site Services advertisement.” Wired (V5 N9, September 1997).

Atkinson, Sarah [[email protected]]. “Re: Burning Love (r2b7).” Personal email to Ira Nayman [[email protected]]. 9 August 1998.

Avgerakis, George and Becky Waring. “Industrial Strength Streaming Video.” NewMedia (V7 N12, 22 September 1997).

Aviott, John [[email protected]]. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 6 August 1998.

Babbie, Earl. “We Am a Virtual Community.” The American Sociologist (V27 N1, Spring 1996).

Babe, Robert E. “Convergence and the New Technologies.” The Cultural Industries in Canada: Problems, Policies and Prospects. Michael Dorland, ed. Toronto: James Lorimer & Company, 1996.

Back, Joann [[email protected]]. “RE: Research Survey.” Personal email to Ira Nayman [[email protected]]. 30 June 1998.

Baldwin, Thomas F., D. Stevens McVoy and Charles Steinfield. Convergence: Integrating Media, Information & Communication. Thousand Oaks, California: Sage Publications, 1996.

Balfour, Gail. “Electronic buyers still wary.” ComputerWorld-Canada (V4 N13, 3 July 1998).

Bamberger, John F. [[email protected]]. “Re: Reserach (r2b3).” Personal email to Ira Nayman [[email protected]]. 24 June 1998.

Bancroft, John [[email protected]]. “RE: TW3: The Whole Wired Word (r2b9) ANSWER.” Personal email to Ira Nayman [[email protected]]. 23 August 1998.

Bandy, Bo [[email protected]]. “Re: Exercise in Futility (r2b7).” Personal email to Ira Nayman [[email protected]]. 8 August 1998.

Bank, David. “Huge cash hoard gives Microsoft clout.” The Globe and Mail (19 July 1999).

Barber, Dean [[email protected]]. “RE: When It Rains, It Pours (r2b6).” Personal email to Ira Nayman [[email protected]]. 1 August 1998.

Bardelli, Carol [[email protected]]. “Fw: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 25 July 1998.

Barger, Jorn. “HyperTerrorist’s Timeline of Hypertext History” [http://www.mcs.net/~jorn/html/net/timeline.html]. 4 March 1996.

Barkow, Tim, ed. “The Value of Privatization.” Wired (V5 N6, June 1997).

Barlow, John Perry. “Selling Wine Without Bottles.” Clicking In: Hot Links to a Digital Culture. Leeson, ed. Seattle: Bay Press, 1996.

Barthes, Roland. “The Death of the Author.” Image, Music, Text. Stephen Heath, ed. and trans. New York: Hill, 1977.

Baumander, Tabitha [[email protected]]. “Dear Katie” [http://members.xoom.com/_XMCM/Malexis/writing/prose/storiesbytabitha/tabitha-dearkatie.htm]. Raven Wolf. ______. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Bayers, Chip. “The Great Web Wipeout.” Wired (V4 N4, April 1996).

Baym, Nancy K. “Interpreting Soap Operas and Creating Community: Inside A Computer-Mediated Fan Culture.” Journal of Folklore Research (V30 N2/3, 1993).

Beardsley, E. R. [[email protected]]. “Martin 29 Dash CompuG.” [http://www.intangible.org/StoriesWeb/bodywork/bodywork1.html]. Intangible [http://www.intangible.org/contents.html]. 1998a. ______. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 24 July 1998b.

Beato, G. “Paying the Piper.” Wired (V5 N11, November, 1997).

Beaudoin, Louise. “Bill 101 on the Web.” The Montreal Gazette (21 June 1997).

Beck, Emily [[email protected]]. “Questionaire from the author of Melanie Jensen’s Secrets of the Universe.” Personal email to Ira Nayman [[email protected]]. 6 September 1998.

Becker, Howard S. Tricks of the Trade. Chicago: University of Chicago Press, 1998. ______. Writing for Social Scientists. Chicago: University of Chicago Press, 1986.

Beckett, Samuel. I Can’t Go On, I’ll Go On: A Samuel Beckett Reader. Richard W. Seaver, ed. New York: Grove Press, 1976.

Behar, Michael. “The Eyeball Index.” Wired (V6 N2, February 1998).

Bennahum, David S. “The Internet Revolution.” Wired (V5 I4, April 1997).

Bennett, Jo. “Taking a print brand online: The Internet leaders.” Folio: The Magazine for Magazine Management (V28 I7, June 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Benson, David [[email protected]]. “Re: No Dead Trees (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Berkman Center for Internet & Society. “About Counter-copyrights (CC).” 1 October 1999. [http://cyber.law.harvard.edu/commons/cc.html].

Bernard, Chris [[email protected]]. “Midnight Snack” [http://www2.aphelion-webzine.com/shorts/snack.htm]. Aphelion [http://www.aphelion-webzine.com/index2.htm]. 1997.

Bernstein, Solveig. “Beyond the Communications Decency Act: Constitutional Lessons of the Internet.” Cato Policy Analysis (No. 262, 4 November 1996). [http://www.cato.org/pubs/pas/pa-262.html].

Bertin, Oliver. “EMI-Time Warner a loud merger.” Globe and Mail (24 January 2000).

Bertrand, Christian [[email protected]]. Dorom. [http://www.cyberbeach.net/~skywalkr/dorom.htm]. 1996.

Bess, Jana Rae [[email protected]]. “Re: Words Call the Muse and Yinnae of Faerfire (r2b12).” Personal email to Ira Nayman [[email protected]]. 19 September 1998.

Bey, Hakim. “Terminal Terrorism.” 21C: World of Ideas (I26).

Bianchi, Matteo B. [[email protected]]. “R: Research Survey.” Personal email message to Ira Nayman [[email protected]]. 27 June 1998.

Bijker, Wiebe E. Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change. Cambridge, Mass.: MIT Press, 1995. ______. “The Social Construction of Bakelite: Toward a Theory of Invention.” The Social Construction of Technological Systems. Wiebe E. Bijker, Thomas P. Hughes and Trevor J. Pinch, eds. Cambridge, Massachusetts: MIT Press, 1987.

Biocca, Frank and Mark R. Levy. “Virtual Reality as a Communication System.” Communication in the Age of Virtual Reality. Frank Biocca and Mark R. Levy, eds. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1995.

Birkerts, Sven. The Gutenberg Elegies: The Fate of Reading in an Electronic Age. New York: Fawcett Columbine, 1994.

Blann, Barbara [[email protected]]. “RE: Research (r2b4).” Personal email to Ira Nayman [[email protected]]. 20 July 1998.

Blenman, Keith “Chaos” [[email protected]]. “4000 A.D.” [http://ourworld.compuserve.com/homepages/hobbes1/4000.htm]. 1998.

Blum, Barbra A. [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 12 July 1998.

Boczkowski, Pablo J. “Mutual Shaping of Users and Technologies in a National Virtual Community.” Journal of Communication (Spring, 1999).

Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1991.

Bourdieu, Pierre. “The production of belief: contribution to an economy of symbolic goods.” Media, Culture and Society: A Critical Reader, Richard Collins, ed. London: SAGE Publications, 1986.

Boushka, Bill [[email protected]]. “A Personal Perspective on SELF- PUBLISHING” [http://www.hppub.com/selfpub.htm]. 14 August 1999.

Boutin, Paul. “Pushover.” Wired (V6 N3, March 1998).

Boyer, Paul S. Purity in Print. Quoted by David Dubin. “Earlier Smut Legislation -- by Senator Jim Exon’s Mentor-in-Mind.” [http://www.eff.org/pub/Censorship/Exon_bill/de_schmutz_und_schund_26_act.note].

Boyle, James. “A Theory of Law and Information: Copyright, Spleens, Blackmail, and Insider Trading.” 1992. [http://www.wcl.american.edu/pub/faculty/boyle/law&info.htm].

Brandt, Richard L. “Gary Reback.” Upside (V X N2, February 1998).

Braverman, Harry. “Technology and capitalist control.” The Social Shaping of Technology. Donald MacKenzie and Judy Wajcman, eds. Buckingham, England: Open University Press, 1985.

Breivik, Jan Petter [[email protected]]. “Re: Princess Rebel (r2b13).” Personal email to Ira Nayman [[email protected]]. 30 September 1998.

Brin, David. The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? Reading, Mass.: Perseus Books, 1998.

Brinson, J. Dianne and Mark F. Radcliffe. “An Intellectual Property Law Primer for Multimedia and Web Developers” [http://www.eff.org/pub/Intellectual_property/multimedia_ip_primer.paper]. 1991.

“Broadcasters Target the Office Worker.” Tech Web (12 November 1998). Reprinted in Edupage. John Gehl and Suzanne Douglas, eds. 15 November 1998.

Broadhurst, Lida [[email protected]]. “Reunion” [http://www2.aphelion-webzine.com/shorts/reunion.htm]. Aphelion [http://www.aphelion-webzine.com/index2.htm]. 1998.

Brooks, Jeff [[email protected]]. “The Joyce Kilmer Service Area.” [http://morpo.com/v2i2/kilmer.html]. Morpo Review [http://morpo.com/index.htm]. V2 I1, 1995.

Browning, John. “Africa 1: Hollywood 0.” Wired (V5 N3, March 1997).

Browning, John and Spencer Reiss. “Encyclopedia of the New Economy, Part II.” Wired (V6 N4, April 1998).

Brown, Josh [[email protected]]. “The Long Way Home.” [http://www.dargonzine.org/dz103s3.htm]. DargonZine [http://www.dargonzine.org/]. V10 I3, 26 April 1997.

Bronson, Po. “A Year in the Life of the Digital Gold Rush.” Wired (V7 N7, July 1999).

Bruckman, Amy S. “Gender Swapping on the Internet.” High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Peter Ludlow, ed. Cambridge, Massachusetts: The MIT Press, 1996.

Brundage [[email protected]]. “Re: Research (r2b3) Writer’s survey SAYS.” Personal email to Ira Nayman [[email protected]]. 12 July 1998.

Bruland, Tine. “Industrial conflict as a source of technical innovation: the development of the automatic spinning mule.” The Social Shaping of Technology. Donald MacKenzie and Judy Wajcman, eds. Buckingham, England: Open University Press, 1985.

Bubien, Mark Stanley [[email protected]]. “Submission Guidelines” [http://www.storybytes.com/inquiries.html]. StoryBytes. 1997.

Burch, Ben [[email protected]]. “RE: No Dead Trees (r2b13).” Personal email to Ira Nayman [[email protected]]. 26 September 1998.

Burke, Andrew [[email protected]]. “Re: your mail.” Personal email to Ira Nayman [[email protected]]. 23 October 1996.

Burne, Philippa [[email protected]]. 24 hours with someone you know... [http://www.cinemedia.net/GlassWings/modern/24hours/]. ______. “RE: ‘24 hours with someone you know...’ (r2b13).” Personal email to Ira Nayman [[email protected]]. 5 October 1998.

Burnett, Richard. “The Grey Fox.” Hour (June 18-24, 1998).

Bush, Vannevar. “As We May Think” [http://www.isg.sfu.ca/~duchier/misc/vbush]. 1945.

Business Development Bank of Canada. “Cultural Industries Development Fund” [http://www.cbsc.org/fedbis/display.cfm?DocID=DDD8C09BBE65980B8525627E004CEE50&coll=FEDERAL_BIS]. 26 August 1999.

Butler, Pierce. The Origin of Printing in Europe. Chicago: University of Chicago Press, 1940.

Calef III, Fred “J.” [[email protected]]. “Questionnaire Responses.” Personal email message to Ira Nayman [[email protected]]. 17 October 1996.

Callon, Michel. “Society in the Making: The Study of Technology as a Tool for Sociological Analysis.” The Social Construction of Technological Systems. Wiebe E. Bijker, Thomas P. Hughes and Trevor J. Pinch, eds. Cambridge, Mass.: MIT Press, 1987.

Campbell, Andy J. [[email protected]]. “Re: The Regression (r2b9).” Personal email to Ira Nayman [[email protected]]. 25 August 1998.

Canada Council. “Endowments and Prizes Program” [http://www.canadacouncil.ca/prizes/prbr01-e.asp]. 1999a. ______. “Grants for First Productions in Media Arts” [http://www.canadacouncil.ca/archival/program/media/mash02-e.htm]. June 1999b. ______. “Grants to Artists in Media Arts” [http://www.canadacouncil.ca/archival/program/media/mash01-e.htm]. April 1999c. ______. “Media Arts Presentation, Distribution and Development Program; Annual Assistance to Distribution Organizations” [http://www.canadacouncil.ca/archival/program/media/mash09-e.htm]. June 1999d.

Canada Economic Development for Quebec Regions. “Multimedia Experimentation Fund” [http://www.cbsc.org/fedbis/display.cfm?DocID=420B43B29D95A9E85256601004F37B2&coll=FEDERAL_BIS]. 1 April 1999.

Canadian Intellectual Property Office. “What is copyright?” [http://strategis.ic.gc.ca/sc_mrksv/cipo/help/faq_cp-e.html]. 1998.

Carlson, Randall L. The Information Superhighway: Strategic Alliances in Telecommunications and Multimedia. New York: St. Martin’s Press, 1996.

Carr, Jim. “Get Me the Money.” NewMedia (V8 N1, 13 January 1998).

Carroll, Fraser [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 30 June 1998.

Carvajal, Doreen. “Book Publishers Seek Global Reach and Grand Scale.” The New York Times [http://www.nytimes.com/yr/mo/day/news/financial/bookpublish-global.html].

Carvalho, Jim. “Welcome” [http://www.borderbeat.com/welcome.htm]. Border Beat - the Border Arts Journal.

Case, Natalie [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 29 June 1998.

Casey, Allan and Bruce Grierson. “Calvin Klein: Love Poems.” Adbusters (N26, July/August 1999).

The Censorware Project. “Size of the web: A dynamic essay for a dynamic medium” [http://censorware.org/web_size/]. 2000.

Center for Democracy and Technology. “CDT Testifies at Senate Judiciary Subcommittee Hearing On the Availability of Bomb-Making Materials on the Internet” [http://www.eff.org/pub/Censorship/Exon_bill/s735_95_cdt.alert]. CDT Policy Post (N13, 12 May 1995).

Cerf, Vint. “The Internet is for Everyone” [http://www.isoc.org/isoc/media/speeches/foreveryone.shtml]. 7 April 1999.

Cetron, Marvin. “Get Ready for a Digitized Future: Smart Toasters, Media Butlers, and More.” The Futurist (V31 N4, July-August 1997).

Chaddock, Gail Russell. “When Is Art Free?” Christian Science Monitor [http://www.csmonitor.com/durable/1998/06/11/fp51s1-csm.htm]. 11 June 1998.

Chartier, Andre. “Gold is the details: Micropayment’s big future.” Computing Canada (V25 I7, 19 February 1999).

Cheal, David. The Gift Economy. London: Routledge, 1988.

Chong, Grace [[email protected]]. Love Troubles [http://www.geocities.com/SouthBeach/Docks/6033/love.htm].

Chuck, Lysbeth B. “On being ‘consumer-ed’: Marketing the user.” Searcher (V7 N5, May 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Church, Kim [[email protected]]. “Museum of Hands” [http://sushi.st.usm.edu/mrw/mr/1998/kimchurch.html]. Mississippi Review [http://sushi.st.usm.edu/mrw/]. September 1998.

Claburn, Tom. “Go With the Flow.” Wired (V5 N5, May 1997).

Clark, Joe. “Magic weaves its on-line spell.” The Globe and Mail (15 November 1994).

Clay, Cynthia [[email protected]]. “Re: Looking Glass Presents a Sci-Fi Novel (r2b11).” Personal email to Ira Nayman [[email protected]]. 8 September 1998.

Cleaver, Cathleen A. “Why we need the Communications Decency Act in cyberspace” [http://www.nando.net/newsroom/nt/209for.html]. 1995.

Cochrane, Hank [[email protected]]. “Re: Gerald and Misty (r2b6).” Personal email to Ira Nayman [[email protected]]. 1 August 1998.

Cockburn, Cynthia. “Caught in the wheels: the high cost of being a female cog in the male machinery of engineering.” The Social Shaping of Technology. Donald MacKenzie and Judy Wajcman, eds. Buckingham, England: Open University Press, 1985.

Coffman, Steve. “Building Earth’s largest library: Driving into the future.” Searcher (V7 I3, March 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Coleman, Susanna [[email protected]]. Confessions of a Reluctant Hacker [http://members.xoom.com/IVGypsy/]. Invisigoth Gypsy’s Writing Page. 1999/2000.

“Committee adopts standard for counting Web ad viewers...” Wall Street Journal (2 March 1999). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 3 March 1999.

“Communication Decency Act” [http://www.duke.edu/~mag1/procon.cda.html]. 29 April 1997.

“Computer pollution.” Globe and Mail (1 March 2000).

Contenta, Sandro. “Quebec’s language police prowl Web.” Toronto Star (18 June 1997).

Coomber, R. “Using the Internet for Survey Research.” Sociological Research Online [http://www.socresonline.org.uk/socresonline/2/2/2.html]. V2 N2, 1997.

Corcoran, Cate C. “From DC to Cyberspace: Net censorship legislation creeps from the House toward your house.” HotWired. [http://hotwired.lycos.com/special/indecent/dcpc.html].

Cornell, Christopher B. [[email protected]]. “RE: Rustlings of the Wind (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Costigan, James T. “Introduction.” Doing Internet Research: Critical Issues and Methods for Examining the Net. Steve Jones, ed. Thousand Oaks, California: Sage, 1999.

Cotterill, Mark E. [[email protected]]. “Re: Research (r2b4).” Personal email to Ira Nayman [[email protected]]. 25 July 1998.

Coursey, David. “The Secret Settop.” Upside (V IX N9, October 1997).

Craig, Susanne. “AOL deal just the beginning: Blockbuster deal expected to spark other takeovers.” Globe and Mail (11 January 2000).

Crawford, Tad. “Writing, Publishing, and the Internet.” Publishing for Entrepreneurs (V4 I1, February/March 1998).

Cribb, Robert. “The dawn of do-it-all media.” Toronto Star (11 January 2000).

Crisp, Tom [[email protected]]. “Nor A Lender Be” [http://www.blithe.com/bhq2.2/noralend.html]. Blithe House Quarterly [http://www.blithe.com/]. V2 N2, Spring 1998a. ______. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 27 June 1998b.

Croe, Michaela [[email protected]]. “Tracks” [http://twilight.fortysecond.net/T_W_4_3.TXT]. Twilight World (V4 I3, 18 May 1996).

Cross, Carol A. [[email protected]]. “Re: RE Jazz Dog and Star Gazer (r2b9).” Personal email to Ira Nayman [[email protected]]. 22 August 1998.

CRTC. “Broadcasting Public Notice CRTC 1999-84” [http://www.crtc.gc.ca/ENG/TELECOM/NOTICE/1999/P9914_0.txt]. 1999.

“CRTC’s Internet decision: dumb or dumber?” eye (3 June 1999).

Crumlish, Christian [[email protected]]. “RE: Enterzone (r2b13).” Personal email to Ira Nayman [[email protected]]. 30 September 1998.

Cumyn, Richard James [[email protected]]. “The Effort.” [http://www.etext.org/Zines/InterText/v4n6/effort.html]. Intertext [http://www.intertext.com/]. V4 N6, 1994. ______. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 27 July 1998.

Currier, Jameson [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 6 July 1998.

Curtis, Pavel. “MUDding: Social Phenomena in Text-based Virtual Realities.” High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Peter Ludlow, ed. Cambridge, Massachusetts: The MIT Press, 1996.

Curwen, Peter J. “Year in Review 1997: Book Publishing” [http://search.eb.com/bol/topic?eu=124401&sctn=1]. Encyclopedia Britannica Online. Accessed 7 July 1999.

Cury, James Oliver. “Digital Directors: Filmmakers Focus on the Net: Allison Anders.” The Web Magazine (V1 N12, December 1997a).

______. “I’ll Take Game Shows for $500: Will online game shows be a Web success story?” The Web Magazine (V1 N5, May 1997b).

Curzon, Daniel [[email protected]]. “The Monster in the Wood” [http://www.blithe.com/bhq1.1/inthewood.html]. Blithe House Quarterly [http://www.blithe.com/]. V1 N1, Summer 1997.

______. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 1 July 1998.

Dafoe, Chris. “Duthie reflects as book store business falters.” Globe and Mail (3 June 1999).

Dangerously Psycho [[email protected]]. The Strange Society Homepage [http://members.tripod.com/~DPsycho/ST.htm]. 24 March 2000.

Danko, Pete. “Yahoo for Ad Revenue.” Wired (V6 N6, June 1998).

“DargonZine Writers’ FAQ” [http://www1.shore.net/~dargon/writers.htm]. 11 April 1998.

Darkshine [[email protected]]. “RE: Writer’s Corner (r2b9).” Personal email to Ira Nayman [[email protected]]. 22 August 1998.

Darnell, Keith [[email protected]]. “RE: My Aspirations (r2b11).” Personal email to Ira Nayman [[email protected]]. 17 September 1998.

Dave [[email protected]]. “RE: The Inflated Graveworm (r2b6).” Personal email to Ira Nayman [[email protected]]. 8 August 1998.

Davidow, William H. and Michael S. Malone. The Virtual Corporation. New York: HarperBusiness, 1992.

Davis, Bill [[email protected]]. “The Old Man” [http://www.artandlove.org/pr/oldman1.htm]. Art and Love on the Net [http://www.artandlove.org/noframes.htm]. 1998a.

______. “RE: Research Survey.” Personal email to Ira Nayman [[email protected]]. 7 July 1998b.

December, John. “Units of Analysis for Internet Communication.” Journal of Communication (V46 N1, Winter 1996).

Deck, Stewart. “Amazon hints at data sales.” Computerworld (V33 I18, 3 May 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Deemer, Charles [[email protected]]. “RE: What Do Men Want? (r2b13).” Personal email to Ira Nayman [[email protected]]. 26 September 1998.

de Kerckhove, Derrick. Connected Intelligence: The Arrival of the Web Society. Toronto: Somerville House Publishing, 1997.

De Lancey, Craig [[email protected]]. “Re: time dilation (r2b6).” Personal email to Ira Nayman [[email protected]]. 3 August 1998.

______. “Time dilation” [http://sushi.st.usm.edu/mrw/mr/1997/delancy.html]. Mississippi Review [http://sushi.st.usm.edu/mrw/]. October 1997.

de Leeuw, Edith and William Nicholls II. “Technological Innovations in Data Collection: Acceptance, Data Quality and Costs.” Sociological Research Online [http://www.socresonline.org.uk/socresonline/1/4/leeuw.html]. V1 N4, 1996.

Dermansky, Marcy [[email protected]]. “Drop It.” [http://sushi.st.usm.edu/mrw/mr/1998/dermansky-drop.html/]. Mississippi Review [http://sushi.st.usm.edu/mrw/]. 1998.

Dessart, James [[email protected]]. An Interactive Cyberpunk Tale. My Written Works [http://www.cam.org/~dessart/james/HTMLpapr/].

______. “RE: My Written Works (r2b13).” Personal email to Ira Nayman [[email protected]]. 29 September 1998.

Dewdney, Christopher. Last Flesh: Life in the Transhuman Era. Toronto: HarperCollins, 1998.

Dibbell, Julian. “A Rape in Cyberspace; or How an Evil Clown, a Haitian Trickster Spirit, Two Wizards, and a Cast of Dozens Turned a Database into a Society.” High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Peter Ludlow, ed. Cambridge, Massachusetts: The MIT Press, 1996.

Dickerson, David Ellis [[email protected]]. “Crash” [http://morpo.com/v4i3/crash.htm]. Morpo Review [http://morpo.com/index.htm].

Diekmeyer, Peter. “Brand names have special clout.” Montreal Gazette (28 October 1998).

Douit, Willie [[email protected]]. “RE: Four Corners of the Wind (r2b13).” Personal email to Ira Nayman [[email protected]]. 3 October 1998.

Doucette, Anne [[email protected]]. “Re: The Literature Page (r2b11).” Personal email to Ira Nayman [[email protected]]. 4 September 1998.

Doyle, Bob. “Pipe Dream or Reality?” NewMedia (V7 N12, 22 September 1997).

Drew, Jesse. “Media Activism and Radical Democracy.” Resisting the Virtual Life: The Culture and Politics of Information. James Brook and Iain A. Boal, eds. San Francisco: City Lights, 1995.

Duffey, Suzan [[email protected]]. “Re: The Therapist (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Duncombe, Stephen. Notes from the Underground: Zines and the Politics of Alternative Culture. London: Verso, 1997.

Dyderski, Richard [[email protected]]. “D User” [http://www2.aphelion-webzine.com/shorts/d_user.htm]. Aphelion [http://www.aphelion-webzine.com/index2.htm]. 1998.

Dyson, Esther. “Intellectual Property on the Net” [http://www.eff.org/pub/Intellectual_property/ip_on_the_net.html].

______. “Intellectual Value.” Wired (V3 N7, July 1995).

______. Release 2.1. New York: Broadway Books, 1998.

Eberhard, Martin. “E-book economics.” Publishers Weekly (V246 I10, 8 March 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Ebner, Dave. “Little guys defy on-line giants.” Globe and Mail (19 June 1999).

“E-commerce Problems.” Ottawa Citizen. 4 Feb, 1998. Quoted in EduPage. John Gehl and Suzanne Douglas, eds. 5 February 1998.

[[email protected]]. “Who We Are” [http://www.demon.co.uk/review/whoweare.html]. Richmond Review.

Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, Mass.: MIT Press, 1997.

Effron, Eric. “journalism.commerce.” Brill’s Content (V2 N6, August 1999).

Eisenbeis, Hans. “Web publishers packing it in.” NOW (April 30-May 6 1998).

Eisen, Adrienne [[email protected]]. “Your Questions.” Personal email to Ira Nayman [[email protected]]. 9 October 1998.

Eisenberg, Rebecca L. “No Time Like the Real (TV) Time.” Upside (V IX, N9, October 1997).

Electronic Frontier Foundation (a). “Constitutional Problems with the Communications Decency Amendment” [http://www.realaudio.com/contentp/rabest/eff.html].

______ (b). “Federal Court Rules Communications Decency Act Unconstitutional” [http://www.eff.org/pub/Censorship/Exon_bill/960612_eff_cda_decision.statement].

Elkins, David J. “Globalization, Telecommunication, and Virtual Ethnic Communities.” International Political Science Review (V18 N2, 1997).

England, Paul [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 12 July 1998.

______. “The Punishment” [http://www.io.com/~crberry/DuctTape/Archive/06_fic_england.html]. Duct Tape Press [http://www.io.com/~crberry/DuctTape/].

Elliott, David and Ruth Elliott. “Social control of technology.” The Politics of Technology, Godfrey Boyle, David Elliott and Robin Roy, eds. New York: Longman, 1977.

“English-only Approved for Georgia Tech Web Site in France.” Chronicle of Higher Education. May, 1998. Quoted in EduPage. John Gehl and Suzanne Douglas, eds. 10 May 1998.

Environics Research Group. Who Buys Books? Toronto: Canadian Book Publisher’s Council, et al, 1995.

Epstein, Jason. “The Rattle of Pebbles.” New York Review of Books. 27 April 2000. [http://www.nybooks.com/nyrev/WWWarchdisplay.cgi?20000427055F]

Erdedy, Michael James [[email protected]]. “Daily” [http://www.eclectica.org/v1n11/erdedy.html]. Eclectica Magazine.

Erikson, Kai. “On Sociological Prose.” The Rhetoric of Social Research: Understood and Believed. Albert Hunter, ed. New Brunswick: Rutgers University Press, 1990.

Eshoo, Anna. “Nanny on the Net.” 5 January 1996. [http://www.eff.org/pub/Censorship/Exon_bill/eshoo_010596_cda.article].

European Union Action. “Illegal and harmful content on the Internet” [http://www2.echo.lu/legal/en/internet/communic.html]. 5 December 1999.

Evans, Diana [[email protected]]. “RE: Scarlet (r2b9).” Personal email to Ira Nayman [[email protected]]. 24 August 1998.

Evans, Mark. “EMI deal would add music to the mix.” Globe and Mail (24 January 2000a). ______. “Merger mates have eyes on each other’s assets.” Globe and Mail (11 January 2000b).

Evans, Michael S. and Rebecca C. Stone. “Communications Decency Act Abridges Constitutional Freedoms: A Recommendation for a Free Internet.” 20 August 1995. [http://www.gsu.edu/~lawppw/lawand.papers/cda.html].

Everard, J. L. Virtual States: The Internet and the Boundaries of the Nation-state. London: Routledge, 1999.

Exon, James. Letter to the editor. Washington Post. [http://www.eff.org/pub/Censorship/Exon_bill/s314_hr1004_95_exon_post.letter].

Failing IV, Chuck [[email protected]]. “RE: Thought Flow (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Farber, Philip H. [[email protected]]. “Re: Paradigm Shift! (r2b7).” Personal email to Ira Nayman [[email protected]]. 9 August 1998.

Farmer, F. Randall and Chip Morningstar. “The Lessons of Lucasfilm’s Habitat.” Cyberspace: First Steps. Michael Benedikt, ed. Cambridge, Massachusetts: The MIT Press, 1991.

Farmer, Shannon [[email protected]]. Wings of Destiny [http://members.aol.com/shanx1/wings.htm]. Realms of Imagination [http://members.aol.com/shanx1/]. 23 December 1998.

Fernback, Jan. “The Individual within the Collective: Virtual Ideology and the Realization of Collective Principles.” Virtual Culture: Identity & Community in Cybersociety. London: Sage Publications, 1997.

Filion, Sydney [[email protected]]. “RE: It Began In The Attic (r2b13).” Personal email to Ira Nayman [[email protected]]. 1 October 1998.

Fischer, Claude S. “Entering Sociology into Public Discourse.” The Rhetoric of Social Research: Understood and Believed. Albert Hunter, ed. New Brunswick: Rutgers University Press, 1990.

Fiske, John. “Communication Theory.” Introduction to Communication Studies. London: Routledge, 1982.

Flanders, Julia. “The Body Encoded: Questions of Gender and the Electronic Text.” Electronic Text: Investigations in Method and Theory. Kathryn Sutherland, ed. Oxford, England: Clarendon Press, 1997.

Flood, Joseph W. [[email protected]]. “Eire” [http://www.etext.org/Zines/InterText/v6n3/eire.html]. Intertext [http://www.intertext.com/]. V6 N3, 1996.

“Focus – Sex in Cyberspace” [http://www.eff.org/pub/Censorship/Exon_bill/berman_v_exon_062295_newshour. transcript]. The MacNeil/Lehrer NewsHour. 22 June 1995.

Fogel, Melanie [[email protected]]. “Re: No Loose Ends (r2b10).” Personal email to Ira Nayman [[email protected]]. 31 August 1998.

Foucault, Michel. “What Is an Author?” Textual Strategies: Perspectives in Post-structuralist Criticism. Josue V. Harari, ed. Ithaca, New York: Cornell University Press, 1979.

Franklin, Ursula. The Real World of Technology. Toronto: House of Anansi Press, 1990.

Frank, Steven J. [[email protected]]. “The Gelato Affair” [http://sushi.st.usm.edu/mrw/mr/1997/gelato.html]. Mississippi Review [http://sushi.st.usm.edu/mrw/]. 1997.

______. “RE: The Gelato Affair (r2b6).” Personal email to Ira Nayman [[email protected]]. 6 August 1998.

Fraser, B. E. [[email protected]]. “Madeline Deerstalker” [http://www.angelfire.com/on/frasernotes/deerstalker1.html]. Fraser’s Notes [http://www.angelfire.com/on/frasernotes/].

“Freelancers Lose to Publishers Over Electronic Reproduction.” New York Times. August 14 1997. Quoted in EduPage. John Gehl and Suzanne Douglas, eds. 14 August 1997.

Freeman, Isaac. Letter to the editor. Wired (V5 N6, June 1997).

Free Software Foundation. GNU General Public License Version 2. 16 February 1998. [http://www.fsf.org/copyleft/gpl.html].

Fremont, Kira [[email protected]]. “RE: Sound of Fury (r2b9).” Personal email to Ira Nayman [[email protected]]. 25 August 1998.

Friedman, Matthew. Fuzzy Logic: Dispatches from the Information Revolution. Montreal: Vehicule Press, 1997.

Friesen, Dwight [[email protected]]. “The Inheritance.” [http://www.interlog.com/~dtv/Inheritance/elijah1.html]. DTV [http://www.interlog.com/~dtv/splash.html].

Freund, Jesse. “Networking Networks: Anti-atavistic advertising.” Wired (V5 N2, February 1997).

Fritz, Mark [[email protected]]. “The Automatic Door Swings Both Ways” [http://members.aol.com/makfritz/ficdoor.htm]. Funny Fiction [http://members.aol.com/makfritz/fiction1.htm]. 18 June 1997.

Fuller, Steve. “Why even scholars don’t get a free lunch in cyberspace.” Cyberspace Divide: Equality, Agency and Policy in the Information Society. Brian D. Loader, ed. London: Routledge, 1998.

Gallo, Ana Maria [[email protected]]. “Re: The Mystery Box (r2b6).” Personal email to Ira Nayman [[email protected]]. 3 August 1998.

Garni, Ricky [[email protected]]. “Re: Soft Kiss (r2b8).” Personal email to Ira Nayman [[email protected]]. 22 September 1998.

______. “Soft Kiss” [http://www.wln.com/~salasin/garni97.html]. RealPoetik. 1997.

Gaudin, Sharon. “Independents take aim at Amazon.” Computerworld (V33 I20, 17 May 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Gaylin, Alison Sloane [[email protected]]. “Getting Rid of January” [http://www.etext.org/Zines/InterText/v8n2/january.html]. Intertext [http://www.intertext.com/]. V8 N2, 1998.

Geertz, Clifford. The Interpretation of Cultures. New York: Basic Books, 1973.

Gehl, John. “TV Or Not TV? That is the Question.” Educom Review (V33 N1, January/February 1998).

Geirland, John. “Making AOL a Media Company.” Wired (V5 N11, November 1997).

“General Introduction.” The Social Construction of Technological Systems. Wiebe E. Bijker, Thomas P. Hughes and Trevor J. Pinch, eds. Cambridge, Massachusetts: MIT Press, 1987.

Gensler, Marc. “A Pro-CDA View.” April 29 1997. [http://www.duke.edu/~mag1/procon.cda.html].

Getlin, Josh. “Publishers losing out to movie makers.” The Toronto Star (18 July 1998).

Gibbon, Ann and Wendy Stueck. “Duthie Books chain has an unhappy ending.” Globe and Mail (1 June 1999).

Gibson, Christie [[email protected]]. “Higher Learning” [http://members.tripod.com/~l_ananas/higher_learning2.htm]. Writers’ Online - Christie [http://members.tripod.com/~l_ananas/index.html]. 6 March 2000.

Gibson, William. “Disneyland with the Death Penalty.” Wired (V1 N4, 1993).

Giese, Mark. “Text as body: Narratives of identity in a text-based Internet Community.” Paper presented at the International Communication Association conference. May, 1997.

Gilbert, Jeremiah [[email protected]]. “RE: Research Survey.” Personal email to Ira Nayman [[email protected]]. 28 June 1998.

Gilbert, Michael T. [[email protected]]. “Atmospherics 1: the Club” [http://www.shallowend.org/fiction/V1/I4/atmospherics.htm]. The ShallowEND [http://www.shallowend.org/]. V1 N4.

______. “RE: Atmospherics 1: the Club (r2b8).” Personal email to Ira Nayman [[email protected]]. 16 August 1998.

Gilster, Paul. The Internet Navigator. New York: John Wiley and Sons, 1993.

Ginsburg, Lynn. “Contrarian Libertarian.” Wired (V5 N7, July 1997).

Godin, Seth. Presenting Digital Cash. Indianapolis, Indiana: Sams.net Publishing, 1995.

Godwin, Mike and Hal Abelson. “Response to Volokh article” [http://www.eff.org/pub/Censorship/Exon_bill/960730_godwin_abelson_filter.letter]. 30 July 1996.

Goldberg, D. G. K. [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Goldhaber, Michael H. “Attention Shoppers!” Wired (V5 I12, December 1997).

Golding, Adam [[email protected]]. “Re: Solar Flares (r2b13).” Personal email to Ira Nayman [[email protected]]. 1 October 1998.

Goldman, Michael. “Tuning In To Tomorrow.” Digital Diner (V2 N2, July 1997).

Goldman Rohm, Wendy. “Going For Broke.” Upside (V IX N9, October 1997).

Gooderham, Mary. “Computer networks hack through the Homolka ban.” The Globe and Mail (2 December 1993).

Goodwin, Michael. “By the Time We Get to Webstock...” The Web Magazine (V1 N1, October/November 1996).

Goodwins, Rupert [[email protected]]. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 25 July 1998.

Gray, Matthew K., and Jake Harris [[email protected]]. Matthew and Jake’s Adventures [http://www.mit.edu:8001/mj/mj.html].

Greaves, Howard. “On-line challenges.” Globe and Mail (June 12 1999).

Green Onions [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 27 June 1998.

Greenstein, Louis [[email protected]]. Mister Boardwalk [http://www.pcisys.net/~drmforge/louis.htm]. Dream Forge [http://www.pcisys.net/~drmforge/]. 1997-1998.

______. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Greenwald, Sara [[email protected]]. Fields of Night [http://www.sfo.com/~sarapeyton/].

______. “RE: Fields of Night (r2b13).” Personal email to Ira Nayman [[email protected]]. 2 October 1998.

Greywolf the Wanderer (Sindadraug Ranae) [[email protected]]. “Dark Lord.” [http://www.geocities.com/Area51/Shire/3951/grey1.html]. Faerytales. [http://www.geocities.com/Area51/Shire/3951/door1.html].

Griffin, Rand’l [[email protected]]. “Timothy Jordan’s Conscience Springs a Pop Quiz” [http://shell.rmi.net/~rgriffin/timothy.html]. Gr(y)phon Bibliotech [http://shell.rmi.net/~rgriffin/]. January 1999.

Groves, John. “The Thin Gray Line.” Adbusters (V5 N2, Summer 1997).

Gubesch, Carrie [[email protected]]. “RE: Slave Trade (r2b8).” Personal email to Ira Nayman [[email protected]]. 15 August 1998.

Gunderloy, Mike and Cari Goldberg Janice. The World of Zines. New York: Penguin Books, 1992.

Gutstein, Donald. E.con: How the Internet Undermines Democracy. Toronto: Stoddart Publishing, 1999.

Gwyn, Richard. “It’s going to be corporate century.” Toronto Star (12 January 2000).

Hands, D. Wade. “Conjectures and Reputations: The Sociology of Scientific Knowledge and the History of Economic Thought.” History of Political Economy (V29 I4, 1998).

Hardin, Herschel. Closed Circuits: The Sellout of Canadian Television. Vancouver: Douglas & McIntyre, 1985.

Harrison, Ann. “New payment service safeguards content.” Computerworld (V33 N23, June 7 1999).

Harrison, Lucy [[email protected]]. “RE: One Man Went to Mow (r2b7).” Personal email to Ira Nayman [[email protected]]. 18 August 1998.

Harth, Sydney [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Hearne, Reed [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 28 June 1998.

Helgeson, James G. and Michael L. Ursic. “The Decision Process Equivalency of Electronic Versus Pencil-and-Paper Data Collection Methods.” Social Science Computer Review (V7 N3, Fall 1989).

Henderson, Raechel [[email protected]]. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 27 July 1998.

Henning, Cliff [[email protected]]. “Cheez” [http://www.swingmachine.org/issue9/cheez1.html]. The Electric Big-Bang Swing Machine! [http://www.swingmachine.org/]. 1998.

Herman, Edward S. and Robert W. McChesney. The Global Media: The New Missionaries of Global Capitalism. London: Cassell, 1997.

Hester. The Scarlet Netter [http://www.scarletnetter.org/].

Heterick, Jr., Robert C. “Creative Destruction.” Educom Review (V32 N3, May/June 1997).

Hetman, Francois. “Technology on Trial.” The Politics of Technology. Godfrey Boyle, David Elliott and Robin Roy, eds. New York: Longman, 1977.

High, John. “Niche markets no longer a guarantee for San Francisco independents.” Publishers Weekly (V246 I22, 31 May 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Hill, Kevin A. and John E. Hughes. “Computer-Mediated Political Communication: The USENET and Political Communities.” Political Communication (V14 N1, January-March 1997).

Hindle, John. “The Internet as Paradigm: Phenomenon and Paradox.” The Internet as Paradigm. Queenstown, Maryland: The Institute for Information Studies, 1997.

Hoffman, Heather [[email protected]]. “Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 26 July 1998.

Hollifield, Dan. “Welcome to the direct descendant of Dragon’s Lair Webzine issue #14!” [http://www.aphelion-webzine.com/ap14v2.htm]. Aphelion [http://www.aphelion-webzine.com/index2.htm]. 1997.

Houpt, Simon. “A Renaissance rocker embraces the Net.” The Globe and Mail (30 May 1998).

Hovis, John H. “Internet Primed for Broadcasting.” NewMedia (V7 N1, 22 September 1997).

HR 774 IH. “Internet Freedom and Child Protection Act of 1997” [http://thomas.loc.gov/cgi-bin/query/z?c105:H.R.774:].

Hubschman, Thomas J. [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 14 July 1998.

Hughes, Thomas P. “The Evolution of Large Technological Systems.” The Social Construction of Technological Systems. Wiebe E. Bijker, Thomas P. Hughes and Trevor J. Pinch, eds. Cambridge, Mass.: MIT Press, 1987.

Human Rights Watch. The Internet in the Mideast and North Africa. New York: Human Rights Watch, 1999.

______. “Silencing the Net – The Threat to Freedom of Expression Online.” (V8 N2 (G), May 1996).

humdog. “pandora’s vox: on community in cyberspace.” High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Peter Ludlow, ed. Cambridge, Massachusetts: The MIT Press, 1996.

Hunt, David. “Who Buys Books?” Who Buys Books? Toronto: the Canadian Book Publisher’s Council, et al, 1995.

Hunter, Albert. “Introduction: Rhetoric in Research, Networks of Knowledge.” The Rhetoric of Social Research: Understood and Believed. Albert Hunter, ed. New Brunswick: Rutgers University Press, 1990.

Hunter, Kasin [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 22 September 1998.

Hushion, Jacqueline. Who Buys Books? Toronto: the Canadian Book Publisher’s Council, et al, 1995.

IITF Working Group on Intellectual Property Rights. “Intellectual Property and the National Information Infrastructure” [http://www.eff.org/pub/Intellectual_property/ipwg_nii_ip_lehman.report]. September 1995.

“The increasing cost of surfing.” Interactive Week (18 February 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 19 February 1998.

Inglis, Gavin [[email protected]]. “RE: ‘Under the Ashes’ (r2b13).” Personal email to Ira Nayman [[email protected]]. 7 October 1998.

______. Under the Ashes [http://www.tardis.ed.ac.uk/~krynoid/ashes/ashes.html].

Innis, Harold. The Bias of Communication. Toronto: University of Toronto Press, 1951.

“Interactive Digital Media Small Business Growth Fund” [http://www.est.gov.pn.ca/english/st/st_digit.html]. February 1999.

INT’L.com. “Canada” [http://www.headcount.com/count/datafind.htm?search=&choice=country&id=192]. 1999a.

______. “The US” [http://www.headcount.com/count/datafind.htm?search=&choice=country&id=235]. 1999b.

“ISPs Not Liable for Actions of Subscribers.” San Jose Mercury News (22 June 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 23 June 1998.

Israelson, David. “Net advertising still raises a lot of questions.” The Toronto Star (12 October 1998).

Ivry, Bob. “From Net to Set: Check out tomorrow’s TV shows today...on the Web.” The Web Magazine (V1 N8, August 1997).

“I Want My...PatroNet?” Wired (V5 N5, May 1997).

Janelle, Donald. “Global Interdependence and its Consequences.” Collapsing Space and Time. Stanley Brunn and Thomas Leinbach, eds. London: HarperCollins, 1991.

Jassin, Lloyd J. “When does a movie infringe on a novel’s copyright?” Creative Screenwriting (V5 N2, March/April 1998).

Jennings, Larry D. [[email protected]]. “Oops!” [http://www.geocities.com/SoHo/Lofts/1917/October_1997/October_page7.html]. Retribution. October 1997.

______. “RE: Oops! (r2b8).” Personal email to Ira Nayman [[email protected]]. 15 August 1998.

“Jim Williams: Librarians in the Cyberage.” Educom Review (V33, N4, July/August 1998).

Johanneson, Pat [[email protected]]. “RE: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 27 July 1998.

Johnson, Bill [[email protected]]. “RE: Aphistis’ Realm (r2b7).” Personal email to Ira Nayman [[email protected]]. 8 August 1998.

Johnson, David. “Community vs. Commerce.” Brill’s Content (V2 N3, April 1999).

Johnson, David R. “Creating Network Redistribution Rights -- Does Electronic Information Really Want to Be Free?” [http://www.eff.org/pub/Intellectual_property/net_redist_johnson.article]. 24 January 1994.

______. “Taking Cyberspace Seriously: Dealing with Obnoxious Messages on the Net” [http://www.eff.org/pub/Censorship/Exon_bill/content_regulation_johnson.article].

Johnson, Steven. Interface Culture: How New Technology Transforms the Way We Create and Communicate. San Francisco: HarperEdge, 1997.

Johnstone, David, Deborah Johnstone and Sunny Handa. Getting Canada Online. Toronto: Stoddart, 1995.

Jones, Adam [[email protected]]. “Re: survey, stuff (fwd).” Personal email to Ira Nayman [[email protected]]. 6 October 1998.

Jones, Karen. “Getting Real.” Digital Diner (V2 N2, July 1997).

Kadrey, Richard [[email protected]]. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 25 July 1998.

Kaeser, Eva [[email protected]]. “RE: Intermezzo (r2b7).” Personal email to Ira Nayman [[email protected]]. 17 August 1998.

Kapica, Jack. “How the Internet keeps its secrets.” The Globe and Mail (26 May 1995).

Karr, Steve [[email protected]]. “Levitation (or How to Float)” [http://www.elektromedia.com/float/].

Karsmakers, Richard [[email protected]]. “Re: Twilight World (r2b9).” Personal email to Ira Nayman [[email protected]]. 22 August 1998.

Katz, Bill. Dahl’s History of the Book, third edition. Metuchen, New Jersey: Scarecrow Press, 1995.

Katz, Jon. “Online or Not, Newspapers Suck.” Wired (V2 N9, September 1994).

Kay, Angela [[email protected]]. “Re: Wonky Zine (r2b9).” Personal email to Ira Nayman [[email protected]]. 22 August 1998.

Keller, Ammi [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 8 July 1998.

Kelly [[email protected]]. “RE: End of the World (r2b6).” Personal email to Ira Nayman [[email protected]]. 1 August 1998.

Kelly, Kevin. “New Economy? What New Economy?” Wired (V6 N5, May 1998).

Kendall, Lori. “Recontextualizing ‘Cyberspace:’ Methodological Considerations for On-line Research.” Doing Internet Research: Critical Issues and Methods for Examining the Net. Steve Jones, ed. Thousand Oaks, California: Sage, 1999.

Kerstetter, Jim. “Micropayments rebound.” PC Week (V16 N12, 22 March 1999).

Kiley, Dean [[email protected]]. “Eight Answers, Four Replies, A Peepshow and an Epilogue” [http://www.blithe.com/bhq2.2/answers.html]. Blithe House Quarterly [http://www.blithe.com/]. V2 N2, Spring 1998a.

______. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 6 July 1998b.

Kilker, Julian A. and Sharon S. Kleinman. “Researching Online Environments: Lessons from the History of Anthropology.” The New Jersey Journal of Communication (V5 N1, Spring 1997).

Kinney, Jay. “Is There a New Political Paradigm Lurking in Cyberspace?” Cyberfutures: Culture and Politics on the Information Superhighway. Ziauddin Sardar and Jerome J. Ravetz, eds. New York: New York University Press, 1996.

Kinsella, Bridget. “When your past is your future.” Publishers Weekly (V246 I12, 22 March 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Kirkwood, J. [[email protected]]. “Re: Shadow Feast Magazine (r2b8).” Personal email to Ira Nayman [[email protected]]. 14 August 1998.

Kline, Clark [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Kline, David. “Big Brands Win.” Upside (V IX N9, October 1997).

Kingwell, Mark. “WebTV Unplugged.” The Utne Reader (N81, June 1997).

Knowlton, Melinda [[email protected]]. “RE: Ravenscar Nights (r2b13).” Personal email to Ira Nayman [[email protected]]. 3 October 1998.

Kollock, Peter. “The economies of online cooperation: Gifts and public goods in cyberspace.” Communities in Cyberspace. Marc A. Smith and Peter Kollock, eds. London: Routledge, 1999.

Koselka, Rota. “A real Amazon.” Forbes (V163 I7, 5 April 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Kramer, Art. “News delivery pushes ahead.” The Globe and Mail (14 June 1997).

Kruger, Paul [[email protected]]. “RE: Gypsy Dawn (r2b11).” Personal email to Ira Nayman [[email protected]]. 5 September 1998.

Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.

Kyrlach, Patty [[email protected]]. “Submission Guidelines for Writers, Artists, and Photographers” [http://www.wams.org/pages/guides.htm]. MorningStar. 12 October 1997.

Lachesis January [[email protected]]. “RE: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 7 July 1998.

La Gesse, Kate [[email protected]]. “RE: Of Dark, Light and Shadows (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Landauer, Thomas K. The Trouble with Computers: Usefulness, Usability, and Productivity. Cambridge, Massachusetts: MIT Press, 1997.

Landow, George P. Hypertext: The Convergence of Contemporary Theory and Technology. Baltimore: The Johns Hopkins University Press, 1992.

“Language Rules.” Montreal Gazette. 14 June 1997. Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 15 June 1997.

Lanier, Jaron. “Mass Transit!” Wired (V5 N5, May 1997).

Lappin, Todd. “Deja Vu All Over Again.” Wired (V3 N5, May 1995).

Larsen, Amy K. “Virtual cash gets real.” Informationweek (N736, 31 May 1999).

Larsen, Elizabeth. “Online and Under Pressure.” The Utne Reader (N91, January/February 1999).

Lasn, Kalle. “Voodoo at the Summit.” Adbusters (N18, Summer 1997).

Latour, Bruno. Science in Action. Cambridge, Mass.: Harvard University Press, 1987.

Laurel, Brenda. Computers as Theatre. Reading, Massachusetts: Addison-Wesley, 1993.

Lazonick, William. “The self-acting mule and social relations in the workplace.” The Social Shaping of Technology. Donald MacKenzie and Judy Wajcman eds. Buckingham, England: Open University Press, 1985.

Lehmann-Haupt, Hellmut. The Life of the Book. London: Abelard-Schuman, 1957.

Lehrer, Tom. “Smut.” That Was the Year That Was. Warner Music (6179-2).

Levens, Joseph [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 29 June 1998.

Levinson, Paul. The Soft Edge. London: Routledge, 1997.

Lewinski, John Scott. “Interactive Multimedia: The New Esthetic.” Creative Screenwriting (V4 N1, Spring 1997).

Lindlof, Thomas R., Tim Edwards, Brian Malloy, Gaelle Picherit, Thor Townsend. “[email protected]: Community Building in a Virtual Popular Culture.” Paper presented at the International Communication Association conference. May 23, 1997.

Lindsay, Jon [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 13 July 1998.

Link, Mardi. “The RISE of the E-Book.” Publishing for Entrepreneurs (V4 I1, February/March 1998).

Li-Ron, Yael. “It’s the End of the Web as We Know It.” The Web Magazine (V1 N10, October 1997).

Loeppky, Bill [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Logan, Robert K. The Fifth Language. Toronto: Stoddart, 1995.

London, RoseMarie [[email protected]]. “survey.” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Long, Duncan [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Luesebrink, Marjorie [[email protected]]. The Probability of Earthquake... [http://artnetweb.com/blast5/anacap/earth_1.html]. “Re: The Probability of Earthquake... (r2b13).” Personal email to Ira Nayman [[email protected]]. 28 September 1998.

Lutke, Craig [[email protected]]. “Re: FW: Operation: Dead Bang (r2b9).” Personal email to Ira Nayman [[email protected]]. 4 September 1998.

MacDonald, Gayle. “Internet plus content equals powerhouse.” Globe and Mail (11 January 2000).

Machlis, Sharon. “Micropayments aren’t just chump change.” Computerworld (V32 N9, 2 March 1998).

Macris, Kristie [[email protected]]. “Try to get home without suicide today” [http://www.geocities.com/7karma7/suic.html]. A Little OffBase [http://www.geocities.com/SunsetStrip/Towers/6210/]. 1998.

Madsen, Hunter. “Reclaim the Deadzone.” Wired (V4 N12, December 1996).

Mahoney, John [[email protected]]. “RE: Log Cabin Chronicles (r2b6).” Personal email to Ira Nayman [[email protected]]. 1 August 1998.

Mandel, Jerome [[email protected]]. “Lorelei Adams” [http://morpo.com/v1i3/ladams.html]. Morpo Review [http://morpo.com/index.htm].

______. “RE: Lorelei Adams (r2b6).” Personal email to Ira Nayman [[email protected]]. 1 August 1998.

Mann, Charles C. “Volume Business.” Cybernautics Digest (V4 N5, October/November 1997).

Mann, Roland [[email protected]]. “In The Trenches” [http://www.bri-dge.com/short_takes/short59.html]. Bridge [http://www.bri-dge.com/issues/currentissue.html]. 1998a.

______. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 29 June 1998b.

Mark, David [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Market Facts of Canada. “Location and Selection are a Bookstore’s Major Strengths.” Who Buys Books? Toronto: the Canadian Book Publishers’ Council, et al, 1995a.

______. “Price and Presentation are a Non-bookstore’s Major Strengths.” Who Buys Books? Toronto: the Canadian Book Publishers’ Council, et al, 1995b.

______. “Price is a Bookstore’s Major Weakness.” Who Buys Books? Toronto: the Canadian Book Publishers’ Council, et al, 1995c.

Martin, Brian. “Against intellectual property” [http://www.uow.edu.au/arts/sts/bmartin/pubs/95psa.html]. Philosophy and Social Action (V21 N3, July-September 1995).

Martin, James A. “Spinning a new Web.” Publishers Weekly (V246 I17, 26 April 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Marvin, Carolyn. When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. New York: Oxford University Press, 1988.

Masterson, Noah [[email protected]]. “RE: El Zine de Eugene (r2b11).” Personal email to Ira Nayman [[email protected]]. 4 September 1998a.

______. “Stella: A Fictional Haircut Story” [http://www.concentric.net/~Noahm/stella.htm]. El Zine de Eugene [http://www.concentric.net/~noahm]. 1998b.

Matteson, Susan. “Re: the ShallowEND (r2b8).” Personal email to Ira Nayman [[email protected]]. 16 August 1998.

Marx, Karl. “The machine versus the worker.” The Social Shaping of Technology. Donald MacKenzie and Judy Wajcman, eds. Philadelphia: Open University Press, 1985.

Matsu, Kono. “Are You Blind?” Adbusters (N19, Autumn 1997).

McCarty, Katie [[email protected]]. “It Could Happen To Anyone” [http://www.geocities.com/SunsetStrip/Studio/8232/story.html]. 1996a.

______. “Wish Upon A Star...” [http://www.geocities.com/SunsetStrip/Studio/8232/]. 1996b.

McChesney, Robert W. Telecommunications, Mass Media, & Democracy: The Battle for Control of U.S. Broadcasting, 1928-1935. Oxford: Oxford University Press, 1993.

McGowin, Kevin [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

McKinney, Tucker [[email protected]]. Breves Itineres [http://www.mindspring.com/~cthubert/BrevesI/]. 1999.

McLuhan, Marshall. McLuhan: Forward Through the Rearview Mirror. Paul Benedetti and Nancy DeHart, eds. Toronto: Prentice Hall Canada, 1996.

______. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964.

McNish, Jacquie. “Instant books, music riding digital wave.” The Globe and Mail (22 January 1994).

McPhee, Joyce (transcriber). “Pornography on the Internet: Straight Talk from the Family Research Council.” 3 July 1996. [http://www.eff.org/pub/Censorship/Exon_bill/960703_frc_radioshow.trascript].

McQuivey, James L. “How the Web Was Won: The Commercialization of Cyberspace.” Paper presented at the International Communications Association conference, 1997.

Meeks, Brock N. “Jacking in from the ‘Sticking It to the Net’ Port” [http://hotwired.lycos.com/special/indecent/poker.html]. HotWired. 1995.

Meese III, Edwin et al. “Re: Computer Pornography Provisions in Telecommunications Bill” [http://www.eff.org/pub/Censorship/Exon_bill/fundamentalists_cda_congress_101695.letter]. 16 October 1995.

Meissner, Gerd. “Germany: Depression Online.” Educom Review (V32 N6, November/December 1997).

[[email protected]]. “HistOracle: A Journal of Uncommon History” [http://www.zoltan.org/historacle/]. HistOracle.

Memon, Farhan. “Student paper offers way to bypass Homolka ban.” The Globe and Mail (17 January 1994).

Mendels, Pamela. “On-Line Newspaper’s Provocation to Test Decency Act” [http://www.nytimes.com/library/cyber/week/0426reporter.html]. New York Times (26 April 1996).

Mendler, W. S. (Skip) [[email protected]]. “The Screwdisk E-Mail” [http://www.well.com/user/smendler/scrintro.htm]. September 1996.

Menzies, Heather. Whose Brave New World?: The Information Highway and the New Economy. Toronto: Between the Lines, 1996.

Mercuti777 [[email protected]]. “Aruss Returns” [http://www.bestweb.net/~kali93/eros.htm]. Paradigm Shift! [http://members.aol.com/para93/index.html]. V1 N1, July 1998.

“Merger of Web measurement firms will smooth out differences,” New York Times (13 October 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 13 October 1998.

Mermelstein, Judyth [[email protected]]. Mailing list. “Technology, Publishing and On-line communities” [IRNTECH:861]. 15 May 1999.

Merz, Jon F. [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Meyers, Jeff [[email protected]]. “Gilbert Henry Tries Again” [http://the-office.com/bedtime-story/gilbert.htm]. 1997.

Meyrowitz, Joshua. No Sense of Place. London: Oxford University Press, 1985.

Midnight [[email protected]]. “RE: Midnight (r2b11).” Personal email to Ira Nayman [[email protected]]. 7 September 1998.

Milano, Sharon [[email protected]]. “RE: The Therapist (r2b13).” Personal email to Ira Nayman [[email protected]]. 28 September 1998.

Miller, Jack [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Miller, Roger. Economics Today: The Macro View, Sixth Edition. New York: Harper and Row, 1988.

Milliot, Jim. “News Corp. to acquire Morrow, Avon from Hearst.” Publishers Weekly (V246 I25, 21 June 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Milner, Brian. “New and old media merge in massive AOL deal.” Globe and Mail (11 January 2000).

Milofksy, Carl. “Writing and Seeing: Is There Any Sociology Here?” The Rhetoric of Social Research: Understood and Believed. Albert Hunter, ed. New Brunswick: Rutgers University Press, 1990.

Minton, Stephanie [[email protected]]. “Re: One Last Yearning (r2b8).” Personal email to Ira Nayman [[email protected]]. 23 August 1998.

Mirolla, Michael [[email protected]]. “Pulling One’s Leg” [http://www.RecursiveAngel.com/mirolla.htm]. Recursive Angel [http://www.recursiveangel.com/]. V3 IX, March 1998.

Mitchell, Brandon [[email protected]]. The Dark Saviour [http://www.geocities.com/Area51/Shadowlands/2375/main.html]. 1998.

Moore, Heidi [[email protected]]. “Talent” [http://www.eclectica.org/v1n11/moore.html]. Eclectica Magazine.

Moore, Kira [[email protected]]. “RE: Rivalry (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Moore, Ralph Robert [[email protected]]. “Big Inches” [http://www.ralphrobertmoore.com/sheets.html]. Sentence [http://home.swbell.net/robmary/]. 1997.

______. “Re: Sentence (r2b12).” Personal email to Ira Nayman [[email protected]]. 19 September 1998.

Morrigan [[email protected]]. “F is for Fiona” [http://www.willmaster.com/thestoryteller/morrigan/forfiona.html]. The Storyteller [http://www.willmaster.com/thestoryteller/index.html]. 1998a.

______. “Re: F is for Fiona (r2b9).” Personal email to Ira Nayman [[email protected]]. 2 September 1998b.

Mosco, Vincent. The Pay-per Society: Computers & Communication in the Information Age. Norwood, New Jersey: Ablex, 1989.

Moseley, Maboth. Irascible Genius: Charles Babbage, Inventor. London: Hutchinson & Co., 1964.

Moskin, J. Robert. Toward the Year 2000: New Forces in Publishing. Gutersloh, Federal Republic of Germany: Bertelsmann Foundation Publishers, 1989.

Mosley-Matchett, J. D. “Big bucks or lots and lots of tiny bucks.” Marketing News (V31 N16, 4 August 1997).

Moulthrop, Stuart. “In the Zones: Hypertext and the Politics of Interpretation” [http://www.ubalt.edu/www/ygcla/sam/essays/zones.html]. February 1989.

______. “Pushing Back: Living and Writing in Broken Space.” Modern Fiction Studies (V43 N3, Fall 1997).

Mr ED [[email protected]]. “Re: Mr. ED’s Story Shed (r2b11).” Personal email to Ira Nayman [[email protected]]. 5 September 1998.

Mulgan, Geoff. “What is Socialist Cultural Practice: Does Everyone Have a Right to be an Artist?” Communication for and Against Democracy. Marc Raboy and Peter A. Bruck, eds. Montreal: Black Rose Books, 1989.

Muri, James R. [[email protected]]. The Plains Diaries [http://www.io.com/~crberry/DuctTape/Archive/01_fic_muri.html]. Duct Tape Press [http://www.io.com/~crberry/DuctTape/].

______. “RE: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 13 July 1998.

Murphy, G. [[email protected]]. “RE: The Pseudo-Magazine of Writings (r2b7).” Personal email to Ira Nayman [[email protected]]. 9 August 1998.

Murphy, Patricia A. “Nickeled-and-dimed on the Internet.” Credit Card Management (V10 N10, January 1998).

Mutter, John. “Sales, losses soar at online bookstores.” Publishers Weekly (V246 I26, 28 June 1999). Proquest Database [http://proquest.umi.com/pdqweb].

National Writers Union. “Resolution Supporting Free Speech and Privacy in Cyberspace” [http://www.eff.org/pub/Censorship/Exon_bill/nwu_anti-censorship_95.resolution]. 5 August 1995.

Nayman, Ira. Tell Me A Story I Can Live: Interactive Electronic Narrative and the Effacement of the Author. Master’s Thesis: New School for Social Research, 1996.

Nee, Eric. “Dyson on Demand (And Supply).” Upside (V X N2, February 1998).

Negroponte, Nicholas. “On Digital Growth and Form.” Wired (V5 N10, October 1997).

______. “Taxing Taxes.” Wired (V6 N5, May 1998).

Nelson, Theodore. Literary Machines, ed. 93.1. Sausalito, California: Mindful Press, 1992.

Nestvold, Ruth [[email protected]]. Cutting Edges; Or, A Web of Women [www.lit-arts.com/cutting_edges/]. 1997.

______. “RE: Cutting Edges; Or, A Web of Women (r2b13).” Personal email to Ira Nayman [[email protected]]. 14 October 1998.

“Netchannel likely to turn off its Internet-via-TV-service.” Wall Street Journal (29 April 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 30 April 1998.

Neuman, W. Russell, Lee McKnight and Richard J. Solomon. The Gordian Knot: Political Gridlock on the Information Highway. Cambridge, Mass.: MIT Press, 1998.

“The New Economics.” The Harvard Conference on The Internet and Society. O’Reilly & Associates, eds. Cambridge, Mass.: Harvard University Press, 1997.

“New set-top box challenges WebTV.” Wall Street Journal (17 May 1997). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 19 May 1997.

Nichols, Bill. “The Work of Culture in the Age of Cybernetic Systems.” Screen (V29 N1, 1988).

Nilsson, Magnus [[email protected]]. “SV: The Scimitar (r2b11).” Personal email to Ira Nayman [[email protected]]. 6 September 1998.

Nixon, Jonathan [[email protected]]. “Re: Research (r2b4).” Personal email to Ira Nayman [[email protected]]. 22 July 1998.

Noam, Eli N. “Will Books Become the Dumb Medium?” Educom Review (V33 N2, March/April, 1998).

Noble, David F. America By Design. Oxford: Oxford University Press, 1977.

______. Progress Without People: New Technology, Unemployment, and the Message of Resistance. Toronto: Between the Lines, 1995.

Nolan, Rhonda [[email protected]]. Rhondavous [http://www.swiftsite.com/rhondavous/].

NYT Editorial Staff. “Protecting Digital Copyrights” [http://www.nytimes.com/yr/mo/day/editorial/24fri1.html]. New York Times.

O’Donnell, James J. Avatars of the Word: From Papyrus to Cyberspace. Cambridge, Mass.: Harvard University Press, 1998.

O’Donnell, Richard F. “Courting Irrelevance: The digirati needs to learn how to make friends and win influence in Washington” [http://www.eff.org/pub/Censorship/Exon_bill/pff_online_activism.critique]. 3 May 1996.

O’Harrow Jr., Robert. “Hard at work on ‘lazy’ interactive TV” [http://www.washingtonpost.com/wp-srv/WPlate/1998-08/03/022l-080398-idx.html]. Washington Post (3 August 1998).

Oliver, Tom [[email protected]]. “Anarchus” [http://www.aphelion- webzine.com/anarchus.htm]. Aphelion [http://www.aphelion-webzine.com/index2.htm]. 1999.

O’Neal, Miles [[email protected]]. “My Screws Aren’t Loose (I’m Just Wired a Bit Different) (r2b11).” Personal email to Ira Nayman [[email protected]]. 4 September 1998.

Ong, Walter. “Orality, Literacy, and Modern Media.” Communication in History. David Crowley and Paul Heyer, eds. Longman, 1994.

“Online Providers Not Responsible for Content from Others.” Washington Post (23 April 1998). Quoted in EduPage. John Gehl and Suzanne Douglas, eds. 23 April 1998.

“Opening the Gate.” The Harvard Conference on The Internet and Society. O’Reilly & Associates, eds. Cambridge, Mass.: Harvard University Press, 1997.

“Oracle’s plans for integrating Web with TV.” Wall Street Journal (13 August 1997). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 14 August 1997.

Oram, Andy et al. “Frequently Asked Questions About the 1995 Communications Decency Act” [gopher://gopher.panix.com/0/vtw/exon/faq]. 26 August 1995.

Orr, William F. [[email protected]]. “RE: Orr’s Tavern (r2b11).” Personal email to Ira Nayman [[email protected]]. 11 September 1998.

______. Any Other Season [http://www.hofstra.edu/~nucwfo/aos/aos-pre.htm]. Orr’s Tavern [http://www.hofstra.edu/~nucwfo/]. 1997.

Ossello, Judy [[email protected]]. “The 11th Arrondissement” [http://www.eclectica.org/v2n4/ossello.html]. Eclectica Magazine.

Owens, James [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

“Pandesic advertisement.” Wired (V6 N5, May 1998).

Patch, Kimberly and Eric Smalley. “Drop a dime online.” InfoWorld (V20 N48, 30 November 1998).

Paxton, Robert [[email protected]]. Between Heaven & Hell [http://www.rivercityreader.com/voices/novel/prolog.htm].

“Pay-per-view Internet news becoming more common.” USA Today (28 April 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 28 April 1998.

Pearson, Emily C. Gutenberg and the Art of Printing. Boston: Noyes, Holmes and Company, 1871.

Perlez, Jane. “Milosevic retains firm grip on power in Yugoslavia.” The Globe and Mail (31 July 1997).

Peterson, Julie. Letter to the editor. Wired (V5 N6, June 1997).

Petter, Sylvia [[email protected]]. “Viennese Blood” [http://www.richmondreview.co.uk/library/petter01.html]. Richmond Review [http://www.richmondreview.co.uk/]. 1998.

Phoenix Publishing Group [[email protected]]. “Self Publishing vs. Traditional Publishing” [http://www.phoenixpublishinggroup.com/self-publishing/what-is.htm].

Phipps, Don [[email protected]]. Avatar [http://users.kcyb.com/phipps/avatar.html]. The Science Fiction Works of D. W. Phipps, Jr. [http://users.kcyb.com/phipps/scifi.html]. 1991.

______. “Reply to questionnaire.” Personal email to Ira Nayman [[email protected]]. 7 September 1998.

Pinch, Trevor J. and Wiebe E. Bijker. “The Social Construction of Facts and Artifacts: or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other.” The Social Construction of Technological Systems. Wiebe E. Bijker, Thomas P. Hughes and Trevor J. Pinch, eds. Cambridge, Massachusetts: MIT Press, 1987.

Pipsqueak Productions [[email protected]]. “RE: The Therapist (r2b13).” Personal email to Ira Nayman [[email protected]]. 29 September 1998.

“The Place of the Internet in National and Global Information Infrastructure.” The Harvard Conference on the Internet and Society. O’Reilly & Associates, eds. Cambridge, Ma.: Harvard University Press, 1997.

Plant, Raymond. “Community: Concept, Conception, and Ideology.” Politics and Society (V8 N1, 1978).

Platt, John R. [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

[[email protected]]. “Re: Petite Madeleine (r2b11).” Personal email to Ira Nayman [[email protected]]. 8 September 1998.

Pond, Jeremiah [[email protected]]. “Welcome to Endsville” [http://www.geocities.com/SoHo/Cafe/8861/fiction.html]. stan writes prose [http://www.geocities.com/SoHo/Cafe/8861/index.html]. 1998.

Poster, Mark. “Postmodern Virtualities” [http://www.hnet.uci.edu/mposter/]. 1995.

“Postmortem on Time Warner’s Full Service Network.” Broadcasting & Cable (6 May 1997). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 6 May 1997.

Potter, Mitch. “AOL deal inspires fear, joy.” Toronto Star (12 January 2000).

Potts, Rolf [[email protected]]. “RE: .” Personal email to Ira Nayman [[email protected]]. 9 August 1998.

Poulsen, Sally [[email protected]]. “1956” [http://www.zoltan.org/historacle/issue1/1956.htm]. HistOracle [http://www.zoltan.org/historacle/issue1/index.htm]. V1 I1, Winter 1997-1998.

______. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Powell, Darin. “Why Do So Many People Hate AOL?” Digital Diner (V1 N1, Premiere Issue).

Powers, David [[email protected]]. “The history of this ‘Zine...” [http://www.angelfire.com/biz/graveworm/frames.html]. Inflated Graveworm.

Powers, Doug [[email protected]]. “The Powers That Be” [http://www.inditer.com/powers/]. The Inditer.

______. “research answers.” Personal email to Ira Nayman [[email protected]]. 12 July 1998.

“Press and the New Media.” The Harvard Conference on the Internet and Society. O’Reilly & Associates, eds. Cambridge, Ma.: Harvard University Press, 1997.

“‘Push’ found to be too pushy.” New York Times CyberTimes (17 February 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 17 February 1998.

Pylman, Regan S. (Azrael Coyotesdaughter) [[email protected]]. “RE: The Collected R. S. Pylman (r2b10).” Personal email to Ira Nayman [[email protected]]. 30 August 1998.

Qining, Luo [[email protected]]. “Re: The Earth is a Cube (r2b10).” Personal email to Ira Nayman [[email protected]]. 1 September 1998.

Quillen, Lida E. “Twilight Times: a digital journal of Speculative Fiction” [http://www.twilighttimes.com/]. Twilight Times (27 June 1998).

Quinn, Judy and John F. Baker. “High flyers, crash landings.” Publishers Weekly (V246 I13, 29 March 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Rattansi, Afshin [[email protected]]. “Caprice” [http://www.dimax.com/pif/vol9/ratt.htm]. Pif [http://www.pifmagazine.com/vol34/]. N9, October 1997.

______. “RE: Caprice (r2b7).” Personal email to Ira Nayman [[email protected]]. 9 August 1998.

Rawlins, Gregory J. E. Moths to the Flame: The Seductions of Computer Technology. Cambridge, Mass.: MIT Press, 1996.

Recker, Christine [[email protected]]. “Adventure Hath Risen” [http://www.inditer.com/enigma/risen.htm]. The Inditer [http://www.inditer.com/default.htm]. 1996.

______. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 14 July 1998.

Rees, Gareth. “Tree fiction on the World Wide Web” [http://www.cl.com.ac.uk/users/gdr11/tree-fiction.html]. September 1994.

Reguly, Eric. “AOL deal just the beginning: Time Warner purchase predictable, but shocking.” Globe and Mail (11 January 2000).

Reid, Elizabeth M. “Communication and Community on Internet Relay Chat: Constructing Communities.” High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Peter Ludlow, ed. Cambridge, Massachusetts: The MIT Press, 1996a.

______. “Text-based Virtual Realities: Identity and the Cyborg Body.” High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Peter Ludlow, ed. Cambridge, Massachusetts: The MIT Press, 1996b.

Reid, Robert H. “Real Revolution.” Wired (V5 N10, October 1997).

Reinhardt, Andy. “What Could Whip the World Wide Wait.” Business Week (16 February 1998).

“Reinventing the Web: XML and DHTML will bring order to the chaos.” Byte (V23 N3, March 1998).

Renshaw, Camille [[email protected]]. “Re: Pif Magazine (r2b7).” Personal email to Ira Nayman [[email protected]]. 9 August 1998.

Renzetti, Elizabeth. “Independent booksellers join Southam to combat Chapters on-line venture.” The Globe and Mail (20 July 1998).

Reuters. “38 journalists murdered in 1996, media group says.” The Globe and Mail (6 January 1997).

______. “Debate rages over telecom law; Playboy web page intact” [http://www.nando.net/newsroom/nt/209playboy.html]. 1995.

Rheingold, Howard. “A Slice of Life in My Virtual Community.” Global Networks: Computers and International Communication. Linda M. Harasim, ed. Cambridge, Mass.: MIT Press, 1993a.

______. Tools For Thought. New York: Simon & Schuster, 1985.

______. The Virtual Community: Homesteading on the Electronic Frontier. New York: HarperPerennial, 1993b.

Richardson, Christine [[email protected]]. Personal email to Ira Nayman [[email protected]]. 27 July 1998.

Rick [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 6 July 1998.

Rifkin, Jeremy. The End of Work. New York: Putnam, 1995.

Riga, Andy. “WebTV not quite ‘Net for the masses.’” Montreal Gazette (5 August 1998).

Rimmer, Steven William. “Indecent Images” [http://www.mindworkshop.com/alchemy/indcnt.html]. 20 January 2000.

“Ringing in a New Web Strategy.” Investor’s Business Daily (26 September 1997). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 28 September 1997.

Robert, Adrian [[email protected]]. A Further Xanadu [http://cogsci.ucsd.edu/~arobert/fxanadu.html].

______. “Re: “A Further Xanadu” (r2b13).” Personal email to Ira Nayman [[email protected]]. 28 September 1998.

Robinson, Spider. “Sometimes it just rains weird.” The Globe and Mail (27 July 1998).

Robischon, Noah. “Best of the Web 1999: Portals/Search Sites.” Brill’s Content (V2 N3, April 1999).

______. “Browser Beware.” Brill’s Content (V1 N1, July/August 1998).

Rodebaugh, Thomas [[email protected]]. “RE: network (r2b13).” Personal email to Ira Nayman [[email protected]]. 26 September 1998.

Romano, Mike. “Deja Vu All Over Again.” Wired (V6 N4, April 1998).

Rose, Frank. “Keyword: Context.” Wired (V4 I12, December 1996).

______. “Sex Sells.” Wired (V5 I12, December 1997).

Rose, Lance. “Is Copyright Dead on the Net?” Wired (V1 N4, November 1993).

Rosenberg, Alexander. Philosophy of Social Science. Boulder: Westview Press, 1988.

Rosenberg, Robert [[email protected]]. Crimes of the City [http://www.ariga.com/cohen/crimes01.htm]. New York: Simon and Schuster, 1991. Re: The Avram Cohen Mystery Series [http://www.ariga.com/cohen/].

Rosenzweig, Sandra. “World Wide Wagering.” Cybernautics Digest (V4 N5, October/November 1997).

Ross, Val. “Book group’s future uncertain.” Globe and Mail (17 June 1999a).

______. “Reading Duthie’s book of the dead.” Globe and Mail (7 June 1999b).

Rothenberg, Randall. “Bye-bye.” Wired (V6 N1, January 1998).

Rowan Wolf [[email protected]]. “Boil a manchild for Odin” [http://www.rowansongs.com/stories/boilaman.html]. Rowan Songs [http://www.rowansongs.com/]. 1999.

______. “Re: Rowan Songs (r2b11).” Personal email to Ira Nayman [[email protected]]. 7 September 1998.

Rowland, Wade. Spirit of the Web: The Age of Information from Telegraph to Internet. Toronto: Somerville House Publishing, 1997.

Rushkoff, Douglas. “State of the Net.” Shift (V6 N6, October 1998).

Russell, Richard [[email protected]]. “On the Way Down” [http://www.etext.org/Zines/ASCII/Sparks/sparks17.html#ON THE WAY DOWN]. Sparks [http://www.etext.org/Zines/ASCII/Sparks/sparks17.html]. V6 I3, May/June 1997.

Russell, Steve. “The X-On Congress: Indecent Comment on an Indecent Subject.” [http://www.eff.org/pub/Censorship/Exon_bill/russell_0296_indecent.article]. American Reporter (February 1996).

Ryan, Alan. “Exaggerated Hopes and Baseless Fears.” Social Research (V64 N3, Fall 1997).

Ryman, Geoff [[email protected]]. “Re: 253 (r2b13).” Personal email to Ira Nayman [[email protected]]. 20 October 1998.

S.314. “Communications Decency Amendment” [http://www.prognet.com/contentp/rabest/thebill.html]. 14 June 1995.

S.735. 1995. [http://www.eff.org/pub/Censorship/Exon_bill/s735_95_feinstein_amend.draft].

Saffo, Paul. “It’s the Context, Stupid.” Wired (V2 N3, March 1994).

Salutin, Rick. “Canadian posties the last bastion of public culture.” The Globe and Mail (10 October 1997).

Samuelson, Pamela. “Maximum Copyright, Minimum Use.” Wired (V6 N3, March 1998).

Sandberg, Jared. “Mundane matters star on the Web.” The Globe and Mail (20 July 1998).

Sanders, Todd J. [[email protected]]. The Rubicon Beckons [http://www.bri-dge.com/short_takes/short39.html]. Bridge [http://www.bri-dge.com/issues/currentissue.html]. 1998.

Sandi [[email protected]]. “RE: Story Land (r2b9).” Personal email to Ira Nayman [[email protected]]. 2 September 1998.

Sandvig, Christian Edward [[email protected]]. “RE: think (r2b9).” Personal email to Ira Nayman [[email protected]]. 24 August 1998.

Sanford, Christy Sheffield [[email protected]]. “RE: Madame de Lafayette Book of Hours (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Sato, Michael [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 6 July 1998.

Savets, Kevin M. “Multicasting for the Masses.” Digital Diner (V1 N1, Premiere Issue).

______. “Netscape vs. Microsoft: One on One.” Digital Diner (V2 N2, July 1997).

Schiff, Steven [[email protected]]. “Re: Message from Internet.” Personal email to Ira Nayman [[email protected]]. 20 October 1996.

Schlau, Michael N. “Submission Guidelines” [http://www.geocities.com/SoHo/Coffeehouse/6160/submit.html]. 1st Chapter. 13 May 1998.

Schmitz, Jeff [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 12 July 1998.

Schroeder, Pat. “Press Release.” 6 February 1996. [http://www.eff.org/pub/Censorship/Exon_bill/schroeder_960206_comstock.announce].

Schroeder, Ralph. “Virtual worlds and the social reality of cyberspace.” The Governance of Cyberspace. Brian Loader, ed. London: Routledge, 1997.

Schustereit, Michael V. [[email protected]]. “RE: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Schwartz, Michael [[email protected]]. “RE: Fortune Cookies (r2b7).” Personal email to Ira Nayman [[email protected]]. 8 August 1998.

Schwartz Cowan, Ruth. “The Consumption Junction: A Proposal for Research Strategies in the Sociology of Technology.” The Social Construction of Technological Systems. Wiebe E. Bijker, Thomas P. Hughes and Trevor J. Pinch, eds. Cambridge, Mass.: MIT Press, 1987.

Scowen, Peter. “Faisons de la planche aquatique interreseau!” Hour (19-25 June 1997).

Seabrook, John. Deeper: My Two Year Odyssey in Cyberspace. New York: Simon and Schuster, 1997.

Sellier, Stephanie [[email protected]]. “Winter in the Wessermarsch” [http://www.blithe.com/bhq2.3/winter.html]. Blithe House Quarterly [http://www.blithe.com/]. V2 N3, Summer 1998.

Servin, Jacques [[email protected]]. “I Was Living in a Gay Condo” [http://www.blithe.com/bhq1.2/operettas.html]. Blithe House Quarterly [http://www.blithe.com/]. V1 N2, Fall 1997.

Setzer, Jan [[email protected]]. “When the Wisteria Blooms” [http://www.geocities.com/~tiffanyrose/wisteria.html]. Tiffany’s Writing Place [http://www.geocities.com/~tiffanyrose/]. 1996.

Shadow NightWolf [[email protected]]. “RE: Darke Wolf’s Sanctuary (r2b10).” Personal email to Ira Nayman [[email protected]]. 2 September 1998.

Shaffer, Larry [[email protected]]. Tears from Ao [http://www.geocities.com/~outer-rim/mercuric/tears/]. The Outer Rim (December 1997).

Shafir, Oren [[email protected]]. “Dead Man’s Boots” [http://www.ptialaska.net/~eclectic/v1n12/shafir.html]. Eclectica Magazine.

______. “RE: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 6 July 1998.

ShanMonster [[email protected]]. “RE: The Rut (r2b8).” Personal email to Ira Nayman [[email protected]]. 14 August 1998.

Shapiro, Andrew L. “Pushing Forward, Falling Back.” Wired (V5 N6, June 1997).

Shea, Joe. Untitled [http://www.newshare.com/current/censor/excerpts.html]. The American Reporter (8 February 1996).

Sherwood, Steve [[email protected]]. “RE: We Have Others (r2b8).” Personal email to Ira Nayman [[email protected]]. 14 August 1998.

Shinn, Christopher [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 27 June 1998.

Shirley, Virginia [[email protected]]. “Men At Work.” [http://www.blithe.com/bhq1.1/atwork.html]. Blithe House Quarterly [http://www.blithe.com/]. V1 N1, Summer 1997.

______. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 27 June 1998.

Selkirk, Errol. Computers for Beginners. New York: Writers and Readers Publishing, 1995.

Serexhe, Bernard. “Deregulation/Globalisation.” Digital Delirium. Arthur and Marilouise Kroker, eds. New York: St. Martin’s Press, 1997.

Shade, Leslie Regan. Gender and Community in the Social Constitution of the Internet. PhD Dissertation: McGill University, 1997.

Shatzkin, Mike. “Fasten your high-tech seatbelts.” Publishers Weekly (V246 I21, 24 May 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Shenk, David. Data Smog: Surviving the Information Glut. San Francisco: Harper Edge, 1997.

Simpson, Roderick. “Critical Mess: Sorting out the domain name system.” Wired (V5 N6, June 1997).

Sirius, R. U. “alt.tru.istic.con.” 21.C: Scanning the Future (N1, 1997).

Sirois, A. L. [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

“Slate drops subscription fees.” Atlanta Journal-Constitution (13 February 1999). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 15 February 1999.

“Slate tries subscription model.” Broadcasting & Cable (5 January 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 15 January 1998.

Smith, D. K. [[email protected]]. “Re: Research (r2b4).” Personal email to Ira Nayman [[email protected]]. 22 July 1998.

Smith, Janine [[email protected]]. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 25 July 1998.

Smith, Kevin Paul [[email protected]] -- a. “The Man Who Licked the Pope” [http://www.tx3.com/~tarkin/popelicker.htm]. Dog’s House Of Tales [http://www.tx3.com/~tarkin/main.html].

______ -- b. “The Search For Common Sense” [http://www.tx3.com/~tarkin/commonsense.htm]. Dog’s House Of Tales [http://www.tx3.com/~tarkin/main.html].

Smith, Marc A. “Invisible crowds in cyberspace: mapping the social structure of the Usenet.” Communities in Cyberspace. Marc A. Smith and Peter Kollock, eds. London: Routledge, 1999.

Smith, Russell. “Arts Community? Gay Community? Napster community? I hear the C-word and I reach for my Walkman.” Globe and Mail (19 August 2000).

Smith, Zach [[email protected]]. “It’s All in the Translation.” [http://www.redbay.com/dazzler/fff/trans/trans1.htm]. Dazzler’s Digital Domocile [http://www.redbay.com/dazzler/fff/].

______. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Solomon, Howard. “Facts, figures abound in new Internet study.” Computing Canada (V24 N9, 9 March 1998).

Solomon, Richard Jay. “Electronic and Computer-aided Publishing Opportunities and Constraints.” Information Technology and New Growth Opportunities. Paris: Organisation For Economic Co-operation and Development, 1989.

Sonnenschein, David [[email protected]]. “Re: God’s Third Joke (r2b8).” Personal email to Ira Nayman [[email protected]]. 14 August 1998.

Sorrells, Walter [[email protected]]. “Re: The Heist (r2b13).” Personal email to Ira Nayman [[email protected]]. 26 September 1998.

Southworth, Natalie. “Indigo founder faces epic challenge.” Globe and Mail (19 June 1999).

Spender, Dale. Nattering on the Net: Women, Power and Cyberspace. Toronto: Garamond Press, 1995.

Spiegelman, Eli. “Corporate Constellations: Increasing consolidation creates strange bedfellows...” Adbusters (V6 N2, Autumn 1998).

Spinrad, Norman [[email protected]] -- a. “The Fat Vampire” [http://ourworld.compuserve.com/homepages/normanspinrad/fat.htm].

______ -- b. “HE WALKED AMONG US: A publishing soap opera” [http://ourworld.compuserve.com/homepages/normanspinrad/walked.htm].

______ -- c. “MY DAY, APRIL 29, 1994.” [http://ourworld.compuserve.com/homepages/normanspinrad/bantam.htm].

______. “RE: Norman Spinrad (r2b11).” Personal email to Ira Nayman [[email protected]]. 5 September 1998.

Sprague, D. J. [[email protected]]. “Phoenix” [http://www.nwlink.com/~phoenix/op4.htm].

______. “Re: Phoenix (r2b11).” Personal email to Ira Nayman [[email protected]]. 4 September 1998.

Sproull, Lee and Sara Kiesler. “Computers, Networks, and Work.” Global Networks: Computers and International Communication. Linda M. Harasim, ed. Cambridge, Mass.: MIT Press, 1993.

Stahlman, Mark. “Just Say No -- To Cybercrats and Digital Control Freaks.” Wired (V2 N10, October 1994).

Stallman, Richard M. “Copywrong.” Wired (V1 N3, July/August 1993).

Starry, Ace [[email protected]]. “Re: The Magic Life - A Novel Philosophy (r2b11).” Personal email to Ira Nayman [[email protected]]. 8 September 1998.

Starseed, Inc. “What is a Web Ring?” [http://www.webring.org/what.html]. 1998.

Stazya [[email protected]]. “RE: Staz’s Writing Nook (r2b12).” Personal email to Ira Nayman [[email protected]]. 21 September 1998.

Steffensen, Alan [[email protected]]. “Re: Research (r2b5).” Personal email to Ira Nayman [[email protected]]. 25 July 1998.

Steiner, Robert F. [[email protected]]. “reply to questionnaire.” Personal email to Ira Nayman [[email protected]]. 1 September 1998.

Steinberg, Steve G. “Hype List.” Wired (V5 N9, September 1997a).

______. “Hype List.” Wired (V5 N11, November 1997b).

Sterne, Jonathan. “Thinking the Internet: Cultural Studies Versus the Millennium.” Doing Internet Research: Critical Issues and Methods for Examining the Net. Steve Jones, ed. Thousand Oaks, California: Sage, 1999.

Stevens, Elizabeth Lesly, and Ronald Grover. “The Entertainment Glut.” Business Week (16 February 1998).

Stewart Millar, Melanie. Cracking the Gender Code: Who Rules the Wired World. Toronto: Second Story Press, 1998. Literature at Lightspeed – page 519

Stoffman, Judy. “Booksellers ponder a frightening future.” Toronto Star (15 June 1999a).

______. “Instant Books.” Toronto Star (5 June 1999b).

Stokes, James [[email protected]]. “Invisible City” [http://members.tripod.com/~fictionwriter/001/invisible02.html]. Web Site For Writers [http://members.tripod.com/~fictionwriter/main.html]. 1999.

______. “Survey Re: Web Site For Writers (r2b12).” Personal email to Ira Nayman [[email protected]]. 3 October 1998.

Stone, Brad. “Amazon’s pet projects.” Newsweek (V133 I25, 21 June 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Stowe, Lynn [[email protected]]. “Re: claycrystal: writers, readers, graphic arts devotees (r2b10).” Personal email to Ira Nayman [[email protected]]. 31 August 1998.

Strauss, Marina. “Small book stores face tragic ending.” Globe and Mail (19 June 1999).

Streitfeld, David. “Booking the Future.” Washington Post. [http://www.washingtonpost.com/wp-srv/WPlate/1998-07/10/160l-071098-idx.html].

Strong, William S. “Copyright in the New World of Electronic Publishing” [http://www.eff.org/pub/Intellectual_property/copyright_in_new_world.paper]. Paper presented at the workshop Electronic Publishing Issues II at the Association of American University Presses (AAUP) Annual Meeting. 17 June 1994.

Stueck, Wendy. “Business problems at Duthie’s Books mark bittersweet chapter for owner.” Globe and Mail (2 June 1999).

Stutz, Michael (a). “Applying Copyleft To Non-Software Information.” [http://dsl.org/copyleft/non-software-copyleft.shtml].

______ (b). “Copyleft, open source and sharing digital information.” [http://dsl.org/copyleft/].

“Submission Guidelines.” All Mixed Up E-zine [http://www.digilogic.com/AllMixedUp/Constant/submit.htm].

Sudweeks, Fay and Simeon J. Simoff. “Complementary Explorative Data Analysis: The Reconciliation of Quantitative and Qualitative Principles.” Doing Internet Research: Critical Issues and Methods for Examining the Net. Steve Jones, ed. Thousand Oaks, California: Sage, 1999.

Sutherland, David [[email protected]]. “RE: Recursive Angel (r2b8).” Personal email to Ira Nayman [[email protected]]. 16 August 1998. Literature at Lightspeed – page 520

Svensson, Anna [[email protected]]. “RE: survey.” Personal email to Ira Nayman [[email protected]]. 31 August 1998.

Swann, Cara [[email protected]]. “Say Anything...But Don’t Say Goodbye” [http://www.geocities.com/SoHo/Studios/5116/part1.htm]. The Prose Menagerie [http://www.geocities.com/SoHo/Studios/5116/index.html]. 1999.

Swan, John [[email protected]]. “RE: The Works of Brian Swan (r2b12).” Personal email to Ira Nayman [[email protected]]. 22 September 1998.

Switaj, Elizabeth [[email protected]]. “Loving the Mortal” [http://www.geocities.com/SoHo/Cafe/2759/lovingthemortal.html]. Elizabeth Switaj’s Madhouse [http://www.geocities.com/SoHo/Cafe/2759/]. 1997-2000.

______. “Re: [RE: Elizabeth Switaj’s Madhouse (r2b10)].” Personal email to Ira Nayman [[email protected]]. 2 September 1998.

Tabbi, Joseph. “Solitary Inventions: David Markson at the End of the Line.” Modern Fiction Studies (V43 N3, Fall 1997).

Tapscott, Don. The Digital Economy: Promise and Peril in the Age of Networked Intelligence. New York: McGraw-Hill, 1996.

Tasane, Nigel W. [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 3 July 1998.

______. “Schrodinger’s Nobody” [http://www.hooked.net/~jalsop/ntnobody.html]. Alsop Review.

Tavlan, Andrea [[email protected]]. “Life Goes On.” [http://www2.aphelion-webzine.com/shorts/lifegoes.htm]. Aphelion [http://www.aphelion-webzine.com/index2.htm]. 1997.

Taylor, Paul A. Hackers: The Hawks and the Doves -- Enemies & Friends. London: Routledge. In press.

Tennis, Cary [[email protected]]. “The Journalist Responds Incorrectly to an Airline Crash” [http://www.slip.net/~carytenn/journalist.html]. Cary Tennis, Friend of the People! [http://www.slip.net/~carytenn/]. 17 July 1998.

Thomas, Emily [[email protected]]. “RE: Internal Rage (r2b13).” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Thompson, Clive. “Bomb Squad.” Shift (V7 N6, November 1999).

______. “Keith Kocho has seen the future of the Internet and it is television.” Shift (V6 N6, October 1998).

Tindall, Kenneth [[email protected]]. “Sv: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Towler, Candi [[email protected]]. “Four Corners of the Wind and etc.” Personal email to Ira Nayman [[email protected]]. 26 September 1998.

Trammell, John [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Truman, Gemma [[email protected]]. “RE: Paradoxes 2.01 (r2b11).” Personal email to Ira Nayman [[email protected]]. 5 September 1998.

Tseng, Sandra [[email protected]]. The Jandorian Chronicles [http://www.geocities.com/Broadway/2271/JANCHRON1.TXT]. Gauriel’s Original Fiction [http://www.geocities.com/Broadway/2271/original.html]. 4 May 1998.

Turkle, Sherry. “Rethinking Identity Through Virtual Community.” Clicking In: Hot Links to a Digital Culture, Lynn Hershman Leeson, ed. Seattle, Washington: Bay Press, 1996.

Turner, Rob. “Web shopping: A cautionary amazon tale.” Money (V28 N5, May 1999). Proquest Database [http://proquest.umi.com/pdqweb].

Tyree, William [[email protected]]. “RE: Losing Snakes (r2b6).” Personal email to Ira Nayman [[email protected]]. 2 August 1998.

Ulmen, Steven [[email protected]]. “Re: CatspawVP’s Writings (r2b10).” Personal email to Ira Nayman [[email protected]]. 4 September 1998.

“The ultimate synergy of tools and content.” Globe and Mail (7 April 2000).

Underwood, Rick [[email protected]]. “The Honeymoon is Over” [http://www.geocities.com/Athens/Delphi/1145/orens6.html]. Oren’s Short Stories and more...

Unsworth, John. “Electronic Scholarship; or, Scholarly Publishing and the Public.” The Literary Text in the Digital Age. Richard J. Finneran, ed. Ann Arbor: University of Michigan Press, 1996.

[[email protected]]. “Utterants... Submission Guidelines” [http://www.globalgraphics.com/zines/utterants/utterantsw6.html]. Utterants.

van Bakel, Rogier. “Fast Fiction.” Wired (V3 N11, November 1995). Literature at Lightspeed – page 522

Varian, Hal A. “Economic Issues Facing the Internet.” The Internet as Paradigm. Queenstown, Maryland: The Institute for Information Studies, 1997.

Vary Stark [[email protected]]. “RE: the NaNopoLiS (r2b7).” Personal email to Ira Nayman [[email protected]]. 8 August 1998.

Vaughan, Jr., Memphis [[email protected]]. “About the TimBookTu Homepage” [http://www.timbooktu.com/]. TimBookTu. 1997.

Via, Eric [[email protected]]. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 29 June 1998.

Vinik, Danny [[email protected]]. Jack Tar [http://brink.com/brink/tar/index.html].

Vinyard, Nelma M. [[email protected]]. “survey.” Personal email to Ira Nayman [[email protected]]. 27 June 1998.

Voight, Joan. “Beyond the Banner.” Wired (V4 N12, December 1996).

Wakulich, Bob [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Wallace, Jonathan and Mark Mangan. Sex, Laws and Cyberspace. New York: Henry Holt, 1996.

Wallich, Paul. “Your 0.002 Cent’s Worth.” Scientific American (V280 N6, June 1999).

Wallis, David J. [[email protected]]. “Our Schools are Burning” [http://www.dezines.com/wallis/schools.htm]. Wallis’ Mystery and Science Fiction Short Stories [http://www.dezines.com/wallis/]. 1999.

______. “Re: Wallis’ Mystery and Science Fiction Short Stories (r2b12).” Personal email to Ira Nayman [[email protected]]. 20 September 1998.

Walljasper, Jay. “Age of the Mega-Alternatives.” Utne Reader (N82, July-August, 1997).

Wardrip, Josh [[email protected]]. “Re: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 11 July 1998.

Wark, McKenzie. “Data Trauma.” 21C: Scanning the Future (I24, 1997).

Watmough, David [[email protected]]. “The Beautiful Landlord” [http://www.blithe.com/bhq1.1/landlord.html]. Blithe House Quarterly [http://www.blithe.com/]. V1 N1, Summer 1997. Literature at Lightspeed – page 523

______. “Re: Research Survey.” Personal email to Ira Nayman [[email protected]]. 27 June 1998.

“Web Wide Media advertisement.” Wired (V5 N2, February 1997).

“Web profits still elusive.” Miami Herald (8 February 1998). Quoted in Edupage. John Gehl and Suzanne Douglas, eds. 10 February 1998.

“Web (vs) TV.” The Web Magazine (V1 N9, September 1997).

Weinberg, V. [[email protected]]. “Re: The Last Time I Saw Elvis (r2b8).” Personal email to Ira Nayman [[email protected]]. 15 August 1998.

Weindorf, Chuck [[email protected]]. “Re: Mudsox Personal Publishing (r2b11).” Personal email to Ira Nayman [[email protected]]. 5 September 1998.

______. “Time Walk” [ftp://ftp.erie.net/home/mudsox/timewalk.txt]. Mudsox Personal Publishing [http://moose.erie.net/~mudsox/]. 1996.

Weiss, Aaron. “Yahoo turns against Net that spawned it.” NOW (Feb. 25-March 3 1999).

Weiss, Jonathan [[email protected]]. “Re: Lawn Care (r2b6).” Personal email to Ira Nayman [[email protected]]. 2 August 1998.

Wellman, Barry. “Preface.” Networks in the Global Village. Wellman, ed. Boulder, Colorado: Westview Press, 1990.

Wellman, Barry and Janet Salaff, Dimitrina Dimitrova, Laura Garton, Milena Gulia and Caroline Haythornthwaite. “Computer Networks as Social Networks: Collaborative Work, Telework, and Virtual Community.” Annual Review of Sociology (N22, 1996).

Wellman, Barry and Milena Gulia. “Virtual communities as communities: Net surfers don’t ride alone.” Communities in Cyberspace. Marc A. Smith and Peter Kollock, eds. London: Routledge, 1999.

Wellman, Barry and S. D. Berkowitz. “Introduction: Studying social structures.” Social Structures: A Network Approach. Barry Wellman and S. D. Berkowitz, eds. Cambridge, England: Cambridge University Press, 1988.

Weston, Jeff [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 5 July 1998.

Weston, Kaleen [[email protected]]. A Meeting of the Minds [http://www2.aphelion-webzine.com/shorts/mindmeet.htm]. Aphelion [http://www.aphelion-webzine.com/index2.htm]. 1998.

“What Is Copyleft?” [http://arirang.snu.ac.kr/~ilhwan/copyleft.html].

“When shove comes to push.” The Economist (11 July 1998).

Whitby, Stuart W. [[email protected]]. “Re: Research (r2b2).” Personal email to Ira Nayman [[email protected]]. 7 July 1998a.

______. “A Spell of Rain, Part 1” [http://www.dargonzine.org/dz115s1.htm]. DargonZine [http://www.dargonzine.org/]. V11 N5, 27 June 1998b.

Whitehill, Archie R. [[email protected]]. “The Prodigal Son” [http://members.tripod.com/~archiew/Prodigal.html]. “Archie’s Rest Stop” [http://members.tripod.com/~archiew/]. 1998.

Whittenburg, Alice and G. S. Evans [[email protected]]. “Writer’s Guidelines” [http://home.sprynet.com/sprynet/awhit/guidelin.htm]. Cafe Irreal. 1998.

Whittle, David B. Cyberspace: The Human Dimension. New York: W. H. Freeman and Co., 1997.

Wiggins, Richard. “Corralling Your Content: Stop Those Copyright Claim Jumpers!” New Media (V7 N13, 13 October 1997).

“Will GoTo go?” Wired (V6 N5, May 1998).

Williams, Roger. “The more effective political control of technical change.” The Politics of Technology, Godfrey Boyle, David Elliott and Robin Roy, eds. New York: Longman, 1977.

Wilson, Ian Randall [[email protected]]. If We Even Did Anything [http://users.aol.com/ibar88/private/story/index.htm].

Winchester, Jay. “Making Cybercash: Selling Yourself on the Internet.” American Writers Review (V2 N1, January 1997).

Winkler, Crys [[email protected]]. “RE: Nights in WhiteSatin (r2b6).” Personal email to Ira Nayman [[email protected]]. 3 August 1998.

Winner, Langdon. Autonomous technology: technics-out-of-control as a theme in political thought. Cambridge, Massachusetts: MIT Press, 1977.

______. “Do Artifacts Have Politics?” The Social Shaping of Technology. Donald MacKenzie and Judy Wajcman, eds. Buckingham, England: Open University Press, 1985.

______. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: The University of Chicago Press, 1986.

Winson, L. J. [[email protected]]. “hypertext fiction short story/comments.” Personal email to Ian Randall Wilson [[email protected]]. 20 July 1996. Forwarded to Ira Nayman [[email protected]]. 28 September 1998.

World Intellectual Property Organization. “Agreement Between the World Intellectual Property Organization and the World Trade Organization” [http://www.wipo.int/eng/iplex/wo_wto0_.htm]. 22 December 1995.

______. “Convention Establishing the World Intellectual Property Organization” [http://www.wipo.int/eng/iplex/wo_wip0_.htm]. 1993.

______. “International Protection of Copyright and Neighboring Rights.” [http://www.wipo.int/eng/general/copyrght/intro.htm].

Wisenberg, Solomon L. “Reno vs. ACLU Syllabus” [http://caselaw.findlaw.com/scripts/getcase.pl?court=US&navby=case&vol=000&invol=96-511]. 26 June 1997.

Witmer, Diane F., Robert W. Colman and Sandra Lee Katzman. “From Paper-and-Pencil to Screen-and-Keyboard.” Doing Internet Research: Critical Issues and Methods for Examining the Net. Steve Jones, ed. Thousand Oaks, California: Sage, 1999.

Wittmaack, Ron [[email protected]]. “RE: Earthbound Titans (r2b13).” Personal email to Ira Nayman [[email protected]]. 28 September 1998.

Wolf, Michael J. and Geoffrey Sands. “Fearless Predictions: The Content World, 2005.” Brill’s Content (V2 N6, August 1999).

Wood, Lon. “Access or privacy? Staying afloat on sea of info takes both.” Times Colonist (13 April 1994).

Wood, Sunday [[email protected]]. “Answers to your questions.” Personal email to Ira Nayman [[email protected]]. 27 September 1998.

Wombat [[email protected]]. “Re: The Rose (r2b10).” Personal email to Ira Nayman [[email protected]]. 3 September 1998.

Wortzel, Adrianne [[email protected]]. The Electronic Chronicles [http://artnetweb.com/artnetweb/projects/ahneed/first.html]. 1995.

______. “RE: The Electronic Chronicles (r2b13).” Personal email to Ira Nayman [[email protected]]. 11 October 1998.

Wresch, William. Disconnected: Haves and Have-nots in the Information Age. New Brunswick, New Jersey: Rutgers University Press, 1996.

The Writer’s Centre. “Self-Publishing” [http://www.writer.org/resources/selfpub.htm].

Wu, Frank [[email protected]]. “RE: Research (r2b3).” Personal email to Ira Nayman [[email protected]]. 13 July 1998.

Yan, Yunxiang. The Flow of Gifts: Reciprocity and Social Networks in a Chinese Village. Stanford: Stanford University Press, 1996.

Youngren, S. D. [[email protected]]. “RE: Rowena’s Page (r2b11).” Personal email to Ira Nayman [[email protected]]. 16 September 1998.

Yoxen, Edward. “Seeing with Sound: A Study of the Development of Medical Images.” The Social Construction of Technological Systems. Wiebe E. Bijker, Thomas P. Hughes and Trevor J. Pinch, eds. Cambridge, Mass: MIT Press, 1987.

Zarefsky, David. “How Rhetoric and Sociology Rediscovered Each Other.” The Rhetoric of Social Research: Understood and Believed. Albert Hunter, ed. New Brunswick: Rutgers University Press, 1990.

Zeitchik, Steven M. “Amazon.com launches price war in online bookselling.” Publishers Weekly (V246 I21, 24 May 1999a). Proquest Database [http://proquest.umi.com/pdqweb].

______. “Amazon.com to open two distribution centers.” Publishers Weekly (V246 I22, 31 May 1999b). Proquest Database [http://proquest.umi.com/pdqweb].

Zerbisias, Antonia. “Web TV control debate rages on.” The Toronto Star (11 June 1997).

Zielinski, Siegfried. “Media Archeology.” Digital Delirium. Arthur and Marilouise Kroker, eds. New York: St. Martin’s Press, 1997.

Zgodzinski, David. “Free stuff: Deals just keep coming on the Web.” The Gazette (29 July 1998).

Zinner, Lars [[email protected]]. “Welcome to PARK & READ!” [http://www-public.rz.uni-dusseldorf.de/~zinner/parkreae.html]. PARK & READ. July 1996.