
1 Knowing Computers

Of all the technologies bequeathed to us by the twentieth century, one above all saturates our lives and our imaginations: the digital computer. In little more than half a century, the computer has moved from rarity to ubiquity. In the rich, Euro-American world—and to a growing extent beyond it as well—computers now play an essential part in work, education, travel, communication, leisure, finance, retail, health care, and the preparation and the conduct of war. Increasingly, computing is to be found in devices that do not look like computers as one ordinarily thinks of them: in engines, consumer products, mobile telephones, and in the very fabric of buildings. The benefits brought by all this computerization are clear and compelling, but it also brings with it dependence. Human safety, the integrity of the financial system, the functioning of utilities and other services, and even national security all depend upon computing. Fittingly, the twentieth century’s end was marked both by an upsurge of enthusiasm for computing and by a wave of fear about it: the huge, soon-to-be-reversed rise in the prices of the stock of the “dotcom” Internet companies and the widespread worries about the “millennium bug,” the Y2K (year 2000) date change problem.

How can we know that the computing upon which we depend is dependable? This is one aspect of a question of some generality: how do we know the properties of artifacts? The academic field of the social studies of science and technology (a diverse set of specialisms that examine the social nature of the content of science and technology and their relations to the wider society) has for several decades been asking how we know the properties of the natural world.1 The corresponding question for the properties of artifacts is much less often asked; surprisingly so, given that a good part of recent sociological interest in technology derives ultimately from the sociology of scientific knowledge.2

Asking the question specifically for computers—how do we know the properties of computers and of the programs that run on them?—is of particular interest because it highlights another issue that sociological work on science has not addressed as much as it might: deductive knowledge. Sources of knowledge, whether of the properties of the natural world or of those of artifacts, can usefully be classified into three broad categories:

• induction—we learn the properties by observation, experiment, and (especially in the case of artifacts) testing and use;
• authority—people whom we trust tell us what the properties are; and
• deduction—we infer the properties from other beliefs, for example by deducing them from theories or models.3

Social studies of science have focused primarily on the first of these processes, induction, and on its relations to the other two: on the dependence of induction upon communal authority4 and interpersonal trust;5 and on the interweaving of induction and deductive, theoretical knowledge.6 Deduction itself has seldom been the focus of attention: the sociology of mathematics is sparse by comparison with the sociology of the natural sciences; the sociology of formal logic is almost nonexistent.7

At the core of mathematics and formal logic is deductive proof. That propositions in these fields can be proved, not simply justified empirically, is at the heart of their claim to provide “harder,” more secure, knowledge than the natural sciences.
Yet deductive proof, for all its consequent centrality, has attracted remarkably little detailed attention from the sociology of science, the work of David Bloor and Eric Livingston aside.8 In the social studies of science more widely, the single best treatment of proof remains one that is now forty years old, by the philosopher Imre Lakatos in the 1961 Ph.D. thesis that became the book Proofs and Refutations (see chapter 4).9 Indeed, it has often been assumed that there is nothing sociological that can be said about proof, which is ordinarily taken to be an absolute matter. I encountered that presumption more than once at the start of this research. Even the founder of the modern sociology of knowledge, Karl Mannheim, excluded mathematics and logic from the potential scope of its analyses.10

Asking how we know the properties of computer systems (of hardware and/or of software) directly raises the question of deductive knowledge and of proof. An influential current of thinking within computer science has argued that inductive knowledge of computer systems is inadequate, especially in contexts in which those systems are critical to human safety or security. The number of combinations of possible inputs and internal states of a computer system of any complexity is enormously large. In consequence it will seldom be feasible, even with the most highly automated testing, to exercise each and every state of a system to check for errors and the underlying design faults or “bugs” that may have caused them.11 As computer scientist Edsger W. Dijkstra famously put it in 1969, “Program testing can be used to show the presence of bugs, but never to show their absence!”12 Even extensive computer use can offer no guarantees, because bugs may lurk for years before becoming manifest as a system failure.

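To give a sense of the scale involved (an illustrative calculation, not drawn from the original text): a program whose input is nothing more than two 32-bit integers already has $2^{64} \approx 1.8 \times 10^{19}$ possible input combinations. Even at a rate of one billion tests per second, running each combination once would take

\[
\frac{2^{64}}{10^{9}\ \text{tests per second}} \approx 1.8 \times 10^{10}\ \text{seconds} \approx 585\ \text{years},
\]

and that figure counts only inputs, not the program’s internal states, which multiply the space of cases still further.
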
Authority on its own is also a problematic source of knowledge of the properties of computer systems. In complex, modern societies trust is typically not just an interpersonal matter, but a structural, occupational one. Certain occupations, for example, are designated “professions,” with formal controls over membership (typically dependent upon passing professional examinations), norms requiring consideration of the public good (not merely of self-interest), mechanisms for the disciplining and possible expulsion of incompetent or deviant members, the capacity to take legal action against outsiders who claim professional status, and so on. Although many computer hardware designers are members of the established profession of electrical and electronic engineering, software development is only partially professionalized. Since the late 1960s, there have been calls for “software engineering” (see chapter 2), but by the 1990s it was still illegal, in forty-eight states of the United States, to describe oneself as a “software engineer,” because it was not a recognized engineering profession.13

With induction criticized and authority weak, the key alternative or supplementary source of knowledge of the properties of computer systems has therefore often been seen as being deductive proof. Consider the analogy with geometry.14 A mathematician does not seek to demonstrate the correctness of Pythagoras’s theorem15 by drawing triangles and measuring them: since there are infinitely many possible right-angled triangles, the task would be endless, just as the exhaustive testing of a computer system is, effectively, infeasible. Instead, the mathematician seeks a proof: an argument that demonstrates that the theorem must hold in all cases. If one could, first, construct a faithful mathematical representation of what a program or hardware design was intended to do (in the terminology of computing, construct a “formal specification”), and, second, construct a mathematical representation of the actual program or hardware design, then perhaps one could prove deductively that the program or design was a correct implementation of its specification. It would not be a proof that the program or design was in an absolute sense “correct” or “dependable,” because the specification might not capture what was required for safety, security, or some other desired property, and because an actual physical computer system might not behave in accordance with even the most detailed mathematical model of it. All of those involved in the field would acknowledge that these are questions beyond the scope of purely deductive reasoning.16 Nevertheless, “formal verification” (as the application of deductive proof to programs or to hardware designs is called) has appeared to many within computer science to be potentially a vital way of determining whether specifications have been implemented correctly, and thus a vital source of knowledge of the properties of computer systems.

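A minimal sketch may make the idea concrete. The example is mine, not the book’s, and uses the Lean proof assistant purely for illustration: the “program” is a one-line function, the “formal specification” is a theorem about its behavior, and the proof that the program meets the specification is checked deductively, for every possible input rather than for a sample of tested cases.

```lean
-- Illustrative sketch only: a tiny "program" and a "formal specification"
-- of one of its properties, with a machine-checked deductive proof.

-- The program: swap the two components of a pair of natural numbers.
def swapPair (p : Nat × Nat) : Nat × Nat := (p.2, p.1)

-- The specification: swapping twice restores the original pair.
-- The proof holds for all pairs, not merely for those one happens to test.
theorem swapPair_swapPair (p : Nat × Nat) : swapPair (swapPair p) = p := by
  cases p with
  | mk a b => rfl
```

Even in so small a case the caveats just noted apply: the theorem says nothing about whether this was the property one actually wanted, nor about how a physical machine will execute the compiled code.
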
The Computer and Proof

Formal verification is proof about computers. Closely related, but distinct, is the use of computers in proof. There are at least three motivations for this use. First, proofs about computer hardware designs or about programs are often highly intricate. Many of those who sought such proofs therefore turned to the computer to help conduct them. Mechanized proofs, they reasoned, would be easier, more dependable, and less subject to human wishful thinking than proofs conducted using the traditional mathematicians’ tools of pencil and paper. This would be especially true in an area where proofs might be complicated rather than conceptually deep and in which mistakes could have fatal real-world consequences. Second, some mathematicians also found the computer to be a necessary proof tool in pure mathematics. Chapter 4 examines the most famous instance of such use, the 1976 computer-assisted solution of the long-unsolved four-color problem: how to prove that four colors suffice to color in any map drawn upon a plane such that countries that share a border are given different colors. A third source of interest in using computers in proofs is artificial intelligence. What is often claimed to be the first artificial-intelligence