Approaching the 'Ought' Question When Facing the Singularity
Running Head: Approaching the "Ought" Question

Approaching the "Ought" Question when Facing the Singularity1

Chad E. Brack
National Defense University
College of International Security Affairs
April 1, 2015

DISCLAIMER: THE OPINIONS AND CONCLUSIONS EXPRESSED HEREIN ARE THOSE OF THE INDIVIDUAL STUDENT AUTHOR AND DO NOT NECESSARILY REPRESENT THE VIEWS OF THE NATIONAL DEFENSE UNIVERSITY, THE DEPARTMENT OF DEFENSE, OR ANY OTHER GOVERNMENTAL ENTITY. REFERENCES TO THIS STUDY SHOULD INCLUDE THE FOREGOING STATEMENT.

1. This paper was a submission for CISA Master of Arts in Strategic Security Studies course 6933: Science, Technology, and War. The topic question was "can knowledge (output of science) be a bad thing for humans? How should humans view the Bill Joy–Ray Kurzweil debate?"

Approaching the "Ought" Question

Many scholars agree that science itself should not be expected to answer "ought" questions (Bauer 1994; Feyerabend 1975; Midgley 1994; Hoover 2010). As a theoretically amoral system,2 science is simply the pursuit of knowledge or, as Kenneth Hoover (2010) puts it, "a strategy for learning about life and the universe" (134). Science easily provides us with three of the four types of knowledge: empirical, analytical, and theoretical. Normative knowledge, however, stems from human values and norms. It is not scientific by nature and can be highly relative and subjective. Normative questions require "other forms of understanding" (Hoover 2010, 130), which come from many facets of human experience (including the three types of knowledge listed above).

The empirical, analytical, and theoretical knowledge that science provides us is neither good nor bad. It simply is. It is in the application of that knowledge, in the form of technology, that the "ought" question comes into play. Technology allows us to apply scientific knowledge to the world around us. The ability to do so does not, however, imply that we should pursue all forms of technology, or that the benefits of our technology will outweigh its potential costs. Who should be responsible for such decisions is up for debate. Some people argue that scientists should take the lead in decision-making. Others believe that even matters of science and technology remain in the realm of politics. According to Marvin Minsky (2014), scientists are no better (and perhaps worse) than anyone else at deciding what is good or bad. Such questions of morality and ethics are bigger than science, and their consequences can profoundly impact not only all of humanity but also the world at large.

Bill Joy and Ray Kurzweil approach the technological "ought" problem from opposite angles. Joy foresees knowledge-enabled weapons of mass destruction as a likely means to the end of human civilization, while Kurzweil paints technology as the ultimate savior of mankind. Both futurists speak of a coming technological singularity, a point at which technology irreversibly transforms humanity, with Joy pessimistically advocating technological or knowledge-based regulation and Kurzweil optimistically encouraging exponential technological growth. Both men bring valid arguments and concerns to the discussion, but both also fall victim to assumptions that drastically color their conclusions. Both view the development of technology as a normative question, and both seek to steer mankind towards a positive future, one by avoiding and the other by embracing the singularity.
Both approaches are ideological and emotional by nature, and they both fail to recognize that technological singularities have commonly occurred in the past.3 Humanity is already irreversibly bound to technology, and no technology to date has proven to be the destroyer or the savior of mankind. Though each scenario is a real possibility, current empirical data leaves no reason to believe that the next major technological advancement will break the pattern.

2. I label science "theoretically amoral" because, although the system itself is amoral by nature, its findings can be skewed, tainted, or manipulated, whether purposefully or inadvertently, by biased individuals or groups with specific agendas.

3. Academics discuss different types of singularities, most of which revolve around the development of Artificial Intelligence (AI) or post-humanism (the point at which humans and machines merge). According to my analysis, Kurzweil and Joy are speaking about two different types of singularities. Kurzweil leans toward the AI-based or post-human singularity, while Joy speaks of a vaguer technological singularity in which human knowledge, through technology, produces irreversible consequences of some kind. In this paper, my use of the term coincides with Joy's more general definition. The Kurzweilian singularity, of course, has not yet occurred.

Joy (2000; 2008) posits that knowledge can, indeed, be a bad thing. He argues that humanity should willingly limit its pursuit of certain types of knowledge and come together as a species to determine which forms of technology should be relinquished (Joy 2000, 8). His argument revolves around the potential for Genetic Engineering, Nanotechnology, and Robotics (GNR) to damage the physical world. Because they can self-replicate, such potentially destructive technologies may prove unstoppable once loosed upon the environment (3). Furthermore, Joy recognizes that the research and development of GNR technologies has moved from secretive government projects, many of which relied heavily on hard-to-get materials, to more open-source-style corporate environments in which knowledge itself enables production. Unlike nuclear bombs, which require the combination of expertise and elusive fissile materials, knowledge-enabled weapons of mass destruction might become easily accessible to the general population and could prove more destructive than nuclear weapons. They may also be extremely easy to assemble, requiring little more than a bit of knowledge and readily available parts (6). For Joy, the risk of such knowledge-enabled destruction is not worth the potential rewards of pursuing the knowledge itself. He argues that certain GNR technologies carry destructive potential so vast that it warrants a ban on their very development.

Joy is also concerned about developing general artificial intelligence. He believes that a new, in some ways superior, species would inevitably compete for resources, energy, and power. He mentions that biological species rarely survive encounters with superior competitors. Furthermore, humans could become so dependent on technology that they are forced to accept machine decisions that might not be in humanity's best interests (1–2).4 Joy worries that we have a tendency to fail to "understand the consequences of our inventions while we are in the rapture of discovery and innovation" (4).
He dismisses the Kurzweilian notion of human-robot synthesis, seeing it as a road to the loss of what we consider "human," which could lead to the oppression or extinction of purely biological humanity. Joy is not without valid concerns, but his technological pessimism borders on the paranoid and leaves little room for a non-dystopian future.

Kurzweil, on the other hand, may not be cautious enough (Grossman 2011; Kurzweil 2014). His version of the future is extremely utopian, one in which even death has been conquered (Grossman 2011, 1, 4). Kurzweil envisions a melding of humanity with technology. He sees artificial intelligence billions of times smarter than humans solving the world's toughest problems. Meanwhile, humans with genetic enhancements will be smarter, faster, stronger, and healthier than ever before (Transcendent Man 2009). Eventually humans will be able to download their intelligence into robot vessels and travel the stars, re-engineering all of the matter in the universe (Grossman 2011, 5).5 Furthermore, Kurzweil predicts that the singularity will happen much sooner than anyone expects (3). Simply stated, Kurzweil sees technology as the answer to all of humanity's problems, and for him, banning it would be unethical, dangerous, and impossible (4).

4. Joy uses quotes from the Luddite Theodore Kaczynski (the Unabomber) and the robotics researcher Hans Moravec to express his concerns with technology run amok.

5. This idea sounds like a precursor to Frank Tipler's (1986) Anthropic Universe.

Both authors agree that technology's exponential growth will soon offer awe-inspiring transformative power. What they do not agree on is whether that transformative power will ultimately be good or bad for humanity or the planet Earth. Current data seems to suggest that the answer is neither one nor the other. Or perhaps we could logically conclude that it is actually both. A more interesting question is whether our opinion truly matters. Do we even have the power to choose?

Technological determinism can be viewed in two distinct ways. The first says that technology itself is a driving force containing "agency within the scope of human history."6 Kevin Kelly (2010) frames this idea in terms of evolution. He describes technology's tendency to move toward higher forms of complexity as universal and inevitable. For Kelly, once precursor technologies are in place, the next step will definitely occur. We do not have the power to stop progress, but we do have the power to influence how it happens, e.g., choosing which type of light bulb to pursue, not whether or not to produce artificial light. The second way to view technological determinism