The Empirical Untenability of Sentient Artificial Intelligence
Dickinson College, Dickinson Scholar: Student Honors Theses, 5-22-2011

The Empirical Untenability of Sentient Artificial Intelligence
Andrew Joseph Barron, Dickinson College

Follow this and additional works at: http://scholar.dickinson.edu/student_honors
Part of the Philosophy Commons

Recommended Citation: Barron, Andrew Joseph, "The Empirical Untenability of Sentient Artificial Intelligence" (2011). Dickinson College Honors Theses. Paper 122.

This Honors Thesis is brought to you for free and open access by Dickinson Scholar. It has been accepted for inclusion by an authorized administrator. For more information, please contact [email protected].

"If you gotta ask, you ain't never gonna know"

THE EMPIRICAL UNTENABILITY OF SENTIENT ARTIFICIAL INTELLIGENCE

By: Andrew Barron

Submitted in partial fulfillment of Honors Requirements for the Department of Philosophy

Professor Jessica Wahman, Supervisor
Professor Crispin Sartwell, Reader
Professor Susan Feldman, Reader
Professor Chauncey Maher, Reader

May 22, 2011

"All sentience is mere appearance - even sentience capable of passing the Turing test."
Tuvok, Star Trek: Voyager

"The key distinction here is between duplication and simulation. And no simulation by itself ever constitutes duplication."
John Searle, Minds, Brains, and Science

TABLE OF CONTENTS

INTRODUCTION
CHAPTER ONE: MINDS AND/OR COMPUTERS
    WHAT IS COMPUTATION?
    IS A MIND A COMPUTER?
    IS A COMPUTER A MIND?
CHAPTER TWO: THE ENIGMA OF FAMILIARITY
    THE HARD PROBLEM
    ARGUMENTS FROM INEFFABILITY
CHAPTER THREE: DO ANDROIDS DREAM OF ELECTRIC SHEEP? WE'LL NEVER KNOW FOR SURE
    THE EXPLANATORY GAP
    MCGINN'S THEORY OF COGNITIVE CLOSURE
    CRITICISM AND RESPONSE
    AI, CONSCIOUSNESS, AND BLADE RUNNER: TYING EVERYTHING TOGETHER
CONCLUSION
WORKS CITED

INTRODUCTION

The ultimate goal of Artificial Intelligence (AI) is to model the human mind and ascribe it to a computer.1 Since the 1950s, astounding progress has been made in the field, leading some to defend "strong AI," John Searle's term for the theory that it is possible to write a computer program equivalent to a mind. As a result, a common trope in science fiction from Isaac Asimov to James Cameron is the idea of sapient and sentient robots living amongst humans, sometimes peacefully, but more commonly not. In Ridley Scott's Blade Runner, androids indistinguishably humanlike in appearance and behavior live as outlaws among human beings, hiding in plain sight. The idea of strong AI makes for good entertainment, but is it actually possible? Is it within the realm of human capability to synthesize consciousness? To many scholars and researchers, the answer is a resounding "yes!" Cognitive science, the interdisciplinary amalgamation of neuroscience, computer science, philosophy, linguistics, and psychology, has churned out increasingly advanced instances of AI for more than half a century. This paper is an attempt to restrain the mounting excitement. There is no doubt that AI is an incredible idea with far-reaching implications already in effect today. The marketplace is already saturated with "smart" cars, calculators, wristwatches, and dishwashers, but the average consumer generally avoids thinking about what that really means. Does the luxury car that parallel parks itself know that it is parallel parking? Simply because a device is touted by advertisers as "intelligent" does not entail the existence of a conscious mind.

1. 'Mind' is a controversial term. Intelligence does not presuppose mentality, as we shall see.
The purpose of my argument is not to prove the impossibility of a conscious computer, but to prove the empirical untenability of ever knowing whether or not we have succeeded in producing one. Consciousness is not something we can detect by observing behavior, including brain behavior, alone; just because something seems sentient does not necessarily mean that it is. There is an explanatory gap between brain behavior and the phenomenon of conscious experience that cannot be bridged using any extant philosophical or scientific paradigm. Hypothetically, if we were ever to fully grasp the nature of what consciousness is and how it arises, and to express it in the form of a coherent theory, it might be possible to ascribe such a theory to an artifact. Even so, no conceivable test can transcend the explanatory gap and definitively prove the existence of a sentient mind.

My thesis is a three-pronged argument against the possibility of ever knowing for sure whether or not we succeed in building a self-aware, sentient, conscious computer. The first tier is an explanatory discussion of computation and the Turing test. Here I address various arguments for computer minds and the theoretical underpinnings of strong artificial intelligence. I also discuss the counterarguments that cognitive science continuously fails to rebut. I place special emphasis on Searle's Chinese Room thought experiment and its implications for the possibility of machine sentience. The second tier is a discussion of what it means to be a conscious agent. I reject reductionism in any form as an acceptable solution to the mind-body problem because of the explanatory gap fatally separating first-person introspective accounts of consciousness from third-person observable neuro-behavioral correlates.
In the final section I support Colin McGinn's cognitive closure hypothesis by defending his view that the mind-body problem is inherently insoluble because a full understanding of consciousness is beyond our epistemic limits. Using the film Blade Runner and its antecedent novel Do Androids Dream of Electric Sheep?, I aim to prove the impossibility of ever differentiating between a conscious, "strong" AI and one that behaves as if it were conscious but has no inner life whatsoever, also known as "weak" AI.

CHAPTER ONE: MINDS AND/OR COMPUTERS

WHAT IS COMPUTATION?

Throughout the course of my research, I have encountered a bevy of disparate definitions of the word 'computer'. The Oxford Dictionary of Philosophy defines it as "any device capable of carrying out a sequence of operations in a defined manner" (Blackburn 2008). Computers permeate every aspect of modern society, from the microchips in hearing aids to massive parallel-processing supercomputers. Recently, IBM built one such supercomputer, called Watson, that competed on Jeopardy! against the two most successful contestants in the show's history. Watson seemed to have no problem comprehending the complex linguistic puzzles posed by Trebek, answering them in record time. As impressive as IBM's creation may be, does it function at all like a human brain? Are these electronic operations equivalent to whatever phenomena are responsible for human thought? Numerous cognitive scientists, philosophers, computer scientists, and neuroscientists would say yes (Carter 2007). Computationalism is a cognitive theory that posits that the mind is the functional representation of the external world through the manipulation of digital symbols; the mind is software within brain hardware.

Before I discuss computationalism in greater detail, it is important to take a closer look at what a computer is. John Haugeland defines a computer as "an interpreted automatic formal system" (Haugeland 1989, 48).
To understand this definition, we must first decipher what each of its component terms signifies. A formal system is composed of tokens to be manipulated according to a set of predetermined rules, not unlike a game. Take chess, for instance. Before the game starts and regardless of who is playing, it is decided that a pawn can only move one (or two) spaces forward, and one space diagonally when attacking. Unless different rules are decided on before the game starts, these rules are set in stone. Instead of physical pieces, computer tokens are electronic and invisible to the eye.

Formal systems like chess and checkers are necessarily digital. 'Digital' means discrete and precise, while its opposite, 'analogue', means variable or nebulous. The alphabet is digital: A, B, C, D... are static and discrete, with no middle ground between A and B. The station preset buttons on car radios are digital, while an "old fashioned" dial is analogue. If button '1' is set to 98.5 MHz, pushing the button will reliably set that exact frequency. But when the driver of a 1983 Ford Bronco turns the tuner knob, it is effectively impossible to tune to the exact same frequency every time. To my knowledge, all existing digital computers are binary, using strings of 1s and 0s as tokens.

Formal systems are completely self-contained, meaning that the rules only apply to tokens within the system itself; 'black knight' seldom means 'movement restricted to two spaces by one space' outside the realm of a chess match. As a result, the "outside world" is irrelevant. Chess can be played indoors, outdoors, on the moon, or underwater, with pieces made from plastic, gold, or elephant meat; the medium is irrelevant. All that matters is that the symbols pertain to the same system of rules, or syntax. As we shall see later, the idea of medium independence is extremely relevant to the field of artificial intelligence.2 So far we have learned that computers are digital, self-contained, and syntactic.
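The rule-governed, medium-independent character of a formal system can be made vivid with a short programming sketch (my own illustration, not part of the thesis): the pawn rule described above, expressed purely as a relation between tokens. Nothing about wood, plastic, or electricity appears anywhere in the rule.

```python
# A minimal sketch of one chess rule as pure syntax. The "tokens" are
# (file, rank) coordinate pairs for a white pawn; rank increases toward
# the opponent. The physical medium of the pieces never enters into it.

def pawn_move_legal(start, end, capturing, first_move=False):
    """Return True if a white pawn may legally move from start to end."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if capturing:
        # One space diagonally when attacking.
        return abs(dx) == 1 and dy == 1
    if dx != 0:
        return False                 # non-capturing moves stay in the same file
    if first_move:
        return dy in (1, 2)          # one or two spaces forward on the first move
    return dy == 1                   # otherwise exactly one space forward

print(pawn_move_legal((4, 2), (4, 4), capturing=False, first_move=True))  # True
print(pawn_move_legal((4, 2), (5, 3), capturing=True))                    # True
print(pawn_move_legal((4, 2), (5, 3), capturing=False))                   # False
```

The function is "set in stone" before any game begins, and it applies identically to any tokens that fit the system's syntax, which is just the point about self-containment made above.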
A formal system is automatic if it works or runs devoid of any external influence. In his discussion of automatic systems, Haugeland imagines a fanciful example: "a set of chess pieces that hop around the board, abiding by the rules, all by themselves" or "a magical pencil that writes out formally correct mathematical derivations without the guidance of any mathematicians" (Haugeland 1989, 76). A computer becomes automated when its legal moves are predetermined and carried through algorithmically. An algorithm works like a flowchart, a "step-by-step recipe for obtaining a prespecified result" (Haugeland 1989, 65). Algorithms are designed to produce their results in finite time and to remain reusable indefinitely. For example, a programmer can design a procedure that alphabetizes a set of data. The algorithm used for this program can be used reliably with new sets of data ad infinitum.
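The alphabetizing procedure mentioned above can itself be sketched as such a step-by-step recipe (again my own illustration, using a simple insertion sort rather than any particular program the thesis has in mind): a fixed set of rules that yields a prespecified result in finite time and can be reapplied to new data without limit.

```python
# A step-by-step "recipe" for alphabetizing a list of words.
# The same fixed rules apply to any new set of data, ad infinitum.

def alphabetize(words):
    result = []
    for word in words:
        i = 0
        # Walk forward until we find the word's alphabetical position.
        while i < len(result) and result[i].lower() < word.lower():
            i += 1
        result.insert(i, word)
    return result

print(alphabetize(["Turing", "Searle", "Haugeland", "McGinn"]))
# → ['Haugeland', 'McGinn', 'Searle', 'Turing']
```

Each step is a predetermined legal move; no judgment or outside guidance intervenes, which is exactly what makes the system automatic in Haugeland's sense.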