ABDUCTIVE HUMANISM: COMPARATIVE ADVANTAGES OF ARTIFICIAL INTELLIGENCE AND HUMAN COGNITION ACCORDING TO LOGICAL INFERENCE

By

WILLIAM JOSEPH LITTLEFIELD II

Submitted in partial fulfillment of the requirements for the degree of Master of Arts

World Literature

CASE WESTERN RESERVE UNIVERSITY

May, 2019

We hereby approve the thesis/dissertation of

William Joseph Littlefield II

candidate for the degree of Master of Arts.

Committee Chair

Florin Berindeanu

Committee Member

Richard Buchanan

Committee Member

Mark Turner

Date of Defense

April 3, 2019

*We also certify that written approval has been obtained for any proprietary

material contained therein.


Table of Contents

Abstract

1. Introduction

2. Logical AI

a. Deductive Methods

b. Inductive Methods

c. Comparative Advantages of AI

3. Big Data

a. Epistemic Superiority

b. Computational Creativity

4. Abduction

a. Charles Sanders Peirce

b. Abduction & Discovery

5. Abductive Humanism

a. Comparative Advantages of Human Cognition

b. A Future of Abduction

6. Conclusion

Bibliography


Abductive Humanism: Comparative Advantages of Artificial Intelligence and Human Cognition According to Logical Inference

Abstract

By

WILLIAM JOSEPH LITTLEFIELD II

Speculation about artificial intelligence and big data has become commonplace.

Foremost among these discussions is the potential for these technologies to displace the value of human labor. However, these discussions have become untethered from the history and development of the technologies in question.

This paper analyzes paradigms of artificial intelligence to reveal that they are closely tied to different types of logical inference. The properties of computing machines provide them with an advantage at deductive and inductive tasks when compared with human cognition. But, the understudied method of Peircean abduction poses several problems for computing machinery. Moreover, Peircean abduction, also called “creative abduction,” bears resemblance to important themes in the history of humanism. Human cognition handles the issues of

Peircean abduction with remarkable ease, suggesting that humans will maintain a comparative advantage at this type of logical inference for the foreseeable future.


“Imagination is more robust in proportion as reasoning power is weak.”1
- Giambattista Vico, The New Science

1. Introduction

Two great chimeras of science fiction are proliferating: one represents an existential crisis, the other, epistemological. I speak, of course, of artificial intelligence and big data, and like most crises, their horror is in the questions that they pose. What is the value of human labor once we have outsourced intelligence?

What is the value of knowledge when we have outsourced memory? Machine learning algorithms can be trained to see what we do not; brute force computation can outrun us. Our favorite applications now know us better than our friends or family do.2

Still, there are particular tasks where human cognition will possess a comparative advantage for the foreseeable future — this paper will argue that such tasks are centered on the human capacity for abduction.

Meanwhile, as these two technologies continue to develop, their comparative

1 Vico, Giambattista, et al. The New Science of Giambattista Vico: Revised Translation of the Third Edition (1744). Cornell University Press, 1968.
2 Youyou, Wu, et al. “Computer-Based Personality Judgments Are More Accurate than Those Made by Humans.” Proceedings of the National Academy of Sciences, vol. 112, no. 4, 2015, pp. 1036–1040., doi:10.1073/pnas.1418680112.

advantage at inductive and deductive tasks will only improve.3 Therefore, I advocate for a new humanism, referred to simply as abductive humanism.

To this end, this paper begins by reviewing some paradigms in the history of artificial intelligence and how they are representative of the aforementioned types of logical inference. The speed, inerrability, and indefatigability of computing machines have rendered them superior tools for such inference.

Additional consideration is given to the epistemic advantages which big data provides and the consequences of those advantages for computational creativity.

Ultimately, though, a revisiting of the foundations of humanism is combined with outstanding problems in logical artificial intelligence (logical AI), problems which are well handled by abductive reasoning, to conclude, as Selmer Bringsjord quipped, that “computation, among other things, is beneath us.”4

2. Logical AI

3 A disclaimer regarding methodology: there are some who may object to the idea of broadly periodizing the history of artificial intelligence according to the underlying type of logical inference employed. Furthermore, scholars of logical AI have been grappling with the problems of abduction, defeasible reasoning, and nonmonotonic logic since the emergence of the field. Suggesting that the technologies of the past, or of today, are strictly deductive, inductive, or abductive in their approach could be denounced as an oversimplification. The author readily concedes both of those points. Instead, this paper considers which approaches have generally materialized in artificial intelligence empirically, and which have generally eluded success, to describe macroscale trends in the history and philosophy of technology. The legitimacy of this method lies in the fact that this paper intends to respond to scholarly and popular thought of a similar ilk.
4 John McCarthy provides the following definition of logical AI: “logical AI involves representing knowledge of an agent's world, its goals and the current situation by sentences in logic.” McCarthy, John. “Concept of Logical AI.” Logic-Based Artificial Intelligence, by Jack Minker, Kluwer Academic Publishers, 2000, pp. 37–56. This witticism by Bringsjord dates from a 1994 refutation of Searle’s argument against cognition as computation. Bringsjord, Selmer. “Computation, among Other Things, Is beneath Us.” Minds and Machines, vol. 4, no. 4, 1994, pp. 469–488., doi:10.1007/bf00974171.

Much of the speculation and hysteria regarding AI arises from generalizations about its seemingly infinite applications. Once more, predictions of a leisure society seem reasonable, if not inevitable.5 However, by contextualizing this conversation within the history of logical AI, the limitations of these technologies become much clearer.6 What is more, the history of AI, in practice, maps to different strategies of logical inference. As will be seen, abductive reasoning is mostly aspirational within logical AI, which is a telling contrast to the achievements in inductive and deductive reasoning.

Deductive Methods

Deductive and inductive reasoning both comprise large fields of study within philosophy. Despite the many historical interpretations of induction and deduction, it can be safely said that induction is a kind of “bottom-up” reasoning while deduction is “top-down.” The most reprinted illustration of deduction is likely the following syllogism:

All men are mortal.

Socrates is a man.

5 See the work of Yuval Noah Harari, whose arguments about the future of artificial intelligence are among those which this paper intends to refute. Harari, Yuval Noah. “The Meaning of Life in a World without Work.” The Guardian, Guardian News and Media, 8 May 2017, ​ ​ www.theguardian.com/technology/2017/may/08/virtual-reality-religion-robots-sapiens-book. For a ​ book length exposition, see Homo Deus: A Brief History of Tomorrow. Harari, Yuval N. Homo ​ ​ ​ Deus: A Brief History of Tomorrow. Harper Perennial, 2018. ​ 6 Something very similar occurred with early academic speculation about “cyberspace.” After the word was coined by William Gibson in the 1984 novel, Neuromancer, a flurry of papers, ​ ​ conferences, and books emerged that imagined a digital future which was increasingly untethered from reality. Those working more closely with the technology and policy of the era, such as John Perry Barlow, offered much more accurate prognostications.


Therefore, Socrates is mortal.

Bertrand Russell provides a memorable example of induction in the chicken that is fed daily by a farmer and infers that, the following day, it will be fed once more (only to have its neck wrung).7

As can be seen, deduction operates by inspecting whether a particular case falls within the domain of an established rule. Some of the most visible examples of early AI were chess-playing programs, often called “chess engines.”

The approach first employed in the development of chess engines was to survey chess grandmasters about how to play chess, aggregate their answers, and create a set of logical rules that corresponded to their strategies. During gameplay, if a situation fell under an established rule, then the program would execute a particular strategy in accordance with that rule.8
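To make the paradigm concrete, below is a minimal sketch, in Python, of how such a rules-based move selector might be organized. The rules and the simplified position flags are hypothetical illustrations, not drawn from any actual engine:

```python
from dataclasses import dataclass

# Hypothetical, highly simplified board state; a real engine would
# derive flags like these from the actual position.
@dataclass
class Position:
    king_in_check: bool = False
    free_capture_available: bool = False
    controls_center: bool = True

def select_strategy(pos: Position) -> str:
    """Deductive dispatch: if the situation falls under a rule, apply it."""
    if pos.king_in_check:
        return "resolve check"           # forced: address the threat first
    if pos.free_capture_available:
        return "capture material"        # expert rule: take undefended pieces
    if not pos.controls_center:
        return "develop toward center"   # aggregated opening principle
    return "improve worst-placed piece"  # default when no special rule fires

print(select_strategy(Position(free_capture_available=True)))
# -> capture material
```

Each branch encodes an expert's rule; the program's only job is to recognize which established rule the current case falls under.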

This is very typical of the dominant paradigm in early AI. In fact, such programs are often referred to as “GOFAI,” or Good Old-Fashioned Artificial

Intelligence, a name coined by the philosopher John Haugeland (1945-2010).9

But, in academic terms, this technology is referred to as “symbolic AI.” Deep

Blue, the chess engine developed by IBM, which won its first game against world champion Garry Kasparov in 1996 and defeated him in a full match the following year, relied primarily upon symbolic AI.10

7 Russell, Bertrand, and John Skorupski. The Problems of Philosophy. OUP Oxford, 2014. Here, Russell is of course explaining the infamous “problem of induction.”
8 Such programs rely upon a system of conditional if-then statements, a fundamental of computer science. When a program runs, “if” a condition is met, “then” execute this piece of functionality.
9 Haugeland, John. Artificial Intelligence: The Very Idea. MIT Press, 1985.
10 Hsu, Feng-Hsiung, et al. “Deep Blue System Overview.” Proceedings of the 9th International Conference on Supercomputing - ICS '95, 1995, doi:10.1145/224538.224567.

Symbolic AI is plainly deductive. A large set of applications is served well by symbolic AI. The benefit of these rules-based systems is their capacity to store complex knowledge and to work through that complexity without error. In circumstances that call for convoluted and procedural decision making, they can function as experts. So long as the truth value of their rules is maintained, they are infallible.

Inductive Methods

If AI were limited to deductive systems, concerns about their dominance over human intelligence would be limited — after all, the knowledge base from which they make inferences would have to be designed by humans. Ergo, how could they know more than the experts who informed them? Today, chess engines, and other examples of AI, obtain their knowledge bases much differently.11

Symbolic AI has given way to more inductive technologies, in particular, machine learning algorithms and neural networks. Principally, these programs are “trained” on a dataset from which they draw their own conclusions, as opposed to having those conclusions supplied by external experts.12

11 The underlying techniques for training these systems have existed since at least the 1960s. However, it was not until 2011 or 2012 that modern implementations and hardware improvements reached an inflection point, and a modern revolution in deep neural networks was ignited. Parloff, Roger. “Why Deep Learning Is Suddenly Changing Your Life.” Fortune, fortune.com/ai-artificial-intelligence-deep-machine-learning/. For an applied example of deep neural networks, see the paper by Silver et al. Silver, David, et al. “Mastering the Game of Go with Deep Neural Networks and Tree Search.” Nature, vol. 529, no. 7587, 2016, pp. 484–489., doi:10.1038/nature16961.
12 Some programs take a two-pronged approach, supplying some expert knowledge in a complementary system.

(Unfortunately, almost any review of contemporary methods in machine learning and neural networks, so-called “deep learning,” is rather involved.13 This is simply outside the scope of this paper.) This difference in paradigm is important, and technical details aside, there is a fundamental shift from systems that are supplied rules and deductively infer which rule to apply, to systems that are given a small set of learning algorithms from which to generate rules after being trained on sample data. The latter, whereby a number n of samples is used to infer and hone a rule, is emblematic of inductive reasoning.
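The contrast can be made concrete with a toy sketch. The following is not any particular library's API, only an illustration of the inductive paradigm: the program receives labeled samples rather than rules, and induces a decision rule (here, a separating threshold) from them:

```python
# Labeled training samples: a one-dimensional feature and a class label.
samples = [(1.1, "A"), (1.4, "A"), (1.9, "A"),
           (3.2, "B"), (3.8, "B"), (4.1, "B")]

def induce_threshold(data):
    """Try each candidate split; keep the one with the fewest errors."""
    best_t, best_errors = None, len(data) + 1
    for t in sorted(x for x, _ in data):
        errors = sum(1 for x, label in data
                     if (label == "B") != (x >= t))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

rule = induce_threshold(samples)   # a rule generated from data, not experts
print(f"classify as B when feature >= {rule}")   # -> 3.2
```

No expert supplied the threshold of 3.2; it was honed from the samples, and more samples would refine it further.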

Comparative Advantages of AI14

There are a number of properties which make today’s and tomorrow’s computing machines better at some deductive and inductive tasks. The foremost of these are: speed, inerrability, and indefatigability.

Chess engines are, again, a useful example in recognizing the first two of these properties. In any given position on a chessboard, there are a finite number of possible moves, and for symbolic AI, a finite number of rules to be applied to them. As any novice chess player can attest, one of the early learning difficulties in chess is to see all of the moves available at any given turn, and to keep track of all the rules and principles which may or may not apply for each available

13 For an introduction to the history and development of deep learning, see The Deep Learning Revolution by Sejnowski. Sejnowski, Terrence Joseph. The Deep Learning Revolution. The MIT Press, 2018. A technical introduction to the methods of contemporary deep learning can be found in Deep Learning by Goodfellow, Bengio, and Courville. Goodfellow, Ian, et al. Deep Learning. MIT Press, 2018.
14 The term “comparative advantage” has been selected for the economic implications which it connotes, and because it does not strongly assert that there is no possible world where AI would have different advantages.

move. What is more, there is a need to see several “moves ahead,” or to evaluate a move not just based on the immediate consequences, but also in consideration of the possible moves in forthcoming turns which open up as a result of that move. This substantially expands the number of moves which need to be considered and is part of what makes chess a challenging game. Finally, to make this cognitive task even more stressful, competitive chess is timed.

Thanks to advancements in the speed of computer processing, computing machines can simply evaluate more possible moves than a human can in the same span of time. In a single turn, there might be only a few dozen immediate moves to evaluate. But, once the possible moves which result from those immediate moves are considered, the set of moves subject to evaluation becomes much larger (often termed “combinatorial explosion”), quickly surpassing the number which a human can recognize and analyze in a timed condition.
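The arithmetic of this explosion is easy to reproduce. A commonly cited figure is roughly 35 legal moves per chess position on average; the short calculation below, resting on that rough assumption, shows how quickly the search tree outgrows human evaluation:

```python
# Rough illustration of combinatorial explosion in chess.
# ~35 legal moves per position is a commonly cited average, assumed here.
branching_factor = 35
for plies_ahead in (1, 2, 4, 6):
    positions = branching_factor ** plies_ahead
    print(f"{plies_ahead} plies ahead: {positions:,} positions")
# 1 plies ahead: 35 positions
# 2 plies ahead: 1,225 positions
# 4 plies ahead: 1,500,625 positions
# 6 plies ahead: 1,838,265,625 positions
```

Three moves per side ahead already implies nearly two billion positions, far beyond what any human can evaluate under a tournament clock.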

In computer science, this problem can be addressed with brute-force search.15 Abstracting from chess, one way to solve a problem with finite solutions is to enumerate every candidate solution and to test each one, eventually arriving at the correct solution or the best candidate in the set. For humans, brute-force search is usually inefficient compared to alternative methods (even computationally, brute-force search is comparatively inefficient). Many people have had the experience of using brute-force search to solve multiple choice

15 The latest developments in brute-force techniques are quite astonishing. Heule, Marijn J. H., and Oliver Kullmann. “The Science of Brute Force.” Communications of the ACM, vol. 60, no. 8, ​ ​ Aug. 2017, pp. 71–79., doi:10.1145/3107239.


problems, usually mathematical, on a standardized test. By testing every candidate, with sufficient knowledge to distinguish an incorrect answer from a correct answer, one will assuredly solve the problem. Most people, including those who craft the tests, also know that this is not the “right” way to solve the problem. Analytic approaches that involve the correct application of a formula or heuristic are usually what the test seeks to review — as a result, brute-force search is often a suboptimal last resort in such circumstances.

Thanks to the ceaseless march of Moore’s Law, computing machines can often ignore the downsides of brute-force search. So long as there is a method of distinguishing between candidates in a finite set (in the case of chess engines, weights are applied to different moves through the famous minimax algorithm), computing machines possess the speed to evaluate all candidates in a reasonable time for many applications.16 What is more, there are several techniques that can improve upon brute-force search, such as alpha-beta pruning, which disregards a subtree of candidates if a better candidate has already been discovered in another tree.17
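A compact sketch of minimax with alpha-beta pruning is given below, operating over an explicit toy game tree. Real engines generate moves on the fly and use far richer evaluations; the nested lists and leaf scores here are assumptions for illustration only:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax search that prunes subtrees no rational opponent would allow."""
    if isinstance(node, (int, float)):   # leaf: the evaluation of a position
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # prune: a better option already exists
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Nested lists stand in for moves and countermoves; numbers are leaf scores.
tree = [[3, 5], [6, [9, 2]], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # -> 6
```

In the final subtree, the search stops after seeing the leaf score 1: whatever else that subtree holds, the maximizing player already has a guaranteed 6 elsewhere.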

Then there is the matter of selectively applying the right rules in evaluating a potential move. While there are many heuristics used to simplify the evaluation process (such as assigning a numerical value to each piece in chess, and favoring some areas of the chess board more than others — especially the

16 Shah, Rajiv Bakulesh. “Minimax.” Dictionary of Algorithms and Data Structures, 2007, xlinux.nist.gov/dads/HTML/minimax.html.
17 Newell, Allen, and Herbert A. Simon. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM, vol. 19, no. 3, 1976, pp. 113–126., doi:10.1145/360018.360022.

center of the board), inevitably, recognizing the correct rule to apply in selecting a move becomes very complicated. Computing machines, though, unaffected by anxiety and psychology, do not err when navigating the complexity of rules which factor into evaluating a given position.

These two properties, speed and inerrability, are enough to provide computing machines with a decisive advantage in deductive reasoning. This is true of many expert systems, not just chess engines. In medicine, an expert system can quickly iterate over all candidate ailments. Given input about patients’ symptoms, an expert system can deductively infer if each candidate ailment is a potential match, never forgetting or failing to apply every rule for validation.18 The same is true of expert systems in the legal field in instances where there is a finite number of potential legal actions. One prominent and successful example of this was an artificially intelligent chatbot that contested parking tickets.19
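The deductive pattern is the same across these domains. Below is a toy sketch of such an expert system's core loop; the ailments and symptom rules are invented placeholders, not medical knowledge:

```python
# Invented knowledge base: each candidate ailment and the symptoms
# its rule requires. These are placeholders, not medical facts.
knowledge_base = {
    "common cold": {"cough", "runny nose"},
    "influenza":   {"cough", "fever", "aches"},
    "allergies":   {"runny nose", "itchy eyes"},
}

def candidate_diagnoses(observed):
    """Deduction: keep every ailment whose required symptoms all hold."""
    return [ailment for ailment, required in knowledge_base.items()
            if required <= observed]   # subset test, applied without fail

print(candidate_diagnoses({"cough", "runny nose", "itchy eyes"}))
# -> ['common cold', 'allergies']
```

Every rule is checked against every candidate, with none forgotten; that inerrability, more than any cleverness, is the system's advantage.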

To explore the merits of indefatigability, a better case might be image processing. Let’s say that I am teaching both a person and a computer about the popular book series, Where’s Waldo? by Martin Handford.20 Of course, the purpose of “playing” Where’s Waldo? is to locate the character of Waldo in a large and busy illustration. Waldo has some identifying features, such as glasses,

18 See the efforts that were made with the expert system, MYCIN. Buchanan, Bruce G., and Edward Hance Shortliffe. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley Publ. Comp., 1985.
19 Gibbs, Samuel. “Chatbot Lawyer Overturns 160,000 Parking Tickets in London and New York.” The Guardian, Guardian News and Media, 28 June 2016, www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york.
20 Outside of the United States and Canada, this book series is actually titled Where’s Wally? Handford, Martin. Where's Waldo? Candlewick Press, 2017.

a striped sweater, blue jeans, and a beanie. But, in each illustration, Waldo is often partially obscured, wearing additional accessories, and gesturing in a unique position. Hence, beyond finding the small character of Waldo in an enormous scene, there is further difficulty in recognizing each discrete token of the Waldo character, as they are all different.

This task was intractable for AI until quite recently. Due to the variation in

Waldo tokens or signifiers, there is no simple deductive method for identifying the character. No amount of expert knowledge can be formed into a system of axioms that will reliably identify him — here, symbolic AI is doomed.

Unfortunately, the complexity and fuzziness of real-world applications are more often like finding Waldo than they are like playing chess.

But, how does a person learn to identify Waldo? And, why does Where’s Waldo? remain challenging even after several exposures? Any explanation will include several processes, but significant among them will be feature extraction, pattern recognition, and induction. Feature extraction is self-descriptive: looking at Waldo, we might recognize that his glasses are rounded, his shirt is striped horizontally, he has two arms and legs. Note that we do not form strictly-defined rules about these features, but instead know that some combination of them will be present with some variance in their shape, position, and size. We do this all the time as we learn to visually identify things which are novel to us. In more academic vernacular, we might say that while never entirely consistent, tokens of

Waldo have some regularity in their data — it is this fuzzy regularity which then enables patterns to be recognized across multiple contexts.
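One way to render this contrast in code is to replace strict rules with a weighted feature score, tolerating variance rather than demanding exact matches. The features and weights below are invented for illustration:

```python
# Invented features and weights for recognizing a "Waldo" token.
expected_features = {"round glasses": 0.3, "striped shirt": 0.4,
                     "beanie": 0.2, "blue jeans": 0.1}

def waldo_score(observed):
    """Sum the weights of matched features; no single feature is required."""
    return sum(weight for feature, weight in expected_features.items()
               if feature in observed)

candidate = {"striped shirt", "beanie"}          # a partially obscured Waldo
print(round(waldo_score(candidate), 2))          # -> 0.6
print(waldo_score(candidate) >= 0.5)             # -> True: enough evidence
```

Unlike a symbolic rule, a missing feature merely lowers the score; the fuzzy regularity, not any fixed axiom, carries the identification.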

Finally, the more examples which we see of Waldo, the better and more versatile our mental model of him becomes. From numerous specific instances we are able to infer an increasingly accurate set of features and rules for recognizing Waldo. Eventually, we possess a strong sense of Waldo’s essential properties, can find him more quickly, and can recognize him despite greater variation in each token representation of him.

The fundamental point is that since induction works by inferring from specific cases to general rules, the more specific cases which we have to inspect, the more nuanced and predictive the general rules will typically be. Such reasoning can be erroneous, as the problem of induction reminds us. But, if the thing which we intend to understand follows a normal distribution, then it is a matter of simple probabilistic likelihood that the more sample data we have, the more veridical our understanding will be.
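A short simulation makes the probabilistic point tangible: estimating the mean of a normally distributed quantity from samples, the induced estimate tends to drift closer to the true value as the sample size grows. The distribution's parameters here are arbitrary:

```python
import random

random.seed(0)                      # fixed seed for repeatability
true_mean = 10.0

for n in (10, 100, 10_000):
    samples = [random.gauss(true_mean, 2.0) for _ in range(n)]
    estimate = sum(samples) / n     # the "rule" induced from n cases
    print(n, round(abs(estimate - true_mean), 3))
# The error typically shrinks as n grows, though any single run can vary.
```

Nothing guarantees that a single run improves monotonically, which is the problem of induction in miniature; the trend across sample sizes is what favors the tireless machine.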

In light of this, the merits of indefatigability become almost self-explanatory. Aside from overheating, computers do not, generally speaking,

“tire.” If directed to do so, a program will run a learning algorithm against images of Waldo for as long as there are images left in a queue.21 Even if that algorithm is poorer at feature extraction than a person is, given sufficiently more images to train on than any reasonable person would spend their time looking at, eventually

21 Believe it or not, there are AI systems which have been successfully trained to find Waldo. Clifford, Catherine. “Where's Waldo? Watch This Robot Find Him.” CNBC, CNBC, 16 Aug. 2018, www.cnbc.com/2018/08/16/viral-video-of-bot-built-with-google-ai-tools-finding-wheres-waldo.html.

that algorithm will induce rules for recognizing patterns that eclipse a human counterpart.

What is more, the purity of data which learning algorithms are trained against may provide them with an advantage over human cognition. Humans inevitably bring their biases from past experiences into the comprehension of new ones. Often, this is an advantage, as we recognize that knowledge about similar concepts will somewhat translate to a new concept. However, this tendency can also lead us astray, as we might overlook or minimize the differences between the two concepts. Or, our familiarity with and partiality to a concept may prevent us from revising or discarding it — chess engines built on deep learning purportedly discover and subsequently abandon some of the most beloved strategies of grandmasters throughout their training process.22

3. Big Data

Concurrent with the latest revolution in AI, there has been a radical acceleration in the amount of data collected and processed in virtually all industries. Not unlike Moore’s Law regarding transistor density, it has been shown that the global amount of per capita information storage doubles about every 3 years.23 These developments in big data are an important complement to

22 Kohs, Greg, director. AlphaGo. Moxie Pictures, 2018, www.alphagomovie.com/. A technical summary of the work exhibited in this documentary can be found in Science. Silver, David, et al. “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play.” Science, vol. 362, no. 6419, 2018, pp. 1140–1144., doi:10.1126/science.aar6404.
23 Hilbert, M., and P. Lopez. “The World's Technological Capacity to Store, Communicate, and Compute Information.” Science, vol. 332, no. 6025, 2011, pp. 60–65., doi:10.1126/science.1200970.

those in AI as they enable and expand the comparative advantages previously discussed.

Epistemic Superiority

Increases in information storage are valuable to both deductive and inductive strategies for AI. For deductive systems, they increase the size of the knowledge base from which they can draw. For inductive systems, they increase the size of the training set. A chess engine can remember and apply the lessons from 10,000 chess books, or gain experience by playing through more games than a human could in 10 lifetimes. We might call this decisive advantage the “epistemic superiority” of AI.

Computational Creativity

One provocative consequence of epistemic superiority is that AI has the potential to know more about an individual than another person could. We are constantly astonished by just how much personal data is collected by our devices and online platforms. What is unsettling about these technologies is how they undermine our sense of what is distinctly human. For, intuitively, a computer defeating us at chess or recognizing patterns that we do not immediately see feels like a feat of engineering — a computer that composes poetry which flatters our preferences better than a Wordsworth feels like something else.


This encroachment upon the humanities coerces us to acknowledge that computers can be creative. Indeed, the field of computational creativity has been slowly, but steadily, emerging. There are many computer systems that have been devised to write poetry, music, and even stories.24 In the same 1958 paper where

Allen Newell, Herbert Simon, and John Clifford Shaw provide early criteria for creativity, they mention an existing music composition program.25 When big data supplements these programs with epistemic superiority, there is a real question as to whether or not they will eventually generate a majority of the media that we consume.

The attributes described by Newell, Simon, and Shaw, such as novelty, persistence, and clarification of what was once vague, are now considered to be much too simplistic in describing creative processes. However, our definition of human creativity may be growing more refined in response to the increasingly creative endeavors undertaken by computing machines. Computational processes have been studied and used as a method for creative problem solving for some time, even as aids to scientific discovery.26

24 For an example of poetry, see the work of Pablo Gervás. Gervás, Pablo. “An Expert System for the Composition of Formal Spanish Poetry.” Knowledge-Based Systems, vol. 14, no. 3-4, 2001, pp. 181–188., doi:10.1016/s0950-7051(01)00095-8. David Cope leads the field of computational music composition. Cope, David. Computer Models of Musical Creativity. MIT Press, 2017. Selmer Bringsjord, who is quoted at the beginning of this paper, has enhanced techniques of AI-based storytelling, a field that stretches back until at least the 1970s. Bringsjord, Selmer, and David A. Ferrucci. Artificial Intelligence and Literary Creativity inside the Mind of BRUTUS, a Storytelling Machine. Psychology Press, Taylor & Francis Group, 2013.
25 Newell, Allen, et al. “The Processes of Creative Thinking.” Contemporary Approaches to Creative Thinking: a Symposium Held at the University of Colorado, Atherton Press, 1962, pp. 63–119.
26 Langley, Pat. “The Computational Support of Scientific Discovery.” International Journal of Human-Computer Studies, vol. 53, no. 3, 2000, pp. 393–410., doi:10.1006/ijhc.2000.0396.

Yet, there is still an intuitive feeling that computers lack the capacity for the conceptual understanding which seems essential to creativity. The objection of John Searle, that “syntax by itself is neither constitutive of nor sufficient for semantics,” seems to exclude some forms of human creativity from easy computational reproduction.27 While there have been numerous prominent rebuttals to Searle, there is the empirical matter that we still lack computers which seem to convincingly approximate semantic understanding. Furthermore, there are some examples of human creativity which the computational advantages of speed, inerrability, indefatigability, and epistemic superiority do not seem to directly facilitate as they do inductive and deductive reasoning — one example of which is creative abduction.

4. Abduction

Abduction is a type of logical inference. Although induction and deduction are more frequently mentioned in academic vernacular, abduction is just as natural and pervasive within human thought. The comparative obscurity of abduction may be due to its late articulation and vaguely defined boundaries.28 In contrast, deduction and induction were thoroughly described by Aristotle in the Prior Analytics and Posterior Analytics, respectively.29

27 Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, vol. 3, no. 03, 1980, p. 417., doi:10.1017/s0140525x00005756.
28 Ironically, this line of reasoning is abductive.
29 For a thorough introduction to deduction within Aristotelian logic, see Smith, Robin. “Aristotle's Logic.” Stanford Encyclopedia of Philosophy, Stanford University, 17 Feb. 2017, plato.stanford.edu/entries/aristotle-logic/.

Studies of abduction must further distinguish between what is often called selective abduction, or “inference to the best explanation,” and creative abduction. Igor Douven argues that the difference between these two processes is between justifying hypotheses and generating hypotheses.30 They can also be categorized historically: contemporary studies of abduction often refer to the former, while the latter form of abduction, now known as Peircean abduction, has become a separate topic, despite being the original process which interested philosophers. Unless otherwise specified, the sections that follow are primarily concerned with Peircean abduction, and will refer to the different process of justifying hypotheses as “inference to the best explanation.”

Charles Sanders Peirce

The American philosopher, Charles Sanders Peirce (1839-1914), found something new to say about logical inference. In a series of remarkable and evolving essays, Peirce developed the concept of abduction. This is concisely laid out alongside induction and deduction in his 1878 essay, “Deduction,

Induction, and Hypothesis.”31 His example is reproduced below:

Deduction

30 Douven, Igor. “Abduction.” Stanford Encyclopedia of Philosophy, Stanford University, 28 Apr. 2017, plato.stanford.edu/entries/abduction/.
31 Peirce, Charles S. Chance, Love, and Logic: Philosophical Essays. Routledge, 2017. Peirce was promoting abduction as early as 1867. The Proceedings of the American Academy of Arts and Sciences include a presentation by Peirce entitled “On the Natural Classification of Arguments.” Therein, he describes hypothesis in similar, but less immediately tractable, terms as in the 1878 essay mentioned here. Peirce, C. S. “On the Natural Classification of Arguments.” Proceedings of the American Academy of Arts and Sciences, vol. 7, 1867, pp. 261–287., doi:10.2307/20179566.

Rule: All beans from this bag are white.

Case: These beans are from this bag.

Result: These beans are white.

Induction

Case: These beans are from this bag.

Result: These beans are white.

Rule: All beans from this bag are white.

Hypothesis

Rule: All beans from this bag are white.

Result: These beans are white.

Case: These beans are from this bag.

As with several other terms which he invented, Peirce experimented with the nomenclature of abduction. In 1878, he employs the name “hypothesis” for this type of inference. What is important to note from this example is how abduction or “hypothesis” differs from the “top-down” and “bottom-up” approaches of deduction and induction. Instead, abduction is the generation of an assertion

(more simply put, a “guess”) about how a rule and a result might be logically connected. In a series of lectures given at Harvard in 1903, Peirce offered this concise description of abduction:


The surprising fact C is observed;

But if A were true, C would be a matter of course,

Hence there is reason to suspect that A is true.32

What is remarkable about Peircean abduction is our innate capacity to generate new hypotheses. For, in the above example, where does A arise from? According to Peirce, abduction, “is the only logical operation which introduces any new idea.”33 As can be seen in the example of deduction, the Result is necessarily entailed by the Rule and the Case. With induction, no new information is introduced, rather, information that is already known is assumed to be veridical into the future. However, in Peircean abduction, from a set of infinite possibilities, we infer the plausible proposition A, despite lacking direct information to suggest A.
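The three inferences can be restated schematically, a form that also hints at why the third resists mechanization. The sketch below simply transcribes Peirce's bean example:

```python
# A schematic restatement of Peirce's bean example.
rule   = "All beans from this bag are white."
case   = "These beans are from this bag."
result = "These beans are white."

deduction = {"given": [rule, case],   "infer": result}  # necessarily follows
induction = {"given": [case, result], "infer": rule}    # generalizes the known
abduction = {"given": [rule, result], "infer": case}    # introduces a guess

# Only abduction's conclusion is genuinely new: nothing in its givens
# entails that these particular beans came from this particular bag,
# and infinitely many other cases would fit the same givens.
```

Deduction and induction rearrange information already in hand; abduction must select its conclusion from an unbounded space, which is precisely the trouble for computation taken up below.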

Abduction & Discovery

The similarity between inferring how rules and results might be connected and the process of investigating physical laws by scientific experiment was not lost on Peirce or his successors. In his own writings, Peirce used examples of scientific discovery to illustrate the process of abduction, most notably with

32 Peirce, Charles S., and Justus Buchler. Philosophical Writings of Peirce. Dover Publications, 1955. This lecture series was organized by the eminent William James, philosopher, and a founder of psychology. The continuous fealty, admiration, and generosity which James extended to Peirce rivals any friendship in the history of philosophy.
33 Peirce, Charles Sanders, et al. Collected Papers of Charles Sanders Peirce. The Belknap Press of Harvard University Press, 1960. This passage can be found at CP 5.172.

Kepler’s work on elliptical orbits.34 Obviously, the early nomenclature of

“hypothesis” is suggestive, as well.

Here, it is important to note the difference between selective and creative abduction.35 What is not so remarkable is our ability to justify some hypothesis once we have found one that is adequately testable. Selective abduction is little more than deference to probabilistic likelihood, and therefore is not so different from induction. Computers have proven perfectly capable of collecting data about competing hypotheses and selecting which one has the strongest statistical support. This is valuable, but a natural continuation of the technologies previously discussed.

Yet, these hypotheses must originate from somewhere. This is the ​ question which most interested Peirce. Throughout the history of science, attempts to describe hypothesis generation have appealed to the language of

“inspiration” and “ingenuity.” Problems quickly present themselves, though, when we try to model and implement this process computationally. These difficulties are better appreciated when contrasted with the flexibility of human cognition.

5. Abductive Humanism

34 Peirce, Charles S., and Justus Buchler. Philosophical Writings of Peirce. Dover Publications, 1955.
35 Lorenzo Magnani has emerged as the authority on creative abduction within the philosophy of science. He has penned numerous penetrating books on the subject, including Abduction, Reason, and Science: Processes of Discovery and Explanation, and Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning. Magnani, Lorenzo. Abduction, Reason, and Science: Processes of Discovery and Explanation. Kluwer Academic/Plenum, 2001. Magnani, Lorenzo. Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning. Springer, 2013.

“Man is the measure of all things.”36 This aphorism of Protagoras has been a regular touchstone in humanist philosophy. On the whole, humanism has a complex, confusing, and storied history. What all humanisms have in common, though, is a celebration of human learning. Giovanni Pico della Mirandola, one of the great Renaissance humanists, argued that this provided man “dignity.”37 Later, philosophers who adopted the term, such as F. C. S. Schiller, would “focus on the problem of cognition.”38

The following section draws attention to comparative advantages of human cognition over artificial intelligence. These are somewhat different from the manifold arguments against computational creativity which essentially assert that computers just transform input into output without introducing anything new.

Instead, problems in computationally reproducing creative abduction are contrasted with the triviality with which the same problems are handled by human cognition. Our natural aptitude for creative abduction bears some relation to those values revered by humanists and the emphasis that they placed upon human learning.

Comparative Advantages of Human Cognition

36 Taylor, C.C.W., and Mi-Kyoung Lee. “The Sophists.” Stanford Encyclopedia of Philosophy, Stanford University, 9 Sept. 2015, plato.stanford.edu/entries/sophists/.
37 Randall, John Herman, et al. The Renaissance Philosophy of Man: Petrarca, Valla, Ficino, Pico, Pomponazzi, Vives: Selections in Translation. Univ. of Chicago Press, 1956.
38 The best short review of the many incarnations of humanism can be found by Giustiniani in the Journal of the History of Ideas. Giustiniani, Vito R. “Homo, Humanus, and the Meanings of 'Humanism'.” Journal of the History of Ideas, vol. 46, no. 2, 1985, p. 167., doi:10.2307/2709633.

The mundane example of creative abduction from Peirce provides a nice segue into what scholars of logical AI call “the frame problem.”39 Let’s look again at the example:

Hypothesis

Rule: All beans from this bag are white.

Result: These beans are white.

Case: These beans are from this bag.

What is inferred here is the Case. Now, while the Case seems immediately plausible, what is important to note is that the Case is not necessarily true, and there are infinitely many other possibilities for the Case, including, but not limited to:

Hypothesis

Rule: All beans from this bag are white.

Result: These beans are white.

C1: These beans are from this bag.

C2: These beans are from a similar bag.

C3: These beans are unrelated to those in the bag.

C4: These beans are similar but different from those in the bag.

39 Shanahan, Murray. “The Frame Problem.” Stanford Encyclopedia of Philosophy, Stanford University, 8 Feb. 2016, plato.stanford.edu/entries/frame-problem/.

C5: These beans came from the same place as the beans in the

bag.

C6: ...

C7: ……

C8: ………

Ad infinitum

For humans, the problem is determining which of these possibilities we should consider. But, for computing machines, the problem is which of these possibilities exist. In some circumstances, the number of possibilities is large, but truly finite — such as potential moves to make in a game of chess. Or, we might know with certainty which variables are involved in a particular situation, and exactly how those variables can vary (their range). Again, such problems are tractable by computation; Herbert Simon explored them with “heuristic programming.” We can search through all possible configurations of our variables. We can even write a program to look at how different variables might have a mathematical relationship, say, a linear or inverse relationship.40
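A sketch of such a heuristic search is given below: the program fits invented data against a fixed menu of candidate relationships (linear, inverse) and a small range of coefficients. Everything here, data included, is an assumption for illustration:

```python
# Invented observations that happen to follow y = 6 / x exactly.
data = [(1.0, 6.0), (2.0, 3.0), (3.0, 2.0), (6.0, 1.0)]

# The pre-framed problem space: only these functional forms are searchable.
hypotheses = {
    "linear":  lambda x, k: k * x,
    "inverse": lambda x, k: k / x,
}

def fit_error(form, k):
    """Sum of squared errors for one candidate form and coefficient."""
    return sum((form(x, k) - y) ** 2 for x, y in data)

best = min(((name, k) for name in hypotheses for k in range(1, 11)),
           key=lambda nk: fit_error(hypotheses[nk[0]], nk[1]))
print(best)   # -> ('inverse', 6), i.e. the program recovers y = 6/x
```

The search is exhaustive and reliable, but only within the variables and forms the designer enumerated in advance.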

Heuristic programming somewhat cheats the problem, though. By reducing a problem to a finite set of variables with finite configurations, we have given the computing machine what scientists call a “problem space.” Sometimes,

40 Kulkarni, Deepak, and Herbert A. Simon. “The Processes of Scientific Discovery: The Strategy of Experimentation.” Cognitive Science, vol. 12, no. 2, 1988, pp. 139–175., doi:10.1207/s15516709cog1202_1.

though, in science, we discover a variable which we did not know of before. The phenomenon might have been a scientific problem in the first place precisely because of inadequate knowledge of its variables.

For instance, we might solve the problem of Mercury’s anomalous orbit by inferring that there was a missing planet named Vulcan — this is precisely what

Urbain le Verrier, the French mathematician, proposed. Today, a computer simulation might even be able to infer properties of astronomical objects, such as their mass, based on how much they disturb the center of mass within a system.41 However, the possibility of a missing planet was already within le Verrier’s problem space, as he had used the same technique to discover Neptune.42 He was, of course, wrong, and there is no additional planet Vulcan. He was wrong, though, for inconceivable reasons. For le Verrier would never have considered that the variables of space and time formed a continuum, spacetime, that was subject to the effects of a preposterous theory which Albert Einstein would later articulate as relativity — a preposterous theory that just so happens to explain the precession of Mercury quite perfectly.43 We might call this the difficulty of framing the original problem, or the frame problem by design.

Yet, once we have stated a problem, another issue with framing announces itself. The frame problem as traditionally conceived involves the

41 This is largely how we calculate the mass of black holes, which are, of course, invisible in layman’s terms.
42 Galle, J. G. “Account of the Discovery of the Planet of Le Verrier at Berlin.” Monthly Notices of the Royal Astronomical Society, vol. 7, no. 9, 1846, pp. 153–153., doi:10.1093/mnras/7.9.153.
43 Clemence, G. M. “The Relativity Effect in Planetary Motions.” Reviews of Modern Physics, vol. 19, no. 4, 1947, pp. 361–364., doi:10.1103/revmodphys.19.361.

consequences of our actions, which again, can tend toward infinity. Shanahan provides the following definition: “using mathematical logic, how is it possible to write formulae that describe the effects of actions without having to write a large number of accompanying formulae that describe the mundane, obvious non-effects of those actions?”44 In other words, how does an AI system know that a single event has not changed every data point in the system? In some applications, such as chess, a single action does change the entire system. Each move can be consequential for recalculating almost every position on the board.

Shanahan gives another example of a robot that makes tea. If the robot moves a tea cup, obviously it needs to be registered that the tea cup is not in the place that it was before. But, what else needs to be updated? The robot has moved past the “getting the cup” phase of making tea to the next phase. What if there is a teaspoon in the teacup? Then the location of the teaspoon needs updating, too. How about the number of remaining tea cups in the cupboard?

Number of teaspoons in the cupboard? And, what about non-effects? How does the robot know that by removing the teacup, it has not inadvertently changed the temperature of the tea, the flavor of the tea, the remaining sugar, the remaining milk, and so on and so forth? The going solution for this problem is the “sleeping dog” strategy, where AI systems assume that their actions have not affected anything besides what they directly intended them to.45
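A STRIPS-like sketch of the sleeping dog strategy appears below: each action lists only the facts it adds and deletes, and every unlisted fact is assumed to persist. The tea-making fluents are invented placeholders:

```python
# World state as a set of facts ("fluents"); all invented for illustration.
state = {"cup in cupboard", "spoon in cup", "tea hot"}

# An action declares only its direct, intended effects.
move_cup_to_counter = {
    "delete": {"cup in cupboard"},
    "add":    {"cup on counter"},
}

def apply_action(action, facts):
    """Update only the listed effects; let every sleeping dog lie."""
    return (facts - action["delete"]) | action["add"]

state = apply_action(move_cup_to_counter, state)
print(sorted(state))
# -> ['cup on counter', 'spoon in cup', 'tea hot']
# Efficient, but the indirect effect is missed: the spoon, sitting in the
# cup, actually moved with it, and nothing here re-derives that.
```

The strategy buys tractability at a price: unlisted effects, however consequential, simply do not happen in the model.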

44 Shanahan, Murray. “The Frame Problem.” Stanford Encyclopedia of Philosophy, Stanford University, 8 Feb. 2016, plato.stanford.edu/entries/frame-problem/.
45 Humans immediately recognize this to be problematic, as we understand that moving a candle can have the surprising effect of burning a house down. McDermott, Drew. “AI, Logic, and the

Almost every working software engineer has encountered both frame problems in practice. They even have a word for how well a given program is designed to handle framing: “robustness.” Sussman defines robustness as,

“systems that have acceptable behavior over a larger class of situations than was anticipated by their designers.”46 Someone using your application is inevitably going to do so in a way that you did not consider; how do you prevent the application from breaking when this occurs? Of course, one cannot exhaustively do so, as there are infinite possible ways a user might interact with an application.
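In practice, robustness is pursued with mundane defensive measures; a trivial sketch, with arbitrary fallback rules, is shown below:

```python
def parse_quantity(user_input):
    """Return a non-negative integer quantity, surviving odd inputs."""
    try:
        value = int(str(user_input).strip())
        return max(value, 0)    # clamp negatives instead of crashing
    except (TypeError, ValueError):
        return 0                # unanticipated input: fall back to a default

for raw in ("3", " 7 ", "-2", "many", None):
    print(repr(raw), "->", parse_quantity(raw))
```

The guards extend acceptable behavior to inputs the designer never enumerated, though, as the text notes, no finite set of guards exhausts the infinite ways a user might interact with the program.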

Before turning to the merits of human cognition, there is one more problem to consider — that of appraisal. Looking again at the example of Peirce, we can recognize that, “these beans are from this bag,” and, “these beans are from a similar bag,” are both plausible hypotheses. Immediately, we might value them more than the alternative hypotheses, “these are magical beans to grow a giant beanstalk,” or, “tSN#JPZB2@y!0PvE.” How can an AI system decide that one generated poem is preferable to another, or preferable to gibberish, without some kind of value system which we ascribe to it? The difficulty is, though, that our value systems change. If the musical composition system mentioned by Newell, Simon, and Shaw in 1958 were to generate something like contemporary rap music, would we not have designed the system to discard something like this at the time? Or, if we were to create a new heuristic program that incorporated Einsteinian relativity,

Frame Problem.” The Frame Problem in Artificial Intelligence, 1987, pp. 105–118., doi:10.1016/b978-0-934613-32-3.50013-7.
46 Sussman, Gerald. “Building Robust Systems: An Essay.” 2007.

could we possibly craft the system to recognize the next paradigm shift in physics?

Human cognition readily triumphs over all of these problems. While a review of the cognitive theories of creativity is outside the scope of this paper, our capacity for combinatorial creativity, metaphorical thinking, and conceptual blending seem to be inseparable from our comparative advantage at creative abduction.47 What is more, a comprehensive account is unnecessary to prove the point of this paper — for our ability to deal with these problems can be plainly observed. While a program might find itself in an infinite loop, iterating over possible hypotheses, we are able to select a few plausible hypotheses, almost instantly, from an infinite set. And, if those prove false, we can quickly and easily expand our set of candidate hypotheses, and even our set of relevant variables which contribute to those hypotheses, without overheating. At a glance, we can take account of our entire data structure for a particular situation; what does and does not need updating does not even occur as a problem.48 Our mental ability to manipulate and combine concepts allows us to adapt to entirely unprecedented problems and situations; we can intuit which values or principles we ought to seek in order to achieve a particular goal. Interestingly, inerrability may be a

47 Much of the work on metaphor, concept, and blending in cognitive science has been carried out by Gilles Fauconnier and Mark Turner. Fauconnier, Gilles, and Mark Turner. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books, 2010.
48 Though, we do suffer from what psychologists call “change blindness.” This is probably a result of the cognitive solution to what those in logical AI refer to as the frame problem. Rensink, Ronald A., et al. “To See or Not to See: The Need for Attention to Perceive Changes in Scenes.” Psychological Science, vol. 8, no. 5, 1997, pp. 368–373., doi:10.1111/j.1467-9280.1997.tb00427.x.

detriment to creative abduction, as we often glean additional information from errors or nonsense that a pruning algorithm would omit.

A Future of Abduction

There may be a possible world where creative abduction is tractable to computing machines. Certainly, there is, if one adheres to a computational theory of mind. However, this paper has looked at the properties of computing machines and how they relate to types of logical inference, not with the purpose of setting limits to what can be accomplished in AI, but to suggest what effects contemporary AI and big data will have, “in determining the nature of the socioeconomic order.”49 After all, much of the anxiety surrounding these technologies arises from their potential for economic disruption. Simply put, there is reason to believe that AI will continue to displace us in roles that rely mostly upon deductive and inductive reasoning. The “forms of life” that these technologies entail will be defined as much by their inefficiencies as their efficiencies.50 This will likely constrain and compel us into labor which is characterized by creative abduction, such as research, design, and the arts.

There is a beauty, though, to this conclusion, as such activities are paradigmatic of the humanist tradition.

6. Conclusion

49 Emphasis original. Heilbroner, Robert L. “Do Machines Make History?” Technology and Culture, vol. 8, no. 3, 1967, p. 335., doi:10.2307/3101719.
50 Winner, Langdon. “Technologies as Forms of Life.” Ethics and Emerging Technologies, 2014, pp. 48–60., doi:10.1057/9781137349088_4.

Artificial intelligence and big data are not so much supplanting us as they are unburdening us from the rote labor of organizing and revisiting what we have already discovered. (And, what we have discovered how to discover.) Expert systems will increasingly be able to guide us, regardless of our educational backgrounds, to the best conclusions available with the knowledge that we collectively possess. Meanwhile, deep learning programs will tirelessly inspect everything convoluted and banal.

The speed, inerrability, indefatigability, and epistemic superiority of today’s computing machines facilitate deductive and inductive reasoning beyond our powers. Yet, their standing inferiority at generating ideas, tackling novelty, and making sense of infinity leaves human cognition unchallenged in the realm of creative abduction.


Bibliography

Bringsjord, Selmer, and David A. Ferrucci. Artificial Intelligence and Literary Creativity inside the Mind of BRUTUS, a Storytelling Machine. Psychology Press, Taylor & Francis Group, 2013.

Bringsjord, Selmer. “Computation, among Other Things, Is beneath Us.” Minds and Machines, vol. 4, no. 4, 1994, pp. 469–488., doi:10.1007/bf00974171.

Buchanan, Bruce G., and Edward Hance Shortliffe. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley Publ. Comp., 1985.

Clemence, G. M. “The Relativity Effect in Planetary Motions.” Reviews of Modern Physics, vol. 19, no. 4, 1947, pp. 361–364., doi:10.1103/revmodphys.19.361.

Clifford, Catherine. “Where's Waldo? Watch This Robot Find Him.” CNBC, CNBC, 16 Aug. 2018, www.cnbc.com/2018/08/16/viral-video-of-bot-built-with-google-ai-tools-finding-wheres-waldo.html.

Cope, David. Computer Models of Musical Creativity. MIT Press, 2017.

Douven, Igor. “Abduction.” Stanford Encyclopedia of Philosophy, Stanford University, 28 Apr. 2017, plato.stanford.edu/entries/abduction/.

Fauconnier, Gilles, and Mark Turner. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books, 2010.

Galle, J. G. “Account of the Discovery of the Planet of Le Verrier at Berlin.” Monthly Notices of the Royal Astronomical Society, vol. 7, no. 9, 1846, pp. 153–153., doi:10.1093/mnras/7.9.153.

Gervás, Pablo. “An Expert System for the Composition of Formal Spanish Poetry.” Knowledge-Based Systems, vol. 14, no. 3-4, 2001, pp. 181–188., doi:10.1016/s0950-7051(01)00095-8.

Gibbs, Samuel. “Chatbot Lawyer Overturns 160,000 Parking Tickets in London and New York.” The Guardian, Guardian News and Media, 28 June 2016, www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york.

Giustiniani, Vito R. “Homo, Humanus, and the Meanings of 'Humanism'.” Journal of the History of Ideas, vol. 46, no. 2, 1985, p. 167., doi:10.2307/2709633.

Goodfellow, Ian, et al. Deep Learning. MIT Press, 2018.

Handford, Martin. Where's Waldo? Candlewick Press, 2017.

Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. Harper Perennial, 2018.

Harari, Yuval Noah. “The Meaning of Life in a World without Work.” The Guardian, Guardian News and Media, 8 May 2017, www.theguardian.com/technology/2017/may/08/virtual-reality-religion-robots-sapiens-book.

Haugeland, John. Artificial Intelligence: The Very Idea. MIT Press, 1985.

Heilbroner, Robert L. “Do Machines Make History?” Technology and Culture, vol. 8, no. 3, 1967, p. 335., doi:10.2307/3101719.

Heule, Marijn J. H., and Oliver Kullmann. “The Science of Brute Force.” Communications of the ACM, vol. 60, no. 8, Aug. 2017, pp. 71–79., doi:10.1145/3107239.

Hilbert, M., and P. Lopez. “The World's Technological Capacity to Store, Communicate, and Compute Information.” Science, vol. 332, no. 6025, 2011, pp. 60–65., doi:10.1126/science.1200970.

Hsu, Feng-Hsiung, et al. “Deep Blue System Overview.” Proceedings of the 9th International Conference on Supercomputing - ICS '95, 1995, doi:10.1145/224538.224567.

Kohs, Greg, director. AlphaGo. Moxie Pictures, 2018, www.alphagomovie.com/.

Kulkarni, Deepak, and Herbert A. Simon. “The Processes of Scientific Discovery: The Strategy of Experimentation.” Cognitive Science, vol. 12, no. 2, 1988, pp. 139–175., doi:10.1207/s15516709cog1202_1.

Langley, Pat. “The Computational Support of Scientific Discovery.” International Journal of Human-Computer Studies, vol. 53, no. 3, 2000, pp. 393–410., doi:10.1006/ijhc.2000.0396.

Magnani, Lorenzo. Abduction, Reason, and Science: Processes of Discovery and Explanation. Kluwer Academic/Plenum, 2001.

Magnani, Lorenzo. Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning. Springer, 2013.

McCarthy, John. “Concept of Logical AI.” Logic-Based Artificial Intelligence, by Jack Minker, Kluwer Academic Publishers, 2000, pp. 37–56.

McDermott, Drew. “AI, Logic, and the Frame Problem.” The Frame Problem in Artificial Intelligence, 1987, pp. 105–118., doi:10.1016/b978-0-934613-32-3.50013-7.

Newell, Allen, and Herbert A. Simon. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM, vol. 19, no. 3, 1976, pp. 113–126., doi:10.1145/360018.360022.

Newell, Allen, et al. “The Processes of Creative Thinking.” Contemporary Approaches to Creative Thinking: a Symposium Held at the University of Colorado, Atherton Press, 1962, pp. 63–119.

Parloff, Roger. “Why Deep Learning Is Suddenly Changing Your Life.” Fortune, fortune.com/ai-artificial-intelligence-deep-machine-learning/.

Peirce, Charles S. Chance, Love, and Logic: Philosophical Essays. Routledge, 2017.

Peirce, Charles Sanders, et al. Collected Papers of Charles Sanders Peirce. The Belknap Press of Harvard University Press, 1960.

Peirce, C. S. “On the Natural Classification of Arguments.” Proceedings of the American Academy of Arts and Sciences, vol. 7, 1867, pp. 261–287., doi:10.2307/20179566.

Peirce, Charles S., and Justus Buchler. Philosophical Writings of Peirce. Dover Publications, 1955.

Randall, John Herman, et al. The Renaissance Philosophy of Man: Petrarca, Valla, Ficino, Pico, Pomponazzi, Vives: Selections in Translation. Univ. of Chicago Press, 1956.

Rensink, Ronald A., et al. “To See or Not to See: The Need for Attention to Perceive Changes in Scenes.” Psychological Science, vol. 8, no. 5, 1997, pp. 368–373., doi:10.1111/j.1467-9280.1997.tb00427.x.

Russell, Bertrand, and John Skorupski. The Problems of Philosophy. OUP Oxford, 2014.

Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, vol. 3, no. 03, 1980, p. 417., doi:10.1017/s0140525x00005756.

Sejnowski, Terrence Joseph. The Deep Learning Revolution. The MIT Press, 2018.

Shah, Rajiv Bakulesh. “Minimax.” Dictionary of Algorithms and Data Structures, 2007, xlinux.nist.gov/dads/HTML/minimax.html.

Shanahan, Murray. “The Frame Problem.” Stanford Encyclopedia of Philosophy, Stanford University, 8 Feb. 2016, plato.stanford.edu/entries/frame-problem/.

Silver, David, et al. “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play.” Science, vol. 362, no. 6419, 2018, pp. 1140–1144., doi:10.1126/science.aar6404.

Silver, David, et al. “Mastering the Game of Go with Deep Neural Networks and Tree Search.” Nature, vol. 529, no. 7587, 2016, pp. 484–489., doi:10.1038/nature16961.

Smith, Robin. “Aristotle's Logic.” Stanford Encyclopedia of Philosophy, Stanford University, 17 Feb. 2017, plato.stanford.edu/entries/aristotle-logic/.

Sussman, Gerald. “Building Robust Systems: An Essay.” 2007.

Taylor, C.C.W., and Mi-Kyoung Lee. “The Sophists.” Stanford Encyclopedia of Philosophy, Stanford University, 9 Sept. 2015, plato.stanford.edu/entries/sophists/.

Vico, Giambattista, et al. The New Science of Giambattista Vico: Revised Translation of the Third Edition (1744). Cornell University Press, 1968.

Winner, Langdon. “Technologies as Forms of Life.” Ethics and Emerging Technologies, 2014, pp. 48–60., doi:10.1057/9781137349088_4.
