KI - Künstliche Intelligenz
https://doi.org/10.1007/s13218-019-00599-w

SURVEY

Why Machines Don't (yet) Reason Like People

Sangeet Khemlani (US Naval Research Laboratory, Navy Center for Applied Research in Artificial Intelligence, Washington, DC, USA) · P. N. Johnson-Laird (Department of Psychology, Princeton University; Department of Psychology, New York University)

Received: 15 February 2019 / Accepted: 19 June 2019
© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2019

Abstract

AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences—instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.

Keywords Reasoning · Mental models · Cognitive models

1 Introduction

The great commercial success of deep learning has pushed studies of reasoning in artificial intelligence into the background. Perhaps as a consequence, Alexa, Siri, and other such "home helpers" have a conspicuous inability to reason, which in turn may be because AI has never come to grips with how human beings reason in daily life. Many automated theorem-provers that implement reasoning procedures exist online, but they remain highly specialized technologies. One of their main uses is as an aid to mathematicians, who seek simpler proofs (see [38, 72] for the notion of simplicity in proofs—a problem that goes back to one of Hilbert's problems for mathematicians). Their other uses include the verification of computer programs and the design of computer chips. The field has its own journal: the Journal of Automated Reasoning. There have even been efforts to use machine learning to assist theorem-provers [20].

Deep learning systems were inspired by human learners, but they do not learn concepts and categories the same way humans do, and so they can be fooled in trivial ways known as "adversarial attacks" [52, 54, 66]. Nevertheless, they have achieved success and pervasiveness despite such failings, because they can discover latent patterns of relevance—they do so by analyzing vast amounts of data generated by humans. Theorem-provers likewise diverge from human reasoning, and yet they rarely enter our everyday technologies. Why not? The central thesis of this paper is that theorem-provers—and other automated reasoning systems—cannot simulate human reasoning abilities. Any machine reasoning system built to interact with humans needs to understand how people think and reason [12].
But theorem-provers have no way of discovering human-level reasoning competence on their own, and they implement principles that yield counterintuitive patterns of reasoning. In what follows, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic (see, e.g., [3]), and the formulation of nonmonotonic systems of reasoning. We then describe human reasoning patterns that no machine reasoning system—or deep learning system, for that matter—can mimic. Finally, we present the functions that any future automated reasoning system should perform in order to approach human-level thinking abilities.

2 Automated theorem-proving

Automated theorem-proving aimed originally to find logical proofs in an efficient way. A central constraint of such systems is that they are built on the foundation of mathematical logic. Most theorem-proving programs take as input the logical form of an argument, and then search for a derivation of the conclusion using formal rules of inference and perhaps axioms to capture general knowledge. In the early days, some theorem-provers aimed to simulate human performance [53]. But most others were exercises in implementing a logical system [64] or intended to help users to discover novel mathematical proofs [71], and so they were not intended to embody human patterns of reasoning. For instance, theorem-provers operate by searching for derivations of given conclusions rather than generating conclusions of their own. The main differences among them are in the nature of the logical rules of inference on which they rely and in the methods they use to search for proofs.

The first theorem-provers relied on mathematical logic, as did contemporary psychological theories of reasoning. The latter used rules of inference from logic as formulated in a "natural deduction" system [5, 63]. The idea of natural deduction was due to the logician Gentzen [9], and it was intended to provide a more "natural" way of doing logic than axiomatic systems. So, it eschews axioms, and it relies on formal rules that can introduce each sentential connective into a proof, e.g.:

    A.
    Therefore, A ∨ B.    [disjunction introduction]

where '∨' is a symbol for inclusive disjunction (A or B or both), and that can eliminate it from a proof, e.g.:

    A ∨ B.
    ¬A.
    Therefore, B.    [disjunction elimination]

where '¬' denotes negation. Likewise, some AI systems adopted natural deduction too [2]. The challenge for those theorem-provers, of course, was to determine which natural deduction rules of inference to try next as they searched for proofs.
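To fix ideas, each of the two rules above can be rendered as a one-step formal inference. The following is a minimal sketch in Lean 4, our choice purely for illustration; it is not the code of any system cited in this paper:

    -- Disjunction introduction: from A, infer A ∨ B.
    example (A B : Prop) (a : A) : A ∨ B := Or.inl a

    -- The elimination rule in the text (disjunctive syllogism):
    -- from A ∨ B and ¬A, infer B by ruling out the first disjunct.
    example (A B : Prop) (h : A ∨ B) (na : ¬A) : B :=
      h.elim (fun a => absurd a na) (fun b => b)

Each rule is trivial to apply; the difficulty for a theorem-prover lies in deciding, at every step of a search for a proof, which rule to try next and on which formulas.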
One answer that seemed attractive was instead to use just a single rule of inference, the resolution rule, in one of its many variants [43]. The rule calls for all the premises and the conclusion to be transformed into inclusive disjunctions, which is feasible in classical logic. For example, material implication, A → B, in logic, which is analogous to if A then B, can be transformed into: ¬A ∨ B, because both A → B and ¬A ∨ B are false if and only if A is true and B is false. Other connectives, such as material equivalence ('↔'), can likewise be transformed into inclusive disjunctions. In its simplest form, the resolution rule is:

    A ∨ B.
    ¬A ∨ C.
    Therefore, B ∨ C.

Not surprisingly, naive individuals presented with two such disjunctive premises and asked what follows seldom draw the conclusion in the rule [24]. (Minimal code sketches of resolution, of unification, and of the tree method appear at the end of this section.)

Inferences in classical logic concern relations between entire sentences, where each sentence can be true or false. The predicate calculus includes the sentential calculus, but also allows inferences that depend on the constituents of sentences. For example:

    Any actuary is a statistician.
    Jean is an actuary.
    Therefore, Jean is a statistician.

The quantifier "any actuary" ranges over individual entities. The first-order predicate calculus concerns only quantification of entities, whereas the second-order predicate calculus allows quantifiers also to range over properties. The calculus treats the above quantified statement as:

    ∀x(x is an actuary → x is a statistician)

where '∀x' ranges over all values of x. Theorem-provers exist for most branches of logic—the sentential calculus, the predicate calculus, and modal logics, which concern 'possible' and 'necessary,' temporal relations, and deontic relations such as permissions and obligations. Most theorem-provers dealing with the predicate calculus are restricted to first-order logic, because second-order logic cannot have a complete proof procedure. But quantifiers such as "more than half of the actuaries" cannot be expressed in first-order logic.

One standard way to cope with universal quantifiers in automated proofs (e.g., "any actuary is a statistician") is to delete the determiner (∀x), and to transform the remaining formula, such as:

    x is an actuary → x is a statistician

into its disjunctive equivalent:

    x is not an actuary ∨ x is a statistician.

The process of unification can set the value of a variable in one expression to its value in another expression that has the same predicate. So, the unification of these two formulas:

    x is an actuary
    Jean is an actuary

sets the value of x equal to Jean. (It is equivalent to the rule of universal instantiation in predicate logic [19].) The resolution rule with unification applies to the following premises:

    x is not an actuary ∨ x is a statistician.
    Jean is not a statistician ∨ Jean is a probabilist.

Unifying x with Jean, it cancels out the one clause and its negation (Jean is a statistician, Jean is not a statistician), and it combines the two remaining clauses to yield:

    Jean is not an actuary ∨ Jean is a probabilist.

It follows that:

    Jean is an actuary → Jean is a probabilist.

A more complex treatment, which we spare readers, is needed for existential quantifiers, such as "some statisticians," which, as the name suggests, establish the existence of entities, whereas universal quantifiers, such as "any," do not.

The formal derivation of a deductive conclusion resembles the execution of a computer program, and the analogy was exploited in the development of programming languages, such as PROLOG.

An alternative procedure for the sentential calculus constructs a "tree". To show, say, that B follows from the premises A → B and A, it starts with a list of the premises and the negation of the conclusion:

    A → B
    A
    ¬B

The rule for the material implication in the first premise allows the list to be expanded into an "inverted tree," because the connective is equivalent to: ¬A ∨ B. Each alternative is added to the list as follows:

    A → B
    A
    ¬B
    /  \
  ¬A    B

The left-hand branch of the tree contains two contradictory elements: A and ¬A, so that case fails; and the right-hand branch of the tree contains two contradictory elements: ¬B and B, so that case fails. Because every branch fails, the negation of the conclusion is impossible given the premises, and so the inference is valid.
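As an illustration of the resolution rule, here is a minimal sketch in Python; it is ours, not the code of any system cited above. A clause is a set of string literals, negation is marked by a leading '~', and the function returns every resolvent of two clauses:

    def negate(literal):
        """Return the complement of a literal: A <-> ~A."""
        return literal[1:] if literal.startswith("~") else "~" + literal

    def implication_to_clause(antecedent, consequent):
        """Transform A -> B into its disjunctive equivalent ~A v B."""
        return frozenset([negate(antecedent), consequent])

    def resolve(clause1, clause2):
        """From A v B and ~A v C, infer B v C: cancel a literal in one
        clause against its negation in the other, and combine the rest."""
        resolvents = set()
        for literal in clause1:
            if negate(literal) in clause2:
                rest = (clause1 - {literal}) | (clause2 - {negate(literal)})
                resolvents.add(frozenset(rest))
        return resolvents

    # The rule in its simplest form: A v B and ~A v C yield B v C.
    print(resolve(frozenset({"A", "B"}), frozenset({"~A", "C"})))
    # -> {frozenset({'B', 'C'})}

A full resolution prover negates the conclusion, converts all the formulas into such clauses, and applies the rule until it derives the empty clause, i.e., a contradiction, or runs out of new resolvents.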
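Unification can be sketched in the same minimal style. The toy below assumes that atoms are predicate-argument pairs and that x is the only variable, which suffices for the actuary example; it returns the substitution that makes two atoms identical, or None when there is none:

    def unify(atom1, atom2):
        """Match two atoms; return a substitution such as {'x': 'Jean'}."""
        (pred1, arg1), (pred2, arg2) = atom1, atom2
        if pred1 != pred2:
            return None                      # different predicates never unify
        if arg1 == "x":
            return {"x": arg2}               # variable against constant
        if arg2 == "x":
            return {"x": arg1}
        return {} if arg1 == arg2 else None  # two constants must be identical

    # 'x is an actuary' unifies with 'Jean is an actuary' by setting x = Jean.
    print(unify(("actuary", "x"), ("actuary", "Jean")))   # -> {'x': 'Jean'}

Applying the resulting substitution to "x is not an actuary ∨ x is a statistician" licenses the resolution step in the text, which cancels "Jean is a statistician" against its negation.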
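Finally, the tree method invites a short recursive sketch. The Python toy below is again ours and handles only '∨' and '→' over literals, which is enough for the example above; it expands a list of formulas into branches and reports whether every branch fails:

    def neg(literal):
        """Negate a literal represented as a string."""
        return literal[1:] if literal.startswith("~") else "~" + literal

    def all_branches_fail(branch):
        """Expand the first compound formula; a finished branch fails
        when it contains some literal together with its negation."""
        for formula in branch:
            if isinstance(formula, tuple):
                tag, a, b = formula                   # a and b are literals
                rest = [f for f in branch if f is not formula]
                left = neg(a) if tag == "->" else a   # A -> B is ~A v B
                return (all_branches_fail(rest + [left]) and
                        all_branches_fail(rest + [b]))
        return any(neg(f) in branch for f in branch)

    # Premises A -> B and A, plus the negated conclusion ~B: both branches
    # of the inverted tree close, so B follows validly.
    print(all_branches_fail([("->", "A", "B"), "A", "~B"]))   # -> True

The recursion mirrors the inverted tree: each compound formula splits the current case into two alternatives, and a case is abandoned as soon as it contains a contradiction.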