Formalizing Deceptive Reasoning in Breaking Bad: Default Reasoning in a Doxastic Logic
Deceptive and Counter-Deceptive Machines: Papers from the AAAI 2015 Fall Symposium

John Licato
[email protected]
Analogical Constructivism and Reasoning Lab (ACoRL)
Indiana University and Purdue University-Fort Wayne

Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

The rich expressivity provided by the cognitive event calculus (CEC) knowledge representation framework allows for reasoning over deeply nested beliefs, desires, intentions, and so on. I put CEC to the test by attempting to model the complex reasoning and deceptive planning used in an episode of the popular television show Breaking Bad. CEC is used to represent the knowledge used by reasoners coming up with plans like the ones devised by the fictional characters I describe. However, it becomes clear that a form of nonmonotonic reasoning is necessary, specifically so that an agent can reason about the nonmonotonic beliefs of another agent. I show how CEC can be augmented to have this ability, and then provide examples detailing how my proposed augmentation enables much of the reasoning used by agents such as the Breaking Bad characters. I close by discussing what sort of reasoning tool would be necessary to implement such nonmonotonic reasoning.

An old joke, said to be a favorite of Sigmund Freud, opens with two passengers, Trofim and Pavel, on a train leaving Moscow. Trofim begins by confronting Pavel, demanding to know where he is going.

Pavel: "To Pinsk."

Trofim: "Liar! You say you are going to Pinsk in order to make me believe you are going to Minsk. But I know you are going to Pinsk!" (Cohen 2002)
Fictional stories can sometimes capture aspects of deception in the real world, especially between individuals who are skilled at reasoning over the beliefs of others (second-order beliefs), the beliefs of one party about the beliefs of another (third-order beliefs), and so on. For example, an agent a desiring to deceive agent b may need to take into account agent b's counter-deception measures (where the latter measures may be directed back at agent a, as was suspected by poor Trofim). Such fictional stories may thus sometimes be a suitable source of test cases for frameworks specializing in the representation of, and reasoning over, complex doxastic statements. The cognitive event calculus (CEC) promises to be such a framework, given its ability to represent beliefs, knowledge, intentions, and desires over time (Arkoudas and Bringsjord 2009).

In this paper, I will attempt to model the reasoning used by agents in an episode of the television series Breaking Bad. Episode 13 of season 5, entitled To'hajiilee, is notably rich in deceptive behaviors between characters, being a point in the series' overall story arc where the conflict between several consistently wily characters comes to a climax. One group (Jesse and Hank) devises a plan to lure, trap, and catch another character (Walt), and I try to answer two questions about their plan in this paper: First, what sort of reasoning and knowledge representation would be necessary to devise such a plan as the one created by Jesse and Hank? Second, is CEC sufficiently powerful to represent such knowledge and serve as a base framework for such reasoning?

Section 1 will argue that even an analysis of how well CEC can model reasoning in a fictional story can be beneficial to the field of automated human-level reasoning, discussing related literature. I give an overview of CEC in Section 2, followed by a synopsis of the relevant portions of To'hajiilee's plot (Section 3.1). An analysis of the plan generation used by the characters¹ in Section 3.2 suggests the need for a form of nonmonotonic reasoning that requires, at a minimum, reasoning over second-order beliefs. I then spend some time explaining how this nonmonotonic reasoning can work in CEC. The paper wraps up with a discussion of implications for the future of deceptive and counter-deceptive AI (Section 5).

¹ Of course, the characters I discuss here are fictional. I really am talking about the work of the writers of the show, who are reasoning from the perspectives of the fictional characters. It will be more convenient in this paper to simply say it is the fictional characters doing the reasoning.

1 Why Bother Modeling Reasoning in Plots?

The cognition of deception is particularly interesting to model: Knowing when to deceive in social situations may make for robots that are better accepted socially (Wagner and Arkin 2009; Sharkey and Sharkey 2011). Deceptive machines may indeed be the inevitable consequence, or perhaps explicit goal, of human-level AI (Castelfranchi 2000; Clark and Atkinson 2013).

Instances of deception in fiction are not difficult to find. Some variant of deceptive behavior seems to appear in any story involving characters containing beliefs, intentions, and desires about the beliefs of other characters, depending on the definition of deception one accepts. Although some stories are better than others at accurately portraying realistic behaviors, all were written at some point by imaginative human beings (with some exceptions, cf. (Bringsjord and Ferrucci 1999)). They therefore offer clues about the human ability to think deceptively and counter-deceptively; e.g., a plan of deception devised by a fictional character, at the very least, tells us what types of plans humans are capable of both comprehending (as the readers of a story do) and creatively generating (as the writers did). For researchers interested in understanding the expressivity of human-level thought, stories of deception are useful benchmarks.
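To make concrete the kind of nested doxastic statement such benchmarks call for, consider the opening joke once more. One deliberately rough rendering of Trofim's accusation, using the speech-act (S), belief (B), desire (D), and knowledge (K) operators introduced in the next section, might look as follows; the constants (trofim, pavel, the timepoints t1 and t2, and the predicate GoingTo) are mine, chosen purely for illustration, and the rendering glosses over the operators' exact typing restrictions:

    S(pavel, trofim, t1, GoingTo(pavel, pinsk))
    B(trofim, t1, D(pavel, t1, B(trofim, t2, GoingTo(pavel, minsk))))
    K(trofim, t1, GoingTo(pavel, pinsk))

That is, Pavel tells Trofim he is going to Pinsk; Trofim believes Pavel wants him to conclude Minsk from this; and yet Trofim takes himself to know that Pinsk is the true destination. Even this small vignette involves a third-order attitude: a belief about a desire about a belief.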
2 An Overview of CEC

The cognitive event calculus (CEC) is a first-order modal logic for knowledge representation first introduced by Arkoudas and Bringsjord (2009) as a way to model Piaget's false-belief task. A member of the cognitive calculi family of logics (Bringsjord et al. 2015), CEC contains operators for several mental states and events: Belief, Knowledge, Intention, Desire, Common knowledge, and Speech acts. Note that not all of these operators are introduced in Arkoudas and Bringsjord (2009); rather, much of the current version of CEC reflects subsequent developments, most of which were produced in parallel with work on the deontic cognitive event calculus (DCEC*), an extension of CEC (Bringsjord et al. 2014).

[Figure 1: The CEC Syntax Used in this Paper. The figure gives the sorts (Object, Agent, Self, ActionType, Action, Event, Moment, Boolean, Fluent, Numeric), the function signatures (action, initially, holds, happens, clipped, initiates, terminates, prior, interval, payoff), the grammar of terms and of modal formulas built with P, K, B, C, S, D, I, and O, and the rules of inference [R1]-[R15].]

CEC is loosely based on the event calculus (Kowalski and Sergot 1986), but departs from it and other similar logics in several important ways, two of which are especially relevant to this paper's purposes:

• Although no formal semantics is fully defined for CEC, there is a preference for proof-theoretic (and the highly related argument-theoretic) semantics and a natural deduction (Jaśkowski 1934) style of inference. Although there are some cases where cognitively implausible techniques such as resolution may assist in the proof-finding process, the underlying inferences are rooted in a set of constantly [...]

[...] show, an augmentation is needed before the sort of reasoning used by the Breaking Bad characters can be faithfully modeled.

3.1 Plots and Plans

Three characters are relevant to the plot at this point. Walter White ("Walt") is a now-retired methamphetamine kingpin who is in possession of over $80 million, trying to live the remainder of his life (likely not to be long due to a recurrence of cancer) in peace with his family.
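To give a feel for how plot facts of this sort would enter a CEC knowledge base, here is a minimal sketch following the fluent and operator signatures summarized in Figure 1; the constant names (walt, hank, money, the fluents possesses and arrested, the action type trap, and the timepoints t0 and t1) are my own illustrative choices, not an encoding taken from later in the paper:

    holds(possesses(walt, money), t0)
    B(hank, t0, holds(possesses(walt, money), t0))
    D(hank, t0, holds(arrested(walt), t1))
    I(hank, t0, happens(action(hank, trap), t1))

The first line records a fact about the world, the second a first-order belief about that fact, and the last two Hank's desire and intention concerning Walt. The plan analyzed in Section 3.2 will additionally require higher-order statements, e.g., beliefs about what Walt believes about what Jesse and Hank believe.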