Machine Reasoning Explainability
Kristijonas Čyras,* Ramamurthy Badrinath, Swarup Kumar Mohalik, Anusha Mujumdar, Alexandros Nikou, Alessandro Previti, Vaishnavi Sundararajan, Aneta Vulgarakis Feljan

Ericsson Research

December 2, 2020

arXiv:2009.00418v2 [cs.AI] 1 Dec 2020

Abstract

As a field of AI, Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning. Studies in early MR have notably started inquiries into Explainable AI (XAI) – arguably one of the biggest concerns today for the AI community. Work on explainable MR, as well as on MR approaches to explainability in other areas of AI, has continued ever since. It is especially potent in modern MR branches such as argumentation, constraint and logic programming, and planning. We hereby aim to provide a selective overview of MR explainability techniques and studies, in the hope that insights from this long track of research will complement well the current XAI landscape.

*Corresponding author. Email: [email protected], ORCiD: 0000-0002-4353-8121

Contents

1 Introduction
  1.1 Contributions
  1.2 Motivations
2 Explainability
  2.1 Purpose of Explanations
  2.2 Categorization for Explanations
    2.2.1 Attributive Explanations
    2.2.2 Contrastive Explanations
    2.2.3 Actionable Explanations
3 Explanations in MR
  3.1 Inference-based Explanations
    3.1.1 Axiom Pinpointing
    3.1.2 Implicants
    3.1.3 Abduction
  3.2 Logic Programming (LP)
    3.2.1 Abductive Logic Programming (ALP)
    3.2.2 Inductive Logic Programming (ILP)
    3.2.3 Answer Set Programming (ASP)
  3.3 Constraint Programming (CP)
    3.3.1 SAT and Beyond
    3.3.2 General CP
  3.4 Automated Theorem Proving (ATP) and Proof Assistants
  3.5 Argumentation
    3.5.1 Attributive/Contrastive Argumentative Explanations
    3.5.2 Actionable Argumentative Explanations
    3.5.3 Applications of Argumentative Explanations
  3.6 Planning
  3.7 Decision Theory
  3.8 Causal Approaches
  3.9 Symbolic Reinforcement Learning
    3.9.1 Constrained RL
    3.9.2 Multi-Agent RL (MARL)
4 Discussion
  4.1 Omissions
  4.2 Categorization-related Aspects
  4.3 Terminology
5 Conclusions

1 Introduction

Machine Reasoning (MR) is a field of AI that complements the field of Machine Learning (ML) by aiming to computationally mimic abstract thinking. This is done by way of uniting known (yet possibly incomplete) information with background knowledge and making inferences regarding unknown or uncertain information. MR has outgrown Knowledge Representation and Reasoning (KR, see e.g. [39]) and now encompasses various symbolic and hybrid AI approaches to automated reasoning. Central to MR are two components: a knowledge base (see e.g. [83]) or a model of the problem (see e.g. [116]), which formally represents knowledge and relationships among problem components in symbolic, machine-processable form; and a general-purpose inference engine or solving mechanism, which allows one to manipulate those symbols and perform semantic reasoning (see [37] for an alternative view of MR stemming from a sub-symbolic/connectionist perspective).
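To make these two components concrete, the following is a minimal sketch of our own (not drawn from the surveyed literature): a propositional knowledge base of facts and if-then rules, paired with a naive forward-chaining inference engine that derives new facts until a fixpoint. The facts and rules about router congestion are hypothetical, chosen to foreshadow the running example below.

```python
# A symbolic knowledge base: known facts plus rules of the form
# (premises, conclusion) -- if all premises hold, conclude.
facts = {"port_utilisation_high", "buffer_overflow"}
rules = [
    ({"port_utilisation_high", "buffer_overflow"}, "port_congested"),
    ({"port_congested"}, "latency_violation_likely"),
]

def forward_chain(facts, rules):
    """Naive inference engine: repeatedly fire every rule whose premises
    are already derived, until no rule adds anything new (a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['buffer_overflow', 'latency_violation_likely',
#  'port_congested', 'port_utilisation_high']
```

A reasoning trace in such a setting is simply the sequence of rule firings that produced a conclusion, which is exactly the kind of explanation the early expert systems discussed below offered.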
The field of Explainable AI (XAI, see e.g. [2, 21, 33, 73, 160, 179, 181, 188, 210, 222, 249]) encompasses endeavors to make AI systems intelligible to their users, be they humans or machines. XAI comprises research in AI as well as interdisciplinary research at the intersections of AI and subjects ranging from Human-Computer Interaction (HCI) [181] to social sciences [45, 177].

According to Hansen and Rieger [128], explainability was one of the main distinctions between the 1st wave (dominated by KR and rule-based systems) and the 2nd wave (expert systems and statistical learning) of AI, with expert systems addressing the problems of explainability and ML approaches treated as black boxes. With the ongoing 3rd wave of AI, ML explainability has received a great surge of interest [21, 73, 181]. By contrast, it seems that a revived interest in MR explainability is only just picking up pace (e.g. the ECAI 2020 Spotlight tutorial on Argumentative Explanations in AI, https://www.doc.ic.ac.uk/~afr114/ecaitutorial/, and the KR 2020 Workshop on Explainable Logic-Based Knowledge Representation, https://lat.inf.tu-dresden.de/XLoKR20/). However, explainability in MR dates back over four decades [128, 139, 184, 188, 249] and can be roughly outlined thus.

The 1st generation expert systems provide only so-called (reasoning) trace explanations, showing the inference rules that led to a decision. A major problem with trace explanations is the lack of “information with respect to the system’s general goals and resolution strategy” [184, p. 174]. The 2nd generation expert systems instead provide so-called strategic explanations, “displaying system’s control behavior and problem-solving strategy” [264, p. 95]. Going further, so-called deep explanations separating the domain model from the structural knowledge have been sought, where “the system has to try to figure out what the user knows or doesn’t know, and try to answer the question taking that into account” [259, p. 73]. Progress in MR explainability notwithstanding, it has been argued [169, 184, 210] that to date, explainability in MR particularly, and perhaps in AI at large, is still insufficient in aspects such as justification (“describing the rationale behind each inferential step taken by the system” [264, p. 95]), criticism, and cooperation. These aspects, among others, are of concern in the modern MR explainability scene (this millennium), whereby novel approaches to explainability in various branches of MR have been making appearances.

Explainability is a highly desired aspect of autonomous agents and multi-agent systems (AAMAS) [151, 160, 187]. There are a few area-specific reviews of explainability in AAMAS-related areas: for instance, [14] on human-robot interaction; [196] on expert and recommender systems; [222] on explaining ML agents; [59] on planning. However, explainability in multi-agent systems (MAS) is still under-explored [151, 160, 187]. In AI-equipped MAS (sometimes also called Distributed AI [164]), explainability concerns interactions among multiple intelligent agents, be they human or AI, to agree on and explain individual actions/decisions. Such interactions are often seen as a crucial driver for the real-world deployment of trustworthy modern AI systems.

We will treat the following as a running example in Section 2 to illustrate the kinds of explanations that we encounter in the XAI literature, including but not limited to MR approaches.
Example 1.1. In modern software-defined telecommunication networks, network slicing is a means of running multiple logical networks on top of a shared physical network infrastructure [25]. Each logical network, i.e. a slice, is designed to serve a defined business purpose and comprises all the required network resources, configured and connected end-to-end. For instance, in a 5G network, a particular slice can be designated for high-definition video streaming. Such network slices are then to be managed (in the future, using autonomous AI-based agents) to consistently provide the designated services. Service level agreements stipulate high-level intents that must be met, such as adequate quality of service, which translate into quantifiable performance indicators. An example of an intent is that end-to-end latency (from the application server to the end-user) should never exceed 25 ms. Such intents induce lower-level goals that the AI-based agents managing the slice need to achieve.

We consider the following agents to be involved in automatically managing the slice. When proactively monitoring adherence to intents, predictions of e.g. network latency are employed. So first, prediction of latency in the near future (say a 20 min interval) is done by an ML-based predictor agent based on previous network activity patterns (see e.g. [225]). Given a prediction of a latency violation, the goal is to avoid it. To this end, a rule-based root cause analysis (RCA) agent needs to determine the most likely cause(s) of the latency violation, which may, for instance, be a congested router port. Given a root cause, a constraint solver agent aims to find a solution to a network reconfiguration problem, say a path through a different data centre, that satisfies the slice requirements, including latency. Finally, a planner agent provides a procedural knowledge-based plan for executing the reconfiguration (i.e. how to optimally relocate network resources).

In all of the above phases, explainability of the AI-based agents is desirable. First and foremost, one may want to know which features contributed the most to the predicted latency violation. These may point to the performance measurement counter readings that, via domain expert-defined rules, lead to inferring the root cause. Explaining RCA by indicating the facts and rules that are necessary and sufficient to establish the root cause contributes to the overall explainability of handling intents. Orthogonally, the constraint solver may be unable to find a solution within the initial soft constraints, such as a limited number of hops, whence the unsolvability of the network reconfiguration problem could be explained by indicating a set of mutually unsatisfiable constraints and suggesting a relaxation, such as increasing the number of hops (a minimal code sketch of this follows the example). When some reconfiguration solution is found and the planner yields a plan for implementation, its goodness as well as various alternative actions and contrastive states may be considered for explainability purposes. Last but not least, the overall decision process needs to be explainable too, by, for instance, exhibiting the key considerations and weighing arguments for and against the best outcomes in all of the phases of prediction, RCA, solving and planning.
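As one concrete illustration of the constraint-solving step, the sketch below (ours, not the paper's; the candidate paths, constraint names and thresholds are all invented for illustration) explains unsolvability by computing a minimal unsatisfiable subset (MUS) of named constraints with the classic deletion-based algorithm, and then suggests which single constraint to relax.

```python
# Candidate reconfiguration paths: hop count, predicted latency, data centre.
PATHS = [
    {"hops": 4, "latency_ms": 30, "dc": "DC1"},
    {"hops": 6, "latency_ms": 22, "dc": "DC2"},
    {"hops": 7, "latency_ms": 18, "dc": "DC2"},
]

# Named soft constraints over a single path (all values hypothetical).
CONSTRAINTS = {
    "max_5_hops": lambda p: p["hops"] <= 5,
    "latency_under_25ms": lambda p: p["latency_ms"] <= 25,
    "avoid_DC1": lambda p: p["dc"] != "DC1",
}

def satisfiable(names):
    """A set of constraints is satisfiable iff some path meets all of them."""
    return any(all(CONSTRAINTS[n](p) for n in names) for p in PATHS)

def mus(names):
    """Deletion-based MUS: drop each constraint whose removal keeps the
    remaining set unsatisfiable; what survives is a minimal conflict."""
    core = list(names)
    for n in list(core):
        rest = [m for m in core if m != n]
        if not satisfiable(rest):
            core = rest
    return core

if __name__ == "__main__":
    all_names = list(CONSTRAINTS)
    if satisfiable(all_names):
        print("Reconfiguration found; no conflict to explain.")
    else:
        core = mus(all_names)
        print("No path satisfies all constraints; minimal conflict:", core)
        # Suggest a relaxation: check, for each conflicting constraint,
        # whether dropping it alone makes the full problem solvable.
        for n in core:
            if satisfiable([m for m in all_names if m != n]):
                print(f"Relaxing '{n}' would make the problem solvable.")
```

On this toy data, the minimal conflict reported is the hop budget together with the data-centre restriction, and relaxing the hop budget is suggested as the repair, mirroring the hop-count relaxation described in the example.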