Alan Turing and the Development of Artificial Intelligence


Stephen Muggleton, Department of Computing, Imperial College London

Abstract. During the centennial year of his birth Alan Turing (1912-1954) has been widely celebrated as having laid the foundations for Computer Science, Automated Decryption, Systems Biology and the Turing Test. In this paper we investigate Turing's motivations and expectations for the development of Machine Intelligence, as expressed in his 1950 article in Mind. We show that many of the trends and developments within AI over the last 50 years were foreseen in this foundational paper. In particular, Turing not only describes the use of Computational Logic but also the necessity for the development of Machine Learning in order to achieve human-level AI within a 50 year time-frame. His description of the Child Machine (a machine which learns like an infant) dominates the closing section of the paper, in which he provides suggestions for how AI might be achieved. Turing discusses three alternative suggestions which can be characterised as: 1) AI by programming, 2) AI by ab initio machine learning and 3) AI using logic, probabilities, learning and background knowledge. He argues that there are inevitable limitations in the first two approaches and recommends the third as the most promising. We compare Turing's three alternatives to developments within AI, and conclude with a discussion of some of the unresolved challenges he posed within the paper.

Keywords: Alan Turing, Artificial Intelligence, Machine Intelligence

1. Introduction

In this section we will first review relevant parts of the early work of Alan Turing which pre-dated his paper in Mind [42].

1.1. Early work: the Entscheidungsproblem

Turing's initial investigations of computation stemmed from the programme set out by David Hilbert at the 1928 International Mathematical Congress. Hilbert presented three open questions for logic and mathematics. Was mathematics

1. complete, in the sense that any mathematical assertion could either be proved or disproved;
2. consistent, in the sense that false statements could not be derived by a sequence of valid steps; and
3. decidable, in the sense that there exists a definite method to decide the truth or falsity of every mathematical assertion?

Within three years Kurt Gödel [13] had shown that not even the axioms of arithmetic are both complete and consistent, and by 1937 both Alonzo Church [5] and Alan Turing [43] had demonstrated the undecidability of particular mathematical assertions.

While Gödel and Church had depended on purely mathematical calculi to demonstrate their results, Turing had taken the unusual route of considering mathematical proof as an artifact of human reasoning. He thus considered a physical machine which emulated a human mathematician using a pen and paper together with a series of instructions. Turing then generalised this notion to a universal machine which could emulate all other computing machines. He used this construct to show that certain functions cannot be computed by such a universal machine, and consequently demonstrated the undecidability of assertions associated with such functions.

At the heart of Turing's universal machine is a model of human calculation. It was this choice which set the scene for later discussions on the degree to which computers might be capable of more sophisticated human-level reasoning.
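The kind of machine described above can be made concrete in a few lines of code. The sketch below is a minimal illustration in Python, not Turing's 1936 formalism: a finite table of instructions drives a read/write head over an unbounded tape, here programmed to compute the successor of a binary number.

```python
from collections import defaultdict

# Each rule maps (state, symbol read) to (symbol to write, head move,
# next state). The tape is unbounded in both directions; "_" is blank.

def run(rules, tape, state="scan", max_steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            # Read off the tape contents, trimming surrounding blanks.
            return "".join(cells[i] for i in sorted(cells)).strip("_")
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("machine did not halt within the step limit")

# A two-state program computing the binary successor: scan right to the
# end of the number, then propagate the carry back towards the left.
increment = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run(increment, "1011"))  # -> "1100", i.e. 11 + 1 = 12
```

Turing's universal machine is then a fixed rule table that reads the rule table of any other machine from its tape and carries out that machine's behaviour; it is this construction that underpins his undecidability results.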
1.2. Bletchley Park

The outbreak of the Second World War provided Turing with an opportunity and resources to design and test a machine which would emulate human reasoning. Acting as the UK's main wartime decryption centre, Bletchley Park had recruited many of the UK's best mathematicians in an attempt to decode German military messages. By 1940 the Bombe machine, designed by Turing and Welchman [7], had gone into operation and was efficiently decrypting messages using methods previously employed manually by human decoders. In keeping with Turing's background in Mathematical Logic, the Bombe design worked according to a reductio ad absurdum principle which simplified the hypothesis space of 26³ possible settings for the Enigma machine to a small number of possibilities based on a given set of message transcriptions.

The hypothesis elimination principle of the Bombe was later refined in the design of the Colossus I and II machines. The Tunny report [15] (declassified by the UK government in 2000) shows that one of the key technical refinements of Colossus was the use of Bayesian reasoning to order the search through the space of hypothetical settings for the Lorenz encryption machine. This combination of logical hypothesis generation and Bayesian evaluation was later to become central to approaches used within Machine Learning (see Section 5). Indeed, strong parallels exist between decryption tasks on the one hand, which involve hypothesising machine settings from a set of message transcriptions, and modern Machine Learning tasks on the other, which involve hypothesising a model from a set of observations. Given their grounding in the Bletchley Park decryption work, it is hardly surprising that two of the authors of the Tunny report, Donald Michie (1923-2007) and Jack Good (1916-2009), went on to play founding roles in the post-war development of Machine Intelligence and Subjective Probabilistic reasoning respectively. In numerous out-of-hours meetings at Bletchley Park, Turing discussed the problem of machine intelligence with both Michie and Good. According to Andrew Hodges [16], Turing's biographer:

  These meetings were an opportunity for Alan Turing to develop the ideas for chess-playing machines that had begun in his 1941 discussions with Jack Good. They often talked about mechanisation of thought processes, bringing in the theory of probability and weight of evidence, with which Donald Michie was by now familiar. He (Turing) was not so much concerned with the building of machines designed to carry out this or that complicated task. He was now fascinated with the idea of a machine that could learn.
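The two ideas just described, logical elimination of contradicted hypotheses followed by Bayesian ordering of the survivors, can be illustrated with a toy sketch. The Python below is emphatically not the Bombe or Colossus logic: a Vigenère cipher with a three-letter key stands in for the 26³ Enigma rotor settings, a guessed plaintext fragment (a "crib") plays the role of the message transcriptions, and English letter frequencies supply a crude weight-of-evidence score in the spirit of Good and Turing.

```python
import math
from itertools import product
from string import ascii_uppercase as ALPHA

# Toy hypothesis space: the 26^3 three-letter keys of a Vigenere cipher,
# standing in for the 26^3 possible rotor settings of an Enigma machine.

def shift(c, k, sign):
    return ALPHA[(ALPHA.index(c) + sign * ALPHA.index(k)) % 26]

def encrypt(plain, key):
    return "".join(shift(c, key[i % len(key)], +1) for i, c in enumerate(plain))

def decrypt(cipher, key):
    return "".join(shift(c, key[i % len(key)], -1) for i, c in enumerate(cipher))

# Step 1: reductio ad absurdum. Any key whose decryption contradicts the
# crib (a guessed plaintext fragment at a known position) is eliminated.
def eliminate(cipher, crib, pos):
    return [key for key in map("".join, product(ALPHA, repeat=3))
            if decrypt(cipher, key)[pos:pos + len(crib)] == crib]

# Step 2: Bayesian ordering. Rank the surviving hypotheses by the
# log-likelihood of their decryptions under English letter frequencies.
FREQ = dict(zip(ALPHA, [
    .082, .015, .028, .043, .127, .022, .020, .061, .070, .002, .008, .040,
    .024, .067, .075, .019, .001, .060, .063, .091, .028, .010, .024, .002,
    .020, .001]))

def log_likelihood(text):
    return sum(math.log(FREQ[c]) for c in text)

cipher = encrypt("WEATHERREPORTCLEARSKIES", "KEY")
survivors = eliminate(cipher, crib="WE", pos=0)
ranked = sorted(survivors, key=lambda k: log_likelihood(decrypt(cipher, k)),
                reverse=True)
print(f"{len(survivors)} of {26**3} hypotheses survive the crib")
print("best hypothesis:", ranked[0], "->", decrypt(cipher, ranked[0]))
```

On this toy problem the crib eliminates all but 26 of the 17,576 candidate keys, and the frequency score then ranks the true key first: a miniature version of hypothesising machine settings from message transcriptions.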
2. Turing's 1950 paper in Mind

2.1. Structure of the paper

The opening sentence of Turing's 1950 paper [42] declares:

  I propose to consider the question, "Can machines think?"

The first six sections of the paper provide a philosophical framework for answering this question. These sections are briefly summarised below.

1. The Imitation Game. Often referred to as the "Turing test", this is a form of parlour game involving a human interrogator who alternately questions a hidden computer and a hidden person in an attempt to distinguish the identity of the respondents. The Imitation Game is aimed at providing an objective test for deciding whether machines can think.
2. Critique of the New Problem. Turing discusses the advantages of the game for the purposes of deciding whether machines and humans could be attributed with thinking on an equal basis using objective human judgement.
3. The Machines Concerned in the Game. Turing indicates that he intends digital computers to be the only kind of machine permitted to take part in the game.
4. Digital Computers. The nature of the new digital computers, such as the Manchester machine, is explained and compared to Charles Babbage's proposals for an Analytical Engine.
5. Universality of Digital Computers. Turing explains how digital computers can emulate any discrete-state machine.
6. Contrary Views on the Main Question. Nine traditional philosophical objections to the proposition that machines can think are introduced and summarily dismissed by Turing.

2.2. Learning machines - Section 7 of Turing's paper

The task of engineering software which addresses the central question of Turing's paper has dominated Artificial Intelligence research over the last sixty years. In the final section of the 1950 paper Turing addresses the motivation and possible approaches for such endeavours. His transition from the purely philosophical nature of the first six sections of the paper is marked as follows.

  The only really satisfactory support that can be given for the view expressed at the beginning of section 6, will be that provided by waiting for the end of the century and then doing the experiment described. But what can we say in the meantime?

Turing goes on to discuss three distinct strategies which might be considered capable of achieving a thinking machine. These can be characterised as follows: 1) AI by programming, 2) AI by ab initio machine learning and 3) AI using logic, probabilities, learning and background knowledge. Discussing the first of these strategies, Turing estimates the hardware requirements and programming effort involved:

  [...] would be a very practicable possibility even by present techniques. It is probably not necessary to increase the speed of operations of the machines at all. Parts of modern machines which can be regarded as analogs of nerve cells work about a thousand times faster than the latter. This should provide a "margin of safety" which could cover losses of speed arising in many ways. Our problem then is to find out how to programme these machines to play the game. At my present rate of working I produce about a thousand digits of programme a day, so that about sixty workers, working steadily through the fifty years might accomplish the job, if nothing went into the wastepaper basket. Some more expeditious method seems desirable.

In retrospect it is amazing that Turing managed to foresee that "Advances in engineering" would lead to computers with a Gigabyte of storage by the end of the twentieth century. It is also noteworthy that Turing suggests that, in terms of hardware, it is probably not even necessary to increase the speed of operation of the machines.
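Turing's arithmetic here is easy to reproduce. The sketch below is a back-of-envelope check of our own (the 365-day working year is an assumption; Turing does not specify one):

```python
# Reproducing Turing's programming-effort estimate: a thousand digits of
# programme per worker per day, sixty workers, fifty years of steady work.
digits_per_worker_per_day = 1_000
workers = 60
days_per_year = 365   # assumed; Turing does not specify a working year
years = 50

total_digits = digits_per_worker_per_day * workers * days_per_year * years
print(f"total digits of programme: {total_digits:.2e}")  # ~1.10e+09
```

The total comes to roughly 1.1 × 10⁹ digits, the same order as the storage capacity of about 10⁹ binary digits that Turing estimates elsewhere in the paper would suffice for satisfactory play of the imitation game.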