A Best-First Backward-Chaining Search Strategy Based on Learned Predicate Representations

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Best-first Backward-chaining Search Strategy based on Learned Predicate Representations

Alexander Sakharov, Synstretch, Framingham, MA, U.S.A.

Keywords: Knowledge Base, First-order Logic, Resolution, Backward Chaining, Neural-symbolic Computing, Tensorization.

Abstract: Inference methods for first-order logic are widely used in knowledge base engines. These methods are powerful but slow in general. Neural networks make it possible to rapidly approximate the truth values of ground atoms. A hybrid neural-symbolic inference method is proposed in this paper. It is a best-first search strategy for backward chaining. The strategy is based on neural approximations of the truth values of literals. This method is precise and the results are explainable. It speeds up inference by reducing backtracking.

1 INTRODUCTION

The facts and rules of knowledge bases (KB) are usually expressible in first-order logic (FOL) (Russell and Norvig, 2009). Typically, KB facts are literals. Quantifier-free implications A ← A₁ ∧ … ∧ Aₖ, where A, A₁, …, Aₖ are literals, are arguably the most common form of rules in KBs. All these literals are positive in Prolog rules. In general logic programs, rule heads are positive. These rules are equivalent to disjunctions of literals in classical FOL. These disjunctions are known as non-Horn clauses, and the disjunctions corresponding to Prolog rules are called Horn clauses. Any FOL formula can be expressed by a set of non-Horn clauses whose conjunction is equisatisfiable with the formula (Nie, 1997).

Resolution methods work on sets of non-Horn clauses. These methods have become a de facto standard for inference in KBs and logic programming (Russell and Norvig, 2009). Their success is a major reason for the popularity of non-Horn clauses as a knowledge representation format. Complete inference methods for FOL, including resolution, are inherently slow. Multiple strategies and heuristics for speeding up resolution procedures have been developed. SLD resolution, which is also known as backward chaining, is a complete strategy for Horn clauses. Faster but incomplete inference procedures are also acceptable for KBs. Prolog also utilizes an incomplete inference procedure (Stickel, 1992).

The inefficiency of inference in FOL, and even in its Horn fragment, prompted attempts to replace inference with neural networks (NN) (Rocktäschel, 2017; Serafini and d'Avila Garcez, 2016; Dong et al., 2019; Marra et al., 2019; Van Krieken et al., 2019; Sakharov, 2019). Most commonly, this is done via predicate tensorization. Objects are embedded as real-valued vectors of a fixed length for use in NNs. Predicates are represented by one or more tensors of various ranks, which are learned. The truth values of ground atoms of any predicate P are approximated by applying a symbolically differentiable function σ to an algebraic expression. The range of σ is the interval [0, 1]. One corresponds to true, and zero corresponds to false. The expression is composed of the tensors representing P, embeddings of the constants that are P's arguments, tensor contraction operations, and symbolically differentiable functions.
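As a minimal illustration of the tensorization just described (not the specific parameterization used in any of the cited systems), a binary predicate can be represented by one learned matrix, and a ground atom scored by contracting that matrix with the argument embeddings and squashing the result into [0, 1]; the names embed, pred_matrix, and score_atom below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # embedding length (illustrative)

# Learned object embeddings and one rank-2 tensor per binary predicate.
# In practice these are trained; here they are random placeholders.
embed = {c: rng.normal(size=DIM) for c in ["alice", "bob", "carol"]}
pred_matrix = {"parent": rng.normal(size=(DIM, DIM))}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_atom(pred, c1, c2):
    """Approximate the truth value of the ground atom pred(c1, c2) in [0, 1]."""
    v1, v2 = embed[c1], embed[c2]
    # Tensor contraction v1ᵀ W v2, followed by a differentiable squashing function.
    return float(sigmoid(v1 @ pred_matrix[pred] @ v2))

print(score_atom("parent", "alice", "bob"))  # a value in (0, 1)
```

Higher-arity predicates would typically use higher-rank tensors contracted with one embedding per argument, which is why the approximation cost stays linear in the size of the predicate's tensors.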
One key advantage of this machine learning approach over inference is that the approximation of truth values of ground atoms is fast. Assuming that σ and the other functions in the aforementioned expression are efficiently implemented, the approximation takes linear time in the size of the tensors representing a predicate. Unfortunately, there are serious cons to this approach.

This approach is limited to ground atoms. If the result of an approximation is around 0.5, it is not possible to draw any conclusion about the truth value of an atom. Approximation results may not be reliable. Their accuracy is not known in advance, and previous results do not provide any assurance for future results. The truth values yielded by NNs are not explainable because machine learning does not provide any justification for the results. In many AI tasks such as automatic code generation, robotic planning, etc., the aim is actually the derivation itself, not the mere knowledge of the truth value. The approximation of truth values based on NNs does not contribute to these tasks.

Hybrid approaches that combine symbolic reasoning and machine learning models are considered the most promising (Marcus, 2020). To the best of the author's knowledge, there are no known hybrid methods that retain the accuracy of inference, produce derivations, and take advantage of learned predicate representations. This work introduces such a method. It is a search strategy for backward chaining. This search strategy utilizes learned predicate representations in order to make better choices at every inference step and, thus, to reduce backtracking, which usually consumes the vast portion of time during inference.
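Although the paper's concrete scoring scheme appears later in the full text, the general shape of a best-first backward-chaining loop driven by such scores can be sketched as follows. Everything here (the min aggregation over a goal list, the priority queue, the expand callback) is an illustrative assumption, not the author's algorithm.

```python
import heapq

def goal_list_score(goals, score_literal):
    """Heuristic merit of a goal list: the least plausible goal dominates (illustrative choice)."""
    return min((score_literal(g) for g in goals), default=1.0)

def best_first_backward_chaining(initial_goals, expand, score_literal, max_steps=10_000):
    """Expand goal lists in order of decreasing heuristic score.

    `expand(goals)` must return the goal lists obtained by resolving one selected
    goal against the matching facts and rules; an empty goal list means success.
    """
    frontier = [(-goal_list_score(initial_goals, score_literal), 0, list(initial_goals))]
    counter = 1  # tie-breaker so the heap never compares goal lists directly
    for _ in range(max_steps):
        if not frontier:
            return False                  # search space exhausted, no derivation found
        _, _, goals = heapq.heappop(frontier)
        if not goals:
            return True                   # all goals discharged: derivation found
        for new_goals in expand(goals):
            score = goal_list_score(new_goals, score_literal)
            heapq.heappush(frontier, (-score, counter, list(new_goals)))
            counter += 1
    return False                          # step budget exceeded
```

In the paper's setting, score_literal would come from the learned predicate representations, so branches whose remaining goals all look derivable are explored first and backtracking is reduced.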
2 RESOLUTION

Resolution is perhaps the most practical inference method. The resolution calculus works on Skolemized FOL formulas in conjunctive normal form. The conjunctions are viewed as sets of disjunctions of literals, i.e. non-Horn clauses. The resolution calculus has two rules: resolution and factoring. The resolution rule produces the disjunction

A₁θ ∨ … ∨ Aᵢ₋₁θ ∨ Aᵢ₊₁θ ∨ … ∨ Aₖθ ∨ B₁θ ∨ … ∨ Bⱼ₋₁θ ∨ Bⱼ₊₁θ ∨ … ∨ Bₘθ

from two disjunctions A₁ ∨ … ∨ Aₖ and B₁ ∨ … ∨ Bₘ, where the substitution θ is the most general unifier of Aᵢ and ¬Bⱼ. The factoring rule produces the disjunction

A₁θ ∨ … ∨ Aᵢ₋₁θ ∨ Aᵢ₊₁θ ∨ … ∨ Aₖθ

from the disjunction A₁ ∨ … ∨ Aₖ, where the substitution θ is the most general unifier of Aᵢ and Aⱼ. Factoring can be combined with the resolution rule. We assume the reader's familiarity with resolution. Please refer to (Chang and Lee, 1973) for details. In this paper, we consider KB rules that are equivalent to non-Horn clauses, and KB facts that are literals.

Unconstrained resolution may be very inefficient. What makes resolution practical is the availability of strategies that constrain branching at every resolution step by prohibiting certain applications of the resolution rule. These strategies include set-of-support resolution, unit resolution, linear resolution, etc. Some of them are complete for FOL, some are not.

Backward chaining can be applied to non-Horn clauses as well (Sakharov, 2020). In this strategy, one of any two resolved literals is a rule head or a fact. It is an incomplete strategy for non-Horn clauses, but it is relatively efficient. Complete inference procedures for FOL and its extensions are more suitable for theorem provers. Inference in KBs is supposed to be faster, even at the expense of completeness. Unlike theorem provers, the number of facts and rules involved in KB inference may be huge, which slows down the inference. The use of incomplete strategies is also justified by the fact that KBs are almost always incomplete.

Backward chaining is often explained in terms of goal lists (sets). The set of negations of the literals of one disjunction is considered as the original list of goals. Every resolvent is also viewed as a goal list that is comprised of the negations of its literals. We follow this tradition. Due to this explanation, backward chaining is interpreted as inference based on generalized Modus Ponens (Russell and Norvig, 2009). Goals have to be derived, not refuted. Not only does this interpretation make backward chaining more explainable, it is also more pertinent to our best-first strategy. This strategy aims to pick the facts or rules that lead to goal lists whose elements are more likely derivable.

Various search strategies can be used in implementations of resolution. Search strategies determine the order in which literals or disjunctions are resolved. These strategies include depth-first, breadth-first, iterative deepening, and others. Prolog relies on the depth-first strategy. It is incomplete but efficient. Unit preference (Russell and Norvig, 2009) is one well-known best-first search strategy for resolution. Facts are resolved before rules under unit preference. OTTER (McCune, 2003) conducts best-first search on the basis of rule weight. Lighter rules are preferred. Longer rules tend to have a higher weight.

OTTER's search strategy is perhaps the closest to the strategy presented in this paper. In the examples given later, we compare the two. For certainty, we assume that the rule weight equals the number of symbols in the rule, including variables, constants, functions, predicates, and negations. Resolution strategies should include some form of loop detection (Shen et al., 2001). There also exist optimization techniques that make resolution implementations more efficient. One notable example of these techniques is tabling (Swift, 2009).

Non-Horn clauses may contain Skolem functions or constants. We refer to both of them as Skolem functions for short. They are introduced in the process of eliminating existential quantifiers from FOL formulas (Chang and Lee, 1973). Skolem functions are not evaluable because they are unknown. It is fair to assume that all other functions in KB rules are evaluable. Any term with a Skolem function at the top can be unified with a variable only. Usually, the majority of rules do not contain Skolem functions.
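For readers who prefer to see the calculus operationally, below is a small, self-contained sketch of unification and of the binary resolution rule over clauses represented as lists of signed atoms. It mirrors the textbook rule quoted above (factoring and clause standardization apart are omitted) and is not the inference engine used in the paper.

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()  # variables start with an uppercase letter

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):  # compound term: (functor, arg1, ...)
        return any(occurs(v, a, s) for a in t[1:])
    return False

def unify(x, y, s=None):
    """Return a most general unifier extending s, or None if none exists."""
    s = dict(s or {})
    stack = [(x, y)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, s), walk(b, s)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, s):
                return None
            s[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return s

def subst(t, s):
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(a, s) for a in t[1:])
    return t

def resolve(clause1, clause2):
    """Yield resolvents of two clauses; a literal is (sign, atom) with atom = ('P', arg1, ...).

    Clauses are assumed to be standardized apart (no shared variable names).
    """
    for i, (s1, a1) in enumerate(clause1):
        for j, (s2, a2) in enumerate(clause2):
            if s1 != s2:  # complementary signs
                theta = unify(a1, a2)
                if theta is not None:
                    rest = clause1[:i] + clause1[i + 1:] + clause2[:j] + clause2[j + 1:]
                    yield [(sg, subst(at, theta)) for sg, at in rest]

# Example: resolve P(X) ∨ Q(X) with ¬Q(john) ∨ S(Y)
c1 = [(True, ('P', 'X')), (True, ('Q', 'X'))]
c2 = [(False, ('Q', 'john')), (True, ('S', 'Y'))]
print(list(resolve(c1, c2)))  # [[(True, ('P', 'john')), (True, ('S', 'Y'))]]
```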
Recommended publications
  • Examples of Forward Chaining and Backward Chaining
    Examples of Forward Chaining and Backward Chaining. A collection of short notes and examples contrasting forward chaining and backward chaining, both as inference strategies and as behavioural teaching techniques (e.g., in ABA). Forward chaining infers new facts from the facts already available, for instance with Pfc rules and facts or with forward chaining in propositional and first-order logic; backward chaining starts from a set of hypotheses or goals and works back toward supporting evidence, and its backtracking process is illustrated with the Prolog programming language. In teaching contexts, forward and backward chaining are ways to teach children new skills by training the component steps of a task in sequence.
  • Incremental Update of Datalog Materialisation: the Backward/Forward Algorithm
    Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. Incremental Update of Datalog Materialisation: The Backward/Forward Algorithm. Boris Motik, Yavor Nenov, Robert Piro and Ian Horrocks, Department of Computer Science, Oxford University, Oxford, United Kingdom. fi[email protected]

    Abstract: Datalog-based systems often materialise all consequences of a datalog program and the data, allowing users' queries to be evaluated directly in the materialisation. This process, however, can be computationally intensive, so most systems update the materialisation incrementally when input data changes. We argue that existing solutions, such as the well-known Delete/Rederive (DRed) algorithm, can be inefficient in cases when facts have many alternate derivations. As a possible remedy, we propose a novel Backward/Forward (B/F) algorithm that tries to reduce the amount of work by a combination of backward and forward chaining. In our evaluation, the B/F algorithm was several orders of magnitude more efficient than the DRed algorithm on some inputs.

    Fact insertion can be efficiently handled using the standard seminaïve algorithm (Abiteboul, Hull, and Vianu 1995): datalog (without negation-as-failure) is monotonic, so one can just 'continue' materialisation from E⁺; hence, in this paper we usually assume that E⁺ = ∅. In contrast, fact deletion is much more involved, since one must identify and retract all facts in I not derivable from E ∖ E⁻. Gupta and Mumick (1995) presents an extensive overview of the existing approaches to incremental update, which can be classified into two groups. The approaches in the first group keep track of auxiliary information during materialisation to efficiently delete facts.
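As a rough, propositional illustration of the backward/forward idea summarized above, the sketch below handles deletion of an explicit fact by re-checking, via backward search, whether affected facts are still derivable and retracting those that are not. It is a simplified reading under assumed data structures, not the authors' B/F algorithm, and it skips the optimizations that make B/F efficient.

```python
def derivable(fact, edb, rules, seen=None):
    """Backward check: is `fact` derivable from the explicit facts `edb` and `rules`?

    rules: list of (head, [body facts]) pairs (propositional, for simplicity).
    """
    if seen is None:
        seen = set()
    if fact in edb:
        return True
    if fact in seen:          # avoid cyclic re-derivation through itself
        return False
    seen = seen | {fact}
    return any(head == fact and all(derivable(b, edb, rules, seen) for b in body)
               for head, body in rules)

def delete_fact(fact, edb, materialised, rules):
    """Remove `fact` from the EDB and retract derived facts that lost all derivations."""
    edb = edb - {fact}
    # Re-check every materialised fact; a real implementation would only
    # re-check facts reachable from the deleted one.
    materialised = {f for f in materialised if derivable(f, edb, rules)}
    return edb, materialised

# Tiny example: a <- b and b <- c, with explicit facts {b, c}.
# Deleting c keeps b (still explicit) and therefore keeps a.
rules = [("a", ["b"]), ("b", ["c"])]
print(delete_fact("c", {"b", "c"}, {"a", "b", "c"}, rules))  # ({'b'}, {'a', 'b'})
```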
  • StuA: An Intelligent Student Assistant - Pooja Lodhi, Omji Mishra, Shikha Jain, Vasvi Bajaj
    Special Issue on Big Data and Open Education. StuA: An Intelligent Student Assistant. Pooja Lodhi, Omji Mishra, Shikha Jain, Vasvi Bajaj, Department of Computer Science, Jaypee Institute of Information Technology, Noida (India). Received 7 September 2017 | Accepted 23 January 2018 | Published 16 February 2018. Keywords: Artificial Intelligence, Backward Chaining, CLIPS, Expert System, Rule-based System, Virtual Assistant. DOI: 10.9781/ijimai.2018.02.008.

    Abstract: With advanced innovation in digital technology, demand for virtual assistants is arising which can assist a person and, at the same time, minimize the need for interaction with a human. Acknowledging the requirement, we propose an interactive and intelligent student assistant, StuA, which can help new-comers in a college who are hesitant in interacting with the seniors as they fear being ragged. StuA is capable of answering all types of queries of a new-comer related to academics, examinations, library, hostel and extra-curricular activities. The model is designed using CLIPS, which allows inferring using forward chaining. Nevertheless, a generalized algorithm for backward chaining for CLIPS is also implemented. Validation of the proposed model is presented in five steps which show that the model is complete and consistent with 99.16% accuracy of the knowledge model. Moreover, the backward chaining algorithm is found to be 100% accurate.

    I. Introduction. Artificial Intelligence is the science of making computers perceive their environment in a similar way as a human does. Some of the stochastic assistants are chatbots, question answering systems and assistants like Apple's Siri, Google's Google Assistant, Amazon's Alexa, Microsoft's Cortana and Facebook's M.
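A common way to obtain backward chaining on top of a forward-chaining engine such as CLIPS is to represent open goals as facts, so that forward rules post sub-goals and conclude heads once their conditions hold. The Python sketch below illustrates that generic pattern only; it is not the StuA implementation, and the rule and goal encodings are assumptions.

```python
def forward_with_goals(facts, rules):
    """Emulate backward chaining on a forward-chaining engine by treating goals as facts.

    rules: list of (head, body) pairs over string facts. A fact ("goal", x) makes the
    engine post sub-goals for every rule that could conclude x, and conclude x once
    the whole body is present in working memory.
    """
    wm = set(facts)  # working memory
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if ("goal", head) in wm:
                # Goal-expansion rule: post sub-goals for missing body facts.
                for b in body:
                    if b not in wm and ("goal", b) not in wm:
                        wm.add(("goal", b))
                        changed = True
                # Conclusion rule: derive the head once its body is satisfied.
                if head not in wm and all(b in wm for b in body):
                    wm.add(head)
                    changed = True
    return wm

rules = [("can_enroll", ["registered", "fees_paid"])]
wm = forward_with_goals({"registered", "fees_paid", ("goal", "can_enroll")}, rules)
print("can_enroll" in wm)  # True: the goal was answered by forward rules
```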
  • The Comparison Between Forward and Backward Chaining
    International Journal of Machine Learning and Computing, Vol. 5, No. 2, April 2015. The Comparison between Forward and Backward Chaining. Ajlan Al-Ajlan.

    Abstract: Nowadays, more and more students all over the world need expert systems, especially in academic sectors. They need advice in order to be successful in their studies, and this advice must be provided by the academic management. There are two reasoning strategies in expert systems, which have become the major practical application of artificial intelligence research: forward chaining and backward chaining. The first one starts from the available facts and attempts to draw conclusions about the goal. The second strategy starts from expectations of what the goal is, and then attempts to find evidence to support these hypotheses. The aim of this paper is to make a comparative study to identify which reasoning strategy (forward chaining or backward chaining) is more applicable when making evaluations in expert system management, especially in the academic field. The comparison will be based on the adequacy of the reasoning strategy, which means to what extent the given strategy can be used to represent all knowledge required for the management expert system. The basic description of an educational management expert system is the use of expert system techniques in management to be used in the academic sector. What does academic management mean? In any academic sector there must be a co-ordinator who co-ordinates specific programmes and should provide suitable advice for every student who needs it. The co-ordinator should consider various factors before he provides the advice, and neglecting any one of them might affect the advice negatively. This system will facilitate the task of the co-ordinator.
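To make the contrast concrete, the sketch below runs the two strategies over the same toy rule base: forward chaining computes everything that follows from the facts, while backward chaining checks only what is needed for one goal. The rule representation and the advising example are illustrative assumptions, not taken from the paper.

```python
RULES = [("advise", ["profile_known", "grades_known"]),
         ("profile_known", ["student_registered"]),
         ("grades_known", ["transcript_available"])]

def forward_chain(facts, rules):
    """Data-driven: start from the available facts and derive everything that follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    """Goal-driven: start from the goal and search for supporting evidence.

    No loop detection is included, so the rule set is assumed to be acyclic.
    """
    if goal in facts:
        return True
    return any(all(backward_chain(b, facts, rules) for b in body)
               for head, body in rules if head == goal)

facts = {"student_registered", "transcript_available"}
print("advise" in forward_chain(facts, RULES))  # True, after deriving all consequences
print(backward_chain("advise", facts, RULES))   # True, checking only what the goal needs
```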
  • Introduction to Expert Systems, the MYCIN System
    Intelligent Systems: Reasoning and Recognition. James L. Crowley, ENSIMAG 2 and MoSIG M1, Winter Semester 2012, Lecture 2, 3 February 2012.

    Outline: Introduction to Expert Systems (Application Domains; Programming Techniques for Expert Systems); The MYCIN Expert System (MYCIN: An Antibiotics Therapy Advisor; Facts; Parameters; The MYCIN Confidence Factor; Rules; Evidential Reasoning and Combining Hypotheses; Control).

    Introduction to Expert Systems: Application Domains. Expert systems are a class of software that is useful for domains that are subjective, poorly formalized, and require manipulating large numbers of poorly related facts. Examples include diagnosis, counseling, debugging, game playing, design in complex spaces and problem solving. Expert systems provide an alternative to algorithmic programming. Expert systems are typically constructed by hand coding symbolic expressions of the expertise
  • Logical Reasoning Systems
    CS 1571 Introduction to AI, Lecture 12: Logical reasoning systems. Milos Hauskrecht, [email protected], 5329 Sennott Square.

    Logical inference in FOL. The logical inference problem: given a knowledge base KB (a set of sentences) and a sentence α, does the KB semantically entail α, i.e. KB |= α? In other words: in all interpretations in which the sentences in the KB are true, is α also true? The logical inference problem in first-order logic is undecidable: there is no procedure that can decide the entailment for all possible input sentences in a finite number of steps.

    Resolution inference rule. Recall: the resolution inference rule is sound and complete (refutation-complete) for propositional logic and CNF: from A ∨ B and ¬A ∨ C, infer B ∨ C. The generalized resolution rule is sound and complete (refutation-complete) for first-order logic and CNF (without equalities): if σ = UNIFY(φᵢ, ¬ψⱼ) ≠ fail, then from φ₁ ∨ φ₂ ∨ … ∨ φₖ and ψ₁ ∨ ψ₂ ∨ … ∨ ψₙ infer SUBST(σ, φ₁ ∨ … ∨ φᵢ₋₁ ∨ φᵢ₊₁ ∨ … ∨ φₖ ∨ ψ₁ ∨ … ∨ ψⱼ₋₁ ∨ ψⱼ₊₁ ∨ … ∨ ψₙ). Example: from P(x) ∨ Q(x) and ¬Q(John) ∨ S(y), infer P(John) ∨ S(y). The rule can also be written in the implicative form (book).

    Inference with the resolution rule. Proof by refutation: prove that KB, ¬α is unsatisfiable (resolution is refutation-complete). Main procedure (steps): 1. Convert KB, ¬α to CNF with ground terms and universal variables only. 2. Apply the resolution rule repeatedly while keeping track of the substitutions and their consistency.
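To complement the proof-by-refutation procedure outlined above, here is a minimal propositional resolution-by-refutation loop: resolvents are added until the empty clause appears (KB together with ¬α is unsatisfiable) or nothing new can be derived. It is a didactic sketch (clauses as sets of string literals, no first-order unification), not the course's code.

```python
from itertools import combinations

def resolve_pair(c1, c2):
    """Propositional resolvents of two clauses (clauses are frozensets of signed atoms)."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("¬") else "¬" + lit
        if neg in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return out

def refutes(clauses):
    """Return True iff the clause set is unsatisfiable (derives the empty clause)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve_pair(c1, c2):
                if not r:
                    return True          # empty clause derived
                new.add(r)
        if new <= clauses:
            return False                 # saturation without the empty clause
        clauses |= new

# KB: {P ∨ Q, ¬P ∨ R, ¬Q ∨ R}; query α = R, so add ¬R and test unsatisfiability.
kb = [frozenset({"P", "Q"}), frozenset({"¬P", "R"}), frozenset({"¬Q", "R"})]
print(refutes(kb + [frozenset({"¬R"})]))  # True: KB entails R
```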
  • CSM10 Intelligent Information Systems
    CSM10 Intelligent Information Systems, Week 4: Reasoning in the real world; case study: MYCIN; coursework project and report. CSM10 Spring Semester 2007, Professor Ian Wells.

    The journey so far: introduction to the module; what is intelligence?; cognitive processes (how we perceive, remember, solve puzzles); how a computer can solve problems; representing knowledge using semantic networks and production rules; reasoning in the real world; case study (MYCIN), uncertainty, chess, problems. Topics covered: intelligence; perception and cognitive processes; knowledge representation; real-world reasoning.

    Inside an expert system: MYCIN. A case study of one of the first major expert systems and an analysis of its performance when compared with humans at solving complex problems.

    Once upon a time: it is a dark night in 1972; you are on holiday in the USA; you have to be admitted to hospital; the doctor prescribes antibiotics; what should you do? Be very worried!

    Treatment of infection: _____ % acceptable, _____ % questionable, _____ % clearly irrational (Roberts & Visconti: Am J Hosp Pharm 1972).

    History of MYCIN: diagnosis of bacteraemias; suggestions for suitable therapy; PhD thesis of Edward Shortliffe (Stanford 1974); a project lasting 10 years plus 5 more PhDs; numerous offspring, e.g. EMYCIN, PUFF, CLOT; only Oncocin was ever used routinely; the foundation for much of today's KBS technology.

    MYCIN: if ... then production rules; backward chaining inference; management of uncertainty
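Since the slides highlight MYCIN's management of uncertainty, a small sketch of the certainty-factor combination rule usually attributed to MYCIN may help; the formula and names below follow common textbook accounts rather than these slides.

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors for the same hypothesis (textbook MYCIN-style rule)."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules support the same diagnosis with CF 0.6 and 0.4:
print(combine_cf(0.6, 0.4))   # 0.76: evidence accumulates but never exceeds 1
print(combine_cf(0.6, -0.4))  # 0.333...: conflicting evidence pulls the CF down
```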
  • Knowledge Engineering Environment KEE
    Knowledge Engineering Environment (KEE). KEE from Intellicorp is the most widely used of the high-end expert system development shells. It is available on Symbolics, Xerox and Texas Instruments Artificial Intelligence workstations, Apollo, DEC and Sun workstations, DEC VAX minicomputers and 80386-based IBM PC-compatible computers. It provides a powerful and sophisticated development environment, offering hierarchical object representation of data and rules, partitioning of the rulebase, an inference engine providing agenda-driven chaining in addition to backward and forward chaining, and a Truth Maintenance System for what-if and assumption-based reasoning. It offers a suite of development tools, such as specialized structured editors and several methods to build the data structures, including menus and an English-like construction language. A sophisticated graphics toolkit allows the creation of user interfaces which can be directly driven by the data values in the knowledge base. KEE has facilities to call the C language, direct two-way communication with SQL-based database systems (through KEEConnection) and, on IBM PCs and compatibles, the ability to call on PC applications, data files and resources. A distributed processing version of KEE, called Run-Time KEE, allows a host VAX computer to support KEE applications whose interfaces reside on PC terminals. Intellicorp also offers SIMKIT, an object-oriented simulation package built on KEE which integrates simulation and expert systems. James Martin, 1988.

    Functionality matrix (row headings): IBM Mainframe Environment; DEC Environment; PC Environment; Micro-to-Mainframe Link; MF & PC Implementation; LAN Support; High-End Workstations; Other AI Workstations; LISP Machines; LISP Co-Processor; Hooks to External Sequential Files; Hooks to Mainframe DBMS: R/W to File; Hooks to Non-IBM Mini DBMS: R/W to File; Hooks to PC DBMS/Spreadsheets.
  • Knowledge Based System Development Tools - John K.C. Kingston
    ARTIFICIAL INTELLIGENCE – Knowledge Based System Development Tools. John K.C. Kingston, AIAI, University of Edinburgh, Scotland. Keywords: Knowledge based systems, programming tools, Artificial Intelligence.

    Contents: 1. Introduction. 2. KBS Tools: Functionality (2.1 Production rules; forward and backward chaining. 2.2 Object oriented programming. 2.3 Hypothetical reasoning). 3. KBS Tools: Classification (3.1 Classifying KBS tools: Shells. 3.2 Classifying KBS Tools: Procedural Languages. 3.3 Classifying KBS Tools: Toolkits. 3.4 Classifying KBS Tools: Specialised tools. 3.5 Classifying KBS tools: ART-like and KEE-like). 4. Selecting a KBS tool (4.1 Features of KBS tools). 5. Selecting KBS Tools (5.1 Features of the Problem. 5.2 Phase of Development. 5.3 Organisational policies and capabilities). 6. Conclusion. Appendix. Glossary. Bibliography. Biographical Sketch.

    Summary: Knowledge based system programming tools provide a range of facilities for representing knowledge and reasoning with knowledge. The purpose of these facilities is to allow knowledge based systems to be constructed quickly. This article categorises knowledge based system development tools, supplies examples of developed KBS applications, and discusses the features to consider when selecting a tool for a project.

    1. Introduction. Artificial Intelligence tools are almost exclusively software tools: programming languages or program development environments. For Artificial Intelligence is a software-based discipline; despite the popular image of AI research focusing on self-aware robots or autonomous vehicles, hardware problems for artificial intelligence rarely go beyond the integration of motors, gears and video cameras.
  • Resource-Constrained Reasoning Using a Reasoner Composition Approach
    Resource-Constrained Reasoning Using a Reasoner Composition Approach. Wei Tai (Knowledge and Data Engineering Group, School of Computer Science and Statistics, Trinity College Dublin, Dublin 2, Ireland), John Keeney (Network Management Lab, LM Ericsson, Athlone, Ireland), Declan O'Sullivan (Trinity College Dublin).

    Abstract: The marriage between the semantic web and sensor-rich systems can semantically link the data related to the physical world with the existing machine-readable domain knowledge encoded on the web. This has allowed a better understanding of the inherently heterogeneous sensor data by allowing data access and processing based on semantics. Research in different domains has started seeking a better utilization of data in this manner. Such research often assumes that the semantic data is processed in a centralized server and that reliable network connections are always available, which are not realistic in some critical situations. For such critical situations, a more robust semantic system needs to have some local autonomy, and hence on-device semantic data processing is required. As a key enabling part of this strategy, semantic reasoning needs to be developed for resource-constrained environments. This paper shows how reasoner composition (i.e. automatically adjusting a reasoning approach to preserve only the amount of reasoning needed for the ontology to be reasoned over) can achieve resource-efficient semantic reasoning. Two novel reasoner composition algorithms have been designed and implemented. Evaluation indicates that the reasoner composition algorithms greatly reduce the resources required for OWL reasoning, facilitating greater semantic data processing on sensor devices.
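As a schematic illustration of reasoner composition (the construct names, rule names, and selection logic below are assumptions for exposition, not the paper's algorithms), the idea is to inspect which constructs an ontology actually uses and load only the inference rules those constructs require, so a constrained device never pays for reasoning it does not need.

```python
# Map ontology constructs to the inference rules they require (illustrative subset).
RULE_LIBRARY = {
    "subClassOf":         ["class-subsumption"],
    "transitiveProperty": ["property-transitivity"],
    "inverseOf":          ["property-inverse"],
    "domain":             ["domain-typing"],
}

def compose_reasoner(used_constructs):
    """Select only the rules needed for the constructs that the ontology uses."""
    selected = []
    for construct in used_constructs:
        selected.extend(RULE_LIBRARY.get(construct, []))
    return selected

# An ontology that only uses subclass axioms and property domains:
print(compose_reasoner({"subClassOf", "domain"}))
# e.g. ['class-subsumption', 'domain-typing']: no transitivity machinery is loaded
```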
  • Knowledge-Based Systems
    Chapter 8: Preserving and Applying Human Expertise: Knowledge-Based Systems. Becerra-Fernandez, et al., Knowledge Management 1/e, © 2004 Prentice Hall; additional material © 2007 Dekai Wu.

    Chapter objectives: Recall that KB systems are adept at preserving captured and/or discovered knowledge for later sharing and/or application. Introduce the student to the internal operation of knowledge-based systems, including knowledge representation and automated reasoning. Introduce the art of knowledge engineering: how to develop knowledge-based systems, the tools and the techniques.

    Today it's become much easier to learn about knowledge-based systems, because so many AI knowledge representation techniques from the Lisp Machine days have now been completely embraced and extended by mainstream CS & IT: object-oriented representations (frames, classes, UML); class inheritance hierarchies; logical databases (relational and object-relational); integrated development environments (IDE). What you may still find less familiar: inference engines and shells; knowledge engineering; forward vs backward reasoning; default-based reasoning using frames.

    Section 8.2 objectives: Introduces a knowledge-based system from the points of view of those that work with them: the user and the knowledge engineer. Introduce the different components of a KBS: the inference engine, the knowledge base, the user interface, the fact base, the development environment.

    Knowledge-Based System: User's View. From the end-user's perspective, a KB system has three components: an intelligent program, a user interface, and a problem-specific database ("workspace").
  • Developing Expert Systems
    Publication No. FHWA-TS-88422, U.S. Department of Transportation, December 1988. Developing Expert Systems. Research, Development, and Technology, Turner-Fairbank Highway Research Center, 6300 Georgetown Pike, McLean, Virginia 22101-2296.

    FOREWORD: This technology sharing report provides an overview of expert systems and problems that are amenable to solution by expert systems, and guidelines for building expert systems. The guidelines were developed by the Federal Highway Administration to address the questions faced by highway administrators such as, "Does this technology have application in highway engineering and operations?", "What types of problems can be addressed?", "How are these computer programs different from other computer programs?" Properly constructed expert systems can be very effective training aids as well as supporting tools for virtually every aspect of highway engineering and management. Copies of the report are available from the National Technical Information Service, 5285 Port Royal Road, Springfield, Virginia 22161, (703) 487-4690. Stanley R. Byington, Director, Office of Implementation.

    NOTICE: This document is disseminated under the sponsorship of the Department of Transportation in the interest of information exchange. The United States Government assumes no liability for its contents or use thereof. The contents of this report reflect the views of the contractor who is responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official policy of the Department of Transportation. This report does not constitute a standard, specification, or regulation. The United States Government does not endorse products or manufacturers. Trade or manufacturers' names appear herein only because they are considered essential to the object of this document.