10. Logic Programming with Prolog

Copyright (C) R.A. van Engelen, FSU Department of Computer Science, 2000-2003

Overview
- Logic programming
- Prolog
Note: study Section 11.3 of the textbook, excluding 11.3.2.

Logic Programming
Logic programming is a form of declarative programming. A program is a collection of axioms, where each axiom is a Horn clause of the form:

  H :- B1, B2, ..., Bn.

where H is the head term and the Bi are the body terms. The meaning is: H is true if all Bi are true. A user of the program states a goal (a theorem) to be proven. The logic programming system attempts to find axioms using inference steps that imply the goal (theorem) is true.

Resolution
To deduce a goal (theorem), the logic programming system searches axioms and combines sub-goals. For example, given the axioms:

  C :- A, B.
  D :- C.

to deduce goal D given that A and B are true:
- Forward chaining deduces that C is true (C :- A, B) and then that D is true (D :- C).
- Backward chaining finds that D can be proven if sub-goal C is true (D :- C); the system then deduces that sub-goal C is true (C :- A, B). Since the system could prove C, it has proven D.

Prolog
- Uses backward chaining, which is more efficient than forward chaining for larger collections of axioms.
- Interactive (hybrid compiled/interpreted).
- Applications: expert systems, artificial intelligence, natural language understanding, logical puzzles and games.
- Popular system: SWI-Prolog. Login: program.cs.fsu.edu. Type pl to start SWI-Prolog; type halt. to halt Prolog (note that a period is used as a command terminator).

Prolog Terms
Terms are symbolic expressions that form the building blocks of Prolog: a Prolog program consists of terms, and the data structures processed by a Prolog program are terms. A term is either
- a variable: a name beginning with an upper case letter
- a constant: a number or string
- an atom: a symbol or a name beginning with a lower case letter
- a structure of the form functor(arg1, arg2, ..., argn), where functor is an atom and the argi are terms
Examples: X, Y, ABC, and Alice are variables; 7, 3.14, and "hello" are constants; foo, bAR, and + are atoms; bin_tree(foo, bin_tree(bar, glarch)) and +(3,4) are structures.

Prolog Clauses
A program consists of a database of Horn clauses. Each clause consists of a head predicate and body predicates:

  H :- B1, B2, ..., Bn.

A clause is either a rule, e.g.

  snowy(X) :- rainy(X), cold(X).

meaning "If X is rainy and X is cold then this implies that X is snowy", or a fact, e.g.

  rainy(rochester).

meaning "Rochester is rainy." This fact is identical to the rule with true as the body predicate:

  rainy(rochester) :- true.

A predicate is a term that must be an atom or a structure, for example rainy(rochester) and member(X,Y).

Queries and Goals
Queries are used to "execute" goals. A query is interactively entered by a user after a program is loaded and stored in the database. A query has the form

  ?- G1, G2, ..., Gn.

where the Gi are goals. A goal is a predicate to be proven true by the programming system.
Example program with two facts:

  rainy(seattle).
  rainy(rochester).

Query with one goal to find which city C is rainy (if any), and the responses by the interpreter:

  ?- rainy(C).
  C = seattle       (type a semicolon ; to get the next solution)
  C = rochester     (type another semicolon ;)
  no                (no more solutions)

Example
Program with three facts and one rule:

  rainy(seattle).
  rainy(rochester).
  cold(rochester).
  snowy(X) :- rainy(X), cold(X).

Queries and responses:

  ?- snowy(rochester).
  yes
  ?- snowy(seattle).
  no
  ?- snowy(paris).
  no
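As a quick way to try this out (an addition to these notes, not part of the original slides), the same program can be saved in a file, here hypothetically named weather.pl, and loaded into SWI-Prolog with the built-in consult/1 or the list shorthand. Recent SWI-Prolog versions answer true/false instead of the yes/no shown in these slides:

  % weather.pl (hypothetical file name)
  rainy(seattle).
  rainy(rochester).
  cold(rochester).
  snowy(X) :- rainy(X), cold(X).

  ?- consult(weather).    % or the shorthand: ?- [weather].
  true.

  ?- snowy(City).
  City = rochester.

  ?- snowy(seattle).
  false.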
Example (cont'd)
Program:

  rainy(seattle).
  rainy(rochester).
  cold(rochester).
  snowy(X) :- rainy(X), cold(X).

Query and response:

  ?- snowy(C).
  C = rochester

because rainy(rochester) and cold(rochester) are sub-goals that are both true facts in the database. snowy(X) with X = seattle is a goal that fails, because cold(X) with X = seattle fails, triggering backtracking.

Backtracking
For every successful match of a (sub-)goal with a head predicate of a clause, the system keeps this execution point in memory, together with the current variable bindings, to enable backtracking. An unsuccessful match later forces backtracking, in which alternative clauses are searched that match (sub-)goals. Backtracking unwinds variable bindings to enable establishing new bindings.

Example: Family Relationships
Facts:

  male(albert).
  male(edward).
  female(alice).
  female(victoria).

Rules:

  parents(edward, victoria, albert).
  parents(alice, victoria, albert).
  sister(X,Y) :- female(X), parents(X,M,F), parents(Y,M,F).

Query:

  ?- sister(alice, X1).

The system applies backward chaining to find the answer:
1. sister(alice, X1) matches the sister rule: X = alice, Y = X1
2. New goals: female(alice), parents(alice, M, F), parents(X1, M, F)
3. female(alice) matches the 3rd fact
4. parents(alice, M, F) matches the 2nd rule: M = victoria, F = albert
5. parents(X1, victoria, albert) matches the 1st rule: X1 = edward
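A small added example (not in the original slides): asking for every sister pair in this database makes backtracking visible, because Prolog retries the parents goals with each matching clause as the user types ; for more solutions. Note that alice is also reported as her own sister, since nothing in the rule requires X and Y to differ; adding a goal such as X \= Y (standard Prolog "not unifiable") to the rule body would exclude that solution.

  ?- sister(X, Y).
  X = alice, Y = edward ;
  X = alice, Y = alice ;
  no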
Example: Murder Mystery

  % the murderer had brown hair:
  murderer(X) :- hair(X, brown).

  % mr_holman had a ring:
  attire(mr_holman, ring).

  % mr_pope had a watch:
  attire(mr_pope, watch).

  % If sir_raymond had tattered cuffs then mr_woodley had the pincenez:
  attire(mr_woodley, pincenez) :- attire(sir_raymond, tattered_cuffs).

  % and vice versa:
  attire(sir_raymond, pincenez) :- attire(mr_woodley, tattered_cuffs).

  % A person has tattered cuffs if he is in room 16:
  attire(X, tattered_cuffs) :- room(X, 16).

  % A person has black hair if he is in room 14, etc:
  hair(X, black) :- room(X, 14).
  hair(X, grey) :- room(X, 12).
  hair(X, brown) :- attire(X, pincenez).
  hair(X, red) :- attire(X, tattered_cuffs).

  % mr_holman was in room 12, etc:
  room(mr_holman, 12).
  room(sir_raymond, 10).
  room(mr_woodley, 16).
  room(X, 14) :- attire(X, watch).

Example: Murder Mystery (cont'd)
Question: who is the murderer?

  ?- murderer(X).

Trace (indentation showing nesting depth):

  murderer(X)
    hair(X, brown)
      attire(X, pincenez)
        X = mr_woodley
        attire(sir_raymond, tattered_cuffs)
          room(sir_raymond, 16)
          FAIL (no facts or rules)
        FAIL (no alternative rules)
      REDO (found one alternative rule)
      attire(X, pincenez)
        X = sir_raymond
        attire(mr_woodley, tattered_cuffs)
          room(mr_woodley, 16)
          SUCCESS
        SUCCESS: X = sir_raymond
      SUCCESS: X = sir_raymond
    SUCCESS: X = sir_raymond
  SUCCESS: X = sir_raymond

Unification and Variables
In the previous examples we saw the use of variables, e.g. C and X. A variable is instantiated to a term as a result of unification. Unification takes place when goals are matched to the head predicates of rules and facts:

  Goal in query:     rainy(C)
  Fact in database:  rainy(seattle)

Unification is the result of the goal-fact match: C = seattle.
Unification is recursive:
- An uninstantiated variable unifies with anything, even with other variables, which makes them identical (aliases).
- An atom unifies with an identical atom.
- A constant unifies with an identical constant.
- A structure unifies with another structure if the functor and number of arguments are the same and the corresponding arguments unify recursively.
Once a variable is instantiated to a non-variable term, it cannot be changed and cannot be instantiated with a term that has a different structure.

Unification Examples
The built-in predicate =(A,B) succeeds if and only if A and B can be unified. The goal =(A,B) may be written as A = B.

  ?- a = a.
  yes
  ?- a = 5.
  no
  ?- 5 = 5.0.
  no
  ?- a = X.
  X = a
  ?- foo(a,b) = foo(a,b).
  yes
  ?- foo(a,b) = foo(X,b).
  X = a
  ?- foo(X,b) = Y.
  Y = foo(X,b)
  ?- foo(Z,Z) = foo(a,b).
  no

Lists
A list is of the form:

  [elt1, elt2, ..., eltn]

where the elti are terms. The special list form

  [elt1, elt2, ..., eltn | tail]

denotes a list whose tail list is tail.

  ?- [a,b,c] = [a|T].
  T = [b,c]
  ?- [a,b,c] = [a,b|T].
  T = [c]
  ?- [a,b,c] = [a,b,c|T].
  T = []

List Membership
List membership is tested with the member predicate, defined by

  member(X, [X|T]).
  member(X, [H|T]) :- member(X, T).

  ?- member(b, [a,b,c]).

Execution:
- member(b, [a,b,c]) does not match the head predicate member(X1, [X1|T1]) of the first clause.
- member(b, [a,b,c]) matches the head predicate member(X1, [H1|T1]) of the second clause, with X1 = b, H1 = a, and T1 = [b,c].
- Sub-goal to prove: member(X1, T1) with X1 = b and T1 = [b,c].
- member(b, [b,c]) matches the head predicate member(X2, [X2|T2]) of the first clause, with X2 = b and T2 = [c].
- The sub-goal is proven, so member(b, [a,b,c]) is proven (deduced).
Note: variables are "local" to a clause (just like the formal arguments of a function). Local variables such as X1 and X2 are used to indicate a match of a (sub-)goal and a head predicate of a clause.

Predicates are Relations
Predicates are not functions with distinct inputs and outputs. Predicates are more general and define relationships between objects (terms). For example, member(b, [a,b,c]) relates the term b to the list that contains b.

  ?- member(X, [a,b,c]).
  X = a ;    % type ';' to try to find more solutions
  X = b ;    % ... try to find more solutions
  X = c ;    % ... try to find more solutions
  no
  ?- member(b, [a,Y,c]).
  Y = b
  ?- member(b, L).
  L = [b|_G255]

Therefore, L is a list with b as head and _G255 as tail, where _G255 is a new variable.
List appending predicate:

  append([], A, A).
  append([H|T], A, [H|L]) :- append(T, A, L).

  ?- append([a,b,c], [d,e], X).
  X = [a,b,c,d,e]
  ?- append(Y, [d,e], [a,b,c,d,e]).
  Y = [a,b,c]
  ?- append([a,b,c], Z, [a,b,c,d,e]).
  Z = [d,e]
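A further illustration of predicates as relations (an addition to these notes, not from the original slides): because append defines a relationship rather than a function, leaving its first two arguments unbound asks Prolog to enumerate, via backtracking, every way of splitting a list in two. The first clause produces the empty prefix, and each recursive step moves one element from the suffix to the prefix:

  ?- append(X, Y, [a,b,c]).
  X = [], Y = [a,b,c] ;
  X = [a], Y = [b,c] ;
  X = [a,b], Y = [c] ;
  X = [a,b,c], Y = [] ;
  no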