Knowledge Acquisition, Representation, and Reasoning

CHAPTER 18

Learning Objectives

◆ Understand the nature of knowledge
◆ Understand the knowledge-engineering process
◆ Learn different approaches to knowledge acquisition
◆ Explain the pros and cons of different knowledge acquisition approaches
◆ Illustrate methods for knowledge verification and validation
◆ Understand inference strategies in rule-based intelligent systems
◆ Explain uncertainties and uncertainty processing in expert systems (ES)

In this book, we have described the concepts and structure of knowledge-based ES. For such systems to be useful, it is critical to have powerful knowledge in the knowledge base and a good inference engine that can derive valid conclusions from that knowledge. In this chapter, we describe the process for acquiring knowledge from human experts, validating and representing the knowledge, creating and operating inference mechanisms, and dealing with uncertainty. The chapter is divided into the following sections:

W18.1 Opening Vignette: Development of a Real-Time Knowledge-Based System at Eli Lilly
W18.2 Concepts of Knowledge Engineering
W18.3 The Scope and Types of Knowledge
W18.4 Methods of Acquiring Knowledge from Experts
W18.5 Acquiring Knowledge from Multiple Experts
W18.6 Automated Knowledge Acquisition from Data and Documents
W18.7 Knowledge Verification and Validation
W18.8 Representation of Knowledge
W18.9 Reasoning in Intelligent Systems
W18.10 Explanation and Metaknowledge
W18.11 Inferencing with Uncertainty
W18.12 Expert Systems Development
W18.13 Knowledge Acquisition and the Internet

W18.1 OPENING VIGNETTE: DEVELOPMENT OF A REAL-TIME KNOWLEDGE-BASED SYSTEM AT ELI LILLY

PROBLEM

Eli Lilly (lilly.com) is a large U.S.-based, global pharmaceutical manufacturing company that has 42,600 employees worldwide and markets its products in about 160 countries. Production of medicines requires a special process called fermentation. A typical fermentation process involves a series of stirred vessels in which a culture of microorganisms is grown and transferred to progressively larger tanks. In order to have quality products, the fermentation process must be monitored carefully and controlled consistently. Unfortunately, some key quality parameters are difficult to control using traditional statistical process control. For example, it is difficult to quantify the state of a fermentation seed, but without this information, predictions of the behavior of a production cycle may be imprecise. Furthermore, Lilly operates many plants that use the same process but are operated by different employees. The quality of the product may vary substantially when different operators use their own experience to control the process. The turnover of experienced operators also causes a problem for some plants. As a result, both productivity and quality problems emerge.

SOLUTION

Eli Lilly is using ES to overcome these problems. Its motivation for adopting a new system was to make the expertise of key technical employees available to the process operators 24 hours a day, anywhere. It was felt that an ES would allow relevant portions of the knowledge base to be cloned and made available to all of the company's plants. Eli Lilly installed Gensym's G2 (gensym.com/?p=g2_platform) at its Speke site to construct an intelligent quality-alert system that would provide consistent real-time advice to the operators.
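To make the idea of a rule-based quality-alert system concrete, the following is a minimal sketch of how such expert heuristics might be encoded. All variable names, thresholds, and advice strings here are hypothetical illustrations, not Eli Lilly's actual G2 rules.

```python
# Minimal sketch of a rule-based quality-alert monitor in the spirit of
# the system described above. Parameters, thresholds, and advice are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class SeedReading:
    """One snapshot of hypothetical seed-stage fermentation variables."""
    ph: float
    temperature_c: float
    dissolved_oxygen_pct: float

# Each rule pairs a condition (an expert heuristic) with operator advice.
RULES = [
    (lambda r: r.ph < 6.2 or r.ph > 7.4,
     "pH out of range: check acid/base dosing before transfer."),
    (lambda r: r.temperature_c > 30.0,
     "Vessel running hot: verify cooling-water flow."),
    (lambda r: r.dissolved_oxygen_pct < 20.0,
     "Low dissolved oxygen: increase agitation or air sparge rate."),
]

def advise(reading: SeedReading) -> list[str]:
    """Fire every rule whose condition matches and collect its advice."""
    return [advice for condition, advice in RULES if condition(reading)]

if __name__ == "__main__":
    snapshot = SeedReading(ph=6.0, temperature_c=31.5, dissolved_oxygen_pct=18.0)
    for alert in advise(snapshot):
        print("ALERT:", alert)
```

A production system such as G2 evaluates rules like these continuously against live sensor data rather than on demand, but the basic condition-action structure is the same.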
DEVELOPMENT PROCESS

The Eli Lilly system focuses on the seed stage of the fermentation process. Four knowledge engineers were involved in the system development: three for knowledge elicitation and one for coding the knowledge into G2 rules and procedures. The knowledge engineers had some domain knowledge, and they were asked to record the knowledge of the experts but not to preempt or optimize it with personal attitudes toward the domain, and in particular not to rationalize the knowledge into a framework with which the interviewer was familiar. They were also asked not to intimidate the experts by virtue of their own domain expertise. The entire development process took six months to complete, and it included the following major steps:

1. Knowledge elicitation. The knowledge engineers interviewed 10 experienced experts to acquire their knowledge. A knowledge acquisition tool, KAT, was used to facilitate the knowledge acquisition.

2. Knowledge fusion. The interviews resulted in 10 different knowledge sets, represented as graphs. A joint session integrated all sets (see the sketch following this list).

3. Coding of the knowledge base. The resulting knowledge graph was converted into rules acceptable to G2. Initially, a total of 60 rules were produced.

4. Testing and evaluation. The system was tested using values that simulated anomalies in the variables, and the outputs were verified.
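The following is a minimal sketch of what the knowledge fusion step could look like in code, assuming a deliberately simplified rule representation (a condition mapped to a recommended action). The rule contents and the unanimity-based conflict policy are illustrative assumptions, not the procedure the Eli Lilly team actually followed.

```python
# Illustrative sketch of "knowledge fusion": merging rule sets elicited
# from several experts into one knowledge base, flagging disagreements
# for a joint review session. All rules below are invented examples.
from collections import defaultdict

# Each expert's elicited knowledge: condition -> recommended action.
expert_rule_sets = [
    {"ph < 6.2": "check acid dosing", "temp > 30": "verify cooling"},
    {"ph < 6.2": "check acid dosing", "do < 20": "increase agitation"},
    {"temp > 30": "reduce feed rate"},  # conflicts with expert 1 on temp
]

def fuse(rule_sets):
    """Group actions by condition: unanimous rules are accepted outright;
    conflicting rules are returned separately for expert review."""
    grouped = defaultdict(set)
    for rules in rule_sets:
        for condition, action in rules.items():
            grouped[condition].add(action)
    accepted = {c: next(iter(a)) for c, a in grouped.items() if len(a) == 1}
    disputed = {c: sorted(a) for c, a in grouped.items() if len(a) > 1}
    return accepted, disputed

accepted, disputed = fuse(expert_rule_sets)
print("accepted:", accepted)
print("needs joint review:", disputed)
```

In practice, the fused knowledge was a graph rather than a flat rule table, and conflicts were resolved by the experts themselves in the joint session rather than by an automatic policy.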
RESULTS

Configuring the scalable automation was more intuitive and faster, even when taking the learning curve into consideration. The program executes exactly as drawn, making visual proof of calculations quick and easy. Two routines can be activated during simulation or start-up to find which gives the best response time. Such comparisons are very difficult with older automation.

The OLE for Process Control (OPC) standard augments communications between flexible production areas. It provides data-exchange standards and constructs that allow information sharing among programs, much like Microsoft Office. OPC also provides more robust, secure, and object-oriented code and greater error checking.

Sources: Adapted from A. Ranjan, J. Glassey, G. Montague, and P. Mohan, "From Process Experts to a Real-Time Knowledge-Based System," Expert Systems, Vol. 19, No. 2, 2002, pp. 69–79; and A. Bullock, Life Sciences/Eli Lilly—Success Story, February 1988, easydeltav.com/successstories/lilly.asp (accessed April 2006).

Questions for the Opening Vignette

1. Why did Eli Lilly need to develop an intelligent system for providing advice to process operators?
2. Why were the knowledge engineers asked not to use their own knowledge frameworks but to record only the knowledge of the experts?
3. Why do we need knowledge engineers to develop ES? Can we ask experts to develop the system by themselves?
4. What do you think of Eli Lilly's approach, which developed 10 separate knowledge sets and then put them together through knowledge fusion? What are the pros and cons of this approach?
5. For system testing and evaluation, Eli Lilly used simulated data rather than real data. Why is this acceptable for a real-time ES? Can you think of a better approach for system testing and evaluation?
6. What are the benefits of using knowledge acquisition tools?

WHAT WE CAN LEARN FROM THIS VIGNETTE

The opening vignette illustrates the process of acquiring knowledge and deploying it in a real-time monitoring system. It also illustrates the use of computer-aided knowledge acquisition tools to make the development job easier and to integrate knowledge from multiple experts.

W18.2 CONCEPTS OF KNOWLEDGE ENGINEERING

The process of acquiring knowledge from experts and building a knowledge base is called knowledge engineering. The activity of knowledge engineering was defined in the pioneering work of Feigenbaum and McCorduck (1983) as the art of bringing the principles and tools of artificial intelligence research to bear on difficult application problems that require the knowledge of experts for their solutions. Knowledge engineering involves the cooperation of human experts in the domain working with the knowledge engineer to codify and make explicit the rules (or other procedures) that a human expert uses to solve real problems. The field is related to software engineering.

Knowledge engineering can be viewed from two perspectives: narrow and broad. According to the narrow perspective, knowledge engineering deals with knowledge acquisition, representation, validation, inferencing, explanation, and maintenance. According to the broad perspective, the term describes the entire process of developing and maintaining intelligent systems. In this book, we use the narrow definition.

The knowledge possessed by human experts is often unstructured and not explicitly expressed. A major goal of knowledge engineering is to help experts articulate what they know and document the knowledge in a reusable form. For details, see en.wikipedia.org/wiki/Knowledge_engineering.

THE KNOWLEDGE-ENGINEERING PROCESS

The knowledge-engineering process includes five major activities:

1. Knowledge acquisition. Knowledge acquisition involves the acquisition of knowledge from human experts, books, documents, sensors, or computer files. The knowledge may be specific to the problem domain or to the problem-solving procedures, it may be general knowledge (e.g., knowledge about business), or it may be metaknowledge (knowledge about knowledge). (By metaknowledge, we mean information about how experts use their knowledge to solve problems and about problem-solving procedures in general.) Byrd (1995) formally verified that knowledge acquisition is the bottleneck in ES development today; thus, much theoretical and applied research is still being conducted in this area. An analysis of more than 90 ES applications and their