CS 343 Artificial Intelligence

Gordon S. Novak Jr.
Department of Computer Sciences
University of Texas at Austin
Austin, Texas 78712
(512) 471-9569
[email protected]
http://www.cs.utexas.edu/users/novak

Copyright © Gordon S. Novak Jr.

1 Artificial Intelligence as Science

Intelligence should be placed in the context of biology: intelligence connects perception to action to help an organism survive. Intelligence is computation in the service of life, just as metabolism is chemistry in the service of life. Intelligence does not imply perfect understanding; every intelligent being has limited perception, memory, and computation. Many points on the spectrum of intelligence-versus-cost are viable, from insects to humans.

AI seeks to understand the computations required for intelligent behavior and to produce computer systems that exhibit intelligence. Aspects of intelligence studied by AI include perception, motor control, communication using human languages, reasoning, planning, learning, and memory.

2 Scientific Goals of AI

AI seeks to understand the working of the mind in mechanistic terms, just as medicine seeks to understand the working of the body in mechanistic terms.

The mind is what the brain does. – Marvin Minsky

The strong AI position is that any aspect of human intelligence could, in principle, be mechanized.

3 A.I. as Engineering

How can we make computer systems more intelligent?

• Perception to get input directly from the real world.
• Autonomy to perform tasks that currently require human operators, without human intervention or monitoring.
• Flexibility in dealing with variability in the environment.
• Ease of use: computers that are able to understand what the user wants from limited instructions in natural languages.
• Learning from experience.

4 Perception

Machine Vision: It is easy to interface a TV camera to a computer and get an image into memory; the problem is understanding what the image represents. Vision takes lots of computation; in humans, roughly 10% of all calories consumed are burned in vision computation.

Speech Understanding: Speech understanding is available now. Some systems must be trained for the individual user and require pauses between words. Understanding continuous speech with a larger vocabulary is harder.

Touch (tactile or haptic) Sensation: Important for robot assembly tasks.

5 Robotics

Although industrial robots have been expensive, robot hardware can be cheap: Radio Shack has sold a working robot arm and hand for $15. The limiting factor in the application of robotics is not the cost of the robot hardware itself. What is needed is perception and intelligence to tell the robot what to do; "blind" robots are limited to very well-structured tasks (like spray painting car bodies).

6 Natural Language

Understanding: Natural languages are human languages such as English. Making computers understand English allows non-programmers to use them with little training. Applications in limited areas (such as access to data bases) are easy.

(askr '(where can i get ice cream in berkeley))

Natural Language Generation: Easier than NL understanding. Can be an inexpensive output device.

Machine Translation: Usable translation of text is available now. Important for organizations that operate in many countries.

    In a not too far future develops for eleven-year old David in a research lab the first intelligent robot with human feelings in the shape. But its "foster parents" are overtaxed with the artificial spare child and suspend it. Posed on itself alone David tries to fathom its origin and the secret of its existence.
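For queries in a limited domain, such as the (askr ...) example above, even simple keyword matching goes a long way. The sketch below is not the course's askr program; the topic and place tables and the function name parse-query are illustrative assumptions added here.

;; Minimal sketch of limited-domain "understanding": map an English
;; query onto a (topic place) pair by keyword matching.  The tables and
;; PARSE-QUERY are illustrative assumptions, not course code.
(defparameter *topics* '((ice cream) (chinese restaurant)))
(defparameter *places* '((berkeley) (los altos)))

(defun phrase-present-p (phrase words)
  "True if every word of PHRASE occurs in the word list WORDS."
  (subsetp phrase words))

(defun parse-query (words)
  "Return (TOPIC PLACE) recognized in the word list WORDS, or NIL."
  (let ((topic (find-if (lambda (p) (phrase-present-p p words)) *topics*))
        (place (find-if (lambda (p) (phrase-present-p p words)) *places*)))
    (when (and topic place)
      (list topic place))))

;; (parse-query '(where can i get ice cream in berkeley))
;;   => ((ICE CREAM) (BERKELEY))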
7 Planning

Planning attempts to order actions to achieve goals. Planning applications include logistics, manufacturing scheduling, and planning the manufacturing steps to construct a desired product. There are huge amounts of money to be saved through better planning.

8 Expert Systems

Expert Systems attempt to capture the knowledge of a human expert and make it available through a computer program. There have been many successful and economically valuable applications of expert systems. Benefits:

• Reducing the skill level needed to operate complex devices.
• Diagnostic advice for device repair.
• Interpretation of complex data.
• "Cloning" of scarce expertise.
• Capturing the knowledge of an expert who is about to retire.
• Combining the knowledge of multiple experts.
• Intelligent training.

9 Theorem Proving

Proving mathematical theorems might seem to be mainly of academic interest. However, many practical problems can be cast in terms of theorems, so a general theorem prover can be widely applicable. Examples:

• Automatic construction of compiler code generators from a description of a CPU's instruction set.
• J Moore and colleagues proved the correctness of the floating-point division algorithm on an AMD CPU chip.

10 Symbolic Mathematics

Symbolic mathematics refers to manipulation of formulas, rather than arithmetic on numeric values:

• Algebra
• Differential and Integral Calculus

Symbolic manipulation is often used in conjunction with ordinary scientific computation, as a generator of the programs used to actually do the calculations. Symbolic manipulation programs are an important component of scientific and engineering workstations.

>(solvefor '(= v (* v0 (- 1 (exp (- (/ t (* r c))))))) 't)
(= T (* (- (LOG (- 1 (/ V V0)))) (* R C)))

11 Game Playing

Games are good vehicles for research because they are well formalized, small, and self-contained; they are therefore easily programmed. Games can be good models of competitive situations, so principles discovered in game-playing programs may be applicable to practical problems. Prof. Peter Stone does research on strategies for bidding at auctions.

12 Characteristics of A.I. Programs

• Symbolic Reasoning: reasoning about objects represented by symbols, and their properties and relationships, not just numerical calculations.
• Knowledge: general principles are stored in the program and used for reasoning about novel situations.
• Search: a "weak method" for finding a solution to a problem when no direct method exists. Problem: combinatoric explosion of possibilities.
• Flexible Control: the direction of processing can be changed by changing facts in the environment.

13 Symbolic Representation

Most of the reasoning that people do is non-numeric. AI programs often do some numerical calculation, but focus on reasoning with symbols that represent objects and relationships in the real world:

• Objects.
• Properties of objects.
• Relationships among objects.
• Rules about classes of objects.

Examples of symbolic processing:

• Understanding English: (show me a good chinese restaurant in los altos)
• Reasoning based on general principles:
    if:   the patient is male
    then: the patient is not pregnant
• Symbolic mathematics: if y = m*x + b, what is the derivative of y with respect to x? (See the sketch below.)
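Such symbolic mathematics can be done directly on Lisp list structure. The deriv function below is a minimal sketch of differentiation for sums and products, written for illustration here rather than taken from the course code.

;; Minimal symbolic differentiation on prefix expressions, e.g.
;; y = (+ (* m x) b).  DERIV is an illustrative sketch, not course code.
(defun deriv (e x)
  "Derivative of expression E with respect to variable X."
  (cond ((eq e x) 1)                       ; dx/dx = 1
        ((atom e) 0)                       ; constants and other variables
        ((eq (car e) '+)                   ; d(u+v) = du + dv
         (list '+ (deriv (second e) x) (deriv (third e) x)))
        ((eq (car e) '*)                   ; d(u*v) = u*dv + v*du
         (list '+ (list '* (second e) (deriv (third e) x))
                  (list '* (third e) (deriv (second e) x))))
        (t (error "unhandled operator: ~s" e))))

;; (deriv '(+ (* m x) b) 'x)  =>  (+ (+ (* M 1) (* X 0)) 0)
;; which simplifies to m, as expected.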
14 Knowledge Representation

It is necessary to represent the computer's knowledge of the world by some kind of data structures in the machine's memory. Traditional computer programs deal with large amounts of data that are structured in simple and uniform ways. A.I. programs need to deal with complex relationships, reflecting the complexity of the real world. Several kinds of knowledge need to be represented:

• Factual Data: known facts about the world.
• General Principles: "Every dog is a mammal."
• Hypothetical Data: the computer must consider hypotheticals in order to reason about the effects of actions that are being contemplated.

15 Semantic Networks / Frames

A semantic network or frame system represents knowledge by nodes and labeled arcs among nodes.

16 Logic

Facts can also be represented in a logical formalism. In principle, logic and semantic network representations can be equivalent.

(huey huey-457)
(all x (if (huey x) (helicopter x)))
(all x (if (huey x) (payload x 4000)))
(all x (if (helicopter x) (can-fly x)))

17 Logic ...

Mathematical logic formalizes certain kinds of reasoning in terms of operations on mathematical formulas. It is important for people working in A.I. to know logic for several reasons:

• Theory: logic has a sound mathematical foundation; things can be proved about it.
• Applications: for certain classes of applications (e.g., proving correctness of programs), logic is the representation of choice.
• Comparison with Other Methods: other representation methods are often reducible to logic. Knowing logic helps in understanding other methods and may help prevent reinvention of old techniques.

18 A.I. is the Future of Computing!

Moore's Law predicts that computation will get faster/cheaper by a factor of 2 every 1.5 years; this means a factor of 1000 in 15 years. What will we do with all that cheap computation?

Traditional applications of computers (with a few exceptions, such as large simulations) will not absorb enough additional computing power to make them major growth areas. In the past, computation was scarce and manufacturers could just sell cycles. Now that cycles are cheap, ease of use is a major selling criterion. Integration of computers with sensors and effectors will be common and essential for competitive position in many markets. Consumer markets will not be similar to existing computer applications, but will require intelligence to deal with poorly structured tasks (such as cleaning the house).

19 Advantages of Lisp

• Recursion: a program can call itself as a subroutine. Dynamic type checking makes recursion more useful.
• Garbage Collection: automatically recycles memory.
• Uniform Representation: programs and data are the same (a brief sketch follows the list).
  – Programs can examine other programs.
  – Programs
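A brief sketch of the last two points: a recursive function, and a program examined (and run) as ordinary list data. The names len and *program* are illustrative assumptions, not from the notes.

;; Recursion and programs-as-data (illustrative sketch).
(defun len (lst)
  "Recursively compute the length of LST."
  (if (null lst)
      0
      (+ 1 (len (rest lst)))))

;; Because programs are lists, a program can examine another program:
(defparameter *program* '(defun len (lst)
                           (if (null lst) 0 (+ 1 (len (rest lst))))))

;; (first *program*)   => DEFUN      ; what kind of form is it?
;; (second *program*)  => LEN        ; the name being defined
;; (len '(a b c))      => 3          ; ordinary recursion
;; (eval *program*)                  ; data can be run as a program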