Knowledge on the Web: Towards Robust and Scalable Harvesting of Entity-Relationship Facts
16 pages · PDF, 1020 KB
Recommended publications
-
Franck-Hertz Experiment
IIA2 Module Atomic/Nuclear Physics — Franck-Hertz Experiment. This experiment by JAMES FRANCK and GUSTAV LUDWIG HERTZ from 1914 (Nobel Prize 1926) is one of the most impressive pieces of evidence for quantum theory: with a very simple arrangement it demonstrates the existence of discrete stationary energy states of the electrons in atoms. © AP, Department of Physics, University of Basel, September 2016.
1.1 Preliminary Questions
• Explain the FRANCK-HERTZ experiment in your own words.
• What is the meaning of the unit eV and how is it defined?
• Which other experiment can verify the first excitation energy as well?
• Why is an anode used in the tube? Why is the current not measured directly at the grid?
1.2 Theory
1.2.1 Light emission and absorption in the atom
The microscopic nature of matter has always been a key object of physical research. An important experimental approach to the "world of atoms" is the study of the absorption and emission of light by matter, i.e., the investigation of the spectral distribution of the light absorbed or emitted by a particular substance. A strange phenomenon was observed (first by FRAUNHOFER in the spectrum of sunlight) that remained unexplained until the beginning of the twentieth century:
• If light with a continuous spectrum (for example, incandescent light) passes through a gas of a particular type of atom and the spectrum is subsequently observed, it is found that very specific, atom-dependent wavelengths have been absorbed by the gas and are therefore absent from the spectrum. -
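As an aside on the preliminary questions in the excerpt above, the unit eV and its link to the Hg resonance line can be written compactly; these are standard textbook relations, not part of the manual itself:

```latex
% 1 eV is the kinetic energy an electron gains when crossing a potential
% difference of 1 V; the excitation energy maps to a photon wavelength.
\begin{align}
  1\,\mathrm{eV} &= e \cdot 1\,\mathrm{V} \approx 1.602\times10^{-19}\,\mathrm{J},\\
  \Delta E &= \frac{hc}{\lambda}
  \quad\Rightarrow\quad
  \lambda \approx \frac{1240\,\mathrm{eV\,nm}}{4.9\,\mathrm{eV}} \approx 253\,\mathrm{nm},
\end{align}
```

consistent with the 253.6 nm Hg resonance line discussed later in the manual.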
A Survey of Top-Level Ontologies to Inform the Ontological Choices for a Foundation Data Model
A survey of Top-Level Ontologies — to inform the ontological choices for a Foundation Data Model, Version 1
Contents
1 Introduction and Purpose
2 Approach and contents
2.1 Collect candidate top-level ontologies
2.2 Develop assessment framework
2.3 Assessment of candidate top-level ontologies against the framework
2.4 Terminological note
3 Assessment framework – development basis
3.1 General ontological requirements
3.2 Overarching ontological architecture framework
4 Ontological commitment overview
4.1 General choices
4.2 Formal structure – horizontal and vertical
4.3 Universal commitments
5 Assessment Framework Results
5.1 General choices
5.2 Formal structure: vertical aspects
5.3 Formal structure: horizontal aspects
5.4 Universal commitments
6 Summary
Appendix A: Pathway requirements for a Foundation Data Model
Appendix B: ISO IEC 21838-1:2019
F.13 FrameNet
F.14 GFO – General Formal Ontology
F.15 gist
F.16 HQDM – High Quality Data Models
F.17 IDEAS – International Defence Enterprise Architecture Specification
F.18 IEC 62541
F.19 IEC 63088
F.20 ISO 12006-3
F.21 ISO 15926-2
F.22 KKO: KBpedia Knowledge Ontology
F.23 KR Ontology – Knowledge Representation Ontology
F.24 MarineTLO: A Top-Level Ontology for the Marine Domain
F.25 MIMOSA CCOM – Common Conceptual Object Model
F.26 OWL – Web Ontology Language
F.27 ProtOn – PROTo ONtology
F.28 Schema.org
F.29 SENSUS
F.30 SKOS
F.31 SUMO
F.32 TMRM/TMDM – Topic Map Reference/Data Models
-
Artificial Intelligence
BROAD AI now and later
Michael Witbrock, PhD — University of Auckland, Broad AI Lab, @witbrock

ROOTS OF AI: Aristotle (384–322 BCE), Organon; Santiago Ramón y Cajal (1852–1934), cerebral cortex.

WHAT'S AI
• OLD definition: AI is everything we don't yet know how to program
• Now, some things that people can't do:
  • unique capabilities (e.g. style transfer)
  • superhuman performance (some areas of speech, vision, games, some QA, etc.)
• Current AI systems can be divided by their kind of capability:
  • Skilled (image recognition; game playing: Chess, Atari, Go, DotA; driving)
  • Attentive (trading: Aidyia; senior care: CareMedia; driving)
  • Knowledgeable (Google Now, Siri, Watson, Cortana)
  • High IQ (Cyc, Soar, Wolfram Alpha)

GOFAI
• Thought is symbol manipulation
• Large numbers of precisely defined symbols (terms)
• Based on mathematical logic, e.g.
  (implies (and (isa ?INST1 LegalAgreement) (agreeingAgents ?INST1 ?INST2)) (isa ?INST2 LegalAgent))
• Problems solved by searching for transformations of symbolic representations that lead to a solution

SLOW DEVELOPMENT
• Thinking Quickly (System I): done well by animals and people; massively parallel algorithms; done poorly until now by computers; not impressive to ordinary people; achieved on computers by high-power, low-density, slow simulation of vastly different neural hardware.
• Thinking Slowly (System II): a human superpower c.f. other animals; serial and slow; done poorly by most people; impressive (prizes, high pay); the fundamental design principle of computers — a computer superpower c.f. humans.
"Sir, an animal's reasoning is like a dog's walking on his hind legs. It is not done well; but you are surprised to find it done at all." – apologies to Samuel Johnson
Recurrent Deep Learning & Deep Reasoning

MACHINE LEARNING
• Meaning is implicit in the data
• Thought is the transformation of learned representations
http://karpathy.github.io/2015/05/21/rnn-effectiveness/ -
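The CycL rule quoted in the GOFAI slides above can be made concrete with a tiny forward-chaining sketch. The facts and entity names below are invented for illustration; this is a minimal toy, not Cyc's actual inference engine:

```python
# Toy fact base as (predicate, arg1, arg2) triples.
facts = {
    ("isa", "lease-42", "LegalAgreement"),
    ("agreeingAgents", "lease-42", "ACME-Corp"),
}

def forward_chain(facts):
    """Apply the rule: if ?I1 isa LegalAgreement and agreeingAgents(?I1, ?I2),
    then ?I2 isa LegalAgent. Repeat until no new facts are derived."""
    derived = set(facts)
    while True:
        new = {
            ("isa", agent, "LegalAgent")
            for (p, inst, cls) in derived
            if p == "isa" and cls == "LegalAgreement"
            for (q, inst2, agent) in derived
            if q == "agreeingAgents" and inst2 == inst
        }
        if new <= derived:        # fixpoint reached
            return derived
        derived |= new

print(("isa", "ACME-Corp", "LegalAgent") in forward_chain(facts))  # True
```

This illustrates the slide's point that GOFAI "problems are solved by searching for transformations of symbolic representations": the derived fact was never stated, only entailed.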
The Franck-Hertz Experiment: 100 Years Ago and Now
The Franck-Hertz experiment: 100 years ago and now — a tribute to two great German scientists
Zoltán Donkó 1, Péter Magyar 2, Ihor Korolov 1
1 Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Budapest, Hungary
2 Physics Faculty, Roland Eötvös University, Budapest, Hungary

Franck-Hertz experiment anno (~1914)
The Nobel Prize in Physics 1925 was awarded jointly to James Franck and Gustav Ludwig Hertz "for their discovery of the laws governing the impact of an electron upon an atom" (nobelprize.org).
[Figure: primary experimental result — anode current vs. accelerating voltage, with maxima spaced by 4.9 V.]
Verh. Dtsch. Phys. Ges. 16: 457–467 (1914).
"The electrons in Hg vapor experience only elastic collisions up to a critical velocity."
"We show a method using which the critical velocity (i.e. the accelerating voltage) can be determined to an accuracy of 0.1 V; its value is 4.9 V."
"We show that the energy of the ray with 4.9 V corresponds to the energy quantum of the resonance transition of Hg (λ = 253.6 nm)."
((("Part of the energy goes into excitation and part goes into ionization")))
Important experimental evidence for the quantized nature of the atomic energy levels.

The Franck-Hertz experiment: 100 years ago and now
Franck-Hertz experiment: published in 1914, Nobel Prize in 1925. Why is it interesting today as well? The "simple" explanation ("The electrons ...") gives way to a description based on kinetic theory (Robson, Sigeneger, ...). Modern experiments use various gases (Hg, He, Ne, Ar); modern experiment + kinetic description (develop an experiment that can be modeled accurately ...) → P. -
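The excerpt above claims that the 4.9 V critical voltage corresponds to the 253.6 nm Hg resonance transition. That claim is easy to cross-check numerically with E = hc/λ; the constants are standard CODATA values:

```python
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
e = 1.602176634e-19  # elementary charge, C (so 1 eV = e joules)

wavelength = 253.6e-9              # Hg resonance line, m
energy_eV = h * c / wavelength / e  # photon energy in eV
print(round(energy_eV, 2))          # 4.89
```

The result, ~4.89 eV, matches the ~4.9 V spacing of the anode-current minima to within the 0.1 V accuracy Franck and Hertz quoted.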
Knowledge Graphs on the Web – an Overview
January 2020
Knowledge Graphs on the Web – an Overview
Nicolas HEIST, Sven HERTLING, Daniel RINGLER, and Heiko PAULHEIM
Data and Web Science Group, University of Mannheim, Germany
arXiv:2003.00719v3 [cs.AI] 12 Mar 2020

Abstract. Knowledge Graphs are an emerging form of knowledge representation. While Google coined the term Knowledge Graph first and promoted it as a means to improve their search results, they are used in many applications today. In a knowledge graph, entities in the real world and/or a business domain (e.g., people, places, or events) are represented as nodes, which are connected by edges representing the relations between those entities. While companies such as Google, Microsoft, and Facebook have their own, non-public knowledge graphs, there is also a larger body of publicly available knowledge graphs, such as DBpedia or Wikidata. In this chapter, we provide an overview and comparison of those publicly available knowledge graphs, and give insights into their contents, size, coverage, and overlap.
Keywords. Knowledge Graph, Linked Data, Semantic Web, Profiling

1. Introduction
Knowledge Graphs are increasingly used as means to represent knowledge. Due to their versatile means of representation, they can be used to integrate different heterogeneous data sources, both within as well as across organizations. [8,9] Besides such domain-specific knowledge graphs which are typically developed for specific domains and/or use cases, there are also public, cross-domain knowledge graphs encoding common knowledge, such as DBpedia, Wikidata, or YAGO. [33] Such knowledge graphs may be used, e.g., for automatically enriching data with background knowledge to be used in knowledge-intensive downstream applications. -
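The chapter's definition in the excerpt above — entities as nodes, relations as labeled edges — is exactly the subject-predicate-object triple model behind DBpedia and Wikidata. A minimal sketch, with made-up data rather than anything from those graphs:

```python
# A knowledge graph as a list of (subject, predicate, object) triples.
triples = [
    ("Mannheim", "locatedIn", "Germany"),
    ("University_of_Mannheim", "locatedIn", "Mannheim"),
    ("Heiko_Paulheim", "worksAt", "University_of_Mannheim"),
]

def objects_of(subject, predicate, triples):
    """Follow the outgoing edges with a given label from one node."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("Mannheim", "locatedIn", triples))  # ['Germany']
```

Real systems store the same model in indexed triple stores and query it with SPARQL, but the node-and-edge structure is no more than this.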
Using Linked Data for Semi-Automatic Guesstimation
Using Linked Data for Semi-Automatic Guesstimation
Jonathan A. Abourbih, Alan Bundy, and Fiona McNeill*
[email protected], [email protected], [email protected]
University of Edinburgh, School of Informatics, 10 Crichton Street, Edinburgh, EH8 9AB, United Kingdom

Abstract. GORT is a system that combines Linked Data from across several Semantic Web data sources to solve guesstimation problems, with user assistance. The system uses customised inference rules over the relationships in the OpenCyc ontology, combined with data from DBPedia, to reason and perform its calculations. The system is extensible with new Linked Data, as it becomes available, and is capable of answering a small range of guesstimation questions.

Introduction. The true power of the Semantic Web will come from combining information from heterogeneous data sources to form new knowledge. A system that is capable of deducing an answer … and Semantic Web systems. Next, we outline the process of guesstimation. Then, we describe the organisation and implementation of GORT. Finally, we close with an evaluation of the system's performance and adaptability, and compare it to several other related systems. We also conclude with a brief section on future work.

Literature Survey. Combining facts to answer a user query is a mature field. The DEDUCOM system (Slagle 1965) was one of the earliest systems to perform deductive query answering. DEDUCOM applies procedural knowledge to a set of facts in a knowledge base to answer user queries, and a user can also supplement the knowledge base with further facts. -
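The guesstimation idea in the abstract above — combining facts from linked data sources via inference rules to estimate a quantity no single source states — can be sketched in a few lines. The figures and the rule here are rough illustrative stand-ins, not GORT's actual data or rules:

```python
# Facts as a (entity, property) -> value map, as if pulled from DBpedia.
facts = {
    ("UK", "population"): 67_000_000,  # rough illustrative value
    ("UK", "area_km2"): 243_000,       # rough illustrative value
}

def guesstimate_density(region, facts):
    """Derived quantity: population density = population / area.
    Neither source fact states this directly; the rule combines them."""
    return facts[(region, "population")] / facts[(region, "area_km2")]

print(round(guesstimate_density("UK", facts)))  # 276 (people per km^2)
```

GORT layers user assistance and OpenCyc relationship semantics on top of this basic pattern, but the core move is the same: a derived fact computed from independently sourced ones.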
Why Has AI Failed? and How Can It Succeed?
Why Has AI Failed? And How Can It Succeed?
John F. Sowa, VivoMind Research, LLC
10 May 2015
Extended version of slides for MICAI'14

Problems and Challenges
Early hopes for artificial intelligence have not been realized. Language understanding is more difficult than anyone thought. A three-year-old child is better able to learn, understand, and generate language than any current computer system. Tasks that are easy for many animals are impossible for the latest and greatest robots.
Questions:
● Have we been using the right theories, tools, and techniques?
● Why haven't these tools worked as well as we had hoped?
● What other methods might be more promising?
● What can research in neuroscience and psycholinguistics tell us?
● Can it suggest better ways of designing intelligent systems?

Early Days of Artificial Intelligence
1960: Hao Wang's theorem prover took 7 minutes to prove all 378 FOL theorems of Principia Mathematica on an IBM 704 – much faster than two brilliant logicians, Whitehead and Russell.
1960: Emile Delavenay, in a book on machine translation: "While a great deal remains to be done, it can be stated without hesitation that the essential has already been accomplished."
1965: Irving John Good, in speculations on the future of AI: "It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make."
1968: Marvin Minsky, technical adviser for the movie 2001: "The HAL 9000 is a conservative estimate of the level of artificial intelligence in 2001."

The Ultimate Understanding Engine
Sentences uttered by a child named Laura before the age of 3. -
Bohr Model of Hydrogen
Chapter 3: Bohr model of hydrogen
Figure 3.1: Democritus
The atomic theory of matter has a long history, in some ways going all the way back to the ancient Greeks (Democritus – ca. 400 BCE – suggested that all things are composed of indivisible "atoms"). From what we can observe, atoms have certain properties and behaviors, which can be summarized as follows:
• Atoms are small, with diameters on the order of 0.1 nm.
• Atoms are stable; they do not spontaneously break apart into smaller pieces or collapse.
• Atoms contain negatively charged electrons, but are electrically neutral.
• Atoms emit and absorb electromagnetic radiation.
Any successful model of atoms must be capable of describing these observed properties.
[Figure: (a) Isaac Newton, (b) Joseph von Fraunhofer, (c) Gustav Robert Kirchhoff]
3.1 Atomic spectra
Even though the spectral nature of light is present in a rainbow, it was not until 1666 that Isaac Newton showed that white light from the sun is composed of a continuum of colors (frequencies). Newton introduced the term "spectrum" to describe this phenomenon. His method to measure the spectrum of light consisted of a small aperture to define a point source of light, a lens to collimate this into a beam of light, a glass prism to disperse the colors, and a screen on which to observe the resulting spectrum. This is indeed quite close to a modern spectrometer! Newton's analysis was the beginning of the science of spectroscopy (the study of the frequency distribution of light from different sources). The first observation of the discrete nature of emission and absorption from atomic systems was made by Joseph Fraunhofer in 1814. -
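The discrete spectra discussed in the chapter excerpt above are captured quantitatively by the Rydberg formula, 1/λ = R(1/n₁² − 1/n₂²), which the Bohr model derives. A quick numerical check for the first Balmer line of hydrogen (the n = 3 → 2 transition):

```python
R = 1.0973731568e7  # Rydberg constant, 1/m

def balmer_wavelength_nm(n_upper):
    """Wavelength of the hydrogen transition n_upper -> 2, in nanometers."""
    inv_lambda = R * (1 / 2**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda

print(round(balmer_wavelength_nm(3)))  # 656, the red H-alpha line
```

The ~656 nm result is the visible red line Fraunhofer would have seen as a dark absorption feature in sunlight.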
Stern's Biographical Dates and a Chronology of His Work
Stern's biographical dates and a chronology of his work
This chronology of Otto Stern's work is based on the following sources:
1. curricula vitae written by Otto Stern himself,
2. Stern's letters and Stern's publications,
3. Stern's passports,
4. Stern's Zurich interview of 1961,
5. documents from the university archives (17 February 1888 to 17 August 1969).

1888: Born on 17 February 1888 as Otto Stern in Sohrau, Upper Silesia.
In all curricula vitae and documents one finds only the first name Otto. The police certificate of conduct issued on 12 July 1912 by the Royal Police Headquarters, Dept. IV, in Breslau likewise mentions only the first name Otto. Only the emeritus document of the Carnegie Institute of Technology mentions a second first name, Otto M. Stern.
Father: mill owner Oskar Stern (1850–1919); mother: Eugenie Stern, née Rosenthal (1863–1907).
According to Diana Templeton-Killan, the granddaughter of Berta Kamm and thus a great-niece of Otto Stern (e-mail of 3 December 2015 to Horst Schmidt-Böcking), Otto's grandfather was Abraham Stern. Abraham had five children with his first wife, Nanni Freund. Nanni died shortly after the birth of the fifth child. Soon afterwards Abraham married Berta Bender, with whom he had six more children. Otto's father Oskar was Berta's third child. Abraham and Nanni's first child was Heinrich Stern (1833–1908). Heinrich had four children. The first child was Richard Stern (1865–1911), who married Toni Asch.
© Springer-Verlag GmbH Deutschland 2018. H. Schmidt-Böcking, A. Templeton, W. Trageser (eds.), Otto Sterns gesammelte Briefe – Band 1, https://doi.org/10.1007/978-3-662-55735-8 -
Logic-Based Technologies for Intelligent Systems: State of the Art and Perspectives
information | Article
Logic-Based Technologies for Intelligent Systems: State of the Art and Perspectives
Roberta Calegari 1,*, Giovanni Ciatto 2, Enrico Denti 3 and Andrea Omicini 2
1 Alma AI—Alma Mater Research Institute for Human-Centered Artificial Intelligence, Alma Mater Studiorum–Università di Bologna, 40121 Bologna, Italy
2 Dipartimento di Informatica–Scienza e Ingegneria (DISI), Alma Mater Studiorum–Università di Bologna, 47522 Cesena, Italy; [email protected] (G.C.); [email protected] (A.O.)
3 Dipartimento di Informatica–Scienza e Ingegneria (DISI), Alma Mater Studiorum–Università di Bologna, 40136 Bologna, Italy; [email protected]
* Correspondence: [email protected]
Received: 25 February 2020; Accepted: 18 March 2020; Published: 22 March 2020

Abstract: Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are re-gaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lay at the core of symbolic AI, summarizing their state of the art is of paramount importance now more than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as to identify promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for exploitation of logic-based technologies are discussed as well, in order to identify those research fields that deserve more attention, considering the areas that already exploit logic-based approaches as well as those that are more likely to adopt logic-based approaches in the future. -
Talking to Computers in Natural Language
feature: Talking to Computers in Natural Language
Natural language understanding is as old as computing itself, but recent advances in machine learning and the rising demand of natural-language interfaces make it a promising time to once again tackle the long-standing challenge.
By Percy Liang. DOI: 10.1145/2659831

As you read this sentence, the words on the page are somehow absorbed into your brain and transformed into concepts, which then enter into a rich network of previously-acquired concepts. This process of language understanding has so far been the sole privilege of humans. But the universality of computation, as formalized by Alan Turing in the early 1930s—which states that any computation could be done on a Turing machine—offers a tantalizing possibility that a computer could understand language as well. Later, Turing went on in his seminal 1950 article, "Computing Machinery and Intelligence," to propose the now-famous Turing test—a bold and speculative method to evaluate whether a computer actually understands language (or more broadly, is "intelligent"). While this test has led to the development of amusing chatbots that attempt to fool human judges by engaging in light-hearted banter, the grand challenge of developing serious programs that can truly understand language in useful ways remains wide open.
… artificial intelligence at the time. Daniel Bobrow built a system for his Ph.D. thesis at MIT to solve algebra word problems found in high-school algebra books, for example: "If the number of customers Tom gets is twice the square of 20% of the number of advertisements he runs, and the number of advertisements is 45, then what is the number of customers Tom gets?" [1].
… (see Figure 1). SHRDLU could both answer questions and execute actions, for example: "Find a block that is taller than the one you are holding and put it into the box." In this case, SHRDLU would first have to understand that the blue block is the referent and then perform the action by moving the small green block out of the way and then lifting the … -
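The Bobrow-style word problem quoted in the excerpt above reduces to a small arithmetic expression once the language is parsed. This sketch hard-codes that translation step — the part Bobrow's system automated — and simply evaluates it:

```python
# "the number of advertisements is 45"
advertisements = 45

# "the number of customers Tom gets is twice the square of
#  20% of the number of advertisements he runs"
customers = 2 * (0.20 * advertisements) ** 2

print(int(customers))  # 162
```

The hard research problem, then and now, is not the arithmetic but mapping the English sentence onto this expression.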
Wikitology Wikipedia As an Ontology
Creating and Exploiting a Web of Semantic Data
Tim Finin, University of Maryland, Baltimore County; joint work with Zareen Syed (UMBC) and colleagues at the Johns Hopkins University Human Language Technology Center of Excellence
ICAART 2010, 24 January 2010
http://ebiquity.umbc.edu/resource/html/id/288/

Overview
• Introduction (and conclusion)
• A Web of linked data
• Wikitology
• Applications
• Conclusion

Conclusions
• The Web has made people smarter and more capable, providing easy access to the world's knowledge and services
• Software agents need better access to a Web of data and knowledge to enhance their intelligence
• Some key technologies are ready to exploit: Semantic Web, linked data, RDF search engines, DBpedia, Wikitology, information extraction, etc.

The Age of Big Data
• Massive amounts of data are available today on the Web, both for people and agents
• This is what's driving Google, Bing, Yahoo
• Human language advances are also driven by the availability of unstructured data, text and speech
• Large amounts of structured & semi-structured data are also coming online, including RDF
• We can exploit this data to enhance our intelligent agents and services

Twenty years ago…
Tim Berners-Lee's 1989 WWW proposal described a web of relationships among named objects, unifying many information-management tasks.
Capsule history:
• Guha's MCF (~94)
• XML+MCF => RDF