DeepDive: A Data Management System for Automatic Knowledge Base Construction


DeepDive: A Data Management System for Automatic Knowledge Base Construction

by Ce Zhang

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN–MADISON, 2015.

Date of final oral examination: 08/13/2015

The dissertation is approved by the following members of the Final Oral Committee:

• Jude Shavlik, Professor, Computer Sciences (UW-Madison)
• Christopher Ré, Assistant Professor, Computer Science (Stanford University)
• Jeffrey Naughton, Professor, Computer Sciences (UW-Madison)
• David Page, Professor, Biostatistics and Medical Informatics (UW-Madison)
• Shanan Peters, Associate Professor, Geosciences (UW-Madison)

© Copyright by Ce Zhang 2015. All Rights Reserved.

ACKNOWLEDGMENTS

I owe Christopher Ré my career as a researcher, the greatest dream of my life. Since the day I first met Chris and told him about my dream, he has done everything he could, as a scientist, an educator, and a friend, to help me. I am forever indebted to him for his completely honest criticisms and feedback, the most valuable gifts an advisor can give. His training equipped me with confidence and pride that I will carry for the rest of my career. He is the role model that I will follow. If my whole future career achieves an approximation of what he has done so far in his, I will be proud and happy.

I am also indebted to Jude Shavlik and Miron Livny, who, after Chris left for Stanford, kindly helped me through all the paperwork and payments at Wisconsin. If it were not for their help, I would not have been able to continue my PhD studies. I am also profoundly grateful to Jude for serving as the chair of my committee. I am likewise grateful to Jeffrey Naughton, David Page, and Shanan Peters for serving on my committee, and to Thomas Reps for his feedback during my defense.

DeepDive would not have been possible without all its users. Shanan Peters was the first user, working with it before it even got its name. He spent three years going through a painful process with us before we understood the current abstraction of DeepDive. I am grateful to him for sticking with us through this long process. I would also like to thank all the scientists and students who have interacted with me, without whose generous sharing of knowledge we could not have refined DeepDive to its current state. A very incomplete list includes Gabor Angeli, Gill Bejerano, Noel Heim, Arun Kumar, Pradap Konda, Emily Mallory, Christopher Manning, Jonathan Payne, Eldon Ulrich, Robin Valenza, and Yuke Zhu. DeepDive is now a team effort, and I am grateful to the whole DeepDive team, especially Jaeho Shin and Michael Cafarella, who help manage the team.

I am also extremely lucky to be surrounded by my friends. Heng Guo, Yinan Li, Linhai Song, Jia Xu, Junming Xu, Wentao Wu, and I have lunch frequently whenever we are in town, and this often amounts to the happiest hour of the entire day. Heng, a theoretician, has also borne the burden of listening to me pitch my system ideas for years, simply because he is, unfortunately for him, my roommate. Feng Niu mentored me through my early years at Wisconsin; even today, whenever I struggle to decide what the right thing to do is, I still imagine what he would have done in a similar situation. Victor Bittorf sparked my interest in modern hardware through his beautiful code and lucid tutorial.
Over the years, I learned a lot from Yinan and Wentao about modern database systems, from Linhai about program analysis, and from Heng about counting. Finally, I would also like to thank my parents for all their love over the last twenty-seven years and, I hope, in the future. Anything that I could write about them in any language, even in my native Chinese, would pale beside everything they have done to support me in my life.

My graduate study has been supported by the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory prime contract No. FA8750-09-C-0181, the DARPA DEFT Program under No. FA8750-13-2-0039, the National Science Foundation (NSF) EAGER Award under No. EAR-1242902, and the NSF EarthCube Award under No. ACI-1343760. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, NSF, or the U.S. government.

The results mentioned in this dissertation come from previously published work [16, 84, 90, 148, 157, 171, 204–207]. Some descriptions are taken directly from these papers. These papers are joint efforts with different authors, including: F. Abuzaid, G. Angeli, V. Bittorf, V. Govindaraju, S. Gupta, S. Hadjis, M. Premkumar, A. Kumar, M. Livny, C. Manning, F. Niu, C. Ré, S. Peters, C. Sa, A. Sadeghian, Z. Shan, J. Shavlik, J. Shin, J. Tibshirani, F. Wang, J. Wu, and S. Wu. Although this dissertation bears my name, this collection of research would not have been possible without the contributions of all these collaborators.

ABSTRACT

Many pressing questions in science are macroscopic: they require scientists to consult information expressed in a wide range of resources, many of which are not organized in a structured relational form. Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database storing factual information, from unstructured inputs. KBC holds the promise of facilitating a range of macroscopic sciences by making information accessible to scientists. One key challenge in building a high-quality KBC system is that developers must often deal with data that are both diverse in type and large in size. Further complicating the scenario is that these data need to be manipulated by both relational operations and state-of-the-art machine-learning techniques. This dissertation focuses on supporting this complex process of building KBC systems.

DeepDive is a data management system that we built to study this problem; its ultimate goal is to allow scientists to build a KBC system by declaratively specifying domain knowledge without worrying about any algorithmic, performance, or scalability issues. DeepDive was built by generalizing from our experience in building more than ten high-quality KBC systems, many of which exceed human quality or are top-performing systems in KBC competitions, and many of which were built completely by scientists or industry users using DeepDive. From these examples, we designed a declarative language to specify a KBC system and a concrete protocol that iteratively improves the quality of KBC systems. This flexible framework introduces challenges of scalability and performance: many KBC systems built with DeepDive contain statistical inference and learning tasks over terabytes of data, and the iterative protocol also requires executing similar inference problems multiple times.
Motivated by these challenges, we designed techniques that make both the batch execution and the incremental execution of a KBC program up to two orders of magnitude more efficient and/or scalable. This dissertation describes the DeepDive framework, its applications, and these techniques, to demonstrate the thesis that it is feasible to build an efficient and scalable data management system for the end-to-end workflow of building KBC systems.

SOFTWARE, DATA, AND VIDEOS

• DeepDive is available at http://deepdive.stanford.edu. It is a team effort to further develop and maintain this system.
• The DeepDive program of PaleoDeepDive (Chapter 4) is available at http://deepdive.stanford.edu/doc/paleo.html and can be executed with DeepDive v0.6.0α.
• The code for the prototype systems we developed to study the tradeoff between performance and scalability is also available separately:
  1. Elementary (Chapter 5.1): http://i.stanford.edu/hazy/hazy/elementary
  2. DimmWitted (Chapter 5.2): http://github.com/HazyResearch/dimmwitted. This was developed together with Victor Bittorf.
  3. Columbus (Chapter 6.1): http://github.com/HazyResearch/dimmwitted/tree/master/columbus. This was developed together with Arun Kumar.
  4. Incremental Feature Engineering (Chapter 6.2): DeepDive v0.6.0α. This was developed together with the DeepDive team, especially Jaeho Shin, Feiran Wang, and Sen Wu.
• Most data that we can legally release are available as DeepDive Open Datasets at deepdive.stanford.edu/doc/opendata. The computational resources to produce these data were made possible by millions of machine hours provided by the Center for High Throughput Computing (CHTC), led by Miron Livny at UW-Madison.
• These introductory videos on our YouTube channel are related to this dissertation:
  1. PaleoDeepDive: http://www.youtube.com/watch?v=Cj2-dQ2nwoY
  2. GeoDeepDive: http://www.youtube.com/watch?v=X8uhs28O3eA
  3. Wisci: http://www.youtube.com/watch?v=Q1IpE9_pBu4
  4. Columbus: http://www.youtube.com/watch?v=wdTds3yg_G4
  5. Elementary: http://www.youtube.com/watch?v=Zf7sgMnR89c and http://www.youtube.com/watch?v=B5LxXGIkYe4

DEPENDENCIES OF READING

[Figure: reading-dependency diagram over Sections 2.1 and 2.2 and Chapters 1, 3, 4, 5, 6, 7, and 8, with a red solid reading path and a blue dotted reading path.]

• If you are only interested in how DeepDive can be used to construct knowledge bases for your applications, and in example systems built with DeepDive, read along the red solid line.
• If you are only interested in our work on efficient and scalable statistical analytics, read along the blue dotted line.
• Otherwise, read the chapters in linear order.

CONTENTS

Acknowledgments
Abstract
Software, Data, and Videos
Dependencies of Reading
Contents
List of Figures
List of Tables
1 Introduction
  1.1 Sample Target Workload
  1.2 The Goal of DeepDive
  1.3 Technical Contributions
2 Preliminaries
  2.1 Knowledge Base Construction (KBC)
  2.2 Factor Graphs