
Statistical Inference in Graphical Models

Kevin Gimpel
October 30, 2006

ABSTRACT

Graphical models fuse probability theory and graph theory in such a way as to permit efficient representation and computation with probability distributions. They intuitively capture statistical relationships among random variables in a distribution and exploit these relationships to permit tractable algorithms for statistical inference. In recent years, certain types of graphical models, particularly undirected graphical models, Bayesian networks, and dynamic Bayesian networks (DBNs), have been shown to be applicable to various problems in air and missile defense that involve decision making under uncertainty and estimation in dynamic systems. While the scope of problems addressed by such systems is quite diverse, all require mathematically sound machinery for dealing with uncertainty. Graphical models provide a robust, flexible framework for representing and computationally handling uncertainty in real-world problems. While the graphical model regime is relatively new, it has deep roots in many fields, as the formalism generalizes many commonly used stochastic models, including Kalman filters [25] and hidden Markov models [47]. Statistical inference on a graphical model is NP-hard in general, but there have been extensive efforts aimed at developing efficient algorithms for certain classes of models as well as at obtaining approximations for quantities of interest. The literature offers a rich collection of both exact and approximate statistical inference algorithms, several of which we will describe in detail in this report.

TABLE OF CONTENTS

Abstract
List of Illustrations
1. INTRODUCTION
   1.1 Two Examples
2. GRAPHICAL MODELS
   2.1 Notation and Preliminary Definitions
   2.2 Undirected Graphical Models
   2.3 Bayesian Networks
   2.4 Dynamic Bayesian Networks
3. STATISTICAL INFERENCE
   3.1 Variable Elimination
   3.2 Belief Propagation
   3.3 Inference in Dynamic Bayesian Networks
4. EXACT INFERENCE ALGORITHMS
   4.1 The Junction Tree Algorithm
   4.2 Symbolic Probabilistic Inference
5. ALGORITHMS FOR APPROXIMATE INFERENCE
   5.1 The Boyen-Koller Algorithm
   5.2 Particle Filtering
   5.3 Gibbs Sampling
APPENDIX A
   A.1 Independence ≠ Conditional Independence
   A.2 Derivation of pairwise factorization for acyclic undirected graphical models
   A.3 Derivation of belief propagation for continuous nodes with evidence applied
References

LIST OF ILLUSTRATIONS

Figure 1. An example Bayesian network for medical diagnosis.
Figure 2. An undirected graphical model with five nodes.
Figure 3. u-separation and the global Markov property.
Figure 4. An acyclic undirected graphical model.
Figure 5. A Bayesian network with five nodes.
Figure 6. Serial connection.
Figure 7. Diverging connection.
Figure 8. Converging connection.
Figure 9. Boundary conditions.
Figure 10. Bayes ball algorithm example run.
Figure 11. The Markov blanket of a node X.
Figure 12. Equivalent way of specifying the local Markov property for Bayesian networks.
Figure 13. A 2-timeslice Bayesian network (2-TBN).
Figure 14. The 2-TBN for a hidden Markov model (HMM) or Kalman filter.
Figure 15. An acyclic undirected graphical model.
Figure 16. A message passed from node F to node E.
Figure 17. Message passes from an example run of the variable elimination algorithm to compute p(A).
Figure 18. Collect-to-root and Distribute-from-root, the two series of message passing.
Figure 19. The main types of inference for DBNs.
Figure 20. Moralization and triangulation stages in the junction tree algorithm.
Figure 21. Junction tree created from the triangulated graph.
Figure 22. Message passing in the junction tree.
Figure 23. A 2-TBN showing timeslices t − 1 and t.
Figure 24. Steps to construct a junction tree from the prior B0.
Figure 25. Steps to construct a junction tree from the 2-TBN Bt.
Figure 26. Procedure for advancing timesteps.
Figure 27. A network to illustrate the set factoring algorithm.
Figure 28. The 2-TBN we used to demonstrate execution of the junction tree algorithm for DBNs.
Figure 29. Procedure for advancing timesteps in the Boyen-Koller algorithm.
Figure 30. The steps in particle filtering.
Figure A-1. An undirected graphical model with observation nodes shown.
Figure A-2. Collect-to-root and Distribute-from-root, the two series of message passing.

1. INTRODUCTION

[Figure 1. An example Bayesian network for medical diagnosis (adapted from [21]), with disease nodes Cold? and Angina? and symptom nodes Fever?, Sore Throat?, and See Spots?]

Research on graphical models has exploded in recent years due to their applicability in a wide range of fields, including machine learning, artificial intelligence, computer vision, signal processing, speech recognition, multi-target tracking, natural language processing, bioinformatics, error correction coding, and others. However, different fields tend to have their own classical problems and methodologies for solving those problems, so graphical model research in one field may take a very different shape than in another. As a result, innovations are scattered throughout the literature, and it takes considerable effort to develop a cohesive perspective on the graphical model framework as a whole.

1.1 TWO EXAMPLES

To make our discussion concrete, we first introduce examples of two types of real-world problems that graphical models are often used to solve. The first consists of building a system to determine the most appropriate classification of a situation by modeling the interactions among the various quantities involved. In particular, we will consider a graphical model that is a simplification of the types of models found in medical diagnosis. In this example, we want to determine the diseases a patient may have and their likelihoods given the reported symptoms. A natural way of approaching this problem is to model probabilistically the impact of each illness upon its possible symptoms, and then to infer a distribution over the most likely diseases when given an actual set of symptoms observed in a patient.

Figure 1 shows a Bayesian network for a toy model consisting of 2 diseases and 3 symptoms. Each node of a Bayesian network corresponds to a random variable, and the edges encode information about statistical relationships among the variables. In certain applications, it is natural to attribute a "causal" interpretation to the directed edges between variables. For example, in Figure 1 having a cold can be interpreted as having a direct or causal influence on the appearance of a fever or sore throat. When building a Bayesian network, a conditional probability distribution must be specified for each variable to quantify this influence; in particular, one must delineate the distribution of each variable given its parents, the set of variables pointing to it. In Section 2, we will describe in detail why these parameters are sufficient to specify a Bayesian network and how we can make use of them to allow for efficient representation and computation with the joint distribution represented by the network.
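To illustrate how these conditional distributions define a full joint distribution, the following is a minimal Python sketch of the toy network in Figure 1, assuming binary variables and entirely made-up probabilities. The posterior over Cold? given observed symptoms is computed by brute-force enumeration, which is feasible only for networks this small.

import itertools

# Priors and conditional probability tables (all numbers are made up).
p_cold = {1: 0.05, 0: 0.95}              # p(Cold)
p_angina = {1: 0.02, 0: 0.98}            # p(Angina)
p_fever1 = {(1, 1): 0.90, (1, 0): 0.60,  # p(Fever = 1 | Cold, Angina)
            (0, 1): 0.70, (0, 0): 0.01}
p_throat1 = {(1, 1): 0.95, (1, 0): 0.50, # p(Sore Throat = 1 | Cold, Angina)
             (0, 1): 0.80, (0, 0): 0.02}
p_spots1 = {1: 0.30, 0: 0.001}           # p(See Spots = 1 | Angina)

def joint(c, a, f, s, sp):
    """p(C, A, F, S, Sp) = p(C) p(A) p(F | C, A) p(S | C, A) p(Sp | A)."""
    pf = p_fever1[(c, a)] if f else 1.0 - p_fever1[(c, a)]
    ps = p_throat1[(c, a)] if s else 1.0 - p_throat1[(c, a)]
    psp = p_spots1[a] if sp else 1.0 - p_spots1[a]
    return p_cold[c] * p_angina[a] * pf * ps * psp

# Posterior over Cold given Fever = 1 and Sore Throat = 1: sum the joint
# over the unobserved variables (Angina, See Spots), then normalize.
unnorm = {c: sum(joint(c, a, 1, 1, sp)
                 for a, sp in itertools.product((0, 1), repeat=2))
          for c in (0, 1)}
total = unnorm[0] + unnorm[1]
print("p(Cold = 1 | Fever = 1, Sore Throat = 1) =", unnorm[1] / total)

Summing the joint over the unobserved variables is exactly the computation that the inference algorithms of Sections 3 and 4 organize efficiently for networks too large to enumerate.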
Graphical models have been applied to medical diagnosis in several real-world systems, including the decision-theoretic formulation of the Quick Medical Reference (QMR) [56], which includes approximately 600 diseases and approximately 4,000 symptoms "caused" by the diseases.

The other type of problem we shall consider is that of estimation in dynamic systems. To approach this problem, we assume the existence of an underlying stochastic process that we cannot observe directly, but for which we have a time series of noisy observations. These observations are used to refine an estimate of the process's hidden state. An example of this type of problem is automatic speech recognition, in which the process being modeled is human speech and the hidden state consists of the phoneme or word currently being spoken. We use the observation sequence, which may be the sequence of speech signal waveforms produced by the speaker, to obtain an estimate of what the speaker actually said. This type of problem can also be represented using a graphical model, as we shall see in Section 2.4. Doing so provides access to the wealth of inference techniques developed by the graphical model community for estimating the hidden state.

In general, problems such as these can be approached by representing the system being modeled as a joint probability distribution over all the quantities involved and then encapsulating this distribution in a graphical model. Frequently, the structure of the models is sparse due to conditional independencies among the variables being modeled; the graphical model machinery exploits this sparsity to provide an efficient representational form for the distribution. Then, given observations, we can perform statistical inference on the models to obtain distributions over the desired quantities.

The two examples described above use different kinds of graphical models, both of which we shall discuss in detail in this report. The expert system problem is well suited to a Bayesian network, while the dynamic state estimation problem is often approached using a dynamic Bayesian network (DBN) model. Many commonly used temporal models, including hidden Markov models (HMMs) and state-space models, are actually special cases of DBNs.

The purpose of graphical modeling is to exploit the statistical relationships of the quantities being modeled for representational and computational efficiency. The relevant computations may include likelihoods, expectations, entropies, or other statistical quantities of interest. Obtaining any of these quantities requires performing statistical inference on the model under consideration. In the most general case, exact statistical inference in graphical models is NP-hard, and the required computations become intractable on a single computer for even moderately sized problems [9]. As a result, efficient inference algorithms have been developed to compute exact results for certain subclasses of networks or for particular sets of queries, and there has also been a large focus on the design and convergence analysis of approximation schemes for general networks that use Monte Carlo sampling techniques or Markov chain Monte Carlo (MCMC) methods.
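To make the dynamic estimation setting concrete, the following is a minimal sketch of recursive filtering in a discrete hidden Markov model, the simplest of the DBN special cases mentioned above. The transition matrix, observation matrix, and observation sequence are illustrative assumptions, not quantities from this report.

import numpy as np

# Two hidden states, two observation symbols; all numbers are made up.
T = np.array([[0.9, 0.1],      # T[i, j] = p(x_t = j | x_{t-1} = i)
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],      # O[i, k] = p(y_t = k | x_t = i)
              [0.1, 0.9]])
belief = np.array([0.5, 0.5])  # prior p(x_0)

# Recursive filtering: p(x_t | y_1..y_t) is proportional to
# p(y_t | x_t) * sum over x_{t-1} of p(x_t | x_{t-1}) p(x_{t-1} | y_1..y_{t-1}).
for y in (0, 0, 1, 1):               # a hypothetical observation sequence
    belief = (belief @ T) * O[:, y]  # predict, then weight by likelihood
    belief /= belief.sum()           # normalize
    print("belief:", belief)

Each iteration performs a prediction step through the transition model followed by a correction step that reweights by the observation likelihood; this predict-correct structure recurs throughout the DBN inference algorithms discussed in Sections 3 and 5.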