Bayesian Model Selection of Stochastic Block Models
Recommended publications
-
Equivalence Between Minimal Generative Model Graphs and Directed Information Graphs
Christopher J. Quinn (Department of Electrical and Computer Engineering), Negar Kiyavash (Department of Industrial and Enterprise Systems Engineering), and Todd P. Coleman (Department of Electrical and Computer Engineering), University of Illinois, Urbana, Illinois 61801.

Abstract—We propose a new type of probabilistic graphical model, based on directed information, to represent the causal dynamics between processes in a stochastic system. We show the practical significance of such graphs by proving their equivalence to generative model graphs, which succinctly summarize interdependencies for causal dynamical systems under mild assumptions. This equivalence means that directed information graphs may be used for causal inference and learning tasks in the same manner Bayesian networks are used for correlative statistical inference and learning.

I. INTRODUCTION
Graphical models facilitate design and analysis of interacting multivariate probabilistic complex systems that arise in numerous fields such as statistics, information theory, machine learning [...] own past [4], [5]. This improvement is measured in terms of average reduction in the encoding length of the information. Clearly such a definition of causality has the notion of timing ingrained in it. Therefore, we expect a graphical model defined based on directed information to deal with processes as its nodes. In this work, we formally define directed information graphs, an alternative type of graphical model. In these graphs, nodes represent processes, and directed edges correspond to whether there is directed information from one process to another. To demonstrate the practical significance of directed information graphs, we validate their ability to capture causal interdependencies for a class of interacting processes corresponding to a causal dynamical system.
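The excerpt uses directed information without restating its definition. For context only (this is the standard Marko/Massey definition, not a formula quoted from the paper), the directed information from a length-n process X^n to Y^n is

    I(X^n \to Y^n) = \sum_{t=1}^{n} I(X^t ; Y_t \mid Y^{t-1}),

i.e. the total reduction in uncertainty about each Y_t provided by the past and present of X beyond what Y's own past already provides — the "average reduction in the encoding length" that the introduction alludes to.
-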
Bayesian Stochastic Blockmodeling (arXiv:1705.10225v8 [stat.ML], 6 Feb 2020)
Tiago P. Peixoto, Department of Mathematical Sciences and Centre for Networks and Collective Behaviour, University of Bath, United Kingdom, and ISI Foundation, Turin, Italy. To appear in "Advances in Network Clustering and Blockmodeling," edited by P. Doreian, V. Batagelj, A. Ferligoj (Wiley, New York, 2019 [forthcoming]).

This chapter provides a self-contained introduction to the use of Bayesian inference to extract large-scale modular structures from network data, based on the stochastic blockmodel (SBM), as well as its degree-corrected and overlapping generalizations. We focus on nonparametric formulations that allow their inference in a manner that prevents overfitting, and enables model selection. We discuss aspects of the choice of priors, in particular how to avoid underfitting via increased Bayesian hierarchies, and we contrast the task of sampling network partitions from the posterior distribution with finding the single point estimate that maximizes it, while describing efficient algorithms to perform either one. We also show how inferring the SBM can be used to predict missing and spurious links, and shed light on the fundamental limitations of the detectability of modular structures in networks.

Contents: I. Introduction; II. Structure versus randomness in networks; III. The stochastic blockmodel (SBM); IV. Bayesian inference: the posterior probability of partitions; V. Microcanonical models and the minimum description length principle (MDL); VI. The "resolution limit" underfitting problem, and the nested SBM; VII. Model variations — A. Model selection; B. Degree correction; C. […]
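The "posterior probability of partitions" around which the chapter is organized is not written out in this excerpt. As a sketch in our own notation (plain Bayes' rule, not a quotation from the chapter): given an observed adjacency matrix A and a partition b of the nodes into groups,

    P(b \mid A) = \frac{P(A \mid b)\, P(b)}{P(A)},

where P(A | b) is the SBM likelihood with its remaining parameters integrated out and P(b) is the prior over partitions; the minimum description length view mentioned in Sec. V corresponds to minimizing -\log P(A, b).
-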
A Hierarchical Generative Model of Electrocardiogram Records
Andrew C. Miller (School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138), Sendhil Mullainathan (Department of Economics, Harvard University, Cambridge, MA 02138), and Ziad Obermeyer (Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115).

Abstract
We develop a probabilistic generative model of electrocardiogram (EKG) tracings. Our model describes multiple sources of variation in EKGs, including patient-specific cardiac cycle morphology and between-cycle variation that leads to quasi-periodicity. We use a deep generative network as a flexible model component to describe variation in beat-specific morphology. We apply our model to a set of 549 EKG records, including over 4,600 unique beats, and show that it is able to discover interpretable information, such as patient similarity and meaningful physiological features (e.g., T wave inversion).

1 Introduction
An electrocardiogram (EKG) is a common non-invasive medical test that measures the electrical activity of a patient's heart by recording the time-varying potential difference between electrodes placed on the surface of the skin. The resulting data is a multivariate time-series that reflects the depolarization and repolarization of the heart muscle that occurs during each heartbeat. These raw waveforms are then inspected by a physician to detect irregular patterns that are evidence of an underlying physiological problem. In this paper, we build a hierarchical generative model of electrocardiogram signals that disentangles sources of variation. We are motivated to build a generative model of EKGs for multiple reasons.
-
Bias Correction of Learned Generative Models Using Likelihood-Free Importance Weighting
Aditya Grover (1), Jiaming Song (1), Alekh Agarwal (2), Kenneth Tran (2), Ashish Kapoor (2), Eric Horvitz (2), Stefano Ermon (1) — (1) Stanford University, (2) Microsoft Research, Redmond.

Abstract
A learned generative model often produces biased statistics relative to the underlying data distribution. A standard technique to correct this bias is importance sampling, where samples from the model are weighted by the likelihood ratio under model and true distributions. When the likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. We show that this likelihood-free importance weighting method induces a new energy-based model and employ it to correct for the bias in existing models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art deep generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.

1 Introduction
Learning generative models of complex environments from high-dimensional observations is a long-standing challenge in machine learning. Once learned, these models are used to draw inferences and to plan future actions. For example, in data augmentation, samples from a learned model are used to enrich a dataset for supervised learning [1]. In model-based off-policy policy evaluation (henceforth MBOPE), a learned dynamics model is used to simulate and evaluate a target policy without real-world deployment [2], which is especially valuable for risk-sensitive applications [3].
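The classifier-based ratio estimate described in the abstract is the standard density-ratio trick. Below is a minimal, self-contained Python sketch of that idea; the use of logistic regression, the balanced 50/50 training split, and all variable names are illustrative choices made here, not details taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch (not the paper's code): classifier-based likelihood-free importance weights.
def importance_weights(data_samples, model_samples, eps=1e-6):
    # Train a probabilistic classifier to tell data (label 1) from model samples (label 0).
    # For a balanced training set, Bayes' rule gives p_data(x) / p_model(x) = c(x) / (1 - c(x)).
    X = np.vstack([data_samples, model_samples])
    y = np.concatenate([np.ones(len(data_samples)), np.zeros(len(model_samples))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    c = clf.predict_proba(model_samples)[:, 1]      # estimated P(real | x) for each model sample
    return c / np.maximum(1.0 - c, eps)             # estimated likelihood ratio w(x)

# Toy check: reweight model samples to estimate E_data[x^2] when the learned model is biased.
rng = np.random.default_rng(0)
data = rng.normal(0.5, 1.0, size=(2000, 1))         # stand-in for the true data distribution
model = rng.normal(0.0, 1.0, size=(2000, 1))        # stand-in for a biased learned model
w = importance_weights(data, model)
f = model[:, 0] ** 2
print("unweighted estimate:", f.mean(), "weighted estimate:", np.sum(w * f) / np.sum(w))

The self-normalized weighted average in the last line should land closer to the true value E_data[x^2] = 1.25 than the unweighted one, which is the kind of bias correction the paper measures with goodness-of-fit metrics.
-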
Generative Models for Dimensionality Reduction: Probabilistic PCA and Factor Analysis
Piyush Rai, Machine Learning (CS771A), Oct 5, 2016.

Generative model for dimensionality reduction: assume the following generative story for each x_n ∈ R^D.

- Generate latent variables z_n ∈ R^K (K ≪ D) as z_n ∼ N(0, I_K).
- Generate data x_n conditioned on z_n as x_n ∼ N(W z_n, σ² I_D), where W is the D × K "factor loading matrix" or "dictionary", and z_n is the K-dimensional vector of latent features or latent factors — the "coding" of x_n with respect to W.

Note: we can also write x_n as a linear transformation of z_n plus Gaussian noise, x_n = W z_n + ε_n where ε_n ∼ N(0, σ² I_D). This is "Probabilistic PCA" (PPCA) with a Gaussian observation model; we want to learn the model parameters W, σ² and the latent factors {z_n}_{n=1}^N. When ε_n ∼ N(0, Ψ) with Ψ diagonal, the model is called "Factor Analysis" (FA).
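A minimal numpy sketch of this generative story (the dimensions, parameter values, and seed below are illustrative choices, not taken from the slides):

import numpy as np

# Illustrative sketch of the PPCA generative story; all numbers are arbitrary.
rng = np.random.default_rng(0)
N, D, K = 500, 10, 2                          # number of points, data dim, latent dim (K << D)
W = rng.normal(size=(D, K))                   # factor loading matrix / dictionary
sigma = 0.1

Z = rng.normal(size=(N, K))                   # z_n ~ N(0, I_K)
X = Z @ W.T + sigma * rng.normal(size=(N, D)) # x_n = W z_n + eps_n,  eps_n ~ N(0, sigma^2 I_D)

# Under PPCA the marginal covariance of x_n is W W^T + sigma^2 I_D;
# the sample covariance should approach it as N grows.
emp = np.cov(X, rowvar=False)
theory = W @ W.T + sigma**2 * np.eye(D)
print("max |sample - model| covariance entry:", np.abs(emp - theory).max())

Replacing the shared noise variance σ² I_D with a diagonal matrix Ψ (one variance per observed dimension) turns the same sketch into the Factor Analysis model mentioned at the end of the slide.
-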
Learning a Generative Model of Images by Factoring Appearance and Shape
Nicolas Le Roux (1), Nicolas Heess (2), Jamie Shotton (1), John Winn (1) — (1) Microsoft Research Cambridge, (2) University of Edinburgh.
Keywords: generative model of images, occlusion, segmentation, deep networks.

Abstract
Computer vision has grown tremendously in the last two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step towards richer, more flexible models of images. After comparing various types of RBMs able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.

1 Introduction
Despite much progress in the field of computer vision in recent years, interpreting and modeling the bewildering structure of natural images remains a challenging problem. The limitations of even the most advanced systems become strikingly obvious when contrasted with the ease, flexibility, and robustness with which the human visual system analyzes and interprets an image. Computer vision is a problem domain where the structure that needs to be represented is complex and strongly task dependent, and the input data is often highly ambiguous.
-
On Community Structure in Complex Networks: Challenges and Opportunities
Hocine Cherifi (LIB EA 7534, University of Burgundy, Esplanade Erasme, Dijon, France; hocine.cherifi@u-bourgogne.fr), Gergely Palla (MTA-ELTE Statistical and Biological Physics Research Group, Pázmány P. stny. 1/A, Budapest, H-1117, Hungary), Boleslaw K. Szymanski and Xiaoyan Lu (Department of Computer Science & Network Science and Technology Center, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA). Received: November 5, 2019. arXiv:1908.04901v3 [physics.soc-ph] 6 Nov 2019.

Abstract
Community structure is one of the most relevant features encountered in numerous real-world applications of networked systems. Despite the tremendous effort of a large interdisciplinary community of scientists working on this subject over the past few decades to characterize, model, and analyze communities, more investigations are needed in order to better understand the impact of community structure and its dynamics on networked systems. Here, we first focus on generative models of communities in complex networks and their role in developing strong foundation for community detection algorithms. We discuss modularity and the use of modularity maximization as the basis for community detection. Then, we follow with an overview of the Stochastic Block Model and its different variants as well as inference of community structures from such models. Next, we focus on time evolving networks, where existing nodes and links can disappear, and in parallel new nodes and links may be introduced. The extraction of communities under such circumstances poses an interesting and non-trivial problem that has gained considerable interest over the last decade.
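The abstract refers to modularity maximization without restating the objective. For reference only (the standard Newman-Girvan definition, not quoted from the paper), the modularity of a partition assigning node i to community c_i is

    Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j),

where A is the adjacency matrix, k_i the degree of node i, and m the number of edges; modularity maximization searches for the partition with the largest Q.
-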
Community Detection with Dependent Connectivity
Yubai Yuan and Annie Qu, University of Illinois at Urbana-Champaign. Submitted to the Annals of Statistics.

Abstract
In network analysis, within-community members are more likely to be connected than between-community members, which is reflected in that the edges within a community are intercorrelated. However, existing probabilistic models for community detection such as the stochastic block model (SBM) are not designed to capture the dependence among edges. In this paper, we propose a new community detection approach to incorporate intra-community dependence of connectivities through the Bahadur representation. The proposed method does not require specifying the likelihood function, which could be intractable for correlated binary connectivities. In addition, the proposed method allows for heterogeneity among edges between different communities. In theory, we show that incorporating correlation information can achieve a faster convergence rate compared to the independent SBM, and the proposed algorithm has a lower estimation bias and accelerated convergence compared to the variational EM. Our simulation studies show that the proposed algorithm outperforms the popular variational EM algorithm assuming conditional independence among edges. We also demonstrate the application of the proposed method to agricultural product trading networks from different countries.

1. Introduction
Network data has arisen as one of the most common forms of information collection. This is due to the fact that the scope of study not only focuses on subjects alone, but also on the relationships among subjects. Networks consist of two components: (1) nodes or vertices corresponding to basic units of a system, and (2) edges representing connections between nodes.
-
Developing a Credit Scoring Model Using Social Network Analysis
Ahmad Abd Rabuh (UP821483). This thesis is submitted in fulfilment of the requirements for the award of the degree of Doctor of Philosophy in the School of Business and Law at the University of Portsmouth, November 2020.

Author's Declaration
Whilst registered as a candidate for the above degree, I have not been registered for any other research award. The results and conclusions embodied in this thesis are the work of the named candidate and have not been submitted for any other academic award.
Signature — Name: Ahmad Abd Rabuh; Date: the 30th of April 2020; Word Count: 63,809.

Acknowledgements
As I witness this global pandemic, my greatest gratitude goes to my mother, Hanan, for her moral support and encouragement through the good, the bad and the ugly. During the period of my PhD revision, I received mentoring and help from my future lifetime partner, Kawthar. Also, I have been blessed with the help of wonderful people from Scholars at Risk, Rose Anderson and Sarina Rosenthal, who assisted me in the PhD application and scholarship processes. Additionally, I cannot be thankful enough to the Associate Dean of Research, Andy Thorpe, who supported me in every possible way during my time at the University of Portsmouth. Finally, many thanks to my supervisors, Mark Xu and Renatas Kizys, for their continuous guidance.

Table of Contents (excerpt): Abstract; Chapter 1: Introduction; 1.1 Research Questions; 1.2 Aims and Objectives […]
-
Neural Decomposition: Functional ANOVA with Variational Autoencoders
Kaspar Märtens (University of Oxford) and Christopher Yau (Alan Turing Institute, University of Birmingham, University of Manchester).

Abstract
Variational Autoencoders (VAEs) have become a popular approach for dimensionality reduction. However, despite their ability to identify latent low-dimensional structures embedded within high-dimensional data, these latent representations are typically hard to interpret on their own. Due to the black-box nature of VAEs, their utility for healthcare and genomics applications has been limited. In this paper, we focus on characterising the sources of variation in Conditional VAEs. Our goal is to provide a feature-level variance decomposition, i.e. to decompose variation in the data by separating out the marginal additive effects of latent variables z and fixed inputs c from their non-linear interactions.

[…] give insight into patterns of interest embedded within the data. Recently there has been particular interest in applications of Variational Autoencoders (VAEs) (Kingma and Welling, 2014) for generative modelling and dimensionality reduction. VAEs are a class of probabilistic neural-network-based latent variable models. Their neural network (NN) underpinnings offer considerable modelling flexibility while modern programming frameworks (e.g. TensorFlow and PyTorch) and computational techniques (e.g. stochastic variational inference) provide immense scalability to large data sets, including those in genomics (Lopez et al., 2018; Eraslan et al., 2019; Märtens and Yau, 2020). The Conditional VAE (CVAE) (Sohn et al., 2015) extends this model to allow incorporating side information as additional fixed inputs. The ability of (C)VAEs to extract latent, low- […]
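The "feature-level variance decomposition" described above is a functional ANOVA. Written out in our own notation (a sketch of the idea, not an equation quoted from the paper), the modelled output for each observed feature is split as

    f(z, c) = f_0 + f_z(z) + f_c(c) + f_{zc}(z, c),

so that the marginal additive effects of the latent variables z and the fixed inputs c are separated from the term capturing their non-linear interaction.
-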
Note 18: Gaussian Discriminant Analysis
CS 189 Introduction to Machine Learning, Spring 2018, Note 18.

1 Gaussian Discriminant Analysis
Recall the idea of generative models: we classify an arbitrary datapoint x with the class label that maximizes the joint probability p(x, Y) over the label Y:

    ŷ = arg max_k p(x, Y = k)

Generative models typically form the joint distribution by explicitly forming the following:
• A prior probability distribution over all classes: P(k) = P(class = k)
• A conditional probability distribution for each class k ∈ {1, 2, ..., K}: p_k(X) = p(X | class k)

In total there are K + 1 probability distributions: 1 for the prior, and K for all of the individual classes. Note that the prior probability distribution is a categorical distribution over the K discrete classes, whereas each class-conditional probability distribution is a continuous distribution over R^d (often represented as a Gaussian). Using the prior and the conditional distributions in conjunction, we have (from Bayes' rule) that maximizing the joint probability over the class labels is equivalent to maximizing the posterior probability of the class label:

    ŷ = arg max_k p(x, Y = k) = arg max_k P(k) p_k(x) = arg max_k P(Y = k | x)

Consider the example of digit classification. Suppose we are given a dataset of images of handwritten digits, each with known values in the range {0, 1, 2, ..., 9}. The task is, given an image of a handwritten digit, to classify it to the correct digit. A generative classifier for this task would effectively form the prior distribution and conditional probability distributions over the 10 possible digits and choose the digit that maximizes the posterior probability:

    ŷ = arg max_{k ∈ {0,1,...,9}} p(digit = k | image) = arg max_{k ∈ {0,1,...,9}} P(digit = k) p(image | digit = k)

Maximizing the posterior will induce regions in the feature space in which one class has the highest posterior probability, and decision boundaries in between classes where the posterior probabilities of two classes are equal.
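A compact numpy sketch of such a generative classifier, using Gaussian class-conditionals with a single shared spherical variance (a simplifying choice made here for brevity; all names and the toy data below are ours, not the note's):

import numpy as np

# Illustrative sketch: fit P(k) and p_k(x) = N(mu_k, sigma^2 I), predict argmax_k P(k) p_k(x).
class GaussianGenerativeClassifier:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors = np.array([np.mean(y == k) for k in self.classes])
        self.means = np.array([X[y == k].mean(axis=0) for k in self.classes])
        idx = np.searchsorted(self.classes, y)                # class index of each point
        self.sigma2 = np.mean((X - self.means[idx]) ** 2)     # shared spherical variance
        return self

    def predict(self, X):
        # log P(k) + log p_k(x), dropping terms that are constant in k
        d2 = ((X[:, None, :] - self.means[None, :, :]) ** 2).sum(axis=2)
        log_post = np.log(self.priors)[None, :] - d2 / (2 * self.sigma2)
        return self.classes[np.argmax(log_post, axis=1)]

# Toy check with two well-separated Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print("training accuracy:", (GaussianGenerativeClassifier().fit(X, y).predict(X) == y).mean())

The decision regions this induces are exactly the regions where one class attains the highest posterior, with boundaries where two posteriors are equal, as described above.
-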
Logistic Regression, Generative and Discriminative Classifiers
Tom M. Mitchell, Machine Learning 10-701, Carnegie Mellon University (thanks to Ziv Bar-Joseph and Andrew Moore for some slides).
Recommended reading: "On Discriminative vs. Generative classifiers: A comparison of logistic regression and naïve Bayes," A. Ng and M. Jordan, NIPS 2002.

Overview
Last lecture: naïve Bayes classifier; number of parameters to estimate; conditional independence.
This lecture: logistic regression; generative and discriminative classifiers; (if time) bias and variance in learning.

Generative vs. Discriminative Classifiers
Training classifiers involves estimating f: X → Y, or P(Y|X).
Generative classifiers:
• Assume some functional form for P(X|Y), P(X)
• Estimate parameters of P(X|Y), P(X) directly from training data
• Use Bayes rule to calculate P(Y|X = x_i)
Discriminative classifiers:
• Assume some functional form for P(Y|X)
• Estimate parameters of P(Y|X) directly from training data

Consider learning f: X → Y, where X is a vector of real-valued features <X_1 ... X_n> and Y is boolean. So we use a Gaussian naïve Bayes classifier: assume all X_i are conditionally independent given Y, model P(X_i | Y = y_k) as Gaussian N(μ_ik, σ), and model P(Y) as binomial(p). What does that imply about the form of P(Y|X)?
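The slide leaves that closing question open. For reference (this is the standard result such lectures build toward, stated here rather than quoted from the excerpt): under exactly those Gaussian naïve Bayes assumptions, with each feature's variance shared across the two classes, the posterior takes the logistic form

    P(Y = 1 \mid X) = \frac{1}{1 + \exp\left(w_0 + \sum_{i=1}^{n} w_i X_i\right)},

where the weights w_i are functions of the class-conditional means, the variances, and the class prior — i.e. precisely the functional form that logistic regression assumes for P(Y|X) and fits directly, which is the sense in which the two are a generative/discriminative pair.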