Learning Belief Networks in the Presence of Missing Values and Hidden Variables

Nir Friedman
Computer Science Division, 387 Soda Hall
University of California, Berkeley, CA 94720
[email protected]

Abstract

In recent years there has been a flurry of work on learning probabilistic belief networks. Current state-of-the-art methods have been shown to be successful for two learning scenarios: learning both network structure and parameters from complete data, and learning parameters for a fixed network from incomplete data, that is, in the presence of missing values or hidden variables. However, no method has yet been demonstrated to effectively learn network structure from incomplete data.

In this paper, we propose a new method for learning network structure from incomplete data. This method is based on an extension of the Expectation-Maximization (EM) algorithm for model selection problems that performs search for the best structure inside the EM procedure. We prove the convergence of this algorithm, and adapt it for learning belief networks. We then describe how to learn networks in two scenarios: when the data contains missing values, and in the presence of hidden variables. We provide experimental results that show the effectiveness of our procedure in both scenarios.

1 INTRODUCTION

Belief networks (BN), also known as Bayesian networks and directed probabilistic networks, are a graphical representation for probability distributions. They are arguably the representation of choice for uncertainty in artificial intelligence. These networks provide a compact and natural representation, effective inference, and efficient learning. They have been successfully applied in expert systems, diagnostic engines, and optimal decision making systems (e.g., [Heckerman et al. 1995]).

A belief network consists of two components. The first is a directed acyclic graph in which each vertex corresponds to a random variable. This graph represents a set of conditional independence properties of the represented distribution. This component captures the structure of the probability distribution, and is exploited for efficient inference and decision making. Thus, while belief networks can represent arbitrary probability distributions, they provide a computational advantage for those distributions that can be represented with a simple structure. The second component is a collection of local interaction models that describe the conditional probability of each variable given its parents in the graph. Together, these two components represent a unique probability distribution [Pearl 1988].
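For concreteness, the joint distribution defined by these two components factors into a product of the local models, one term per variable given its parents in the graph. In standard notation (ours, not the paper's), writing Pa_G(X_i) for the parent set of X_i in the graph G:

    P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid \mathrm{Pa}_G(X_i))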
Eliciting belief networks from experts can be a laborious and expensive process in large applications. Thus, in recent years there has been a growing interest in learning belief networks from data [Cooper and Herskovits 1992; Lam and Bacchus 1994; Heckerman et al. 1995].[1] Current methods are successful at learning both the structure and parameters from complete data, that is, when each data record describes the values of all variables in the network. Unfortunately, things are different when the data is incomplete. Current learning methods are essentially limited to learning the parameters for a fixed network structure.

[1] We refer the interested reader to the tutorial by Heckerman [1995] that overviews the current state of this field.

This is a significant problem for several reasons. First, most real-life data contains missing values (e.g., most of the non-synthetic datasets in the UC Irvine repository [Murphy and Aha 1995]). One of the cited advantages of belief networks (e.g., [Heckerman 1995, p. 1]) is that they allow for principled methods for reasoning with incomplete data. However, it is unreasonable at the same time to require complete data for training them.

Second, learning a concise structure is crucial both for avoiding overfitting and for efficient inference in the learned model. By introducing hidden variables that do not appear explicitly in the model we can often learn simpler models. A simple example, originally given by Binder et al. [1997], is shown in Figure 1. Current methods for learning hidden variables require that human experts choose a fixed network structure or a small set of possible structures. While this is reasonable in some domains, it is clearly infeasible in general. Moreover, the motivation for learning models in the first place is to avoid such strong dependence on the expert.

Figure 1: (a) An example of a network with a hidden variable. (b) The simplest network that can capture the same distribution without using the hidden variable.

In this paper, we propose a new method for learning structure from incomplete data that uses a variant of the Expectation-Maximization (EM) algorithm [Dempster et al. 1977] to facilitate efficient search over a large number of candidate structures. Roughly speaking, we reduce the search problem to one in the complete data case, which can be solved efficiently. As we experimentally show, our method is capable of learning structure from non-trivial datasets. For example, our procedure is able to learn structures in a domain with several dozen variables and 30% missing values, and to learn the structure in the presence of several hidden variables. (We note that in both of these experiments, we did not rely on prior knowledge to reduce the number of candidate structures.) We believe that this is a crucial step toward making belief network induction applicable to real-life problems.

To convey the main idea of our approach, we must first review the source of the difficulties in learning belief networks from incomplete data. The common approach to learning belief networks is to introduce a scoring metric that evaluates each network with respect to the training data, and then to search for the best network (according to this metric). The current metrics are all based, to some extent, on the likelihood function, that is, the probability of the data given a candidate network.

When the data is complete, by using the independencies encoded in the network structure, we can decompose the likelihood function (and the scoring metric) into a product of terms, where each term depends only on the choice of parents for a particular variable and the relevant statistics in the data, that is, counts of the number of common occurrences for each possible assignment of values to the variable and its parents. This allows for a modular evaluation of a candidate network and of all local changes to it. Additionally, the evaluation of a particular change (e.g., adding an arc from X1 to X2) remains the same after changing a different part of the network (e.g., removing an arc from X1 to X3). Thus, after making one change, we do not need to reevaluate the score of most of the possible neighbors in the search space. These properties allow for efficient learning procedures.
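To make this decomposition concrete (again in our notation, for discrete networks with multinomial local models): writing N(x_i, pa_i) for the number of records in which variable X_i takes the value x_i while its parents take the joint value pa_i, and \theta_{x_i|pa_i} for the corresponding parameter, the complete-data log-likelihood is a sum of independent per-family terms,

    \log P(D \mid G, \Theta) = \sum_{i} \sum_{pa_i} \sum_{x_i} N(x_i, pa_i) \log \theta_{x_i \mid pa_i}

so each family can be scored from its own counts, and a local change such as adding or removing a single arc affects only the term of the child whose parent set changed.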
When the data is incomplete, we can no longer decompose the likelihood function, and must perform inference to evaluate it. Moreover, to evaluate the optimal choice of parameters for a candidate network structure, we must perform non-linear optimization using either EM [Lauritzen 1995] or gradient descent [Binder et al. 1997]. In particular, the EM procedure iteratively improves its current choice of parameters Θ using the following two steps. In the first step, the current parameters are used for computing the expected value of all the statistics needed to evaluate the current structure. In the second step, we replace Θ by the parameters that maximize the complete data score with these expected statistics. This second step is essentially equivalent to learning from complete data, and thus, can be done efficiently. However, the first step requires us to compute the probabilities of several events for each instance in the training data. Thus, learning parameters with the EM procedure is significantly slower than learning parameters from complete data.

Finally, in the incomplete data case, a local change in one part of the network can affect the evaluation of a change in another part of the network. Thus, the currently proposed methods evaluate all the neighbors (e.g., networks that differ by one or a few local changes) of each candidate they visit. This requires many calls to the EM procedure before making a single change to the current candidate. To the best of our knowledge, such methods have been successfully applied only to problems where there are few choices to be made: clustering methods (e.g., [Cheeseman et al. 1988; Chickering and Heckerman 1996]) select the number of values for a single hidden variable in networks with a fixed structure, and Heckerman [1995] describes an experiment with a single missing value and five observable variables.

The novel idea of our approach is to perform search for the best structure inside the EM procedure. Our procedure maintains a current network candidate, and at each iteration it attempts to find a better network structure by computing the expected statistics needed to evaluate alternative structures. Since this search is done in a complete data setting, we can exploit the properties of the scoring metric for effective search. (In fact, we use exactly the same search procedures as in the complete data case.) In contrast to current practice, this procedure allows us to make significant progress in the search in each EM iteration. As we show in our experimental validation, our procedure requires relatively few EM iterations to learn non-trivial networks.

The rest of the paper is organized as follows. We start in Section 2 with a review of learning belief networks. In Section 3, we describe a new variant of EM, the model selection EM (MS-EM) algorithm, and show that by choosing a model (e.g., network structure) that maximizes the complete data score in each MS-EM step we indeed improve the objective score.
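To summarize the idea concretely before that review, below is a minimal, self-contained toy sketch (ours, not the paper's code or notation) of structure search inside EM for a handful of binary variables. The E-step uses the current network to compute expected counts over complete configurations; a complete-data structure search then ranks candidate graphs with a BIC-style score computed from those expected counts; and the M-step sets the parameters of the chosen structure in closed form. The variable names, the smoothing prior, the particular score, and the exhaustive enumeration of DAGs (standing in for local search) are all illustrative assumptions.

# A toy, self-contained sketch of structure search inside EM over binary variables.
# Illustrative only: the score, smoothing, and helper names are our assumptions,
# and exhaustive DAG enumeration stands in for the local search used in practice.
from itertools import combinations, product
from math import log

VARS = ["A", "B", "C"]    # a small fixed set of binary variables

def all_structures(variables):
    """Yield every DAG over `variables`, encoded as {child: tuple_of_parents}."""
    def is_acyclic(struct):
        remaining = set(variables)
        while remaining:        # peel off nodes whose parents have all been removed
            roots = {v for v in remaining if not set(struct[v]) & remaining}
            if not roots:
                return False
            remaining -= roots
        return True
    options = [[ps for r in range(len(variables))
                for ps in combinations([u for u in variables if u != v], r)]
               for v in variables]
    for choice in product(*options):
        struct = dict(zip(variables, choice))
        if is_acyclic(struct):
            yield struct

def joint_prob(assignment, structure, params):
    """Probability of one complete assignment under the current network."""
    p = 1.0
    for v in VARS:
        pa = tuple(assignment[u] for u in structure[v])
        p1 = params[v][pa]      # P(v = 1 | parent values)
        p *= p1 if assignment[v] == 1 else 1.0 - p1
    return p

def expected_joint_counts(data, structure, params):
    """E-step: expected count of every complete configuration of VARS."""
    counts = {cfg: 0.0 for cfg in product([0, 1], repeat=len(VARS))}
    for record in data:
        compatible = [cfg for cfg in counts
                      if all(record.get(v) in (None, cfg[i]) for i, v in enumerate(VARS))]
        weights = [joint_prob(dict(zip(VARS, cfg)), structure, params) for cfg in compatible]
        total = sum(weights)
        for cfg, w in zip(compatible, weights):
            counts[cfg] += w / total
    return counts

def family_count(counts, child, parents, child_val, parent_vals):
    """Expected count N(child = child_val, parents = parent_vals) by marginalizing."""
    idx = {v: i for i, v in enumerate(VARS)}
    return sum(w for cfg, w in counts.items()
               if cfg[idx[child]] == child_val and
               all(cfg[idx[p]] == pv for p, pv in zip(parents, parent_vals)))

def mle_params(structure, counts, prior=0.5):
    """M-step: smoothed maximum-likelihood parameters from expected counts."""
    params = {}
    for v in VARS:
        params[v] = {}
        for pa in product([0, 1], repeat=len(structure[v])):
            n1 = family_count(counts, v, structure[v], 1, pa)
            n0 = family_count(counts, v, structure[v], 0, pa)
            params[v][pa] = (n1 + prior) / (n1 + n0 + 2 * prior)
    return params

def bic_score(structure, counts, n_records):
    """Expected complete-data log-likelihood minus a BIC penalty (decomposes by family)."""
    params, score, n_params = mle_params(structure, counts), 0.0, 0
    for v in VARS:
        n_params += 2 ** len(structure[v])
        for pa in product([0, 1], repeat=len(structure[v])):
            for val in (0, 1):
                n = family_count(counts, v, structure[v], val, pa)
                theta = params[v][pa] if val == 1 else 1.0 - params[v][pa]
                score += n * log(theta)
    return score - 0.5 * log(n_records) * n_params

def model_selection_em(data, n_iters=10):
    structure = {v: () for v in VARS}        # start from the empty graph
    params = {v: {(): 0.5} for v in VARS}
    for _ in range(n_iters):
        counts = expected_joint_counts(data, structure, params)         # E-step
        structure = max(all_structures(VARS),                           # structure search on
                        key=lambda s: bic_score(s, counts, len(data)))  # expected (complete) counts
        params = mle_params(structure, counts)                          # parametric M-step
    return structure, params

if __name__ == "__main__":
    # B mostly tracks A and C mostly tracks B; some entries are missing (None).
    data = ([{"A": 1, "B": 1, "C": 1}] * 20 + [{"A": 0, "B": 0, "C": 0}] * 20 +
            [{"A": 1, "B": None, "C": 1}] * 5 + [{"A": None, "B": 0, "C": 0}] * 5 +
            [{"A": 1, "B": 1, "C": 0}] * 3 + [{"A": 0, "B": 1, "C": None}] * 2)
    structure, _ = model_selection_em(data)
    print(structure)    # e.g. a chain over A, B, C (exact arc directions may vary)

A practical implementation along the lines described in the paper would replace the exhaustive enumeration and brute-force completion of records with local search over arc changes and belief-network inference, which is what makes the approach feasible for domains with several dozen variables.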