The Perfect Marriage and Much More: Combining Dimension Reduction, Distance Measures and Covariance
Recommended publications
- Conditional Central Limit Theorems for Gaussian Projections
Galen Reeves. Abstract—This paper addresses the question of when projections of a high-dimensional random vector are approximately Gaussian. This problem has been studied previously in the context of high-dimensional data analysis, where the focus is on low-dimensional projections of high-dimensional point clouds. The focus of this paper is on the typical behavior when the projections are generated randomly and independently of the random variables. Given an n-dimensional random vector X, the k-dimensional linear projection Z is defined according to Z = ΘX, where Θ is a k × n random matrix that is independent of X. Throughout the paper it is assumed that X has finite second moment and that the entries of Θ are i.i.d. Gaussian random variables with mean zero and variance 1/n. The main results are bounds on the deviation between the conditional distribution of Z given Θ and a Gaussian approximation, where the conditioning is on the projection matrix. The bounds are given in terms of the quadratic Wasserstein distance and relative entropy and are stated explicitly as a function of the number of projections and certain key properties of the random vector. The proof uses Talagrand's transportation inequality and a general integral-moment inequality for mutual information.
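A minimal NumPy sketch of the projection scheme the abstract describes; the variable names and the heavy-tailed choice of X are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 1000, 5        # ambient and projected dimensions
num_samples = 10_000

# X: any random vector with finite second moment; a t-distribution is
# used here so the Gaussian approximation of Z is non-trivial.
X = rng.standard_t(df=5, size=(num_samples, n))

# Theta: k x n with i.i.d. N(0, 1/n) entries, drawn once and held fixed,
# so the rows of Z sample the conditional distribution of Z given Theta.
Theta = rng.normal(loc=0.0, scale=np.sqrt(1.0 / n), size=(k, n))

Z = X @ Theta.T                  # each row is one realization of Z = Theta X
print(Z.shape)                   # (10000, 5)
print(np.cov(Z, rowvar=False))   # approx. Var(X_1) * I_k for large n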
- A Modified Energy Statistic for Unsupervised Anomaly Detection
Rupam Mukherjee, GE Research, Bangalore, Karnataka, 560066, India ([email protected], [email protected]). Abstract: For prognostics in industrial applications, the degree of anomaly of a test point from a baseline cluster is estimated using a statistical distance metric. An anomaly is flagged when the distance metric exceeds a threshold and an alert is generated, thereby preventing sudden unplanned failures. Although, as is captured in (Goldstein & Uchida, 2016), anomaly detection may be both supervised and unsupervised in nature depending on the availability of labelled ground-truth data, in this paper anomaly detection is considered in its unsupervised version, which is more convenient and more practical in many industrial systems. The choice of the statistical distance formulation has an important impact on the effectiveness of RUL estimation; in the literature, statistical distances are described as being of two types, the first of which is divergence measures, which estimate the distance (or similarity) between probability distributions. Among different statistical distance metrics, energy distance is an interesting concept based on Newton's Law of Gravitation, promising simpler computation than classical distance metrics. In this paper, we review the state-of-the-art formulations of energy distance and point out several reasons why they are not directly applicable to the anomaly-detection problem. We therefore propose a new energy-based metric, called the P-statistic, which addresses these issues, is applicable to anomaly detection, and retains the computational simplicity of the energy distance. We also demonstrate its effectiveness on a real-life data set.
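For context, a small NumPy/SciPy sketch of the classical two-sample energy distance that the paper takes as its starting point. This is the standard Székely-Rizzo V-statistic form, not the paper's modified P-statistic, and the data are synthetic:

```python
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(x, y):
    """Classical two-sample energy distance (V-statistic form):
    E = 2*E||X-Y|| - E||X-X'|| - E||Y-Y'||, estimated from samples."""
    xy = cdist(x, y).mean()
    xx = cdist(x, x).mean()
    yy = cdist(y, y).mean()
    return 2.0 * xy - xx - yy

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(200, 3))   # healthy-operation cluster
healthy  = rng.normal(0.0, 1.0, size=(50, 3))    # new data, same regime
shifted  = rng.normal(2.0, 1.0, size=(50, 3))    # drifted, "anomalous" data

print(energy_distance(baseline, healthy))   # near 0
print(energy_distance(baseline, shifted))   # clearly positive: flag an alert
```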
- Package 'energy'
Package 'energy', May 27, 2018. Title: E-Statistics: Multivariate Inference via the Energy of Data. Version: 1.7-4. Date: 2018-05-27. Authors: Maria L. Rizzo and Gabor J. Szekely. Maintainer: Maria Rizzo <[email protected]>. Imports: Rcpp (>= 0.12.6), stats, boot. LinkingTo: Rcpp. Suggests: MASS. URL: https://github.com/mariarizzo/energy. License: GPL (>= 2). Repository: CRAN. Description: E-statistics (energy) tests and statistics for multivariate and univariate inference, including distance correlation, one-sample, two-sample, and multi-sample tests for comparing multivariate distributions, are implemented. Measuring and testing multivariate independence based on distance correlation, partial distance correlation, multivariate goodness-of-fit tests, clustering based on energy distance, testing for multivariate normality, distance components (disco) for non-parametric analysis of structured data, and other energy statistics/methods are implemented. R topics documented: energy-package, centering distance matrices, dcor.ttest, dcov.test, dcovU_stats, disco, distance correlation, edist, energy.hclust, eqdist.etest, indep.etest, indep.test, mvI.test, mvnorm.etest, pdcor, poisson.mtest, unbiased distance covariance, U_product.
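The package itself is R; as a language-neutral illustration of the central quantity it implements, here is a hedged NumPy sketch of the biased (V-statistic) sample distance correlation, the quantity behind the package's dcov/dcor functions, which compute it in optimized compiled form:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def _centered(a):
    """Pairwise Euclidean distance matrix, double-centered."""
    a = a.reshape(len(a), -1)
    d = squareform(pdist(a))
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def dcor(x, y):
    """Sample distance correlation (biased V-statistic version)."""
    A, B = _centered(x), _centered(y)
    dcov2_xy = (A * B).mean()
    dcov2_xx = (A * A).mean()
    dcov2_yy = (B * B).mean()
    return np.sqrt(dcov2_xy / np.sqrt(dcov2_xx * dcov2_yy))

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = x**2 + 0.1 * rng.normal(size=500)    # nonlinear, non-monotone dependence
print(dcor(x, y))                        # well above 0
print(dcor(x, rng.normal(size=500)))     # near 0 under independence
```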
- On the Behavior of Convolutional Nets for Feature Extraction (arXiv:1703.01127v4 [cs.NE], 29 Jan 2018)
Dario Garcia-Gasulla ([email protected]), Ferran Parés ([email protected]), Armand Vilalta ([email protected]), Jonatan Moreno, Eduard Ayguadé, Jesús Labarta, Ulises Cortés (Barcelona Supercomputing Center (BSC), Omega Building, C. Jordi Girona 1-3, 08034 Barcelona, Catalonia; Universitat Politècnica de Catalunya - BarcelonaTech (UPC)); Toyotaro Suzumura (IBM T.J. Watson Research Center, New York, USA, and Barcelona Supercomputing Center (BSC)). Abstract: Deep neural networks are representation learning techniques. During training, a deep net is capable of generating a descriptive language of unprecedented size and detail in machine learning. Extracting the descriptive language coded within a trained CNN model (in the case of image data), and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors previously learnt by the CNN after processing millions of images, without requiring an expensive training phase. Contributions to this field (commonly known as feature representation transfer or transfer learning) have been purely empirical so far, extracting all CNN features from a single layer close to the output and testing their performance by feeding them to a classifier. This approach has provided consistent results, although its relevance is limited to classification tasks. In a completely different approach, in this paper we statistically measure the discriminative power of every single feature found within a deep CNN, when used for characterizing every class of 11 datasets. We seek to provide new insights into the behavior of CNN features, particularly the ones from convolutional layers, as this can be relevant for their application to knowledge representation and reasoning.
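A minimal PyTorch/torchvision sketch of the single-layer feature-extraction pipeline the abstract contrasts itself against. The model (resnet18), the tapped layer (layer3), the file name, and the torchvision weights API (version >= 0.13) are our assumptions for illustration; the paper itself analyzes features from every convolutional layer:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained CNN supplies the "descriptive language" learned on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

features = {}
def hook(module, inputs, output):
    # Store the activations of one convolutional stage.
    features["layer3"] = output.detach()

model.layer3.register_forward_hook(hook)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical file
with torch.no_grad():
    model(img)

# Global-average-pool the activations into a fixed-length descriptor
# that can be fed to any off-the-shelf classifier.
vec = features["layer3"].mean(dim=(2, 3)).squeeze(0)
print(vec.shape)   # torch.Size([256])
```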
- Testing the Causal Direction of Mediation Effects in Randomized Intervention Studies
Prevention Science (2019) 20:419-430, https://doi.org/10.1007/s11121-018-0900-y. Wolfgang Wiedermann, Xintong Li, and Alexander von Eye. Published online: 21 May 2018. © Society for Prevention Research 2018. Abstract: In a recent update of the standards for evidence in research on prevention interventions, the Society of Prevention Research emphasizes the importance of evaluating and testing the causal mechanism through which an intervention is expected to have an effect on an outcome. Mediation analysis is commonly applied to study such causal processes. However, these analytic tools are limited in their potential to fully understand the role of theorized mediators. For example, in a design where the treatment (x) is randomized and the mediator (m) and the outcome (y) are measured cross-sectionally, the causal direction of the hypothesized mediator-outcome relation is not uniquely identified. That is, both mediation models, x → m → y or x → y → m, may be plausible candidates to describe the underlying intervention theory. As a third explanation, unobserved confounders can still be responsible for the mediator-outcome association. The present study introduces principles of direction dependence which can be used to empirically evaluate these competing explanatory theories. We show that, under certain conditions, third higher moments of variables (i.e., skewness and co-skewness) can be used to uniquely identify the direction of a mediator-outcome relation. Significance procedures compatible with direction dependence are introduced and results of a simulation study are reported that demonstrate the performance of the tests. An empirical example is given for illustrative purposes and a software implementation of the proposed method is provided in SPSS.
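A simplified sketch of the third-moment idea behind direction dependence, using the classical Dodge-Rousson observation (under m → y with a linear model, symmetric errors, and skewed m, skew(y) = ρ³·skew(m), so the effect variable is always the less-skewed one). This is one of several criteria in this literature, stripped of the significance procedures the paper actually proposes:

```python
import numpy as np
from scipy import stats

def dodge_rousson(m, y):
    """Compare the two candidate directions via third moments:
    under m -> y, skew(y) = rho^3 * skew(m); under y -> m, the reverse."""
    rho = np.corrcoef(m, y)[0, 1]
    s_m, s_y = stats.skew(m), stats.skew(y)
    forward = abs(s_y - rho**3 * s_m)    # discrepancy if m -> y is true
    backward = abs(s_m - rho**3 * s_y)   # discrepancy if y -> m is true
    return "m -> y" if forward < backward else "y -> m"

rng = np.random.default_rng(3)
m = rng.exponential(1.0, size=2000)          # skewed mediator
y = 0.8 * m + rng.normal(0.0, 1.0, 2000)     # true direction: m -> y
print(dodge_rousson(m, y))                   # typically prints "m -> y"
```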
- Independence Measures
INDEPENDENCE MEASURES. Beatriz Bueno Larraz. Máster en Investigación e Innovación en Tecnologías de la Información y las Comunicaciones, Escuela Politécnica Superior; Máster en Matemáticas y Aplicaciones, Facultad de Ciencias. Universidad Autónoma de Madrid, 09/03/2015. Advisors: Alberto Suárez González and José Ramón Berrendero Díaz. Acknowledgements: This work would not have been possible without the knowledge acquired during both Master degrees; their subjects have provided me with essential notions for carrying out this study. The fulfilment of this Master's thesis is the result of the guidance, suggestions and encouragement of professors D. José Ramón Berrendero Díaz and D. Alberto Suárez González. They have guided me throughout these months with an open and generous spirit, have shown an excellent willingness in facing the doubts that arose, and have provided valuable observations for this research. I would like to thank them very much for the opportunity to collaborate with them on this project and for initiating me into research. I would also like to express my gratitude to the postgraduate studies committees of both faculties, especially the Master's degrees' coordinators; all of them took an interest in my situation after the change in the regulations and answered all the questions I had. And of course I do not want to forget my family and friends, who have supported and encouraged me during all this time. Contents: 1 Introduction. 2 Reproducing Kernel Hilbert Spaces (RKHS): definitions and principal properties; characterizing reproducing kernels. 3 Maximum Mean Discrepancy (MMD): definition of MMD.
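Since the thesis centers on RKHS-based quantities, here is a small hedged NumPy sketch of the biased sample MMD with a Gaussian RBF kernel, the quantity its Chapter 3 defines; the bandwidth choice and data are ours:

```python
import numpy as np
from scipy.spatial.distance import cdist

def mmd_biased(x, y, sigma=1.0):
    """Biased sample MMD^2 with kernel k(a,b) = exp(-||a-b||^2 / (2 sigma^2))."""
    kxx = np.exp(-cdist(x, x, "sqeuclidean") / (2 * sigma**2)).mean()
    kyy = np.exp(-cdist(y, y, "sqeuclidean") / (2 * sigma**2)).mean()
    kxy = np.exp(-cdist(x, y, "sqeuclidean") / (2 * sigma**2)).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(4)
x = rng.normal(0, 1, size=(300, 2))
y = rng.normal(0, 1, size=(300, 2))      # same distribution as x
z = rng.laplace(0, 1, size=(300, 2))     # different distribution
print(mmd_biased(x, y))   # near 0
print(mmd_biased(x, z))   # noticeably larger
```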
- Local White Matter Architecture Defines Functional Brain Dynamics
2018 IEEE International Conference on Systems, Man, and Cybernetics. Yo Joong Choe (Kakao, Seongnam-si, South Korea, [email protected]), Sivaraman Balakrishnan (Dept. of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, PA, USA, [email protected]), Aarti Singh (Machine Learning Dept., Carnegie Mellon University, Pittsburgh, PA, USA, [email protected]), Jean M. Vettel (U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, USA, [email protected]), Timothy Verstynen (Dept. of Psychology and CNBC, Carnegie Mellon University, Pittsburgh, PA, USA, [email protected]). Abstract—Large bundles of myelinated axons, called white matter, anatomically connect disparate brain regions together and compose the structural core of the human connectome. We recently proposed a method of measuring the local integrity along the length of each white matter fascicle, termed the local connectome [1]. If communication efficiency is fundamentally constrained by the integrity along the entire length of a white matter bundle [2], then variability in the functional dynamics of brain networks should be associated with variability in the local connectome. We test this prediction using two statistical approaches that are capable of handling the high dimensionality of the data. The integrity of the myelin sheath is critical for synchronizing information transmission between distal brain areas [2], fostering the ability of these networks to adapt over time [4]. Thus, variability in the myelin sheath, as well as other cellular support mechanisms, would contribute to variability in functional coherence across the circuit. To study the integrity of structural connectivity, we recently introduced the concept of the local connectome, defined as the pattern of fiber systems (i.e., number of fibers, …)
- On Measures of Entropy and Information
On Measures of Entropy and Information. Tech. Note 009 v0.7, http://threeplusone.com/info. Gavin E. Crooks, 2018-09-22. Contents (as far as the excerpt runs): 0 Notes on notation and nomenclature. 1 Entropy: entropy; joint entropy; marginal entropy; conditional entropy. 2 Mutual information: mutual information; multivariate mutual information; interaction information; conditional mutual information; binding information; residual entropy; total correlation; lautum information; uncertainty coefficient. 3 Relative entropy: relative entropy; cross entropy; … 5 Csiszár f-divergences: Csiszár f-divergence; dual f-divergence; symmetric f-divergences; K-divergence; fidelity; Hellinger discrimination; Pearson divergence; Neyman divergence; LeCam discrimination; skewed K-divergence; alpha-Jensen-Shannon entropy. 6 Chernoff divergence: Chernoff divergence; Chernoff coefficient; Rényi divergence; alpha-divergence; Cressie-Read divergence; Tsallis divergence; Sharma-Mittal divergence.
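To fix notation for the first three families in the contents, a compact NumPy sketch of entropy, relative entropy (KL divergence), and mutual information for discrete distributions; the example joint distribution is ours:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p log p, in nats."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def relative_entropy(p, q):
    """KL divergence D(p||q); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def mutual_information(pxy):
    """I(X;Y) = D( p(x,y) || p(x) p(y) ), with the joint pmf as a matrix."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return relative_entropy(pxy.ravel(), (px * py).ravel())

pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
print(entropy(pxy.sum(axis=1)))   # H(X) = log 2, about 0.693 nats
print(mutual_information(pxy))    # about 0.193 nats: X and Y are dependent
```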
- The Bhattacharyya Kernel Between Sets of Vectors
YJ (Yo Joong) Choe. Abstract: In machine learning, we wish to find general patterns from a given set of examples, and in many cases we can naturally represent each example as a set of vectors rather than just a vector. In this expository paper, we show how to represent a learning problem in this "bag of tuples" approach and define the Bhattacharyya kernel, a similarity measure between such sets related to the Bhattacharyya distance. The kernel is particularly useful because it is invariant under small transformations between examples. We also show how to extend this method using the kernel trick and still compute the kernel in closed form. Contents: 1 Introduction. 2 Background: learning; examples of learning algorithms; kernels; the normal distribution; maximum likelihood estimation (MLE); principal component analysis (PCA). 3 The Bhattacharyya kernel. 4 The "bag of tuples" approach. 5 The Bhattacharyya kernel between sets of vectors: the multivariate normal model using MLE. 6 Extension to higher-dimensional spaces using the kernel trick: analogue of the normal model using MLE (regularized covariance form); kernel PCA; summary of the process and computation of the Bhattacharyya kernel. Acknowledgments. References. Introduction: In machine learning and statistics, we utilize kernels, which are measures of "similarity" between two examples. Kernels appear in various learning algorithms, such as the perceptron and support vector machines.
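A hedged NumPy sketch of the construction in the paper's Section 5: fit a multivariate normal to each set of vectors by MLE, then evaluate the Bhattacharyya affinity between the two fitted Gaussians in closed form. The ridge regularization is our addition for numerical stability, a simpler stand-in for the regularized covariance form of Section 6:

```python
import numpy as np

def bhattacharyya_kernel(A, B, reg=1e-3):
    """k(A, B) = integral sqrt(p(x) q(x)) dx for Gaussians fitted by MLE."""
    mu1, mu2 = A.mean(axis=0), B.mean(axis=0)
    d = A.shape[1]
    S1 = np.cov(A, rowvar=False, bias=True) + reg * np.eye(d)  # MLE covariances
    S2 = np.cov(B, rowvar=False, bias=True) + reg * np.eye(d)
    S = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    # Bhattacharyya distance between N(mu1, S1) and N(mu2, S2)
    db = (diff @ np.linalg.solve(S, diff) / 8.0
          + 0.5 * np.log(np.linalg.det(S)
                         / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))))
    return np.exp(-db)   # Bhattacharyya coefficient, in (0, 1]

rng = np.random.default_rng(5)
bag1 = rng.normal(0, 1, size=(100, 4))            # one "bag of tuples"
bag2 = bag1 + rng.normal(0, 0.1, size=(100, 4))   # small perturbation of it
bag3 = rng.normal(3, 1, size=(100, 4))            # a very different bag
print(bhattacharyya_kernel(bag1, bag2))   # close to 1
print(bhattacharyya_kernel(bag1, bag3))   # close to 0
```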
- Multivariate Statistics, Chapter 5: Multidimensional Scaling
Pedro Galeano, Departamento de Estadística, Universidad Carlos III de Madrid ([email protected]). Course 2017/2018, Master in Mathematical Engineering. Outline: 1 Introduction. 2 Statistical distances. 3 Metric MDS. 4 Non-metric MDS. Introduction: As we have seen in previous chapters, principal components and factor analysis are important dimension reduction tools. However, in many applied sciences, data is recorded as ranked information. For example, in marketing, one may record "product A is better than product B". Multivariate observations therefore often have mixed data characteristics and contain information that would enable us to employ one of the multivariate techniques presented so far. Multidimensional scaling (MDS) is a method based on proximities between objects, subjects, or stimuli used to produce a spatial representation of these items. MDS is a dimension reduction technique since the aim is to find a set of points in low dimension (typically two dimensions) that reflect the relative configuration of the high-dimensional data objects. The proximities between objects are defined as any set of numbers that express the amount of similarity or dissimilarity between pairs of objects. In contrast to the techniques considered so far, MDS does not start from an n×p dimensional data matrix, but from an n×n dimensional dissimilarity or distance matrix D, with elements δii′ or dii′, respectively, for i, i′ = 1, …, n.
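A short NumPy sketch of classical (metric) MDS, the procedure the chapter develops: double-center the squared distances and embed with the leading eigenvectors. The synthetic data are ours:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def classical_mds(D, dim=2):
    """Classical metric MDS: coordinates from an n x n distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D**2) @ J                # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    idx = np.argsort(eigval)[::-1][:dim]     # keep the largest eigenvalues
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

rng = np.random.default_rng(6)
X = rng.normal(size=(20, 5))                 # hidden high-dimensional config
D = squareform(pdist(X))                     # all we hand to MDS is D
Y = classical_mds(D, dim=2)                  # 2-D map preserving distances
print(Y.shape)                               # (20, 2)
```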
- The Energy Goodness-of-Fit Test and E-M Type Estimator for Asymmetric Laplace Distributions
John Haman. A dissertation submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, August 2018. Committee: Maria Rizzo (Advisor), Joseph Chao (Graduate Faculty Representative), Wei Ning, Craig Zirbel. Copyright © 2018 John Haman. All rights reserved. Abstract: Recently the asymmetric Laplace distribution and its extensions have gained attention in the statistical literature. This may be due to its relatively simple form and its ability to model skewness and outliers. For these reasons, the asymmetric Laplace distribution is a reasonable candidate model for certain data that arise in finance, biology, engineering, and other disciplines. For a practitioner that wishes to use this distribution, it is very important to check the validity of the model before making inferences that depend on the model. These types of questions are traditionally addressed by goodness-of-fit tests in the statistical literature. In this dissertation, a new goodness-of-fit test is proposed based on energy statistics, a widely applicable class of statistics for which one application is goodness-of-fit testing. The energy goodness-of-fit test has a number of desirable properties. It is consistent against general alternatives. If the null hypothesis is true, the distribution of the test statistic converges in distribution to an infinite, weighted sum of chi-square random variables. In addition, we find through simulation that the energy test is among the most powerful tests for the asymmetric Laplace distribution in the scenarios considered.
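The one-sample energy goodness-of-fit statistic has the generic form n((2/n) Σᵢ E|xᵢ − X| − E|X − X′| − (1/n²) Σᵢⱼ |xᵢ − xⱼ|), where X, X′ are i.i.d. from the hypothesized model. The dissertation derives the expectations in closed form for the asymmetric Laplace; the hedged sketch below instead approximates them by Monte Carlo so it works against any univariate SciPy distribution (assuming a recent SciPy that accepts a Generator as random_state):

```python
import numpy as np
from scipy import stats

def energy_gof(x, null_dist, n_mc=20_000, seed=0):
    """One-sample energy goodness-of-fit statistic. E|x_i - X| and
    E|X - X'| are Monte Carlo approximations here; the dissertation
    uses closed-form expectations for the asymmetric Laplace."""
    rng = np.random.default_rng(seed)
    n = len(x)
    X = null_dist.rvs(size=n_mc, random_state=rng)
    Xp = null_dist.rvs(size=n_mc, random_state=rng)
    e_xi_X = np.abs(x[:, None] - X[None, :]).mean()   # (1/n) sum_i E|x_i - X|
    e_X_Xp = np.abs(X - Xp).mean()                    # E|X - X'|
    e_xx = np.abs(x[:, None] - x[None, :]).mean()     # (1/n^2) sum_ij |x_i - x_j|
    return n * (2 * e_xi_X - e_X_Xp - e_xx)

rng = np.random.default_rng(7)
sample = stats.laplace.rvs(size=200, random_state=rng)
print(energy_gof(sample, stats.laplace))   # small: the model fits
print(energy_gof(sample, stats.norm))      # larger: the model misfits
```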
- From Distance Correlation to Multiscale Graph Correlation
Cencheng Shen (Department of Applied Economics and Statistics, University of Delaware), Carey E. Priebe (Department of Applied Mathematics and Statistics, Johns Hopkins University), and Joshua T. Vogelstein (Department of Biomedical Engineering and Institute of Computational Medicine, Johns Hopkins University). October 2, 2018; arXiv:1710.09768v3 [stat.ML]. Abstract: Understanding and developing a correlation measure that can detect general dependencies is not only imperative to statistics and machine learning, but also crucial to general scientific discovery in the big data age. In this paper, we establish a new framework that generalizes distance correlation, a correlation measure that was recently proposed and shown to be universally consistent for dependence testing against all joint distributions of finite moments, to the Multiscale Graph Correlation (MGC). By utilizing the characteristic functions and incorporating the nearest neighbor machinery, we formalize the population version of local distance correlations, define the optimal scale in a given dependency, and name the optimal local correlation as MGC. The new theoretical framework motivates a theoretically sound Sample MGC and allows a number of desirable properties to be proved, including the universal consistency, convergence and almost unbiasedness of the sample version. The advantages of MGC are illustrated via a comprehensive set of simulations with linear, nonlinear, univariate, multivariate, and noisy dependencies, where it loses almost no power in monotone dependencies while achieving better performance in general dependencies, compared to distance correlation and other popular methods.
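To convey the multiscale idea only, here is a deliberately simplified NumPy sketch: correlate centered distances restricted to k-nearest-neighbor pairs in each variable and take the best scale. Sample MGC as defined by Shen et al. uses a specific centering, unbiased correlation estimates, and a smoothed maximum, none of which are reproduced here:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import rankdata

def _center_and_rank(data):
    d = squareform(pdist(data.reshape(len(data), -1)))
    a = d - d.mean(axis=1, keepdims=True)       # row centering (simplified)
    r = rankdata(d, axis=1, method="ordinal")   # neighborhood ranks per row
    return a, r

def mgc_sketch(x, y, scales=(2, 5, 10, 20, 50)):
    """Max over (k, l) of the local correlation of centered distances,
    restricted to pairs that are k-local in x and l-local in y."""
    A, Rx = _center_and_rank(x)
    B, Ry = _center_and_rank(y)
    best = 0.0
    for k in scales:
        for l in scales:
            mask = (Rx <= k) & (Ry <= l)        # pairs local in both views
            a, b = A[mask], B[mask]
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 0:
                best = max(best, (a * b).sum() / denom)
    return best

rng = np.random.default_rng(8)
x = rng.uniform(-1, 1, 300)
y = np.abs(x) + 0.1 * rng.normal(size=300)   # non-monotone dependence
print(mgc_sketch(x, y))                      # some local scale scores highly
```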