A New Family of Bounded Divergence Measures and Application to Signal Detection


A New Family of Bounded Divergence Measures and Application to Signal Detection

Shivakumar Jolad (1), Ahmed Roman (2), Mahesh C. Shastry (3), Mihir Gadgil (4) and Ayanendranath Basu (5)
(1) Department of Physics, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, INDIA
(2) Department of Mathematics, Virginia Tech, Blacksburg, VA, USA
(3) Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, Madhya Pradesh, INDIA
(4) Biomedical Engineering Department, Oregon Health & Science University, Portland, OR, USA
(5) Indian Statistical Institute, Kolkata, West Bengal-700108, INDIA
[email protected], [email protected], [email protected], [email protected], [email protected]

arXiv:1201.0418v9 [math.ST] 10 Apr 2016

Keywords: Divergence Measures, Bhattacharyya Distance, Error Probability, F-divergence, Pattern Recognition, Signal Detection, Signal Classification.

Abstract: We introduce a new one-parameter family of divergence measures, called bounded Bhattacharyya distance (BBD) measures, for quantifying the dissimilarity between probability distributions. These measures are bounded, symmetric and positive semi-definite and do not require absolute continuity. In the asymptotic limit, the BBD measure approaches the squared Hellinger distance. A generalized BBD measure for multiple distributions is also introduced. We prove an extension of a theorem of Bradt and Karlin for BBD relating Bayes error probability and divergence ranking. We show that BBD belongs to the class of generalized Csiszar f-divergences and derive some properties such as curvature and relation to Fisher Information. For distributions with vector-valued parameters, the curvature matrix is related to the Fisher-Rao metric. We derive certain inequalities between BBD and well-known measures such as Hellinger and Jensen-Shannon divergence. We also derive bounds on the Bayesian error probability. We give an application of these measures to the problem of signal detection, where we compare two monochromatic signals buried in white noise and differing in frequency and amplitude.

1 INTRODUCTION

Divergence measures for the distance between two probability distributions are a statistical approach to comparing data and have been extensively studied in the last six decades [Kullback and Leibler, 1951, Ali and Silvey, 1966, Kapur, 1984, Kullback, 1968, Kumar et al., 1986]. These measures are widely used in varied fields such as pattern recognition [Basseville, 1989, Ben-Bassat, 1978, Choi and Lee, 2003], speech recognition [Qiao and Minematsu, 2010, Lee, 1991], signal detection [Kailath, 1967, Kadota and Shepp, 1967, Poor, 1994], Bayesian model validation [Tumer and Ghosh, 1996] and quantum information theory [Nielsen and Chuang, 2000, Lamberti et al., 2008]. Distance measures try to achieve two main objectives (which are not mutually exclusive): to assess (1) how "close" two distributions are compared to others and (2) how "easy" it is to distinguish between one pair than the other [Ali and Silvey, 1966].

There is a plethora of distance measures available to assess the convergence (or divergence) of probability distributions. Many of these measures are not metrics in the strict mathematical sense, as they may not satisfy either the symmetry of arguments or the triangle inequality. In applications, the choice of the measure depends on the interpretation of the metric in terms of the problem considered, its analytical properties and ease of computation [Gibbs and Su, 2002].
One of the most well-known and widely used divergence measures, the Kullback-Leibler divergence (KLD) [Kullback and Leibler, 1951, Kullback, 1968], can create problems in specific applications. Specifically, it is unbounded above and requires that the distributions be absolutely continuous with respect to each other. Various other information-theoretic measures have been introduced keeping in view ease of computation and utility in problems of signal selection and pattern recognition. Of these measures, Bhattacharyya distance [Bhattacharyya, 1946, Kailath, 1967, Nielsen and Boltz, 2011] and Chernoff distance [Chernoff, 1952, Basseville, 1989, Nielsen and Boltz, 2011] have been widely used in signal processing. However, these measures are again unbounded from above. Many bounded divergence measures, such as the Variational and Hellinger distances [Basseville, 1989, DasGupta, 2011] and the Jensen-Shannon metric [Burbea and Rao, 1982, Rao, 1982b, Lin, 1991], have been studied extensively. The utility of these measures varies depending on properties such as the tightness of bounds on error probabilities, information-theoretic interpretations, and the ability to generalize to multiple probability distributions.

Here we introduce a new one-parameter (α) family of bounded measures based on the Bhattacharyya coefficient, called bounded Bhattacharyya distance (BBD) measures. These measures are symmetric, positive-definite and bounded between 0 and 1. In the asymptotic limit (α → ±∞) they approach the squared Hellinger divergence [Hellinger, 1909, Kakutani, 1948]. Following Rao [Rao, 1982b] and Lin [Lin, 1991], a generalized BBD is introduced to capture the divergence (or convergence) between multiple distributions. We show that BBD measures belong to the generalized class of f-divergences and inherit useful properties such as curvature and its relation to Fisher Information. Bayesian inference is useful in problems where a decision has to be made on classifying an observation into one of a possible array of states, whose prior probabilities are known [Hellman and Raviv, 1970, Varshney and Varshney, 2008]. Divergence measures are useful in estimating the error in such classification [Ben-Bassat, 1978, Kailath, 1967, Varshney, 2011]. We prove an extension of the Bradt-Karlin theorem for BBD, which establishes the existence of prior probabilities relating Bayes error probabilities with ranking based on the divergence measure. Bounds on the error probability Pe can be calculated through BBD measures using certain inequalities between the Bhattacharyya coefficient and Pe. We derive two inequalities for a special case of BBD (α = 2) with the Hellinger and Jensen-Shannon divergences. Our bounded measure with α = 2 has been used by Sunmola [Sunmola, 2013] to calculate the distance between Dirichlet distributions in the context of Markov decision processes. We illustrate the applicability of BBD measures by focusing on the signal detection problem that comes up in areas such as gravitational wave detection [Finn, 1992]. Here we consider discriminating two monochromatic signals, differing in frequency or amplitude, and corrupted with additive white noise. We compare the Fisher Information of the BBD measures with that of KLD and Hellinger distance for these random processes, and highlight the regions where the Fisher Information is insensitive to large parameter deviations. We also characterize the performance of BBD for different signal-to-noise ratios, providing thresholds for signal separation.

Our paper is organized as follows: Section I is the current introduction. In Section II, we recall the well-known Kullback-Leibler and Bhattacharyya divergence measures, and then introduce our bounded Bhattacharyya distance measures. We discuss some special cases of BBD, in particular Hellinger distance. We also introduce the generalized BBD for multiple distributions. In Section III, we show the positive semi-definiteness of the BBD measure and the applicability of the Bradt-Karlin theorem, and prove that BBD belongs to the generalized f-divergence class. We also derive the relation between curvature and Fisher Information, discuss the curvature metric, and prove some inequalities with other measures such as Hellinger and Jensen-Shannon divergence for a special case of BBD. In Section IV, we discuss the application to the signal detection problem. Here we first briefly describe the basic formulation of the problem, and then move on to computing the distance between random processes and comparing the BBD measure with Fisher Information and KLD. In the Appendix we provide the expressions for BBD measures, with α = 2, for some commonly used distributions. We conclude the paper with a summary and outlook.
2 DIVERGENCE MEASURES

In the following subsections we consider a measurable space Ω with σ-algebra B and the set of all probability measures M on (Ω, B). Let P and Q denote probability measures on (Ω, B), with p and q denoting their densities with respect to a common measure λ. We recall the definition of absolute continuity [Royden, 1986]:

Absolute Continuity. A measure P on the Borel subsets of the real line is absolutely continuous with respect to Lebesgue measure Q if P(A) = 0 for every Borel subset A ∈ B for which Q(A) = 0; this is denoted by P << Q.

2.1 Kullback-Leibler divergence

The Kullback-Leibler divergence (KLD) (or relative entropy) [Kullback and Leibler, 1951, Kullback, 1968] between two distributions P, Q with densities p and q is given by

    $I(P,Q) \equiv \int p \log\frac{p}{q}\, d\lambda$.    (1)

The symmetrized version is given by $J(P,Q) \equiv (I(P,Q) + I(Q,P))/2$ [Kailath, 1967]. Here $I(P,Q) \in [0,\infty]$; it diverges if $\exists\, x_0 : q(x_0) = 0$ and $p(x_0) \neq 0$.

KLD is defined only when P is absolutely continuous w.r.t. Q. This feature can be problematic in numerical computations when the measured distribution has zero values.

2.2 Bhattacharyya Distance

Bhattacharyya distance is a widely used measure in signal selection and pattern recognition [Kailath, 1967]. It is defined as

    $B(P,Q) \equiv -\ln \int \sqrt{pq}\, d\lambda = -\ln(\rho)$,    (2)

where the term in parentheses, $\rho(P,Q) \equiv \int \sqrt{pq}\, d\lambda$, is called the Bhattacharyya coefficient [Bhattacharyya, 1946].

2.3 Bounded Bhattacharyya Distance Measures

The BBD measures are defined for α ∈ [−∞, 0) ∪ (1, ∞]. This gives the measure

    $B_\alpha(\rho(P,Q)) \equiv -\log_{\left(1-\frac{1}{\alpha}\right)^{-1}}\left(1 - \frac{1-\rho}{\alpha}\right)$,    (5)

which can be simplified to

    $B_\alpha(\rho) = \frac{\log\left(1 - \frac{1-\rho}{\alpha}\right)}{\log\left(1 - \frac{1}{\alpha}\right)}$.    (6)

It is easy to see that $B_\alpha(0) = 1$ and $B_\alpha(1) = 0$.

2.4 Special cases

1. For α = 2 we get

    $B_2(\rho) = \frac{\log\left(\frac{1+\rho}{2}\right)}{\log\frac{1}{2}} = -\log_2\left(\frac{1+\rho}{2}\right)$.    (7)

We study some of its special properties in Sec. 3.7.
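As a quick numerical illustration of Eqs. (1), (2), (6) and (7) — a minimal sketch, not taken from the paper; the discrete distributions and function names are illustrative:

```python
import numpy as np

def kld(p, q):
    """Kullback-Leibler divergence I(P,Q) = sum p*log(p/q), Eq. (1), for discrete densities."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient rho(P,Q) = sum sqrt(p*q)."""
    return np.sum(np.sqrt(p * q))

def bbd(p, q, alpha=2.0):
    """Bounded Bhattacharyya distance B_alpha(rho), Eq. (6)."""
    rho = bhattacharyya_coefficient(p, q)
    return np.log(1.0 - (1.0 - rho) / alpha) / np.log(1.0 - 1.0 / alpha)

# Two discrete distributions on the same support.
p = np.array([0.1, 0.4, 0.5])
q = np.array([0.3, 0.3, 0.4])

rho = bhattacharyya_coefficient(p, q)
print("rho            =", rho)
print("Bhattacharyya  =", -np.log(rho))          # Eq. (2)
print("KLD I(P,Q)     =", kld(p, q))             # Eq. (1)
print("BBD, alpha=2   =", bbd(p, q, alpha=2.0))  # equals -log2((1+rho)/2), Eq. (7)
print("BBD, alpha=-50 =", bbd(p, q, alpha=-50))  # approaches the squared Hellinger distance 1-rho for large |alpha|
```

The last line illustrates the asymptotic behaviour stated in the abstract: as |α| grows, B_α(ρ) tends to 1 − ρ, the squared Hellinger distance.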
Recommended publications
  • Hellinger Distance Based Drift Detection for Nonstationary Environments
Hellinger Distance Based Drift Detection for Nonstationary Environments. Gregory Ditzler and Robi Polikar, Dept. of Electrical & Computer Engineering, Rowan University, Glassboro, NJ, USA. [email protected], [email protected]

Abstract—Most machine learning algorithms, including many online learners, assume that the data distribution to be learned is fixed. There are many real-world problems where the distribution of the data changes as a function of time. Changes in nonstationary data distributions can significantly reduce the generalization ability of the learning algorithm on new or field data, if the algorithm is not equipped to track such changes. When the stationary data distribution assumption does not hold, the learner must take appropriate actions to ensure that the new/relevant information is learned. On the other hand, data distributions do not necessarily change continuously, necessitating the ability to monitor the distribution and detect when a significant change in distribution has occurred.

[…]sion boundaries as a concept change, whereas gradual changes in the data distribution as a concept drift. However, when the context does not require us to distinguish between the two, we use the term concept drift to encompass both scenarios, as it is usually the more difficult one to detect. Learning from drifting environments is usually associated with a stream of incoming data, either one instance or one batch at a time. There are two types of approaches for drift detection in such streaming data: in passive drift detection, the learner assumes – every time new data become available – that some drift may have occurred, and updates the classifier ac- […]
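The entry above detects distribution change by comparing successive data batches. A minimal sketch of that idea — flag drift when the Hellinger distance between the feature histograms of two batches exceeds a threshold; this is not the cited paper's actual detector, and the fixed threshold and all names are illustrative assumptions:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (histograms summing to 1)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def batch_histogram(x, bins):
    """Normalized histogram of a 1-D feature over fixed bin edges."""
    counts, _ = np.histogram(x, bins=bins)
    return counts / max(counts.sum(), 1)

rng = np.random.default_rng(0)
bins = np.linspace(-5, 5, 21)

batch_old = rng.normal(0.0, 1.0, size=1000)   # reference batch
batch_new = rng.normal(1.5, 1.0, size=1000)   # shifted batch (simulated drift)

d = hellinger(batch_histogram(batch_old, bins), batch_histogram(batch_new, bins))
print("Hellinger distance between batches:", round(d, 3))
print("Drift flagged:", d > 0.2)  # illustrative fixed threshold; a real detector would adapt it
```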
  • Hellinger Distance-Based Similarity Measures for Recommender Systems
Hellinger Distance-based Similarity Measures for Recommender Systems. Roma Goussakov. One-year master's thesis, Umeå University.

Abstract: Recommender systems are used in online sales and e-commerce for recommending potential items/products for customers to buy, based on their previous buying preferences and related behaviours. Collaborative filtering is a popular computational technique that has been used worldwide for such personalized recommendations. Of the two forms of collaborative filtering, neighbourhood-based and model-based, the neighbourhood-based approach is more popular yet relatively simple. It relies on the concept that a certain item might be of interest to a given customer (active user) if either he appreciated similar items in the buying space, or the item is appreciated by similar users (neighbours). To implement this concept, different kinds of similarity measures are used. This thesis sets out to compare different user-based similarity measures and to define meaningful measures based on the Hellinger distance, which is a metric in the space of probability distributions. Data from the popular MovieLens database is used to show the effectiveness of different Hellinger distance-based measures compared to other popular measures such as Pearson correlation (PC), cosine similarity, constrained PC and JMSD. The performance of the different similarity measures is then evaluated with the help of mean absolute error, root mean squared error and F-score. From the results, no evidence was found to claim that the Hellinger distance-based measures performed better than the more popular similarity measures for the given dataset.

Abstract (Swedish, translated): Title: Hellinger distance-based similarity measures for recommender systems. Recommender systems are most often used in e-commerce for recommending potential goods/products that a customer will be interested in buying, based on their previous purchase preferences and related behaviour.
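A minimal sketch of one way to build a Hellinger-distance-based user similarity — treat each user's ratings as a distribution over rating levels and set similarity to one minus the Hellinger distance; this is an illustrative construction, not necessarily the thesis's exact definition:

```python
import numpy as np

def rating_distribution(ratings, levels=(1, 2, 3, 4, 5)):
    """Empirical distribution of a user's ratings over the rating scale."""
    counts = np.array([np.sum(ratings == r) for r in levels], dtype=float)
    return counts / counts.sum()

def hellinger_similarity(u, v):
    """Similarity in [0, 1]: 1 minus the Hellinger distance between rating distributions."""
    p, q = rating_distribution(u), rating_distribution(v)
    h = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    return 1.0 - h

alice = np.array([5, 4, 4, 5, 3])
bob   = np.array([4, 4, 5, 3, 4])
carol = np.array([1, 2, 1, 2, 1])

print("sim(alice, bob)   =", round(hellinger_similarity(alice, bob), 3))
print("sim(alice, carol) =", round(hellinger_similarity(alice, carol), 3))
```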
  • On the Behavior of Convolutional Nets for Feature Extraction (arXiv:1703.01127v4 [cs.NE], 29 Jan 2018)
On the Behavior of Convolutional Nets for Feature Extraction. Dario Garcia-Gasulla ([email protected]), Ferran Parés ([email protected]), Armand Vilalta ([email protected]), Jonatan Moreno — Barcelona Supercomputing Center (BSC), Omega Building, C. Jordi Girona 1-3, 08034 Barcelona, Catalonia. Eduard Ayguadé, Jesús Labarta, Ulises Cortés — Barcelona Supercomputing Center (BSC) and Universitat Politècnica de Catalunya - BarcelonaTech (UPC). Toyotaro Suzumura — IBM T.J. Watson Research Center, New York, USA, and Barcelona Supercomputing Center (BSC).

Abstract: Deep neural networks are representation learning techniques. During training, a deep net is capable of generating a descriptive language of unprecedented size and detail in machine learning. Extracting the descriptive language coded within a trained CNN model (in the case of image data), and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors previously learnt by the CNN after processing millions of images, without requiring an expensive training phase. Contributions to this field (commonly known as feature representation transfer or transfer learning) have been purely empirical so far, extracting all CNN features from a single layer close to the output and testing their performance by feeding them to a classifier. This approach has provided consistent results, although its relevance is limited to classification tasks. In a completely different approach, in this paper we statistically measure the discriminative power of every single feature found within a deep CNN, when used for characterizing every class of 11 datasets. We seek to provide new insights into the behavior of CNN features, particularly the ones from convolutional layers, as this can be relevant for their application to knowledge representation and reasoning.
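The entry above scores how discriminative individual CNN features are for each class. A toy sketch of that idea on stand-in feature activations, scoring each (feature, class) pair by the Hellinger distance between within-class and out-of-class histograms — an illustrative statistic, not the paper's actual measure:

```python
import numpy as np

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def feature_class_scores(features, labels, n_bins=20):
    """For each (feature, class) pair, score discriminative power as the Hellinger
    distance between the feature's within-class and out-of-class histograms."""
    classes = np.unique(labels)
    scores = np.zeros((features.shape[1], len(classes)))
    for j in range(features.shape[1]):
        edges = np.histogram_bin_edges(features[:, j], bins=n_bins)
        for k, c in enumerate(classes):
            inside, _ = np.histogram(features[labels == c, j], bins=edges)
            outside, _ = np.histogram(features[labels != c, j], bins=edges)
            scores[j, k] = hellinger(inside / inside.sum(), outside / outside.sum())
    return scores

# Toy stand-in for pre-extracted CNN activations: 200 samples, 8 features, 2 classes.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
features = rng.normal(size=(200, 8))
features[:, 0] += 2.0 * labels          # feature 0 is made class-dependent
print(feature_class_scores(features, labels).round(2))
```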
  • On Measures of Entropy and Information
On Measures of Entropy and Information. Tech. Note 009 v0.7, http://threeplusone.com/info. Gavin E. Crooks, 2018-09-22.

Contents: 0 Notes on notation and nomenclature. 1 Entropy: entropy; joint entropy; marginal entropy; conditional entropy. 2 Mutual information: mutual information; multivariate mutual information; interaction information; conditional mutual information; binding information; residual entropy; total correlation; lautum information; uncertainty coefficient. 3 Relative entropy: relative entropy; cross entropy. 5 Csiszár f-divergences: Csiszár f-divergence; dual f-divergence; symmetric f-divergences; K-divergence; fidelity; Hellinger discrimination; Pearson divergence; Neyman divergence; LeCam discrimination; skewed K-divergence; alpha-Jensen-Shannon-entropy. 6 Chernoff divergence: Chernoff divergence; Chernoff coefficient; Rényi divergence; alpha-divergence; Cressie-Read divergence; Tsallis divergence; Sharma-Mittal divergence.
  • The Bhattacharyya Kernel Between Sets of Vectors
THE BHATTACHARYYA KERNEL BETWEEN SETS OF VECTORS. YJ (YO JOONG) CHOE

Abstract. In machine learning, we wish to find general patterns from a given set of examples, and in many cases we can naturally represent each example as a set of vectors rather than just a vector. In this expository paper, we show how to represent a learning problem in this "bag of tuples" approach and define the Bhattacharyya kernel, a similarity measure between such sets related to the Bhattacharyya distance. The kernel is particularly useful because it is invariant under small transformations between examples. We also show how to extend this method using the kernel trick and still compute the kernel in closed form.

Contents: 1. Introduction. 2. Background: 2.1 Learning; 2.2 Examples of learning algorithms; 2.3 Kernels; 2.4 The normal distribution; 2.5 Maximum likelihood estimation (MLE); 2.6 Principal component analysis (PCA). 3. The Bhattacharyya kernel. 4. The "bag of tuples" approach. 5. The Bhattacharyya kernel between sets of vectors: 5.1 The multivariate normal model using MLE. 6. Extension to higher-dimensional spaces using the kernel trick: 6.1 Analogue of the normal model using MLE: regularized covariance form; 6.2 Kernel PCA; 6.3 Summary of the process and computation of the Bhattacharyya kernel. Acknowledgments. References.

1. Introduction. In machine learning and statistics, we utilize kernels, which are measures of "similarity" between two examples. Kernels take place in various learning algorithms, such as the perceptron and support vector machines.
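A minimal sketch of the basic construction described above — fit a multivariate normal to each set of vectors by MLE and evaluate the Bhattacharyya coefficient between the two Gaussians in closed form (no kernel trick or regularized covariance as in the paper's later sections; the small ridge term is an added assumption for numerical stability):

```python
import numpy as np

def gaussian_mle(X):
    """MLE mean and covariance of a set of vectors (rows of X)."""
    mu = X.mean(axis=0)
    diff = X - mu
    sigma = diff.T @ diff / X.shape[0]
    return mu, sigma + 1e-6 * np.eye(X.shape[1])  # small ridge for numerical stability

def bhattacharyya_kernel(X1, X2):
    """Bhattacharyya affinity between Gaussians fitted to two sets of vectors."""
    mu1, s1 = gaussian_mle(X1)
    mu2, s2 = gaussian_mle(X2)
    s = 0.5 * (s1 + s2)
    d = mu1 - mu2
    db = (d @ np.linalg.solve(s, d)) / 8.0 + 0.5 * np.log(
        np.linalg.det(s) / np.sqrt(np.linalg.det(s1) * np.linalg.det(s2)))
    return np.exp(-db)  # Bhattacharyya coefficient, in (0, 1]

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(50, 3))   # one "bag" of vectors
B = rng.normal(0.2, 1.0, size=(50, 3))   # a slightly shifted bag
C = rng.normal(3.0, 1.0, size=(50, 3))   # a clearly different bag
print(round(bhattacharyya_kernel(A, B), 3), round(bhattacharyya_kernel(A, C), 3))
```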
  • Tailoring Differentially Private Bayesian Inference to Distance Between Distributions
Tailoring Differentially Private Bayesian Inference to Distance Between Distributions. Jiawen Liu*, Mark Bun**, Gian Pietro Farina*, and Marco Gaboardi*. *University at Buffalo, SUNY, {jliu223, gianpiet, gaboardi}@buffalo.edu. **Princeton University, [email protected]

Contents: 1 Introduction. 2 Preliminaries. 3 Technical Problem Statement and Motivations. 4 Mechanism Proposition: 4.1 Laplace Mechanism Family (using the ℓ1 norm metric; using an improved ℓ1 norm metric); 4.2 Exponential Mechanism Family (standard exponential mechanism; exponential mechanism with Hellinger metric and local sensitivity; exponential mechanism with Hellinger metric and smoothed sensitivity). 5 Privacy Analysis: privacy of the Laplace mechanism family; privacy of the exponential mechanism family (differential privacy of expMech; non-differential privacy of expMech_local; differential privacy proof for expMech_smoo). 6 Accuracy Analysis: accuracy bounds for the baseline mechanisms (Laplace mechanism; improved Laplace mechanism); accuracy bound for expMech_smoo; accuracy comparison between expMech_smoo, lapMech and ilapMech. 7 Experimental Evaluations: efficiency evaluation; accuracy evaluation (theoretical results; experimental results); privacy evaluation. 8 Conclusion and Future Work.

Abstract: Bayesian inference is a statistical method which allows one to derive a posterior distribution, starting from a prior distribution and observed data.
  • An Information-Geometric Approach to Feature Extraction and Moment Reconstruction in Dynamical Systems
An information-geometric approach to feature extraction and moment reconstruction in dynamical systems. Suddhasattwa Das (a), Dimitrios Giannakis (a), Enikő Székely (b). (a) Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA. (b) Swiss Data Science Center, ETH Zürich and EPFL, 1015 Lausanne, Switzerland.

Abstract: We propose a dimension reduction framework for feature extraction and moment reconstruction in dynamical systems that operates on spaces of probability measures induced by observables of the system rather than directly in the original data space of the observables themselves as in more conventional methods. Our approach is based on the fact that orbits of a dynamical system induce probability measures over the measurable space defined by (partial) observations of the system. We equip the space of these probability measures with a divergence, i.e., a distance between probability distributions, and use this divergence to define a kernel integral operator. The eigenfunctions of this operator create an orthonormal basis of functions that capture different timescales of the dynamical system. One of our main results shows that the evolution of the moments of the dynamics-dependent probability measures can be related to a time-averaging operator on the original dynamical system. Using this result, we show that the moments can be expanded in the eigenfunction basis, thus opening up the avenue for nonparametric forecasting of the moments. If the collection of probability measures is itself a manifold, we can in addition equip the statistical manifold with the Riemannian metric and use techniques from information geometry. We present applications to ergodic dynamical systems on the 2-torus and the Lorenz 63 system, and show on a real-world example that a small number of eigenvectors is sufficient to reconstruct the moments (here the first four moments) of an atmospheric time series, i.e., the real-time multivariate Madden-Julian oscillation index.
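A toy sketch of the general pipeline described above — sliding-window empirical distributions of an observable, a pairwise divergence matrix (Hellinger here), a kernel built from it, and the kernel's leading eigenvectors as features. This is an illustrative reduction of the idea, not the paper's actual operator construction:

```python
import numpy as np

def window_histograms(x, win, bins):
    """Empirical distribution of the observable over each (non-overlapping) window."""
    hists = []
    for start in range(0, len(x) - win + 1, win):
        counts, _ = np.histogram(x[start:start + win], bins=bins)
        hists.append(counts / counts.sum())
    return np.array(hists)

def hellinger_matrix(H):
    """Pairwise Hellinger distances between the window distributions."""
    S = np.sqrt(H)
    sq = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    return np.sqrt(0.5 * sq)

# Toy observable of a dynamical system: a slowly modulated oscillation.
t = np.linspace(0, 200, 20000)
x = np.sin(t) * (1.0 + 0.5 * np.sin(0.05 * t))

H = window_histograms(x, win=500, bins=np.linspace(-2, 2, 31))
D = hellinger_matrix(H)
K = np.exp(-D**2 / np.median(D[D > 0]) ** 2)   # kernel built from the divergence
eigvals, eigvecs = np.linalg.eigh(K)           # leading eigenvectors capture slow timescales
print(eigvals[-4:])                            # a few leading eigenvalues of the kernel matrix
```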
  • European Mathematical Society Newsletter No. 22, December 1996 (issue PDF)
European Mathematical Society NEWSLETTER No. 22, December 1996. Contents: Second European Congress of Mathematics; Report on the Second Junior Mathematical Congress; Obituary – Paul Erdős; Report on the Council and Executive Committee Meetings; Fifth Framework Programme for Research and Development; Diderot Mathematics Forum; Report on the Prague Mathematical Conference; Preliminary report on EMS Summer School; EMS Lectures; European Women in Mathematics; Euronews; Problem Corner; Book Reviews. Produced at the Department of Mathematics, Glasgow Caledonian University; printed by Armstrong Press, Southampton, UK.

EDITORS: Prof Roy Bradley, Department of Mathematics, Glasgow Caledonian University, Glasgow G4 0BA, Scotland, e-mail: [email protected]. Editorial Team Glasgow: R. Bradley, V. Jha, J. Gomatam, G. Kennedy, M. A. Speller, J. Wilson. Editor – Mathematics Education: Prof. Vinicio Villani, Dipartimento di Matematica, Via Buonarroti 2, 56127 Pisa, Italy, e-mail: [email protected]. Editors – Brief Reviews: I. Netuka and V. Souček, Mathematical Institute, Charles University, Sokolovská 83, 18600 Prague, Czech Republic, e-mail: [email protected]. Newsletter editor: R. Bradley, Glasgow Caledonian University (address above), e-mail: [email protected]. Newsletter advertising officer: M. […]. Secretary: Peter W. Michor, Institut für Mathematik, Universität Wien, Strudlhofgasse 4, A-1090 Wien, Austria, e-mail: [email protected]. Treasurer: A. Lahtinen, Department of Mathematics, P.O. Box 4, FIN-00014 University of Helsinki, Finland, e-mail: [email protected]. EMS Secretariat: Ms. T. Makelainen, University of Helsinki (address above), e-mail: [email protected], tel: +358-9-1912 2883, telex: 124690, fax: +358-9-1912 3213. USEFUL ADDRESSES – President: Jean-Pierre Bourguignon, IHES, Route de Chartres, F-94400 Bures-sur-Yvette.
  • The Similarity for Nominal Variables Based on F-Divergence
International Journal of Database Theory and Application, Vol. 9, No. 3 (2016), pp. 191-202. http://dx.doi.org/10.14257/ijdta.2016.9.3.19

The Similarity for Nominal Variables Based on F-Divergence. Zhao Liang (1) and Liu Jianhui (2). (1) Institute of Graduate, Liaoning Technical University, Fuxin, Liaoning, 123000, P.R. China. (2) School of Electronic and Information Engineering, Liaoning Technical University, Huludao, Liaoning, 125000, P.R. China. [email protected], [email protected]

Abstract: Measuring the similarity between nominal variables is an important problem in data mining, and it is the basis for measuring the similarity of data objects that contain nominal variables. There are two kinds of traditional methods for this task: the first simply distinguishes variables as same or not same, while the second measures the similarity based on co-occurrence with variables of other attributes. Though they perform well in some conditions, they are still not accurate enough. This paper proposes an algorithm to measure the similarity between nominal variables of the same attribute, based on the fact that the similarity between nominal variables depends on the relationship between the subsets which hold them in the same dataset. The algorithm uses the difference between distributions, quantified by an f-divergence, to form feature vectors for nominal variables. Theoretical analysis helps to choose the best metric from the four most commonly used forms of f-divergence. The time complexity of the method is linear in the size of the dataset, which makes it suitable for processing large-scale data. Experiments using the derived similarity metrics with K-modes on extensive UCI datasets demonstrate the effectiveness of the proposed method.
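A minimal sketch of the underlying idea — compare two categories of an attribute through an f-divergence (Hellinger here) between the conditional distributions they induce on another attribute; this is one simple instantiation, not necessarily the paper's exact algorithm, and all names and data are illustrative:

```python
import numpy as np
import pandas as pd

def conditional_dist(df, attr, value, other):
    """P(other | attr == value), as a vector over the categories of `other`."""
    counts = df.loc[df[attr] == value, other].value_counts()
    counts = counts.reindex(sorted(df[other].unique()), fill_value=0).astype(float)
    return (counts / counts.sum()).to_numpy()

def nominal_similarity(df, attr, v1, v2, other):
    """1 minus the Hellinger distance between the conditional distributions of `other`."""
    p = conditional_dist(df, attr, v1, other)
    q = conditional_dist(df, attr, v2, other)
    return 1.0 - np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

df = pd.DataFrame({
    "color": ["red", "red", "blue", "blue", "green", "green", "red", "blue"],
    "class": ["A",   "A",   "B",    "B",    "A",     "B",     "A",   "B"],
})
print(nominal_similarity(df, "color", "red", "green", "class"))
print(nominal_similarity(df, "color", "red", "blue", "class"))
```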
  • Statistics as Both a Purely Mathematical Activity and an Applied Science (NAW 5/18, No. 1, March 2017)
Piet Groeneboom, Jan van Mill, Aad van der Vaart — Statistics as both a purely mathematical activity and an applied science. NAW 5/18, No. 1, March 2017. Piet Groeneboom, Delft Institute of Applied Mathematics, Delft University of Technology, [email protected]; Jan van Mill, KdV Institute for Mathematics, University of Amsterdam, [email protected]; Aad van der Vaart, Mathematical Institute, Leiden University, [email protected].

In Memoriam Kobus Oosterhoff (1933–2015). On 27 May 2015 Kobus Oosterhoff passed away at the age of 82. Kobus was employed at the Mathematisch Centrum in Amsterdam from 1961 to 1969, at the Roman Catholic University of Nijmegen from 1970 to 1974, and then as professor in Mathematical Statistics at the Vrije Universiteit Amsterdam from 1975 until his retirement in 1996. In this obituary Piet Groeneboom, Jan van Mill and Aad van der Vaart look back on his life and work.

Kobus (officially: Jacobus) Oosterhoff was born on 7 May 1933 in Leeuwarden, the capital of the province of Friesland in the […] diploma (comparable to a master's) in 1963. His favorite lecturer was the topologist J. de Groot, who seems to have deeply […] contact with Hemelrijk and the encouragement received from him, but he did his thesis under the direction of Willem van Zwet who, one year younger than Kobus, had been a professor at Leiden University since 1965. Kobus became Willem's first PhD student, defending his dissertation Combination of One-sided Test Statistics on 26 June 1969 at Leiden University.
  • Robust Learning from Multiple Information Sources
Robust Learning from Multiple Information Sources, by Tianpei Xie. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Electrical Engineering) in the University of Michigan, 2017. Doctoral Committee: Professor Alfred O. Hero, Chair; Professor Laura Balzano; Professor Danai Koutra; Professor Nasser M. Nasrabadi, University of West Virginia. © Tianpei Xie, [email protected], ORCID iD: 0000-0002-8437-6069, 2017.

Dedication: To my loving parents, Zhongwei Xie and Xiaoming Zhang. Your patience and love bring me strength and happiness.

Acknowledgments: This thesis marks a milestone in my life-long study. Since entering the master's program in Electrical Engineering at the University of Michigan, Ann Arbor (UMich), I have spent more than 6 years in Ann Arbor, with 2 years as a master's student and nearly 5 years as a PhD student. This has been a unique experience in my life. Surrounded by friendly colleagues, professors and staff at UMich, I feel like a member of a family in this place. This feeling should not fade in the future. Honestly, I love this place more than anywhere else. I love its quiescence, with birds chirping in the woods. I love its politeness and patience, with people walking slowly along the street. I love its openness, with people of different races and nationalities working as closely as if they were life-long friends. I feel that I owe a debt of gratitude to my friends, my advisors and this university. Many people have made this thesis possible. I wish to thank the Department of Electrical and Computing Engineering at the University of Michigan, and the U.S. […]
  • On Some Properties of Goodness of Fit Measures Based on Statistical Entropy
IJRRAS 13 (1), October 2012. www.arpapress.com/Volumes/Vol13Issue1/IJRRAS_13_1_22.pdf

ON SOME PROPERTIES OF GOODNESS OF FIT MEASURES BASED ON STATISTICAL ENTROPY. Atif Evren & Elif Tuna, Yildiz Technical University, Faculty of Sciences and Literature, Department of Statistics, Davutpasa, Esenler, 34210, Istanbul, Turkey.

ABSTRACT: Goodness of fit tests can be categorized in several ways. One categorization may be based on the differences between observed and expected frequencies. Another may be based upon the differences between some values of distribution functions. Still another may be based upon the divergence of one distribution from the other. Some widely used and well-known divergences, like the Kullback-Leibler divergence or Jeffreys divergence, are based on entropy concepts. In this study, we compare some basic goodness of fit tests in terms of their statistical properties, with some applications. Keywords: Measures for goodness of fit, likelihood ratio, power divergence statistic, Kullback-Leibler divergence, Jeffreys' divergence, Hellinger distance, Bhattacharya divergence.

1. INTRODUCTION. Boltzmann may have been the first scientist to emphasize the probabilistic meaning of thermodynamical entropy. For him, the entropy of a physical system is a measure of the disorder related to it [35]. For probability distributions, on the other hand, the general idea is that observing a random variable whose value is known is uninformative. Entropy is zero if there is unit probability at a single point, whereas if the distribution is widely dispersed over a large number of individually small probabilities, the entropy is high [10]. Statistical entropy has some conflicting interpretations, so that it sometimes measures two complementary conceptions, such as information and lack of information.
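A small worked example (with made-up counts) of two of the statistics discussed above — the likelihood-ratio (G) statistic, which equals 2N times the Kullback-Leibler divergence of the observed from the expected distribution, and Pearson's chi-square:

```python
import numpy as np
from scipy.stats import chi2

observed = np.array([18, 55, 67, 35, 25])              # observed category counts
expected_p = np.array([0.1, 0.25, 0.3, 0.2, 0.15])     # hypothesized distribution
expected = expected_p * observed.sum()

# Likelihood-ratio (G) statistic: 2*N times the KL divergence of observed from expected.
G = 2.0 * np.sum(observed * np.log(observed / expected))
# Pearson chi-square statistic for comparison.
X2 = np.sum((observed - expected) ** 2 / expected)

dof = len(observed) - 1
print("G  =", round(G, 3), " p =", round(chi2.sf(G, dof), 3))
print("X2 =", round(X2, 3), " p =", round(chi2.sf(X2, dof), 3))
```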