On Relations Between the Relative Entropy and χ²-Divergence, Generalizations and Applications

Tomohiro Nishiyama 1 and Igal Sason 2,*

1 Independent Researcher, Tokyo 206-0003, Japan; [email protected]
2 Faculty of Electrical Engineering, Technion—Israel Institute of Technology, Technion City, Haifa 3200003, Israel
* Correspondence: [email protected]; Tel.: +972-4-8294699

Received: 22 April 2020; Accepted: 17 May 2020; Published: 18 May 2020

Abstract: The relative entropy and the chi-squared divergence are fundamental divergence measures in information theory and statistics. This paper is focused on a study of integral relations between the two divergences, the implications of these relations, their information-theoretic applications, and some generalizations pertaining to the rich class of f-divergences. Applications that are studied in this paper refer to lossless compression, the method of types and large deviations, strong data-processing inequalities, bounds on contraction coefficients and maximal correlation, and the convergence rate to stationarity of a type of discrete-time Markov chains.

Keywords: relative entropy; chi-squared divergence; f-divergences; method of types; large deviations; strong data-processing inequalities; information contraction; maximal correlation; Markov chains

1. Introduction

The relative entropy (also known as the Kullback–Leibler divergence [1]) and the chi-squared divergence [2] are divergence measures which play a key role in information theory, statistics, learning, signal processing, and other theoretical and applied branches of mathematics. These divergence measures are fundamental in problems pertaining to source and channel coding, combinatorics and large deviations theory, goodness-of-fit and independence tests in statistics, expectation–maximization iterative algorithms for estimating a distribution from incomplete data, and other sorts of problems (the reader is referred to the tutorial paper by Csiszár and Shields [3]).
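As a brief reminder (the standard definitions are assumed here rather than restated in this passage; for probability mass functions P and Q on a common discrete alphabet with Q(x) > 0 for all x), the two divergences are given by
\[
D(P \,\|\, Q) = \sum_{x} P(x) \, \log \frac{P(x)}{Q(x)},
\qquad
\chi^2(P \,\|\, Q) = \sum_{x} \frac{\bigl(P(x) - Q(x)\bigr)^2}{Q(x)}
= \sum_{x} \frac{P^2(x)}{Q(x)} - 1.
\]
Both are instances of f-divergences, obtained with f(t) = t log t and f(t) = (t − 1)², respectively, which is the unifying viewpoint behind the generalizations mentioned in the abstract.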