2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Allerton Park and Retreat Center, Monticello, IL, USA, October 2-5, 2018

Rearrangements and information theoretic inequalities

James Melbourne
Electrical and Computer Engineering
University of Minnesota
Email: [email protected]

Abstract—We investigate the interaction of functional rearrangements with information theoretic inequalities. In particular we prove that the relative Fisher information from Gaussianity decreases on half-space rearrangement; as a consequence we get a qualitative sharpening of the usual Gaussian log-Sobolev inequality. Additionally, we compare this half-space rearrangement's interaction with distance from Gaussianity to the spherical rearrangement's role in entropy power inequalities.

I. INTRODUCTION

This work explores the role of density rearrangements in information theoretic inequalities. Some of the material is subsumed by a mathematical journal preprint [21] by the author. The purpose of this article is to emphasize the information theoretic consequences. In particular we explore in more depth "Gaussian half-space" rearrangement, pivotal at a set theoretic level to the Borell-Ehrhart inequality [6], [13] and the Gaussian isoperimetric inequality [5], [25], and show that when combined with the Prékopa-Leindler inequality (PLI) these inequalities deliver a rearrangement sharpening of an integrated Gaussian log-Sobolev inequality (LSI).

The theory can be viewed from two distinct (though arguably convergent, see [17]) perspectives. Firstly, the work here is described in relationship to the role of spherical rearrangement in the entropy power inequality (EPI) and more general Rényi EPIs. An analogy between the two is demonstrated by proving that, just as the spherical rearrangement preserves the Rényi entropy of a random variable, the Gaussian rearrangement preserves the Rényi divergence from Gaussianity. A main result shows that the relative Fisher information from Gaussianity decreases on Gaussian rearrangement, thus sharpening the Gaussian log-Sobolev inequality. This should be compared to Wang and Madiman's sharpening of the EPI, showing that the entropy of independent sums decreases on spherical rearrangement.

Secondly, the Gaussian rearrangements developed here have a convex geometric inspiration. The thrust of [21] is on spherical rearrangements and the PLI as well as generalizations thereof. This rearrangement PLI gives a qualitative version analogous to the statement of the Brunn-Minkowski inequality in terms of the volume of Minkowski sums on spherical rearrangement, see Theorem II.4 below.

This provides further evidence of the efficacy of cross pollination between the two subjects, in line with research of the last several decades that has built intimate connections between the inequalities of information theory and convex geometry. We direct the interested reader to [1], [10]–[12], [15], [19] for further examples of such work.

The paper is organized in the following manner: in Section II we give definitions and background on a notion of rearrangements; in Section III we follow an argument of Bobkov and Ledoux [4] to show how the PLI can be used to obtain the Gaussian LSI; in Section IV we prove a rearrangement sharpening of an integrated Gaussian LSI. We conclude in Section V, where we pose two questions, one related to the effect of rearrangement on the Rényi entropy of independent vectors under a linear map, the other related to rearrangement inequalities of Rogers-Brascamp-Lieb-Luttinger [8], [22] and the inequalities of Barthe-Brascamp-Lieb [2], [7], which have recently been shown to have deep connections to information theory.

II. PRELIMINARIES

Define the standard Gaussian density in R^d by

    γ(x) = γ_d(x) = (2π)^{-d/2} e^{-|x|^2/2}    (1)

and let

    Φ(x) = ∫_{-∞}^x γ_1(t) dt,    (2)

with Φ^{-1} denoting the inverse function.

Let X be a random vector with density f. The Shannon entropy is defined by

    h(X) = -∫ f(x) ln f(x) dx,    (3)

provided the integral exists, and its entropy power is

    N(X) = e^{2h(X)/d}.    (4)
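As a quick numerical sanity check of (3)–(4) (a sketch of ours, not code from the paper; all names are illustrative): for the standard one-dimensional Gaussian, h = (1/2) ln(2πe), so the entropy power should come out to N = 2πe ≈ 17.08.

```python
import numpy as np

def entropy_power(f, x):
    """Approximate N(X) = exp(2 h(X) / d) for d = 1 on a grid,
    with h(X) computed by trapezoidal quadrature of -f ln f as in (3)."""
    fx = f(x)
    h = -np.trapz(fx * np.log(fx), x)
    return np.exp(2.0 * h)

# Standard Gaussian density gamma_1 from (1); its entropy power is 2*pi*e.
gauss = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
x = np.linspace(-12, 12, 200001)
print(entropy_power(gauss, x))   # ≈ 2*pi*e ≈ 17.079
```

The same routine can be reused to compare N(X_1 + X_2) against N(Z_1 + Z_2) in the EPI statements below.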

978-1-5386-6596-1/18/$31.00 ©2018 IEEE

More generally, for p ∈ (0, 1) ∪ (1, ∞), the Rényi entropy can be defined by

    h_p(X) = (1 − p)^{-1} ln ∫ f^p(x) dx.    (5)

Using |·| to denote the Lebesgue measure of a set, and supp(X) to denote the support of X, the corresponding terms for p ∈ {0, 1, ∞} are defined through their limits, as {ln |supp(X)|, h(X), − ln ‖f‖_∞}, and we define the Rényi entropy power

    N_p(X) = e^{2h_p(X)/d}.    (6)

There has been considerable recent progress in understanding how the behavior of the Rényi entropy compares with the Shannon entropy, particularly in regards to the entropy power inequality. Let us give a formulation of the entropy power inequality.

Theorem II.1. [23], [24] For X_1 and X_2 independent vectors,

    N(X_1 + X_2) ≥ N(Z_1 + Z_2),    (7)

where Z_i are independent scalings of the standard Gaussian, chosen such that N(Z_i) = N(X_i).

These developments have been buoyed by results regarding spherically symmetric rearrangements. We recall the following result due to Wang and Madiman.

Theorem II.2. [27] For X_i independent with densities f_i, and X_i◦ independent with densities f_i◦,

    N_p(X_1 + ··· + X_n) ≥ N_p(X_1◦ + ··· + X_n◦).    (8)

A precise definition of f◦ will be given later in this section. Let us mention that we deviate from the conventional notation f_i* for the spherically symmetric decreasing rearrangement of a function, and reserve the * for a half-space rearrangement that will be central in Sections III and IV. Theorem II.2 is particularly significant in the context of proving EPIs since

    N_p(X) = N_p(X◦).    (9)

It follows that proving a Rényi entropy inequality of the form

    N_p(X_1 + ··· + X_n) ≥ ψ(N_p(X_1), ..., N_p(X_n))    (10)

for some coordinate increasing ψ is reduced to understanding the case that the X_i have spherically symmetric decreasing densities; for an example see [20]. In the context of the Shannon-Stam EPI, this sharpens Theorem II.1 to

    N(X_1 + X_2) ≥ N(X_1◦ + X_2◦) ≥ N(Z_1 + Z_2)    (11)
                                  = N(X_1) + N(X_2).    (12)

A. Information metrics

The Kullback-Leibler divergence with respect to the standard Gaussian is given by

    D(X) = D(f‖γ) = ∫ f(x) log (f(x)/γ(x)) dx.    (13)

When ϕ = f/γ denotes the density of X written with respect to γ, D(X) can be written as

    ∫ ϕ log ϕ dγ.    (14)

The Kullback-Leibler divergence is generalized by the Rényi divergence, defined by

    D_p(X) = D_p(f‖γ) = (p − 1)^{-1} log ∫ f^p(x) γ^{1−p}(x) dx.    (15)

The cases p ∈ {0, 1, ∞} are defined by their limits, so that

    D_0(X) = − log γ(f > 0),    (16)
    D_1(X) = D(X),    (17)
    D_∞(X) = log ‖f/γ‖_{∞,γ}.    (18)

See [26] for more background on Rényi divergence.

We define the relative Fisher information with respect to the standard Gaussian by

    I(X) = I(f‖γ) = ∫ f(x) |∇ log (f(x)/γ(x))|^2 dx.    (19)

This allows a succinct statement of the Gaussian log-Sobolev inequality as a comparison of information metrics.

Theorem II.3. [14] For a random vector X,

    D(X) ≤ I(X)/2.    (20)

Observe that when ϕ(x) = f(x)/γ(x) denotes the density of X with respect to γ, I(X) can be written as

    ∫ (|∇ϕ|^2/ϕ) dγ.    (21)

Thus Theorem II.3 can also be written as the following inequality, for ϕ a probability density with respect to γ:

    ∫ ϕ log ϕ dγ ≤ (1/2) ∫ (|∇ϕ|^2/ϕ) dγ.    (22)

B. Rearrangements

1) Spherically symmetric non-increasing rearrangements: Given a nonempty measurable set A ⊆ R^d we define its symmetric rearrangement A◦ to be the origin centered ball of equal volume. Explicitly,

    A◦ = {x : |x|_2 < (|A|/ω_d)^{1/d}},    (23)

where ω_d is the volume of the d-dimensional unit ball, with the understanding that A◦ = {0} in the case that |A| = 0 and A◦ = R^d when |A| = ∞. Via the layer-cake decomposition of a non-negative function f as

    f(x) = ∫_0^{f(x)} 1 dt = ∫_0^∞ 1_{{y : f(y) > t}}(x) dt    (24)

we can extend this notion of symmetrization to functions via the following definition.

Definition II.1. For a measurable non-negative function f define its non-increasing symmetric rearrangement f◦ by

    f◦(x) = ∫_0^∞ 1_{{y : f(y) > t}◦}(x) dt.    (25)

Finally the spherically symmetric rearrangement of a random variable can be given.

Definition II.2. For a random variable X with density f, define X◦ to be a random variable with density f◦.

Let us collect some properties of ◦. We will omit the proofs, which are elementary and can be derived by analogy to the Gaussian half-space rearrangement arguments given below.

Proposition II.3. f◦ is characterized by the equality

    {f◦ > λ} = {f > λ}◦.    (26)

Corollary II.4. f◦ is lower semi-continuous, spherically symmetric, and non-increasing in the sense that |x| ≤ |y| implies f◦(x) ≥ f◦(y).

This notion and notation for rearrangements allow a particularly simple statement of the classical Brunn-Minkowski inequality, in analogy with Theorem II.1.

Theorem II.4 (Brunn-Minkowski). For Borel A and B,

    |A + B|_d ≥ |A◦ + B◦|_d.    (27)

2) Gaussian half-space rearrangement: Define the Gaussian half-space rearrangement of a set A ⊆ R^d by

    A* = {x : x_1 < Φ^{-1}(γ(A))}.    (28)

Note that A* is the open first coordinate half space satisfying γ(A*) = γ(A). We can extend this notion to functions in the following definition.

Definition II.5. For measurable non-negative ϕ define

    ϕ*(x) = ∫_0^∞ 1_{{ϕ > t}*}(x) dt.    (29)

Proposition II.6. The following equality characterizes ϕ*:

    {ϕ* > λ} = {ϕ > λ}*.    (30)

In particular ϕ* is lower semi-continuous and γ-equimeasurable with ϕ, in that γ{ϕ > λ} = γ{ϕ* > λ}.

Proof. Since ϕ*(x) > λ implies ∫_0^∞ 1_{{ϕ > t}*}(x) dt > λ, by the monotonicity of the sets {ϕ > t}* there exists a t > λ such that x ∈ {ϕ > t}*. Thus

    {ϕ* > λ} ⊆ {ϕ > λ}*.    (31)

In the opposite direction, x ∈ {ϕ > λ}* means by definition that

    x_1 < Φ^{-1}(γ{ϕ > λ}).    (32)

By continuity of measure, γ({ϕ > t}) increases to γ({ϕ > λ}) as t decreases to λ. From the continuity and monotonicity of Φ, equality of the two sets follows. Lower semi-continuity is immediate, as (30) shows that ϕ* has open super-level sets. Equimeasurability follows just as easily,

    γ({ϕ* > λ}) = γ({ϕ > λ}*) = γ({ϕ > λ}).    (33)

For a random variable X with density f, so that its density with respect to the Gaussian is ϕ = f/γ, let X* be a random variable drawn from the density ϕ*γ.

Now let us observe that Gaussian half-space rearrangements preserve Rényi divergence from Gaussianity.

Proposition II.7. For a random vector X and p ∈ [0, ∞],

    D_p(X) = D_p(X*).    (34)

Proof. We prove the result for p ∈ (0, 1) ∪ (1, ∞); the limiting cases are no more complicated. Observe that by (15),

    D_p(X) = (p − 1)^{-1} log ∫ f^p(x) γ^{1−p}(x) dx    (35)
           = (p − 1)^{-1} log ∫ ϕ^p dγ.    (36)

By the layer cake decomposition,

    ∫ ϕ^p dγ = ∫_0^∞ γ({ϕ^p > t}) dt.    (37)

Our result now follows from the equimeasurability

    γ({ϕ* > t^{1/p}}) = γ({ϕ > t^{1/p}})    (38)

guaranteed by Proposition II.6.

This gives an analog of the fact that spherical rearrangement with respect to Lebesgue measure preserves Rényi entropy (9). Let us also pause to acknowledge that the Borell-Ehrhart inequality can be conveniently stated within this rearrangement framework.
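As a numerical illustration (a sketch of ours under stated assumptions, not code from the paper; all names are illustrative): in dimension 1 the half-space rearrangement of Definition II.5 can be computed on a grid by sorting the values of ϕ in decreasing order against the Gaussian measure of the grid cells. For a bimodal test density this checks the equimeasurability of Proposition II.6 and the divergence preservation D_p(X) = D_p(X*) of Proposition II.7.

```python
import numpy as np

def halfspace_rearrangement(phi, x):
    """Grid sketch of phi* in dimension 1: phi*(x) is the value v with
    gamma{phi > v} equal to the Gaussian measure of (-inf, x)."""
    g = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    w = g * (x[1] - x[0])               # gamma-weight of each grid cell
    Phi_x = np.cumsum(w)                # discrete Gaussian CDF along the grid
    order = np.argsort(-phi)            # values of phi, largest first
    cum = np.cumsum(w[order])           # cum[k] ~ gamma{phi > (k-th largest value)}
    idx = np.searchsorted(cum, Phi_x)
    return phi[order][np.clip(idx, 0, len(x) - 1)]

x = np.linspace(-10, 10, 100001)
g = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
w = g * (x[1] - x[0])

phi = np.exp(-(x - 2)**2) + np.exp(-(x + 2)**2)   # a bimodal test function
phi /= np.sum(phi * w)                            # normalize: a density w.r.t. gamma
phi_star = halfspace_rearrangement(phi, x)

# Proposition II.6: phi* is non-increasing and gamma{phi > lam} = gamma{phi* > lam}.
lam = 1.0
print(np.sum(w[phi > lam]), np.sum(w[phi_star > lam]))

# Proposition II.7 for p = 1 and p = 2: divergence from gamma is preserved.
D1, D1_star = np.sum(phi * np.log(phi) * w), np.sum(phi_star * np.log(phi_star) * w)
D2, D2_star = np.log(np.sum(phi**2 * w)), np.log(np.sum(phi_star**2 * w))
print(D1, D1_star, D2, D2_star)
```

Note that the bimodal ϕ is replaced by a single monotone profile ϕ*, decreasing in the first coordinate, exactly as the half-space super-level sets in (30) dictate.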

Theorem II.5. [6], [13] For A, B Borel subsets of R^d and t ∈ (0, 1),

    γ((1 − t)A + tB) ≥ γ((1 − t)A* + tB*).    (39)

This theorem is a generalization of the Gaussian isoperimetric inequality, which we will need for later use.

Theorem II.6. [5], [25] When A is a Borel subset of R^d and B is an origin symmetric Euclidean ball,

    γ(A + B) ≥ γ(A* + B).    (40)

In this case we will be rearranging a density with respect to a Gaussian to one with symmetry with respect to the Gaussian's isoperimetric sets.

III. GAUSSIAN LOG-SOBOLEV INEQUALITY

The Prékopa-Leindler inequality (PLI) can be understood as a functional generalization of the dimension free statement of the Brunn-Minkowski inequality¹ on Euclidean space.

¹To recall: the Brunn-Minkowski inequality, in a dimension free form, says that for A, B compact in R^d, |(1 − t)A + tB| ≥ |A|^{1−t} |B|^t, where we have used |·| to denote the Lebesgue measure.

Theorem III.1 (Prékopa-Leindler). For f, g, h : R^d → [0, ∞) Borel measurable satisfying, for a fixed t ∈ (0, 1) and any x, y ∈ R^d,

    f((1 − t)x + ty) ≥ g^{1−t}(x) h^t(y),    (41)

then

    ∫_{R^d} f(z) dz ≥ (∫_{R^d} g(z) dz)^{1−t} (∫_{R^d} h(z) dz)^t.    (42)

Brunn-Minkowski can be recovered by taking indicator functions of sets f = 1_{(1−t)A+tB}, g = 1_A, and h = 1_B. Conversely, let us mention that the PLI can be derived from Brunn-Minkowski using elementary arguments in one dimension, and "tensorized" (in the sense that the PLI for two spaces implies the PLI for their product) to obtain the d-dimensional result.

If one wishes to integrate against a log-concave measure rather than the Lebesgue measure, Theorem III.1 admits an easy generalization. That is, for f, g, h satisfying (41) and convex function V, the conclusion (42) holds with dz replaced by e^{−V(z)} dz. This follows immediately from the original PLI since f, g, h can be augmented to

    f̃ = f e^{−V},  g̃ = g e^{−V},  h̃ = h e^{−V},    (43)

which by the convexity of V satisfy (41), and the result follows by application of the theorem.

For our purposes, it is worthwhile to observe that |x|^2/2 is strongly convex, in the sense that its Hessian is bounded below (in the sense of positive definite matrices) by the identity matrix. Thus γ is strongly log-concave in the following sense:

    γ((1 − t)x + ty) ≥ γ^{1−t}(x) γ^t(y) e^{t(1−t)|x−y|^2/2}.    (44)

It follows from routine algebra that if one wishes to integrate against a Gaussian measure, the hypothesis in Theorem III.1 can be weakened while still retaining the conclusion.

Theorem III.2 (Gaussian Prékopa-Leindler). For u, v, w : R^d → [0, ∞] such that

    u((1 − t)x + ty) ≥ e^{−t(1−t)|x−y|^2/2} v^{1−t}(x) w^t(y),    (45)

then with γ denoting the standard Gaussian measure in R^d,

    ∫ u dγ ≥ (∫ v dγ)^{1−t} (∫ w dγ)^t.    (46)

Let us proceed, following arguments of Bobkov-Ledoux, to show the connection between this strengthened PLI and the LSI. For a fixed p > 1 and f, take w = f^p, v = 1 and t = 1/p; then for any u satisfying

    u((1 − t)x + ty) ≥ e^{−t(1−t)|x−y|^2/2} f(y)    (47)

we have from Theorem III.2 an upper bound on the L^p(γ) norm of f:

    ∫ u dγ ≥ ‖f‖_{p,γ}.    (48)

With the interest of determining the optimal such u achievable through the methods of the PLI, we define the operator

    Q_t f(z) = sup_{(x,y) : (1−t)x+ty=z} e^{−t(1−t)|x−y|^2/2} f(y),    (49)

and collect our observations as the following.

Theorem III.3. [4] For p > 1 and Borel f ≥ 0,

    ∫ Q_{1/p} f dγ ≥ ‖f‖_{p,γ}.    (50)

Here ‖f‖_{p,γ} = (∫ f^p dγ)^{1/p}.

Corollary III.1. For a random vector X,

    D(X) ≤ (1/2) I(X).    (51)

Proof. By (22), with ϕ denoting the density of X with respect to γ, this is equivalent to

    ∫ ϕ log ϕ dγ ≤ (1/2) ∫ (|∇ϕ|^2/ϕ) dγ.    (52)

By the Taylor expansion of ‖ϕ‖_p about p = 1 we obtain

    ‖ϕ‖_p = ‖ϕ‖_1 + (p − 1) D(X) + o(p − 1),    (53)

and then, from (1 − t)x + ty = z, writing λ = (1 − t)/t = p − 1 and w = x − z, we have

    Q_t ϕ(z) = sup_w ϕ(z + λw) e^{−λ|w|^2/2}.    (54)

The Taylor expansion about λ = 0 gives

    ϕ(z + λw) e^{−λ|w|^2/2}    (55)
      = ϕ(z) + λ (∇ϕ(z) · w − (|w|^2/2) ϕ(z)) + o(λ).    (56)

This gives

    Q_t ϕ(z) = ϕ(z) + sup_w λ (∇ϕ(z) · w − (|w|^2/2) ϕ(z)) + o(λ)
             = ϕ(z) + (λ/2) |∇ϕ(z)|^2/ϕ(z) + o(λ).

Observing that λ = p − 1, this gives the following expansion:

    ∫ Q_t ϕ dγ    (57)
      = ‖ϕ‖_1 + ((p − 1)/2) ∫ (|∇ϕ|^2/ϕ) dγ + o(p − 1)    (58)
      = ‖ϕ‖_1 + ((p − 1)/2) I(X) + o(p − 1).    (59)

Using the expansions (57) and (53) in Theorem III.3, we achieve our inequality with p → 1 after algebraic cancellation.

Note that the o(λ) term is actually dependent on z, so the above is not a fully rigorous derivation. We direct the reader interested in such details to [4].

IV. REARRANGEMENT SHARPENING OF THE INTEGRAL GAUSSIAN LSI

Theorem IV.1. For ϕ a density with respect to γ and λ > 0,

    γ({Q_t ϕ > λ}) ≥ γ({Q_t ϕ* > λ}),    (60)

where ϕ* is the half-space rearrangement of ϕ.

Proof. Taking τ = (1 − t)/t (we write τ here, since λ is in use as the level of the super-level sets) and w = z − x, we obtain

    Q_t ϕ(z) = sup_w e^{−τ|w|^2/2} ϕ(z + τw)    (61)
             = sup_a e^{−|z−a|^2/(2τ)} ϕ(a).    (62)

Using the notation S = S(λ, q_1, q_2) = {q = (q_1, q_2) ∈ Q_+^2 : q_1 q_2 > λ}, the following holds:

    {Q_t ϕ > λ} =    (63)
      ∪_{q∈S} ({ϕ > q_1} + {w ∈ R^d : |w| < √(2τ ln(1/q_2))}).    (64)

Denoting

    ψ(q_2) = √(2τ ln(1/q_2)),    (65)

and noticing that {|w| < ψ(q_2)} is an origin symmetric Euclidean ball, we can apply Theorem II.6:

    γ({Q_t ϕ > λ})    (66)
      = γ( ∪_{q∈S} ({ϕ > q_1} + {|w| < ψ(q_2)}) )    (67)
      ≥ sup_{q∈S} γ({ϕ > q_1} + {|w| < ψ(q_2)})    (68)
      ≥ sup_{q∈S} γ({ϕ > q_1}* + {|w| < ψ(q_2)}).    (69)

But {ϕ > q_1}* = {ϕ* > q_1} is a half space, and hence the family of {ϕ* > q_1} + {|w| < ψ(q_2)} indexed by S(λ, q_1, q_2) is a family of totally ordered sets. Thus

    sup_{q∈S} γ({ϕ* > q_1} + {|w| < ψ(q_2)})    (70)
      = γ( ∪_{q∈S} ({ϕ* > q_1} + {|w| < ψ(q_2)}) ).    (71)

Applying (63) we have

    γ( ∪_{q∈S} ({ϕ* > q_1} + {|w| < ψ(q_2)}) )    (72)
      = γ({Q_t ϕ* > λ}),    (73)

and our theorem follows.

Observe that Theorem IV.1 is equivalent to the statement that for an increasing Borel function g : [0, ∞) → [0, ∞),

    ∫ g(Q_t f) dγ ≥ ∫ g(Q_t f*) dγ.    (74)

For example, with s > 0 and g(x) = x^s,

    ∫ (Q_t f)^s dγ ≥ ∫ (Q_t f*)^s dγ.    (75)

Combining Theorem III.3 and (74) we have the following.

Theorem IV.2. For p > 1,

    ‖ϕ‖_{p,γ} = ‖ϕ*‖_{p,γ} ≤ ∫ Q_{1/p} ϕ* dγ ≤ ∫ Q_{1/p} ϕ dγ.    (76)

Stating this in information theoretic terms, and using the same Taylor expansion arguments from the proof of Corollary III.1 as well as the invariance of Rényi divergence under half-space rearrangement, we arrive at the following sharpening of the Gaussian LSI.

Corollary IV.1. For a random vector X,

    D(X) = D(X*) ≤ I(X*)/2 ≤ I(X)/2.    (77)
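The chain (76) can be checked numerically in dimension 1. The sketch below is ours, not the paper's (all names are illustrative): it implements Q_t via the parametrization x = (z − ty)/(1 − t), under which (49) reads Q_t f(z) = sup_y e^{−t|z−y|^2/(2(1−t))} f(y), together with a sorting construction of ϕ*, and evaluates each term of (76) for a bimodal ϕ with p = 2.

```python
import numpy as np

def Q(t, vals, z):
    """Q_t applied to a function given by its values on the grid z, using
    Q_t f(z) = sup_y exp(-t (z - y)^2 / (2 (1 - t))) f(y)."""
    ker = np.exp(-t * (z[:, None] - z[None, :])**2 / (2 * (1 - t)))
    return np.max(ker * vals[None, :], axis=1)   # sup over the grid in y

def halfspace_rearrangement(vals, w):
    """Decreasing rearrangement of vals against the cell weights w."""
    order = np.argsort(-vals)
    idx = np.searchsorted(np.cumsum(w[order]), np.cumsum(w))
    return vals[order][np.clip(idx, 0, len(w) - 1)]

z = np.linspace(-8, 8, 2001)
g = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
w = g * (z[1] - z[0])                            # gamma-weight of each cell

phi = np.exp(-2 * (z - 2)**2) + np.exp(-2 * (z + 2)**2)
phi /= np.sum(phi * w)                           # a density with respect to gamma
phi_star = halfspace_rearrangement(phi, w)

p = 2.0
norm      = np.sum(phi**p * w) ** (1 / p)        # ||phi||_{p, gamma}
norm_star = np.sum(phi_star**p * w) ** (1 / p)   # = ||phi*||_{p, gamma}
mid       = np.sum(Q(1 / p, phi_star, z) * w)    # integral of Q_{1/p} phi* dgamma
top       = np.sum(Q(1 / p, phi, z) * w)         # integral of Q_{1/p} phi  dgamma
print(norm, norm_star, mid, top)                 # non-decreasing, per (76)
```

The printed values should appear in the order of (76): the two (equal) norms first, then the rearranged and unrearranged Q-integrals.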

V. FUTURE WORK

A. Generalizations of Wang-Madiman

It is natural to ask if the Wang-Madiman Rényi entropy rearrangement inequality, Theorem II.2, holds for general linear maps. There exist such results in at least two special cases. For X = (X_1, ..., X_n) with X_i independent R^d random vectors, let us denote X_◦ = (X_1◦, ..., X_n◦), where the X_i◦ are independent and drawn from the rearrangement of X_i. We can now write the Wang-Madiman rearrangement theorem as

    N_p(P ⊗ I_d(X)) ≥ N_p(P ⊗ I_d(X_◦)),    (78)

where P = (1, 1, ..., 1), I_d denotes the d × d identity matrix and ⊗ denotes the Kronecker product. Note that when d = 1 this reduces to ordinary matrix multiplication.

Theorem V.1. [29] For X = (X_1, ..., X_n) with X_i independent real valued random variables, and A an m × n matrix,

    N_0(AX) ≥ N_0(AX_◦).    (79)

Theorem V.2. [18], [28] For X = (X_1, ..., X_n) with X_i independent R^d random vectors, and A an m × n matrix,

    N_∞(A ⊗ I_d(X)) ≥ N_∞(A ⊗ I_d(X_◦)).    (80)

Question V.1. Does the inequality

    N_p(A ⊗ I_d(X)) ≥ N_p(A ⊗ I_d(X_◦))    (81)

hold in any other circumstance? To the author's knowledge the question is open even when p = d = 1.

B. Prékopa-Leindler and Rearrangements

The proof of the Gaussian LSI given here hinges on the PLI, while our rearrangement sharpening is dependent on a rearrangement inequality for a special case (taking one of the functions to be a constant). There are more general inequalities, but let us state a result from the spherical Lebesgue special case.

Theorem V.3. [21] For f and g Borel measurable from R^d to [0, ∞), and t ∈ (0, 1),

    ∫ sup{f^{1−t}(x) g^t(y) : (1 − t)x + ty = z} dz    (82)
      ≥ ∫ sup{(f◦)^{1−t}(x) (g◦)^t(y) : (1 − t)x + ty = z} dz.    (83)

If we denote

    fg(z) = sup_{(1−t)x+ty=z} f^{1−t}(x) g^t(y),    (84)

then this provides a rearrangement sharpening of the PLI analogous to the rearrangement EPI of Wang and Madiman, Theorem II.2. Indeed we have

    ∫ fg ≥ ∫ f◦g◦    (85)
         ≥ (∫ f◦)^{1−t} (∫ g◦)^t.    (86)

The result can easily be generalized to n functions.

The reader familiar with the Barthe-Brascamp-Lieb inequalities [2], [7] and the Rogers-Brascamp-Lieb-Luttinger inequalities [8] may find the following question about the potential generalization of Theorem V.3 natural; see [21] for more details. Though strictly speaking this question is not explicitly information theoretic, the work of [3], [9], [16] shows that the Barthe-Brascamp-Lieb inequality is deeply connected to information theory.

For x ∈ R^n expressed as x = (x_1, ..., x_m) for x_i ∈ R^d, let B_i be linear maps of the form

    B_i x = Σ_{j=1}^m B_{ij} x_j.    (87)

Question V.2. For B_i of the form (87), and f_i belonging to L_1(R^d), when is it true that

    ∫_{R^n} sup{ Π_{i=1}^m f_i(y_i) : Σ_i B_i y_i = x } dx    (88)
      ≥ ∫_{R^n} sup{ Π_{i=1}^m f_i◦(y_i) : Σ_i B_i y_i = x } dx?    (89)

REFERENCES

[1] K. Ball, P. Nayar, and T. Tkocz. A reverse entropy power inequality for log-concave random vectors. Studia Mathematica, 2016.
[2] F. Barthe. On a reverse form of the Brascamp-Lieb inequality. Invent. Math., 134(2):335–361, 1998.
[3] S. Beigi and C. Nair. Equivalent characterization of reverse Brascamp-Lieb-type inequalities using information measures. In 2016 IEEE International Symposium on Information Theory (ISIT), pages 1038–1042. IEEE, 2016.
[4] S. G. Bobkov and M. Ledoux. From Brunn-Minkowski to Brascamp-Lieb and to logarithmic Sobolev inequalities. Geom. Funct. Anal., 10(5):1028–1052, 2000.
[5] C. Borell. The Brunn-Minkowski inequality in Gauss space. Inventiones mathematicae, 30(2):207–216, 1975.
[6] C. Borell. Inequalities of the Brunn-Minkowski type for Gaussian measures. Probab. Theory Related Fields, 140(1-2):195–205, 2008.
[7] H. J. Brascamp and E. H. Lieb. Best constants in Young's inequality, its converse, and its generalization to more than three functions. Advances in Math., 20(2):151–173, 1976.
[8] H. J. Brascamp, E. H. Lieb, and J. M. Luttinger. A general rearrangement inequality for multiple integrals. J. Functional Analysis, 17:227–237, 1974.
[9] E. A. Carlen and D. Cordero-Erausquin. Subadditivity of the entropy and its relation to Brascamp-Lieb type inequalities. Geom. Funct. Anal., 19(2):373–405, 2009.

[10] M. H. M. Costa and T. M. Cover. On the similarity of the entropy power inequality and the Brunn-Minkowski inequality. IEEE Trans. Inform. Theory, 30(6):837–839, 1984.
[11] T. Courtade. Links between the logarithmic Sobolev inequality and the convolution inequalities for entropy and Fisher information. arXiv preprint arXiv:1608.05431, 2016.
[12] A. Dembo, T. M. Cover, and J. A. Thomas. Information-theoretic inequalities. IEEE Trans. Inform. Theory, 37(6):1501–1518, 1991.
[13] A. Ehrhard. Symétrisation dans l'espace de Gauss. Math. Scand., 53(2):281–301, 1983.
[14] L. Gross. Logarithmic Sobolev inequalities. Amer. J. Math., 97(4):1061–1083, 1975.
[15] J. Li and J. Melbourne. Further investigations of the maximum entropy of the sum of two dependent random variables. In 2018 IEEE International Symposium on Information Theory (ISIT), pages 1969–1972. IEEE, 2018.
[16] J. Liu, T. Courtade, P. Cuff, and S. Verdú. A forward-reverse Brascamp-Lieb inequality: Entropic duality and Gaussian optimality. Entropy, 19(9):418, 2018.
[17] M. Madiman, J. Melbourne, and P. Xu. Forward and reverse entropy power inequalities in convex geometry. Convexity and Concentration, pages 427–485, 2017.
[18] M. Madiman, J. Melbourne, and P. Xu. Rogozin's convolution inequality for locally compact groups. Preprint, arXiv:1705.00642, 2017.
[19] A. Marsiglietti and J. Melbourne. On the entropy power inequality for the Rényi entropy of order [0, 1]. arXiv preprint arXiv:1710.00800, 2017.
[20] A. Marsiglietti and J. Melbourne. A Rényi entropy power inequality for log-concave vectors and parameters in [0, 1]. In 2018 IEEE International Symposium on Information Theory (ISIT), pages 1964–1968. IEEE, 2018.
[21] J. Melbourne. Rearrangement and Prékopa-Leindler type inequalities. arXiv preprint arXiv:1806.08837, 2018.
[22] C. A. Rogers. A single integral inequality. J. London Math. Soc., 32:102–108, 1957.
[23] C. E. Shannon. A mathematical theory of communication. Bell System Tech. J., 27:379–423, 623–656, 1948.
[24] A. J. Stam. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Information and Control, 2:101–112, 1959.
[25] V. N. Sudakov and B. S. Tsirel'son. Extremal properties of half-spaces for spherically invariant measures. Zap. Nauch. Sem. L.O.M.I., 41:14–24, 1974; translated in J. Soviet Math., 9:9–18, 1978.
[26] T. van Erven and P. Harremoës. Rényi divergence and Kullback-Leibler divergence. IEEE Trans. Inform. Theory, 60(7):3797–3820, 2014.
[27] L. Wang and M. Madiman. Beyond the entropy power inequality, via rearrangements. IEEE Trans. Inform. Theory, 60(9):5116–5137, September 2014.
[28] P. Xu, J. Melbourne, and M. Madiman. Infinity entropy power inequalities. In Proc. IEEE Intl. Symp. Inform. Theory, 2017.
[29] R. Zamir and M. Feder. On the volume of the Minkowski sum of line sets and the entropy-power inequality. IEEE Trans. Inform. Theory, 44(7):3039–3063, 1998.
