Extending Owen's Integral Table and a New Multivariate Bernoulli


Marcelo Hartmann, PhD student
Department of Mathematics and Statistics, P.O. Box 68, 00014 University of Helsinki, Finland
June 22, 2017
Corresponding author: marcelo.hartmann@helsinki.fi
arXiv:1704.04736v3 [stat.ME] 21 Jun 2017

1 Description

The table of Owen (1980) presents a great variety of integrals involving the Gaussian density function and the Gaussian cumulative distribution function. For some of them an analytical solution is presented, and for the others the solution is written in terms of Owen's $T$-function (Owen, 1980). With a little more algebra we can extend many of the equalities in his table (Owen, 1980), and moreover a new multivariate Bernoulli distribution can be derived. This extension can be useful in many practical applications where quadrature methods have been applied to solve integrals involving the Gaussian function and Owen's $T$-function, e.g., Riihimäki et al. (2013); Järvenpää et al. (2017); Stathopoulos et al. (2017).

2 Gaussian integral I

Lemma 1. Let $\Phi(\cdot)$ be the standard-Gaussian cumulative distribution function and $\mathcal{N}(\cdot \mid \mu, \sigma^2)$ the Gaussian density function with parameters $(\mu, \sigma^2) \in \mathbb{R} \times \mathbb{R}_+$. Then the following holds true:

$$\int_{\mathbb{R}} \prod_{r=1}^{N} \Phi\!\left(\frac{x - m_r}{v_r}\right) \mathcal{N}(x \mid \mu, \sigma^2)\, dx = F_N(\boldsymbol{\mu}_N \mid \mathbf{m}_N; V_N) \qquad (1)$$

where $F_N(\cdot \mid \mathbf{c}; C)$ is the $N$-dimensional Gaussian cumulative distribution function with parameters $(\mathbf{c}, C) \in \mathbb{R}^N \times \mathcal{R}$, with $\mathcal{R}$ the space of positive-definite matrices (covariance matrices). Furthermore, $\mathbf{m}_N = [m_1 \cdots m_N]^{\top} \in \mathbb{R}^N$, $\boldsymbol{\mu}_N = \mu \mathbf{1}_N \in \mathbb{R}^N$, $v_r > 0$ for all $r$, and $V_N$ is a covariance matrix given by

$$V_N = \begin{bmatrix} v_1^2 + \sigma^2 & \cdots & \sigma^2 \\ \vdots & \ddots & \vdots \\ \sigma^2 & \cdots & v_N^2 + \sigma^2 \end{bmatrix} \qquad (2)$$

Proof. To show (1), start by writing the left-hand side of the equation in full.
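Equality (1) can be sanity-checked numerically before working through the proof. The sketch below is not part of the paper: it assumes `numpy` and `scipy` are available, uses arbitrary (hypothetical) parameter values with $N = 2$, and compares one-dimensional quadrature of the left-hand side against the bivariate Gaussian CDF on the right-hand side.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm, multivariate_normal

# Arbitrary (hypothetical) parameter values for an N = 2 check of (1).
mu, sigma = 0.3, 1.2
m = np.array([-0.5, 0.8])   # m_1, m_2
v = np.array([0.7, 1.5])    # v_1, v_2

# Left-hand side of (1): one-dimensional quadrature over x.
lhs, _ = integrate.quad(
    lambda x: norm.cdf((x - m) / v).prod() * norm.pdf(x, mu, sigma),
    -np.inf, np.inf)

# Right-hand side: F_N(mu_N | m_N; V_N), with V_N built as in (2).
V = np.diag(v**2) + sigma**2   # v_r^2 + sigma^2 on the diagonal, sigma^2 elsewhere
rhs = multivariate_normal(mean=m, cov=V).cdf(np.full(2, mu))

print(lhs, rhs)   # the two values agree to numerical accuracy
```

Note how the scalar `sigma**2` broadcasts over the whole matrix, producing exactly the structure of (2).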
Rewrite the integrand as a product of non-standard Gaussian density functions, adjusting the regions of integration accordingly:

$$\int_{\mathbb{R}} \int_{-\infty}^{x} \cdots \int_{-\infty}^{x} \prod_{r=1}^{N} \mathcal{N}(y_r \mid m_r, v_r^2)\, \mathcal{N}(x \mid \mu, \sigma^2)\, dy_1 \cdots dy_N\, dx. \qquad (3)$$

Rewrite again using the transformation $[x, y_1, \cdots, y_N]^{\top} = [w + \mu,\, z_1 + w + m_1,\, \cdots,\, z_N + w + m_N]^{\top}$ and note that $|\partial(x, y_1, \cdots, y_N)/\partial(w, z_1, \cdots, z_N)| = 1$. After changing variables, group the different terms in the exponentials together to obtain

$$\int_{\mathbb{R}} \int_{-\infty}^{\mu - m_N} \cdots \int_{-\infty}^{\mu - m_1} \frac{1}{c} \exp\left\{ -\frac{1}{2} \left[ \sum_{r=1}^{N} \frac{(z_r + w)^2}{v_r^2} + \frac{w^2}{\sigma^2} \right] \right\} dz_1 \cdots dz_N\, dw \qquad (4)$$

where $c = \sigma (2\pi)^{(N+1)/2} \prod_{r=1}^{N} v_r$. Now, the expression inside the square brackets is a quadratic form, which can be written in matrix form. Expanding,

$$\sum_{r=1}^{N} \frac{(z_r + w)^2}{v_r^2} + \frac{w^2}{\sigma^2} = w^2 \left( \sum_{r=1}^{N} \frac{1}{v_r^2} + \frac{1}{\sigma^2} \right) + \sum_{r=1}^{N} \frac{z_r^2}{v_r^2} + 2w \sum_{r=1}^{N} \frac{z_r}{v_r^2}$$

$$= \begin{bmatrix} w \\ z_1 \\ \vdots \\ z_N \end{bmatrix}^{\top} \begin{bmatrix} \sum_{r=1}^{N} \frac{1}{v_r^2} + \frac{1}{\sigma^2} & \frac{1}{v_1^2} & \cdots & \frac{1}{v_N^2} \\ \frac{1}{v_1^2} & \frac{1}{v_1^2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{v_N^2} & 0 & \cdots & \frac{1}{v_N^2} \end{bmatrix} \begin{bmatrix} w \\ z_1 \\ \vdots \\ z_N \end{bmatrix} \qquad (5)$$

therefore (4) is the same as

$$\int_{\mathbb{R}} \int_{-\infty}^{\mu - m_N} \cdots \int_{-\infty}^{\mu - m_1} \frac{1}{c} \exp\left\{ -\frac{1}{2} \begin{bmatrix} w \\ \mathbf{z} \end{bmatrix}^{\top} \begin{bmatrix} \sum_{r=1}^{N} \frac{1}{v_r^2} + \frac{1}{\sigma^2} & \frac{1}{v_1^2} \cdots \frac{1}{v_N^2} \\ \left[\frac{1}{v_1^2} \cdots \frac{1}{v_N^2}\right]^{\top} & \mathrm{diag}\!\left(\frac{1}{v_1^2}, \cdots, \frac{1}{v_N^2}\right) \end{bmatrix} \begin{bmatrix} w \\ \mathbf{z} \end{bmatrix} \right\} d\mathbf{z}\, dw \qquad (6)$$

Note that the integrand in (6) has the full form of the multivariate Gaussian density with the specific precision matrix given above. To identify this we need to find the closed-form covariance matrix from the precision matrix and check whether the determinant of the covariance matrix equals $c^2/(2\pi)^{N+1}$. Write the precision matrix as a block matrix with $A = \sum_{r=1}^{N} \frac{1}{v_r^2} + \frac{1}{\sigma^2}$, $B = \left[\frac{1}{v_1^2} \cdots \frac{1}{v_N^2}\right]$, $C = B^{\top}$ and $D = \mathrm{diag}\!\left(\frac{1}{v_1^2}, \cdots, \frac{1}{v_N^2}\right)$.
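The block structure just defined can be checked numerically. The following sketch is not from the paper: it uses hypothetical values with $N = 2$, assembles the $(N+1) \times (N+1)$ precision matrix of (6) from the blocks $A$, $B$, $C = B^{\top}$, $D$, and confirms that its inverse and determinant have the closed forms derived next in the proof.

```python
import numpy as np

# Hypothetical values: sigma and v_1, v_2 chosen arbitrarily for an N = 2 check.
sigma = 1.1
v = np.array([0.6, 1.4])

# Assemble the precision matrix with blocks A, B, C = B^T, D as defined above.
A = (1 / v**2).sum() + 1 / sigma**2
B = 1 / v**2
P = np.zeros((3, 3))
P[0, 0] = A
P[0, 1:] = B
P[1:, 0] = B
P[1:, 1:] = np.diag(1 / v**2)

cov = np.linalg.inv(P)

# Expected covariance: sigma^2 in the corner, -sigma^2 on the borders,
# v_r^2 + sigma^2 on the remaining diagonal and sigma^2 off it.
expected = np.array([
    [ sigma**2,          -sigma**2,          -sigma**2],
    [-sigma**2, v[0]**2 + sigma**2,           sigma**2],
    [-sigma**2,           sigma**2, v[1]**2 + sigma**2],
])
print(np.allclose(cov, expected))            # True
print(np.isclose(np.linalg.det(cov),         # det = sigma^2 * prod(v_r^2)
                 sigma**2 * np.prod(v**2)))  # True
```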
Use the partitioned matrix inversion lemma (Strang and Borre, 1997, equation 17.44) to get the blocks: $(A - BD^{-1}C)^{-1} = \sigma^2$, $(BD^{-1}C - A)^{-1} BD^{-1} = -\sigma^2 [1 \cdots 1]$, $D^{-1}C(BD^{-1}C - A)^{-1} = -\sigma^2 [1 \cdots 1]^{\top}$, and $D^{-1} + D^{-1}C(A - BD^{-1}C)^{-1}BD^{-1}$, whose main diagonal equals $[v_1^2 + \sigma^2, \cdots, v_N^2 + \sigma^2]$ and whose off-diagonal elements all equal $\sigma^2$. Put everything together to obtain the covariance matrix

$$\begin{bmatrix} \sigma^2 & -\sigma^2 & \cdots & -\sigma^2 \\ -\sigma^2 & v_1^2 + \sigma^2 & \cdots & \sigma^2 \\ \vdots & \vdots & \ddots & \vdots \\ -\sigma^2 & \sigma^2 & \cdots & v_N^2 + \sigma^2 \end{bmatrix} \qquad (7)$$

whose determinant equals $1/[\det(D)\det(A - BD^{-1}C)] = \sigma^2 \prod_{r=1}^{N} v_r^2 = c^2/(2\pi)^{N+1}$ by the partitioned matrix determinant lemma. Finally, in (6), interchange the order of integration with the Fubini–Tonelli theorem (Folland, 2013) and integrate w.r.t. $w$ to get

$$\int_{-\infty}^{\mu - m_N} \cdots \int_{-\infty}^{\mu - m_1} \mathcal{N}\left( \begin{bmatrix} z_1 \\ \vdots \\ z_N \end{bmatrix} \,\middle|\, \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}, \begin{bmatrix} v_1^2 + \sigma^2 & \cdots & \sigma^2 \\ \vdots & \ddots & \vdots \\ \sigma^2 & \cdots & v_N^2 + \sigma^2 \end{bmatrix} \right) dz_1 \cdots dz_N \qquad (8)$$

which equals

$$F_N\left( \begin{bmatrix} \mu \\ \vdots \\ \mu \end{bmatrix} \,\middle|\, \begin{bmatrix} m_1 \\ \vdots \\ m_N \end{bmatrix}; \begin{bmatrix} v_1^2 + \sigma^2 & \cdots & \sigma^2 \\ \vdots & \ddots & \vdots \\ \sigma^2 & \cdots & v_N^2 + \sigma^2 \end{bmatrix} \right) \qquad (9)$$

and therefore the equality (1) holds. For $N = 1$ the result is the same as in Rasmussen and Williams (2006). ∎

Lemma 2. Let $\Phi(\cdot)$ be the standard-Gaussian cumulative distribution function and denote by $\mathcal{N}(\cdot \mid \boldsymbol{\mu}_N, \Sigma)$ the $N$-dimensional Gaussian density function with mean parameter $\boldsymbol{\mu}_N$ and covariance matrix $\Sigma$. Then the following holds true:

$$\int_{\mathbb{R}^N} \prod_{r=1}^{N} \Phi\!\left( \frac{x_r - m_r}{v_r} \right) \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_N, \Sigma)\, d\mathbf{x} = F_N\!\left( \boldsymbol{\mu}_N \mid \mathbf{m}_N; \mathrm{diag}(\mathbf{v}_N)^2 + \Sigma \right) \qquad (10)$$

where $F_N(\cdot \mid \mathbf{c}; C)$ is the $N$-dimensional Gaussian cumulative distribution function with parameters $(\mathbf{c}, C)$. Furthermore, $\mathbf{m}_N = [m_1 \cdots m_N]^{\top} \in \mathbb{R}^N$, $\boldsymbol{\mu}_N = [\mu_1 \cdots \mu_N]^{\top} \in \mathbb{R}^N$, $\mathbf{v}_N = [v_1 \cdots v_N]^{\top} \in \mathbb{R}_+^N$ and $\Sigma$ is a covariance matrix.

Proof. Rewrite the left-hand side of (10) in full and use the transformation $[x_1, \cdots, x_N, y_1, \cdots, y_N]^{\top} = [w_1 + \mu_1, \cdots, w_N + \mu_N,\, z_1 + w_1 + m_1, \cdots, z_N + w_N + m_N]^{\top}$.
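As with Lemma 1, the identity (10) can be checked numerically before completing the proof. This sketch is not from the paper: it uses hypothetical $N = 2$ parameters with a non-diagonal covariance $\Sigma$, estimates the left-hand side of (10) by Monte Carlo, and compares it with the closed-form right-hand side (assuming `numpy` and `scipy`).

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)

# Hypothetical N = 2 parameters with a full (non-diagonal) covariance Sigma.
mu = np.array([0.2, -0.4])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.8]])
m = np.array([0.5, -1.0])
v = np.array([0.9, 1.3])

# Left-hand side of (10) by Monte Carlo: E[ prod_r Phi((x_r - m_r) / v_r) ]
# under x ~ N(mu, Sigma).
x = rng.multivariate_normal(mu, Sigma, size=500_000)
lhs = norm.cdf((x - m) / v).prod(axis=1).mean()

# Right-hand side: F_N(mu_N | m_N; diag(v_N)^2 + Sigma).
rhs = multivariate_normal(mean=m, cov=np.diag(v**2) + Sigma).cdf(mu)

print(lhs, rhs)   # should agree to Monte Carlo accuracy
```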
From this we note that the Jacobian of the transformation simplifies to $|\partial(x_1, \cdots, x_N, y_1, \cdots, y_N)/\partial(w_1, \cdots, w_N, z_1, \cdots, z_N)| = 1$. Therefore we find that the left-hand side equals

$$\int_{\mathbb{R}^N} \int_{-\infty}^{\mu_N - m_N} \cdots \int_{-\infty}^{\mu_1 - m_1} \mathcal{N}(\mathbf{w} \mid -\mathbf{z}, \mathrm{diag}(\mathbf{v}_N)^2)\, \mathcal{N}(\mathbf{w} \mid \mathbf{0}, \Sigma)\, d\mathbf{z}\, d\mathbf{w} \qquad (11)$$

where $\mathbf{z} = [z_1 \cdots z_N]^{\top}$ and $\mathbf{w} = [w_1 \cdots w_N]^{\top}$. Note that the product of two multivariate Gaussian densities is another, unnormalized, multivariate Gaussian (see Rasmussen and Williams, 2006, for example). Therefore we write

$$\int_{\mathbb{R}^N} \int_{-\infty}^{\mu_N - m_N} \cdots \int_{-\infty}^{\mu_1 - m_1} \mathcal{N}(\mathbf{z} \mid \mathbf{0}, \mathrm{diag}(\mathbf{v}_N)^2 + \Sigma)\, \mathcal{N}(\mathbf{w} \mid \mathbf{c}, C)\, d\mathbf{z}\, d\mathbf{w} \qquad (12)$$

where $\mathbf{c} = -C[\mathrm{diag}(\mathbf{v}_N)^2]^{-1}\mathbf{z}$ and $C = ([\mathrm{diag}(\mathbf{v}_N)^2]^{-1} + \Sigma^{-1})^{-1}$. Interchange the order of integration with the Fubini–Tonelli theorem (Folland, 2013) and integrate w.r.t. $\mathbf{w}$ to get

$$\int_{-\infty}^{\mu_N - m_N} \cdots \int_{-\infty}^{\mu_1 - m_1} \mathcal{N}(\mathbf{z} \mid \mathbf{0}, \mathrm{diag}(\mathbf{v}_N)^2 + \Sigma)\, d\mathbf{z} = F_N(\boldsymbol{\mu}_N \mid \mathbf{m}_N; \mathrm{diag}(\mathbf{v}_N)^2 + \Sigma) \qquad (13)$$

which completes the proof. Note that for $N = 1$ the result is the same as in Rasmussen and Williams (2006). ∎

3 Gaussian integral II

Theorem 1. Let $\mathbf{f} = [f_1 \cdots f_N]^{\top} \sim \mathcal{N}(\boldsymbol{\mu}, \Sigma)$, where $\mathcal{N}(\cdot)$ is the $N$-dimensional Gaussian density function with mean parameter $\boldsymbol{\mu}$ and covariance matrix $\Sigma$. Suppose that, conditional on $\mathbf{f}$, we perform $N$ independent Bernoulli trials with success probability $\Phi(f_r)$, $r = 1, \cdots, N$, where $\Phi(\cdot)$ is the standard-Gaussian distribution function, i.e., $Y_r \mid f_r \overset{\mathrm{ind}}{\sim} \mathrm{Bernoulli}(\Phi(f_r))$. Instead of recording the values 0 or 1 we use the values $-1$ and $1$, so that each Bernoulli random variable $Y_r \mid f_r$ has probability mass function

$$\pi_{Y_r \mid f_r}(y_r \mid f_r) = \Phi(y_r f_r)\, \mathbb{I}_{\{-1, 1\}}(y_r) \qquad (14)$$

where $\mathbb{I}_A(\cdot)$ is the indicator function of a set $A$. Hence the marginal distribution of $\mathbf{Y} = [Y_1 \cdots Y_N]^{\top}$ is given by

$$\pi_{\mathbf{Y}}(y_1, \cdots, y_N) = F_N(\mathbf{0} \mid -I_y \boldsymbol{\mu}; I_y \Sigma I_y + I_N) \qquad (15)$$

where $I_y = \mathrm{diag}(y_1, \cdots, y_N)$, $I_N$ is the $N \times N$ identity matrix and $F_N(\cdot \mid \mathbf{c}; C)$ is the $N$-dimensional Gaussian cumulative distribution function with mean parameter $\mathbf{c}$ and covariance matrix $C$.
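Two consequences of (15) can be checked numerically: the pmf sums to one over $\{-1, 1\}^N$, and it matches a Monte Carlo estimate of a marginal probability obtained by averaging the conditional success probabilities over draws of $\mathbf{f}$. The sketch below is not from the paper; the $N = 2$ parameter values are hypothetical and `numpy`/`scipy` are assumed available.

```python
import itertools
import numpy as np
from scipy.stats import norm, multivariate_normal

# Hypothetical parameters for N = 2.
mu = np.array([0.4, -0.2])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.5]])
N = 2

def pmf(y):
    """Evaluate (15): F_N(0 | -I_y mu; I_y Sigma I_y + I_N)."""
    Iy = np.diag(y)
    return multivariate_normal(mean=-Iy @ mu,
                               cov=Iy @ Sigma @ Iy + np.eye(N)).cdf(np.zeros(N))

# The pmf should sum to one over all sign patterns y in {-1, 1}^N.
total = sum(pmf(np.array(y)) for y in itertools.product([-1, 1], repeat=N))
print(total)   # ~1.0

# Monte Carlo check of P(Y = (1, 1)) = E[ Phi(f_1) Phi(f_2) ] over f ~ N(mu, Sigma).
rng = np.random.default_rng(1)
f = rng.multivariate_normal(mu, Sigma, size=400_000)
mc = norm.cdf(f).prod(axis=1).mean()
print(pmf(np.array([1, 1])), mc)   # should agree to Monte Carlo accuracy
```

Note that `multivariate_normal(...).cdf(0)` computes $P(\mathbf{Z} \le \mathbf{0})$ for the stated mean and covariance, which is exactly how $F_N(\mathbf{0} \mid \cdot; \cdot)$ is used in (15).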