On characteristic functions of products of two random variables

by X. Jiang and S. Nadarajah
School of Mathematics, University of Manchester, Manchester M13 9PL, UK
email: [email protected]

Abstract: Motivated by a recent paper published in IEEE Signal Processing Letters, we study the distribution of the product of two independent random variables, one of them a standard normal random variable and the other allowed to follow one of nearly fifty distributions. We give explicit expressions for the characteristic function of the product. The mathematical derivations are verified by simulations.

Keywords: Characteristic function; Independence; Simulation

1 Introduction

Many variables in the real world (including the signal processing area) can be assumed to follow the normal distribution. That is, we can write U = µ + σX, where X is a standard normal variable, µ is the mean and σ is the standard deviation. But often the mean and standard deviation are themselves random variables, so U involves a product of two random variables. Schoenecker and Luginbuhl (2016) derived the distribution of XY when X is a standard normal random variable and Y is an independent random variable following either the normal or the gamma distribution. They expressed the distribution of XY = W, say, in terms of its characteristic function $\phi_W(t) = E[\exp(itW)]$, where $i = \sqrt{-1}$ is the complex unit. In practice, there is no reason to limit Y to follow either the normal or a gamma distribution. Y can follow possibly any distribution. The aim of this note is to derive closed form expressions for the characteristic function $\phi_W(t)$ when X is a standard normal random variable and Y is an independent random variable following a wide range of other distributions. We consider the following distributions for Y: Pareto distribution (Pareto, 1964), Moffat distribution (Moffat, 1969), triangular distribution, Argus distribution (Albrecht, 1990), Cauchy distribution, Student's t distribution (Gosset, 1908), skewed Student's t distribution (Zhu and Galbraith, 2010), asymmetric skewed Student's t distribution (Zhu and Galbraith, 2010), half Student's t distribution, half Cauchy distribution, Rice distribution (Rice, 1945), symmetric Laplace distribution (Laplace, 1774), Laplace distribution (Laplace, 1774), asymmetric Laplace distribution (Kozubowski and Podgorski, 2000), Poiraud-Casanova-Thomas-Agnan Laplace distribution (Poiraud-Casanova and Thomas-Agnan, 2000), Holla-Bhattacharya Laplace distribution (Holla and Bhattacharya, 1968), McGill Laplace distribution (McGill, 1962), log Laplace distribution, exponential distribution, gamma distribution, reflected gamma distribution (Borghi, 1965), chi distribution, variance gamma distribution (Madan and Seneta, 1990), normal inverse gamma distribution, Nakagami distribution (Nakagami, 1960), reciprocal distribution (Hamming, 1970), Maxwell distribution (Maxwell, 1860), quadratic distribution, uniform distribution, power function distribution, Rayleigh distribution (Weibull, 1951), exponentiated Rayleigh distribution, beta Rayleigh distribution, normal distribution (de Moivre, 1738; Gauss, 1809), skew normal distribution (Azzalini, 1985), truncated normal distribution, split normal distribution, q-Gaussian distribution (Tsallis,

2009), half-normal distribution, normal exponential gamma distribution, folded normal distribution (Leone et al., 1961), Wigner semicircle distribution, Kumaraswamy distribution (Kumaraswamy, 1980), linear failure rate distribution (Bain, 1974) and Irwin Hall distribution (Irwin, 1927; Hall, 1927). Characteristic functions arise in many aspects of signal processing: capacity analysis of adaptive transmission with space-time block codes (Chauhan and Kumar, 2014); transmit antenna selection in cooperative communication (Agrawal and Kshetrimayum, 2017); second-order statistics in multipath fading environments (Dhaka et al., 2018); to mention just a few. The expressions given in Section 3 are as explicit as possible. They involve various special functions, including the gamma function defined by

$$\Gamma(a) = \int_0^{+\infty} t^{a-1} \exp(-t)\, dt$$

for $a > 0$; the incomplete gamma function defined by

$$\gamma(a, x) = \int_0^x t^{a-1} \exp(-t)\, dt$$

for $a > 0$ and $x > 0$; the complementary incomplete gamma function defined by

$$\Gamma(a, x) = \int_x^{+\infty} t^{a-1} \exp(-t)\, dt$$

for $x > 0$; the beta function defined by

$$B(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1}\, dt$$

for $a > 0$ and $b > 0$; the incomplete beta function defined by

$$B_x(a, b) = \int_0^x t^{a-1} (1-t)^{b-1}\, dt$$

for $0 < x < 1$, $a > 0$ and $b > 0$; the error function defined by

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp\left(-t^2\right) dt$$

for $x > 0$; the complementary error function defined by

$$\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{+\infty} \exp\left(-t^2\right) dt$$

for $-\infty < x < +\infty$; the parabolic cylinder function of order $\nu$ defined by

$$D_\nu(x) = \frac{\exp\left(-x^2/4\right)}{\Gamma(-\nu/2)} \int_0^{+\infty} t^{-\frac{\nu}{2}-1} (1 + 2t)^{\frac{\nu-1}{2}} \exp\left(-x^2 t\right) dt$$

for $\nu < 0$ and $x^2 > 0$; the Whittaker $W$ function of orders $\nu, \mu$ defined by

$$W_{\nu,\mu}(x) = \frac{x^{\mu+\frac{1}{2}}}{\Gamma\left(\mu - \nu + \frac{1}{2}\right)} \int_{\frac{1}{2}}^{+\infty} \left(t - \frac{1}{2}\right)^{\mu-\nu-\frac{1}{2}} \left(t + \frac{1}{2}\right)^{\mu+\nu-\frac{1}{2}} \exp(-xt)\, dt$$

for $\mu - \nu > -\frac{1}{2}$ and $x > 0$; the modified Bessel function of the first kind of order $\nu$ defined by

$$I_\nu(x) = \sum_{k=0}^{+\infty} \frac{1}{\Gamma(k + \nu + 1)\, k!} \left(\frac{x}{2}\right)^{2k+\nu};$$

the modified Bessel function of the second kind of order $\nu$ defined by

$$K_\nu(x) = \begin{cases} \dfrac{\pi \csc(\pi\nu)}{2} \left[I_{-\nu}(x) - I_\nu(x)\right], & \text{if } \nu \not\in \mathbb{Z}, \\[1ex] \lim\limits_{\mu \to \nu} K_\mu(x), & \text{if } \nu \in \mathbb{Z}; \end{cases}$$

the confluent hypergeometric function defined by

$${}_1F_1(\alpha; \beta; x) = \sum_{k=0}^{+\infty} \frac{(\alpha)_k}{(\beta)_k} \frac{x^k}{k!},$$

where $(\alpha)_k = \alpha(\alpha+1)\cdots(\alpha+k-1)$ denotes the ascending factorial; the Gauss hypergeometric function defined by

$${}_2F_1(\alpha, \beta; \gamma; x) = \sum_{k=0}^{+\infty} \frac{(\alpha)_k (\beta)_k}{(\gamma)_k} \frac{x^k}{k!};$$

the standard normal density function defined by

$$\phi(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right);$$

and the standard normal distribution function defined by

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\left(-\frac{t^2}{2}\right) dt.$$

These special functions are well known and well established in the mathematics literature. Some details of their properties can be found in Prudnikov et al. (1986) and Gradshteyn and Ryzhik (2000). In-built routines for computing them are available in packages like Maple, Matlab and Mathematica. For example, the in-built routines in Mathematica for the stated special functions are: Gamma[a] for the gamma function; Gamma[a]-Gamma[a,x] for the incomplete gamma function; Gamma[a,x] for the complementary incomplete gamma function; Beta[a,b] for the beta function; Beta[x,a,b] for the incomplete beta function; Erf[x] for the error function; Erfc[x] for the complementary error function; ParabolicCylinderD[nu,x] for the parabolic cylinder function; WhittakerW[nu,mu,x] for the Whittaker W function of orders ν, µ; BesselI[nu,x] for the modified Bessel function of the first kind of order ν; BesselK[nu,x] for the modified Bessel function of the second kind of order ν; Hypergeometric1F1[alpha,beta,x] for the confluent hypergeometric function; Hypergeometric2F1[alpha,beta,gamma,x] for the Gauss hypergeometric function; PDF[NormalDistribution[0,1],x] for the standard normal density function; CDF[NormalDistribution[0,1],x] for the standard normal distribution function. Mathematica, like other algebraic manipulation packages, allows for arbitrary precision, so the accuracy of computations is not an issue.

The contents of this note are organized as follows. Section 2 provides simple derivations of the characteristic functions due to Schoenecker and Luginbuhl (2016). Section 3 lists explicit expressions for $\phi_W(t)$ for nearly fifty distributions for Y. The derivations of these expressions are not given and can be obtained from the corresponding author. Section 4 presents simulation results that verify correctness of the expressions in Section 3. The note concludes with Section 5.
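For readers who prefer an open-source environment, equivalent routines are available in Python; the sketch below (our addition, not part of the original text) shows one possible mapping using SciPy and mpmath. Note that SciPy's incomplete gamma and incomplete beta routines are regularized, so they are rescaled here to match the definitions above.

```python
# A possible Python mapping for the special functions used in this note.
# scipy.special returns *regularized* incomplete gamma/beta functions,
# so they are rescaled to match the unregularized definitions above.
import numpy as np
from scipy import special as sp
import mpmath  # for the Whittaker W function, which SciPy does not provide

gamma_fn    = sp.gamma                                         # Gamma(a)
lower_inc   = lambda a, x: sp.gammainc(a, x) * sp.gamma(a)     # gamma(a, x)
upper_inc   = lambda a, x: sp.gammaincc(a, x) * sp.gamma(a)    # Gamma(a, x)
beta_fn     = sp.beta                                          # B(a, b)
inc_beta    = lambda x, a, b: sp.betainc(a, b, x) * sp.beta(a, b)  # B_x(a, b)
erf, erfc   = sp.erf, sp.erfc
pcf_D       = lambda nu, x: sp.pbdv(nu, x)[0]                  # parabolic cylinder D_nu(x)
whittaker_W = lambda nu, mu, x: float(mpmath.whitw(nu, mu, x))  # W_{nu,mu}(x)
bessel_I, bessel_K = sp.iv, sp.kv                              # I_nu(x), K_nu(x)
hyp1f1, hyp2f1 = sp.hyp1f1, sp.hyp2f1
phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)         # standard normal pdf
Phi = sp.ndtr                                                  # standard normal cdf

# Quick sanity check: Gamma(a) = gamma(a, x) + Gamma(a, x).
a, x = 2.5, 1.3
assert np.isclose(gamma_fn(a), lower_inc(a, x) + upper_inc(a, x))
```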

2 Simpler derivations for normal and gamma cases

In this section, we present simpler derivations of the characteristic function of W = XY when: i) X is a standard normal random variable and Y is an independent normal random variable with mean µ and standard deviation σ; ii) X is a standard normal random variable and Y is an independent gamma random variable with shape parameter α and scale parameter β. For any distribution of Y, we can write

$$\phi_W(t) = E[\exp(itW)] = E[\exp(itXY)] = E\{E[\exp(itXY) \mid Y]\} = E\left[\exp\left(-\frac{t^2 Y^2}{2}\right)\right].$$

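This conditioning step is easy to check numerically. The following sketch (our illustration; the choice of a unit exponential Y is arbitrary) compares a direct Monte Carlo estimate of E[exp(itXY)] with the estimate obtained after conditioning on Y.

```python
# Monte Carlo check of phi_W(t) = E[exp(i t X Y)] = E[exp(-t^2 Y^2 / 2)]
# for independent X ~ N(0,1) and an arbitrary Y (here unit exponential).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)
Y = rng.exponential(scale=1.0, size=n)     # any distribution of Y would do
t = 1.7

lhs = np.mean(np.exp(1j * t * X * Y))      # direct estimate of E[exp(itXY)]
rhs = np.mean(np.exp(-t**2 * Y**2 / 2))    # estimate after conditioning on Y

print(lhs.real, rhs)   # real parts agree to Monte Carlo error; imaginary part ~ 0
```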
If Y is a normal random variable with mean µ and standard deviation σ then

$$\phi_W(t) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \exp\left[-\frac{t^2 y^2}{2} - \frac{(y-\mu)^2}{2\sigma^2}\right] dy$$
$$= \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \exp\left[-\frac{\left(\sigma^2 t^2 + 1\right) y^2 - 2\mu y + \mu^2}{2\sigma^2}\right] dy$$
$$= \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{\mu^2 t^2}{2\left(\sigma^2 t^2 + 1\right)}\right] \int_{-\infty}^{\infty} \exp\left[-\frac{\sigma^2 t^2 + 1}{2\sigma^2}\left(y - \frac{\mu}{\sigma^2 t^2 + 1}\right)^2\right] dy$$
$$= \frac{1}{\sqrt{\sigma^2 t^2 + 1}} \exp\left[-\frac{\mu^2 t^2}{2\left(\sigma^2 t^2 + 1\right)}\right],$$

where the last step follows from the fact that any probability density function must integrate to one. If Y is a gamma random variable with shape parameter α and scale parameter β then

$$\phi_W(t) = \frac{\beta^\alpha}{\Gamma(\alpha)} \int_0^{\infty} y^{\alpha-1} \exp\left(-\beta y - \frac{t^2 y^2}{2}\right) dy = \left(\frac{\beta}{t}\right)^{\alpha} \exp\left(\frac{\beta^2}{4 t^2}\right) D_{-\alpha}\left(\frac{\beta}{t}\right),$$

where the last step follows by direct application of equation (2.3.15.3) in Prudnikov et al. (1986, volume 1).
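As a numerical sanity check of this closed form (a sketch we add here; the parameter values are arbitrary), the right-hand side can be compared with direct numerical integration of E[exp(−t²Y²/2)], using scipy.special.pbdv for the parabolic cylinder function.

```python
# Check phi_W(t) = (beta/t)^alpha * exp(beta^2/(4 t^2)) * D_{-alpha}(beta/t)
# against numerical integration of E[exp(-t^2 Y^2 / 2)], Y ~ Gamma(alpha, rate beta).
import numpy as np
from scipy import special, integrate

alpha, beta, t = 2.3, 1.5, 0.8

closed_form = (beta / t) ** alpha * np.exp(beta**2 / (4 * t**2)) \
              * special.pbdv(-alpha, beta / t)[0]

integrand = lambda y: beta**alpha / special.gamma(alpha) * y**(alpha - 1) \
                      * np.exp(-beta * y - t**2 * y**2 / 2)
numeric, _ = integrate.quad(integrand, 0, np.inf)

print(closed_form, numeric)   # the two values should agree to quad's tolerance
```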

3 Expressions for characteristic functions

In this section, we list explicit expressions for $\phi_W(t)$ when X is a standard normal random variable and Y is an independent random variable following nearly fifty other distributions.

Pareto distribution (Pareto, 1964): For this distribution,

$$f_Y(y) = \alpha K^\alpha y^{-\alpha-1}, \qquad \phi_W(t) = \alpha K^\alpha\, 2^{-\frac{\alpha}{2}-1}\, t^\alpha\, \Gamma\left(-\frac{\alpha}{2}, \frac{K^2 t^2}{2}\right)$$

for $y \geq K > 0$ and $\alpha > 0$.

Moffat distribution (Moffat, 1969): For this distribution,

β − 1  y2 −β f (y) = 1 + , Y πα2 α2 (β − 1)α2(β−1)t2β−1 t2α2  t2α2  φW (t) = exp K0 β+ 1 2 2 π 4 4

4 for y > 0, α > 0 and β > 1. Triangular distribution: For this distribution,  2(y − a)  , if a < y < c,   (b − a)(c − a)   f (y) = 2(b − y) Y , if c ≤ y < b,   (b − a)(b − c)    0, elsewhere, 2   t2a2 t2c2  φ (t) = exp − − W t2(b − a)(c − a) 2 2 2   t2c2 t2b2  − exp − − t2(b − a)(b − c) 2 2 √ 2a  1 t2a2  1 t2c2  − Γ , − Γ , t(b − a)(c − a) 2 2 2 2 √ 2b  1 t2c2  1 t2b2  + Γ , − Γ , t(b − a)(b − c) 2 2 2 2 for a < y < b. Argus distribution (Albrecht, 1990): For this distribution, r a3  a2  y2  y2 fY (y) = √ exp − 1 − y 1 − , 2πΨ(a)c2 2 c2 c2 a3  a2   5 a2 − c2t2  φW (t) = √ exp − 1F1 1; ; 3c 2πΨ(a) 2 2 2 1 for 0 < y < c and a > 0, where Ψ(x) = Φ(x) − xφ(x) − 2 . Cauchy distribution: For this distribution, γ f (y) = , Y π (γ2 + y2) γt γ2t2  γ2t2  φW (t) = √ exp K0 2π 4 4 for −∞ < y < +∞ and γ > 0. Student’s t distribution (Gosset, 1908): For this distribution,

ν+1 ν+1   2 − 2 Γ 2 y fY (y) = √ ν  1 + 2 , σ νπΓ 2 νσ ν ν ν+1  (σt) ν 2 Γ νσ2t2  νσ2t2  φ (t) = 2 exp K W ν √ ν 0 2  4 4 2 πΓ 2 for −∞ < y < +∞, ν > 0 and σ > 0. Skewed Student’s t distribution (Zhu and Galbraith, 2010): For this distribution,  h i− ν+1  K (ν) 1 + 1 y 2 2 , if y ≤ 0,  ν 2α  fY (y) = − ν+1   2 2  1  y   K (ν) 1 + ν 2(1−α) , if y > 0,

5 ν ν+1 ν +1 ν+1 2 2 2 2 φW (t) = K(ν)t ν 2 2 2 α exp 2να t K0 2να t ν ν+1 ν +1 ν+1 2 2 2 2 +K(ν)t ν 2 2 2 (1 − α) exp 2ν(1 − α) t K0 2ν(1 − α) t

Γ((ν+1)/2) √ for −∞ < y < +∞, ν > 0 and 0 < α < 1, where K(ν) = πνΓ(ν/2) . Asymmetric skewed Student’s t distribution (Zhu and Galbraith, 2010): For this distribution,

 − ν1+1 α h 1 y 2i 2  ∗ K (ν1) 1 + ∗ , if y ≤ 0,  α ν1 2α  fY (y) = − ν2+1   2 2  1−α 1  y   ∗ K (ν ) 1 + ∗ , if y > 0,  1−α 2 ν2 2(1−α )

ν1     ν1 ∗ ν1 +1 ∗ 2 2 ∗ 2 2 φW (t) = αK (ν1) t (α ) (2ν1) 2 exp 2ν1 (α ) t K0 2ν1 (α ) t

ν2     ν2 ∗ ν2 +1 ∗ 2 2 ∗ 2 2 +(1 − α)K (ν2) t (1 − α ) (2ν2) 2 exp 2ν2 (α ) t K0 2ν2 (α ) t

Γ((ν+1)/2) √ ∗ for −∞ < y < +∞, ν1 > 0, ν2 > 0 and 0 < α < 1, where K(ν) = πνΓ(ν/2) and α = αK(ν1) . αK(ν1)+(1−α)K(ν2) Half Student’s t distribution: For this distribution,

ν+1 ν+1   2 − 2 2Γ 2 y fY (y) = √ ν  1 + , νπΓ 2 ν ν ν+ 1 ν+1  t ν 2 Γ ν2t2  ν2t2  φ (t) = 2 exp K W √ ν ν 0 2  2 2 π2 Γ 2 for y > 0 and ν > 0. Half Cauchy distribution: For this distribution, 2 σ f (y) = , Y π y2 + σ2 tσ σ2t2  σ2t2  φW (t) = √ exp K0 π 2 2 2 for y > 0 and σ > 0. Rice distribution (Rice, 1945): For this distribution,

$$f_Y(y) = \frac{y}{\sigma^2} \exp\left(-\frac{y^2 + \nu^2}{2\sigma^2}\right) I_0\left(\frac{\nu y}{\sigma^2}\right), \qquad \phi_W(t) = \frac{1}{1 + \sigma^2 t^2} \exp\left[-\frac{\nu^2 t^2}{2\left(1 + \sigma^2 t^2\right)}\right]$$

for $y > 0$, $\sigma > 0$ and $\nu \geq 0$.

Symmetric Laplace distribution (Laplace, 1774): For this distribution,

$$f_Y(y) = \frac{1}{2\lambda} \exp\left(-\frac{|y|}{\lambda}\right), \qquad \phi_W(t) = \frac{\sqrt{\pi}}{\sqrt{2}\,\lambda t} \exp\left(\frac{1}{2\lambda^2 t^2}\right) \mathrm{erfc}\left(\frac{1}{\sqrt{2}\,\lambda t}\right)$$

for −∞ < y < +∞ and λ > 0. Laplace distribution (Laplace, 1774): For this distribution,

1  | y − µ | f (y) = exp − , Y 2b b √ π  1 µ  1  φW (t) = √ exp − erfc −√ 2 2bt 2b2t2 b 2bt 1  1 µ 1 µt2  µbt2 − 1 − exp − + D 2bt 4b2t2 2 b 2 −1 t 1  1 µ  1 µt2  µbt2 + 1 + exp − − + D 2bt 4b2t2 2 b 2 −1 t for −∞ < y < +∞, −∞ < µ < +∞ and b > 0. Asymmetric Laplace distribution (Kozubowski and Podgorski, 2000): For this distribution,

 λ   exp (y − m) , if y < m, 1  κ fY (y) = 1 κ + κ   exp [λκ(y − m)] , if y ≥ m, √ 2πλ  λm λ2  φW (t) = 1  exp − + 2 2 κ + κ t κ 2κ t λ  λ2 m λ mt2   λ  − 1  exp 2 2 − + D−1 mt − κ + κ t 4κ t 2 κ 2 κt λ λ2κ2 m  mt2   λκ + 1  exp 2 − −λκ + D−1 mt + κ + κ t 4t 2 2 t for −∞ < y < +∞, −∞ < m < +∞, λ > 0 and κ > 0. Poiraud-Casanova-Thomas-Agnan Laplace distribution (Poiraud-Casanova and Thomas- Agnan, 2000): For this distribution,   α (1 − α) exp {(1 − α)(y − θ)} , if y ≤ θ, fY (y) =  α (1 − α) exp {α (θ − y)} , if y > θ, √ 2πα2(1 − α)2  (1 − α)2  φ (t) = exp −2(1 − α)θ + W t 2t2 α(1 − α) (1 − α)2 (1 − α)θ θ2t2   1 − α − exp − − D θt − t 4t2 2 2 −1 t α(1 − α)  α2 αθ θ2t2   α + exp + − D θt + t 4t2 2 2 −1 t for −∞ < y < +∞, −∞ < θ < +∞ and 0 < α < 1. Holla-Bhattacharya Laplace distribution (Holla and Bhattacharya, 1968): For this distribu- tion,   aφ exp {φ (y − θ)} , if y ≤ θ, fY (y) =  (1 − a) φ exp {φ (θ − y)} , if y > θ,

7 √ 2πaφ  φ2  φ (t) = exp −θφ + W t 2t2 aφ  φ2 θφ θ2t2   φ − exp − − D θt − t 4t2 2 2 −1 t (1 − a)φ  φ2 θφ θ2t2   φ + exp + − D θt + t 4t2 2 2 −1 t for −∞ < y < +∞, −∞ < θ < +∞, φ > 0 and 0 < a < 1. McGill Laplace distribution (McGill, 1962): For this distribution,

 1 y − θ    exp , if y ≤ θ,  2ψ ψ fY (y) =    1 θ − y  exp , if y > θ,  2φ φ √ π  θ 1  φW (t) = √ exp − + 2ψt ψ 2ψ2t2 1  1 θ θ2t2   1  − exp − − D θt − 2ψt 4ψ2t2 2ψ 2 −1 ψt 1  1 θ θ2t2   1  + exp + − D θt + 2φt 4φ2t2 2ψ 2 −1 φt for −∞ < y < +∞, −∞ < θ < +∞, φ > 0 and ψ > 0. Log Laplace distribution: For this distribution,

 1 −1  µ  y b exp − , if y < µ, 1  b f (y) = Y 2b  − 1 −1 µ  y b exp , if y ≥ µ, b  2 2   2 2  −1 − 1 1 −2  µ 1 µ t −1 1 − 1 −2 µ 1 µ t φ (t) = b t b 2 2b exp − γ , + b t b 2 2b exp Γ − , W b 2b 2 b 2b 2 for y > 0, b > 0 and µ > 0. Exponential distribution: For this distribution,

$$f_Y(y) = \lambda \exp(-\lambda y), \qquad \phi_W(t) = \frac{\sqrt{\pi}\,\lambda}{\sqrt{2}\, t} \exp\left(\frac{\lambda^2}{2 t^2}\right) \mathrm{erfc}\left(\frac{\lambda}{\sqrt{2}\, t}\right)$$

for $y > 0$ and $\lambda > 0$.

Gamma distribution: For this distribution,

$$f_Y(y) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, y^{\alpha-1} \exp(-\beta y), \qquad \phi_W(t) = \left(\frac{\beta}{t}\right)^{\alpha} \exp\left(\frac{\beta^2}{4 t^2}\right) D_{-\alpha}\left(\frac{\beta}{t}\right)$$

for $y > 0$, $\alpha > 0$ and $\beta > 0$.
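The exponential entry is easily checked numerically; the sketch below (our addition, with arbitrary test values λ = 2 and t = 1.3) compares the closed form with a Monte Carlo estimate of E[exp(−t²Y²/2)].

```python
# Check the exponential entry: phi_W(t) = sqrt(pi)*lam/(sqrt(2)*t)
#   * exp(lam^2/(2 t^2)) * erfc(lam/(sqrt(2)*t)),
# against a Monte Carlo estimate of E[exp(-t^2 Y^2 / 2)], Y ~ Exp(rate lam).
import numpy as np
from scipy import special

lam, t = 2.0, 1.3
closed_form = (np.sqrt(np.pi) * lam / (np.sqrt(2) * t)
               * np.exp(lam**2 / (2 * t**2))
               * special.erfc(lam / (np.sqrt(2) * t)))

rng = np.random.default_rng(1)
Y = rng.exponential(scale=1 / lam, size=500_000)
monte_carlo = np.mean(np.exp(-t**2 * Y**2 / 2))

print(closed_form, monte_carlo)   # should agree to roughly three decimal places
```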

Reflected gamma distribution (Borghi, 1965): For this distribution,

$$f_Y(y) = \frac{\beta^\alpha}{2\Gamma(\alpha)}\, |y|^{\alpha-1} \exp\left(-\beta |y|\right), \qquad \phi_W(t) = \left(\frac{\beta}{t}\right)^{\alpha} \exp\left(\frac{\beta^2}{4 t^2}\right) D_{-\alpha}\left(\frac{\beta}{t}\right)$$

for $-\infty < y < +\infty$, $\alpha > 0$ and $\beta > 0$.

Chi distribution: For this distribution,

$$f_Y(y) = \frac{2^{1-\frac{k}{2}}}{\Gamma\left(\frac{k}{2}\right)}\, y^{k-1} \exp\left(-\frac{y^2}{2}\right), \qquad \phi_W(t) = \left(1 + t^2\right)^{-\frac{k}{2}}$$

for $y > 0$ and $k > 0$.

Variance gamma distribution (Madan and Seneta, 1990): For this distribution,

2λ | α | λ− 1 fY (y) = | y | 2 K 1 (α | y |), √ λ− 1 λ− πΓ(λ)(2α) 2 2 λ− 1 | α | 2  α2   α2  √ 1 λ λ 1 φW (t) = Γ(λ) exp 2 W − , − 2 2t 4t 4 2 2 4 2t for −∞ < y < +∞, −∞ < α < +∞ and λ > 0. Normal inverse gamma distribution: For this distribution, √ λβα  2β + λ(y − µ)2  fY (y) = √ exp − , 2πσ2α+3Γ(α) 2σ2 √ λβα 1  2β + λµ2 λ2µ2  φW (t) = √ exp − + σ2α+2Γ(α) σ2t2 + λ 2σ2 2 (λ + σ2t2) for −∞ < y < +∞, λ > 0, α > 0 and β > 0. Nakagami distribution (Nakagami, 1960): For this distribution,

$$f_Y(y) = \frac{2 m^m}{\Omega^m \Gamma(m)}\, y^{2m-1} \exp\left(-\frac{m y^2}{\Omega}\right), \qquad \phi_W(t) = \frac{m^m}{\Omega^m} \left(\frac{t^2}{2} + \frac{m}{\Omega}\right)^{-m}$$

for $y > 0$, $m > 0$ and $\Omega > 0$.

Reciprocal distribution (Hamming, 1970): For this distribution,

$$f_Y(y) = \frac{C}{y}, \qquad \phi_W(t) = \frac{C}{2} \left[\Gamma\left(0, \frac{t^2 a^2}{2}\right) - \Gamma\left(0, \frac{t^2 b^2}{2}\right)\right]$$

for $0 < a < y < b$, where C denotes the normalizing constant.
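Since Γ(0, x) coincides with the exponential integral E₁(x), this expression can be evaluated with scipy.special.exp1. The sketch below (our addition) also uses the explicit form of the normalizing constant, C = 1/log(b/a), which follows from ∫ₐᵇ y⁻¹ dy = log(b/a).

```python
# Evaluate phi_W(t) = (C/2) * [Gamma(0, t^2 a^2 / 2) - Gamma(0, t^2 b^2 / 2)]
# for the reciprocal distribution on (a, b), using Gamma(0, x) = E_1(x) = exp1(x).
import numpy as np
from scipy import special, integrate

a, b, t = 0.5, 3.0, 1.2
C = 1 / np.log(b / a)          # normalizing constant of f_Y(y) = C / y on (a, b)

closed_form = C / 2 * (special.exp1(t**2 * a**2 / 2) - special.exp1(t**2 * b**2 / 2))

numeric, _ = integrate.quad(lambda y: C / y * np.exp(-t**2 * y**2 / 2), a, b)
print(closed_form, numeric)    # should agree to quad's tolerance
```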

Maxwell distribution (Maxwell, 1860): For this distribution,

$$f_Y(y) = \sqrt{\frac{2}{\pi}}\, \frac{1}{a^3}\, y^2 \exp\left(-\frac{y^2}{2 a^2}\right), \qquad \phi_W(t) = \frac{1}{a^3} \left(t^2 + \frac{1}{a^2}\right)^{-\frac{3}{2}}$$

for $y > 0$ and $a > 0$.

Quadratic distribution: For this distribution,

$$f_Y(y) = \alpha (y - \beta)^2,$$
$$\phi_W(t) = \frac{\sqrt{2}\,\alpha}{t^3} \left[\Gamma\left(\frac{3}{2}, \frac{t^2 a^2}{2}\right) - \Gamma\left(\frac{3}{2}, \frac{t^2 b^2}{2}\right)\right] - \frac{2\alpha\beta}{t^2} \left[\exp\left(-\frac{t^2 a^2}{2}\right) - \exp\left(-\frac{t^2 b^2}{2}\right)\right] + \frac{\alpha\beta^2}{\sqrt{2}\, t} \left[\Gamma\left(\frac{1}{2}, \frac{t^2 a^2}{2}\right) - \Gamma\left(\frac{1}{2}, \frac{t^2 b^2}{2}\right)\right]$$

for $-\infty < a < y < b < +\infty$, where $\beta = \frac{a+b}{2}$ and $\alpha = \frac{12}{(b-a)^3}$.

Uniform distribution: For this distribution,

$$f_Y(y) = \frac{1}{b - a}, \qquad \phi_W(t) = \frac{1}{(b-a)\sqrt{2}\, t} \left[\Gamma\left(\frac{1}{2}, \frac{a^2 t^2}{2}\right) - \Gamma\left(\frac{1}{2}, \frac{b^2 t^2}{2}\right)\right]$$

for $-\infty < a < y < b < +\infty$.

Power function distribution: For this distribution,

$$f_Y(y) = a y^{a-1}, \qquad \phi_W(t) = a\, 2^{\frac{a}{2}-1}\, t^{-a}\, \gamma\left(\frac{a}{2}, \frac{t^2}{2}\right)$$

for $0 < y < 1$ and $a > 0$.

Rayleigh distribution (Weibull, 1951): For this distribution,

$$f_Y(y) = 2\lambda^2 y \exp\left(-\lambda^2 y^2\right), \qquad \phi_W(t) = \frac{2\lambda^2}{t^2 + 2\lambda^2}$$

for $y > 0$ and $\lambda > 0$.

Exponentiated Rayleigh distribution (Kundu and Raqab, 2005): For this distribution,

$$f_Y(y) = 2\alpha\lambda^2 y \exp\left(-\lambda^2 y^2\right) \left[1 - \exp\left(-\lambda^2 y^2\right)\right]^{\alpha-1}, \qquad \phi_W(t) = \alpha\, B\left(1 + \frac{t^2}{2\lambda^2}, \alpha\right)$$

for $y > 0$, $\alpha > 0$ and $\lambda > 0$.
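Both Rayleigh-type entries can be verified by numerical integration; the sketch below (our addition, with arbitrary parameter values) does so using scipy.special.beta for the beta function.

```python
# Check the Rayleigh entry, phi_W(t) = 2*lam^2 / (t^2 + 2*lam^2), and the
# exponentiated Rayleigh entry, phi_W(t) = alpha * B(1 + t^2/(2*lam^2), alpha),
# against numerical integration of E[exp(-t^2 Y^2 / 2)].
import numpy as np
from scipy import special, integrate

lam, alpha, t = 1.0, 2.0, 0.9

# Rayleigh
rayleigh_cf = 2 * lam**2 / (t**2 + 2 * lam**2)
num_r, _ = integrate.quad(
    lambda y: 2 * lam**2 * y * np.exp(-lam**2 * y**2 - t**2 * y**2 / 2), 0, np.inf)

# Exponentiated Rayleigh
exp_rayleigh_cf = alpha * special.beta(1 + t**2 / (2 * lam**2), alpha)
num_er, _ = integrate.quad(
    lambda y: 2 * alpha * lam**2 * y * np.exp(-lam**2 * y**2)
              * (1 - np.exp(-lam**2 * y**2)) ** (alpha - 1)
              * np.exp(-t**2 * y**2 / 2), 0, np.inf)

print(rayleigh_cf, num_r)        # should match
print(exp_rayleigh_cf, num_er)   # should match
```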

Beta Rayleigh distribution (Kundu and Raqab, 2005): For this distribution,

$$f_Y(y) = \frac{2\lambda^2}{B(\alpha, \beta)}\, y \exp\left(-\beta\lambda^2 y^2\right) \left[1 - \exp\left(-\lambda^2 y^2\right)\right]^{\alpha-1}, \qquad \phi_W(t) = \frac{1}{B(\alpha, \beta)}\, B\left(\beta + \frac{t^2}{2\lambda^2}, \alpha\right)$$

for $y > 0$, $\alpha > 0$, $\beta > 0$ and $\lambda > 0$.

Normal distribution (de Moivre, 1738; Gauss, 1809): For this distribution,

$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(y-\mu)^2}{2\sigma^2}\right], \qquad \phi_W(t) = \frac{1}{\sqrt{\sigma^2 t^2 + 1}} \exp\left[-\frac{\mu^2 t^2}{2\left(\sigma^2 t^2 + 1\right)}\right]$$

for $-\infty < y < +\infty$, $-\infty < \mu < +\infty$ and $\sigma > 0$.

Skew normal distribution (Azzalini, 1985): For this distribution,

$$f_Y(y) = \frac{2}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{y^2}{2\sigma^2}\right) \Phi\left(\frac{\lambda y}{\sigma}\right), \qquad \phi_W(t) = \frac{1}{\sqrt{\sigma^2 t^2 + 1}}$$

for $-\infty < y < +\infty$, $-\infty < \lambda < +\infty$ and $\sigma > 0$.

Truncated normal distribution: For this distribution,

1  (y − µ)2  fY (y) = √ exp − , h  b−µ  a−µ i 2σ2 2πσ Φ σ − Φ σ ! ! b σ2t2 + 1 − µ a σ2t2 + 1 − µ Φ √ − Φ √ σ σ2t2 + 1 σ σ2t2 + 1  µ2t2  φ (t) = exp − W √  b − µ a − µ 2 (σ2t2 + 1) 2πσ Φ − Φ σ σ for −∞ < y < +∞, −∞ < µ < +∞ and σ > 0. Split normal distribution: For this distribution,

  (y − µ)2   exp − , if y < µ,  2  2σ1 fY (y) = C   2   (y − µ)  exp − 2 , if y ≥ µ, 2σ2 √ " 2 2 # 2πCσ1 µ t φW (t) = exp − p 2 2 2 2  σ1t + 1 2 t σ2 + 1  2 2 2 2  2 ! Cσ1 µ t t σ1 + 2 µσ1t − exp − D−1 p 2 2 4 t2σ2 + 1 p 2 2 σ1t + 1 1 σ1t + 1  2 2 2 2  2 ! Cσ2 µ t t σ2 + 2 µσ2t − exp − D−1 p 2 2 4 t2σ2 + 1 p 2 2 σ2t + 1 2 σ2t + 1

11 for −∞ < y < +∞, −∞ < µ < +∞, σ1 > 0 and σ2 > 0, where C denotes the normalizing constant. q-Gaussian distribution (Tsallis, 2009): For this distribution, √ β 1 f (y) = 1 − (1 − q)βy2 1−q , Y C  √ 1    2   2   2β 2(q − 1)β 1−q t t  exp K0 , if 1 ≤ q < 3,  Ct t2 4(q − 1)β 4(q − 1)β φW (t) =  1 1 2 − q  1 1 2 − q t2   √ B , F , + ; , if q < 1  C 1 − q 2 1 − q 1 1 2 2 1 − q 2(1 − q)β for −∞ < y < +∞ if 1 ≤ q < 3, −√ 1 < y < +√ 1 if q < 1 and β > 0. β(1−q) β(1−q) Half-normal distribution: For this distribution,

$$f_Y(y) = \sqrt{\frac{2}{\pi}}\, \frac{1}{\sigma} \exp\left(-\frac{y^2}{2\sigma^2}\right), \qquad \phi_W(t) = \frac{1}{\sqrt{1 + \sigma^2 t^2}}$$

for $y > 0$ and $\sigma > 0$.

Normal exponential gamma distribution: For this distribution,

 y2  | y | f (y) = C exp − D , Y 4θ2 −2k−1 θ √  3 1 3  φ (t) = C πθ2−kΓ−1 k + F , 1; k + ; −t2θ2 W 2 2 1 2 2 for −∞ < y < +∞, k > 0 and θ > 0, where C denotes the normalizing constant. Folded normal distribution (Leone et al., 1961): For this distribution,

$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma} \left\{\exp\left[-\frac{(y-\mu)^2}{2\sigma^2}\right] + \exp\left[-\frac{(y+\mu)^2}{2\sigma^2}\right]\right\}, \qquad \phi_W(t) = \frac{1}{\sqrt{\sigma^2 t^2 + 1}} \exp\left[-\frac{\mu^2 t^2}{2\left(\sigma^2 t^2 + 1\right)}\right]$$

for $y > 0$, $-\infty < \mu < +\infty$ and $\sigma > 0$.

Wigner semicircle distribution: For this distribution,

2pR2 − y2 f (y) = , Y πR2 3 t2R2  φ (t) = F ; 2; − W 1 1 2 2 for −R < y < R. Kumaraswamy distribution (Kumaraswamy, 1980): For this distribution,

2a−1 fY (y) = 2ay 1 − y , 2 2 −a  t   t  φ (t) = a −2t2 exp − γ a, − W 2 2

for 0 < y < 1 and a > 0. Linear failure rate distribution (Bain, 1974): For this distribution,

 by2  f (y) = (a + by) exp −ay − , Y 2 a  a2   a  a  a2   a  φW (t) = √ exp D−1 √ + exp D−2 √ b + t2 4 (b + t2) b + t2 b + t2 4 (b + t2) b + t2 for y > 0, a > 0 and b ≥ 0. Irwin Hall distribution (Irwin, 1927; Hall, 1927): For this distribution,

n 1 X n f (y) = (−1)k (y − k)n−1sign(y − k), Y 2(n − 1)! k k=0 n   n−1    2 2  1 X k n X n − 1 n−1−m m−1 −m−1 m + 1 n t φ (t) = (−1) (−k) 2 2 t γ , W 2(n − 1)! k m 2 2 k=0 m=0 n   n−1    2 2  1 X k n X n − 1 n−1−m m+1 −m−1 m + 1 k t − (−1) (−k) 2 2 t γ , 2(n − 1)! k m 2 2 k=0 m=0 for 0 < y < n and n ≥ 1.

4 Simulation results

In this section, we perform simulations to check the mathematical derivations in Section 3. We simulated the distribution of W for given distributions of X and Y as follows:

1. simulate 1000 random numbers from the distribution of X;

2. simulate 1000 random numbers from the distribution of Y ;

3. set W = XY ;

4. construct a histogram of the 1000 values of W .

The simulated histograms can be compared to the theoretical probability density functions of W computed using the characteristic functions in Section 3. The comparisons are illustrated in Figures 1 to 5 for five of the distributions of Y considered in Section 3: Figure 1 for the exponential distribution; Figure 2 for the uniform distribution; Figure 3 for the power function distribution; Figure 4 for the Rayleigh distribution; Figure 5 for the exponentiated Rayleigh distribution.
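The theoretical probability density function used in these comparisons is not available in closed form, but it can be recovered from the characteristic function: since φ_W(t) = E[exp(−t²Y²/2)] is real and even, Fourier inversion gives f_W(w) = (1/π) ∫₀^∞ φ_W(t) cos(tw) dt. The sketch below (our reconstruction of the procedure; the paper itself does not provide code) carries out steps 1 to 4 for the unit exponential case of Figure 1 and overlays the inverted density on the simulated histogram.

```python
# Reproduce the comparison behind Figure 1: simulate W = XY for X standard
# normal and Y unit exponential, then recover the density of W by Fourier
# inversion of the characteristic function.  Since phi_W(t) = E[exp(-t^2 Y^2/2)]
# is real and even, f_W(w) = (1/pi) * int_0^inf phi_W(t) cos(t w) dt.
import numpy as np
from scipy import special, integrate
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 1000
W = rng.standard_normal(n) * rng.exponential(size=n)   # steps 1-3 of the recipe

def phi_W(t):
    # Exponential entry of Section 3 with lambda = 1, written with the scaled
    # complementary error function erfcx(x) = exp(x^2) * erfc(x) for stability.
    return np.sqrt(np.pi) / (np.sqrt(2) * t) * special.erfcx(1 / (np.sqrt(2) * t))

def f_W(w):
    # Fourier-cosine inversion; QUADPACK's QAWF routine handles the slow decay.
    val, _ = integrate.quad(phi_W, 0, np.inf, weight="cos", wvar=w)
    return val / np.pi

grid = np.linspace(-3, 3, 61)
grid = grid[grid != 0]          # f_W has a logarithmic spike at w = 0
plt.hist(W, bins=40, density=True, alpha=0.5, label="simulated")
plt.plot(grid, [f_W(w) for w in grid], label="theoretical pdf via phi_W")
plt.xlabel("w"); plt.legend(); plt.show()
```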

Figure 1: Simulated histogram and theoretical probability density function of W = XY when X is a standard normal random variable and Y is an independent unit exponential random variable.

Figure 2: Simulated histogram and theoretical probability density function of W = XY when X is a standard normal random variable and Y is an independent uniform [0, 1] random variable.

Figure 3: Simulated histogram and theoretical probability density function of W = XY when X is a standard normal random variable and Y is an independent power function random variable with a = 2.

Figure 4: Simulated histogram and theoretical probability density function of W = XY when X is a standard normal random variable and Y is an independent Rayleigh random variable with λ = 1.

Figure 5: Simulated histogram and theoretical probability density function of W = XY when X is a standard normal random variable and Y is an independent exponentiated Rayleigh random variable with α = 2, λ = 1.

We see that the simulated histogram and the theoretical probability density function agree well in each of the five figures. We have considered these five distributions for illustration; the conclusions were the same for the other distributions in Section 3.

5 Conclusions

Motivated by Schoenecker and Luginbuhl (2016), we have derived explicit expressions for the characteristic function of the product of two independent random variables, one of them being the standard normal random variable and the other allowed to follow one of nearly fifty distributions. The explicit expressions involved the gamma, incomplete gamma, complementary incomplete gamma, beta, incomplete beta, error, complementary error, parabolic cylinder, Whittaker W, modified Bessel, confluent hypergeometric, Gauss hypergeometric and the standard normal distribution functions. The mathematical derivations have been verified by simulations.

Acknowledgments

The authors would like to thank the Editor and the referee for careful reading and comments which greatly improved the paper.

References

[1] Agrawal, A. and Kshetrimayum, R. S. (2017). Transmit antenna selection in the cooperative communication based UWB system. Wireless Personal Communications, 94, 3001-3015.

[2] Albrecht, H. (1990). Search for hadronic b → u decays. Physics Letters B, 241, 278-282.

[3] Azzalini, A. (1985). A class of distributions which includes the normal ones. Scandinavian Journal of Statistics, 12, 171-178.

[4] Bain, L. J. (1974). Analysis for the linear failure-rate life-testing distribution. Technometrics, 16, 551-559.

[5] Borghi, O. (1965). Sobre una distribución de frecuencias. Trabajos de Estadística, 16, 171-192.

[6] Chauhan, S. S. and Kumar, S. (2014). Capacity analysis of adaptive transmission with space-time block codes in spatially correlated MIMO Nakagami-fading channels. Wireless Personal Communications, 79, 1211-1222.

[7] de Moivre, A. (1738). The Doctrine of Chances. ISBN 0821821032.

[8] Dhaka, A., Chauhan, S. and Bhaskar, V. (2018). Analysis and simulation of second-order statistics with modified characteristic function parameters in a multipath fading environment. Wireless Personal Communications, 100, 851-862.

[9] Gauss, C. F. (1809). Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm (in Latin).

[10] Gosset, W. S. (1908). The probable error of a mean. Biometrika, 6, 1-25.

[11] Gradshteyn, I. S. and Ryzhik, I. M. (2000). Table of Integrals, Series, and Products, sixth edition. Academic Press, San Diego.

[12] Hall, P. (1927). The distribution of means for samples of size n drawn from a population in which the variate takes values between 0 and 1, all such values being equally probable. Biometrika, 19, 240-245.

[13] Hamming, R. W. (1970). On the distribution of numbers. The Bell System Technical Journal, 49, 1609-1625.

[14] Holla, M. S. and Bhattacharya, S. K. (1968). On a compound Gaussian distribution. Annals of the Institute of Statistical Mathematics, 20, 331-336.

[15] Irwin, J. O. (1927). On the frequency distribution of the means of samples from a population having any law of frequency with finite moments, with special reference to Pearson’s type II. Biometrika, 19, 225-239.

[16] Kozubowski, T. J. and Podgorski, K. (2000). A multivariate and asymmetric generalization of Laplace distribution. Computational Statistics, 15, 531-540.

[17] Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46, 79-88.

[18] Kundu, D. and Raqab, M. Z. (2005). Generalized Rayleigh distribution: Different methods of estimations. Computational Statistics and Data Analysis, 49, 187-200.

[19] Laplace, P.-S. (1774). Mémoire sur la probabilité des causes par les évènements. Mémoires de l'Académie Royale des Sciences Présentés par Divers Savans, 6, 621-656.

[20] Leone, F. C., Nottingham, R. B. and Nelson, L. S. (1961). The folded normal distribution. Technometrics, 3, 543-550.

[21] Madan, D. B. and Seneta, E. (1990). The variance gamma (V.G.) model for share market returns. Journal of Business, 63, 511-524.

[22] Maxwell, J. C. (1860). Illustrations of the dynamical theory of gases. Part I. On the motions and collisions of perfectly elastic spheres. Philosophical Magazine, 19, 19-32.

[23] McGill, W. J. (1962). Random fluctuations of response rate. Psychometrika, 27, 3-17.

[24] Moffat, A. F. J. (1969). A theoretical investigation of focal stellar images in the photographic emulsion and application to photographic photometry. Astronomy and Astrophysics, 3, 455-461.

[25] Nakagami, M. (1960). The m-distribution, a general formula of intensity of rapid fading. In: Statistical Methods in Radio Wave Propagation, editor W. C. Hoffman, pp. 3-36, Pergamon Press.

[26] Pareto, V. (1964). Cours d'Économie Politique: Nouvelle édition par G.-H. Bousquet et G. Busino, Librairie Droz, Geneva, pp. 299-345.

[27] Poiraud-Casanova, S. and Thomas-Agnan, C. (2000). About monotone regression quantiles. Statistics and Probability Letters, 48, 101-104.

[28] Prudnikov, A. P., Brychkov, Y. A. and Marichev, O. I. (1986). Integrals and Series, volumes 1, 2 and 3. Gordon and Breach Science Publishers, Amsterdam.

[29] Rice, S. O. (1945). Mathematical analysis of random noise. The Bell System Technical Journal, 24, 46-156.

[30] Schoenecker, S. and Luginbuhl, T. (2016). Characteristic functions of the product of two Gaussian random variables and the product of a Gaussian and a gamma random variable. IEEE Signal Processing Letters, 23, 644-647.

[31] Tsallis, C. (2009). Nonadditive entropy and nonextensive statistical mechanics-An overview after 20 years. Brazilian Journal of Physics, 39, 337-356.

[32] Weibull, W. (1951). A statistical distribution function of wide applicability. Journal of Applied Mechanics, 18, 293-297.

[33] Zhu, D. and Galbraith, J. W. (2010). A generalized asymmetric Student-t distribution with application to financial econometrics. Journal of Econometrics, 157, 297-305.
