Error and Complementary Error Functions


Reading Problems

Outline

    Background
    Definitions
    Theory
        Gaussian function
        Error function
        Complementary error function
    Relations and Selected Values of Error Functions
    Numerical Computation of Error Functions
    Rational Approximations of Error Functions
    Assigned Problems
    References


Background

The error function and the complementary error function are important special functions which appear in the solutions of diffusion problems in heat, mass and momentum transfer, probability theory, the theory of errors and various branches of mathematical physics. It is interesting to note that there is a direct connection between the error function and the Gaussian function; the normalized Gaussian function is what we know as the "bell curve". The Gaussian function is given as

    G(x) = A e^{-x^2/(2\sigma^2)}

where \sigma is the standard deviation and A is a constant.

The Gaussian function can be normalized so that the accumulated area under the curve is unity, i.e. the integral from -\infty to +\infty equals 1. If we note that the definite integral

    \int_{-\infty}^{\infty} e^{-a x^2}\, dx = \sqrt{\frac{\pi}{a}}

then the normalized Gaussian function takes the form

    G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^2/(2\sigma^2)}

If we let

    t^2 = \frac{x^2}{2\sigma^2}  \quad\text{and}\quad  dt = \frac{1}{\sqrt{2}\,\sigma}\, dx

then the normalized Gaussian integrated between -x and +x can be written as

    \int_{-x}^{x} G(x)\, dx = \frac{1}{\sqrt{\pi}} \int_{-x/(\sqrt{2}\sigma)}^{x/(\sqrt{2}\sigma)} e^{-t^2}\, dt

or, recognizing that the normalized Gaussian is symmetric about the y-axis, we can write

    \int_{-x}^{x} G(x)\, dx = \frac{2}{\sqrt{\pi}} \int_{0}^{x/(\sqrt{2}\sigma)} e^{-t^2}\, dt = \operatorname{erf}\!\left(\frac{x}{\sqrt{2}\,\sigma}\right)

and the complementary error function can be written as

    \operatorname{erfc} x = 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-t^2}\, dt
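As a quick numerical check of the relations above, the following sketch (an illustrative aside, not part of the original notes; it assumes Python with SciPy available, and the values of sigma and x are arbitrary) integrates the normalized Gaussian between -x and +x and compares the result with erf(x/(sqrt(2) sigma)):

```python
import math
from scipy import integrate

def gaussian(x, sigma=1.0):
    """Normalized Gaussian G(x) with zero mean and standard deviation sigma."""
    return math.exp(-x**2 / (2.0 * sigma**2)) / (math.sqrt(2.0 * math.pi) * sigma)

sigma = 1.5   # illustrative value
x = 2.0       # illustrative value

# Area under the normalized Gaussian between -x and +x
area, _ = integrate.quad(gaussian, -x, x, args=(sigma,))

# erf evaluated at x / (sqrt(2) * sigma), as derived above
erf_value = math.erf(x / (math.sqrt(2.0) * sigma))

print(f"integral of G from -x to x : {area:.12f}")
print(f"erf(x / (sqrt(2) sigma))   : {erf_value:.12f}")
# The two numbers agree to the quadrature tolerance.
```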
Historical Perspective

The normal distribution was first introduced by de Moivre in an article in 1733 (reprinted in the second edition of his Doctrine of Chances, 1738) in the context of approximating certain binomial distributions for large n. His result was extended by Laplace in his book Analytical Theory of Probabilities (1812), and is now called the theorem of de Moivre-Laplace. Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have used the method since 1794, justified it in 1809 by assuming a normal distribution of the errors. The name "bell curve" goes back to Jouffret, who used the term "bell surface" in 1872 for a bivariate normal with independent components. The name "normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875 [Stigler]. This terminology is unfortunate, since it reflects and encourages the fallacy that "everything is Gaussian".


Definitions

1. Gaussian Function

The normalized Gaussian curve represents the probability distribution with standard deviation \sigma and mean \mu relative to the average of a random distribution:

    G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/(2\sigma^2)}

This is the curve we typically refer to as the "bell curve", where the mean is zero and the standard deviation is unity.

2. Error Function

The error function equals twice the integral of the normalized Gaussian function between 0 and x/(\sqrt{2}\,\sigma):

    y = \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt    for x \ge 0,\; y \in [0, 1]

where

    t = \frac{x}{\sqrt{2}\,\sigma}

3. Complementary Error Function

The complementary error function equals one minus the error function:

    1 - y = \operatorname{erfc} x = 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-t^2}\, dt    for x \ge 0,\; y \in [0, 1]

4. Inverse Error Function

    x = \operatorname{inerf} y

inerf y exists for y in the range -1 < y < 1 and is an odd function of y with a Maclaurin expansion of the form

    \operatorname{inerf} y = \sum_{n=1}^{\infty} c_n\, y^{2n-1}

5. Inverse Complementary Error Function

    x = \operatorname{inerfc}(1 - y)


Theory

Gaussian Function

The Gaussian function, or the Gaussian probability distribution, is one of the most fundamental functions. The Gaussian probability distribution with mean \mu and standard deviation \sigma is a normalized Gaussian function of the form

    G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/(2\sigma^2)}                    (1.1)

where G(x), as shown in Figure 2.1, gives the probability that a variate with a Gaussian distribution takes on a value in the range [x, x + dx]. Statisticians commonly call this distribution the normal distribution and, because of its shape, social scientists refer to it as the "bell curve". G(x) has been normalized so that the accumulated area under the curve between -\infty \le x \le +\infty totals to unity. A cumulative distribution function, which totals the area under the normalized distribution curve, is also available and is plotted alongside it.

Figure 2.1: Plot of Gaussian Function and Cumulative Distribution Function

When the mean is set to zero (\mu = 0) and the standard deviation or variance is set to unity (\sigma = 1), we get the familiar normal distribution

    G(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}                                            (1.2)

which is shown in Figure 2.2. The normal distribution function N(x) gives the probability that a variate assumes a value in the interval [0, x]:

    N(x) = \frac{1}{\sqrt{2\pi}} \int_{0}^{x} e^{-t^2/2}\, dt                            (1.3)

Figure 2.2: Plot of the Normalized Gaussian Function

Gaussian distributions have many convenient properties, so random variates with unknown distributions are often assumed to be Gaussian, especially in physics, astronomy and various aspects of engineering. Many common attributes such as test scores, height, etc., follow roughly Gaussian distributions, with few members at the high and low ends and many in the middle.

Computer Algebra Systems

Function                                         Maple                     Mathematica
Probability Density Function                     statevalf[pdf,dist](x)    PDF[dist, x]
  (frequency of occurrence at x)
Cumulative Distribution Function                 statevalf[cdf,dist](x)    CDF[dist, x]
  (integral of probability density function up to x)

where dist = normald[µ, σ] in Maple and dist = NormalDistribution[µ, σ] in Mathematica, with µ = 0 (mean) and σ = 1 (std. dev.).

Potential Applications

1. Statistical Averaging


Error Function

The error function is obtained by integrating the normalized Gaussian distribution:

    \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt                (1.4)

where the coefficient in front of the integral normalizes \operatorname{erf}(\infty) = 1. A plot of \operatorname{erf} x over the range -3 \le x \le 3 is shown in Figure 2.3.

Figure 2.3: Plot of the Error Function

The error function is defined for all values of x and is considered an odd function in x since \operatorname{erf} x = -\operatorname{erf}(-x).
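These properties are easy to verify numerically. The short sketch below (an illustrative aside, not part of the original notes; it assumes Python with SciPy, and the sample arguments are arbitrary) checks the odd symmetry, the normalization erf(infinity) = 1, and the relation N(x) = (1/2) erf(x/sqrt(2)) between the normal distribution function of equation (1.3) and the error function:

```python
import math
from scipy import integrate

x = 1.0   # arbitrary sample argument

# Odd symmetry: erf(-x) = -erf(x)
print(math.erf(-x), -math.erf(x))

# Normalization: the coefficient 2/sqrt(pi) makes erf(x) -> 1 as x -> infinity
print(math.erf(10.0))                      # 1.0 to machine precision

# N(x) of equation (1.3) by direct quadrature versus (1/2) erf(x / sqrt(2))
N_x, _ = integrate.quad(lambda t: math.exp(-t**2 / 2.0) / math.sqrt(2.0 * math.pi), 0.0, x)
print(N_x, 0.5 * math.erf(x / math.sqrt(2.0)))   # both ~0.3413 for x = 1
```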
The error function can be conveniently expressed in terms of other functions and series as follows:

    \operatorname{erf} x = \frac{1}{\sqrt{\pi}}\, \gamma\!\left(\frac{1}{2},\, x^2\right)                                                        (1.5)

                        = \frac{2x}{\sqrt{\pi}}\, M\!\left(\frac{1}{2},\, \frac{3}{2},\, -x^2\right) = \frac{2x}{\sqrt{\pi}}\, e^{-x^2} M\!\left(1,\, \frac{3}{2},\, x^2\right)    (1.6)

                        = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)}                                           (1.7)

where \gamma(\cdot) is the incomplete gamma function, M(\cdot) is the confluent hypergeometric function of the first kind and the series solution is a Maclaurin series.

Computer Algebra Systems

Function                                Maple                  Mathematica
Error Function                          erf(x)                 Erf[x]
Complementary Error Function            erfc(x)                Erfc[x]
Inverse Error Function                  fsolve(erf(x)=s)       InverseErf[s]
Inverse Complementary Error Function    fsolve(erfc(x)=s)      InverseErfc[s]

where s is a numerical value and we solve for x.

Potential Applications

1. Diffusion: Transient conduction in a semi-infinite solid is governed by the diffusion equation, given as

    \frac{\partial^2 T}{\partial x^2} = \frac{1}{\alpha} \frac{\partial T}{\partial t}

where \alpha is the thermal diffusivity. The solution to the diffusion equation is a function of either \operatorname{erf} x or \operatorname{erfc} x, depending on the boundary condition used. For instance, for constant surface temperature, where T(0, t) = T_s,

    \frac{T(x,t) - T_s}{T_i - T_s} = \operatorname{erf}\!\left(\frac{x}{2\sqrt{\alpha t}}\right)


Complementary Error Function

The complementary error function is defined as

    \operatorname{erfc} x = 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-t^2}\, dt                                      (1.8)

Figure 2.4: Plot of the Complementary Error Function

and, similar to the error function, the complementary error function can be written in terms of the incomplete gamma function as follows:

    \operatorname{erfc} x = \frac{1}{\sqrt{\pi}}\, \Gamma\!\left(\frac{1}{2},\, x^2\right)                                                       (1.9)

As shown in Figure 2.5, the superposition of the error function and the complementary error function when the argument is greater than zero produces a constant value of unity.

Figure 2.5: Superposition of the Error and Complementary Error Functions

Potential Applications

1. Diffusion: In a similar manner to the transient conduction problem described for the error function, the complementary error function is used in the solution of the diffusion equation when the boundary conditions are constant surface heat flux, where q_s = q_0,

    T(x,t) - T_i = \frac{2 q_0 (\alpha t/\pi)^{1/2}}{k} \exp\!\left(-\frac{x^2}{4\alpha t}\right) - \frac{q_0 x}{k} \operatorname{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right)

and surface convection, where -k \left.\dfrac{\partial T}{\partial x}\right|_{x=0} = h\,[T_\infty - T(0, t)],

    \frac{T(x,t) - T_i}{T_\infty - T_i} = \operatorname{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right) - \exp\!\left(\frac{hx}{k} + \frac{h^2 \alpha t}{k^2}\right) \left[\operatorname{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}} + \frac{h\sqrt{\alpha t}}{k}\right)\right]


Relations and Selected Values of Error Functions

    \operatorname{erf}(-x) = -\operatorname{erf} x            \operatorname{erfc}(-x) = 2 - \operatorname{erfc} x

    \operatorname{erf} 0 = 0                                  \operatorname{erfc} 0 = 1

    \operatorname{erf} \infty = 1                             \operatorname{erfc} \infty = 0

    \operatorname{erf}(-\infty) = -1

    \int_{0}^{\infty} \operatorname{erfc} x \, dx = 1/\sqrt{\pi}

    \int_{0}^{\infty} \operatorname{erfc}^2 x \, dx = (2 - \sqrt{2})/\sqrt{\pi}

Ten decimal place values for selected values of the argument appear in Table 2.1.
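The relations and integral values listed above are easy to spot-check numerically. The sketch below (illustrative only, assuming Python with SciPy; scipy.special.erfinv and erfcinv are used here to play the role of the InverseErf and InverseErfc entries in the earlier Computer Algebra Systems table, a mapping that is my addition rather than part of the notes) verifies the symmetry relations, the integrals of erfc and erfc^2, and the round trip through the inverse functions:

```python
import math
from scipy import integrate, special

x = 0.8   # arbitrary sample argument

# Symmetry relations
print(math.erf(-x),  -math.erf(x))            # erf(-x)  = -erf(x)
print(math.erfc(-x), 2.0 - math.erfc(x))      # erfc(-x) = 2 - erfc(x)

# Integrals of erfc and erfc^2 over [0, infinity)
i1, _ = integrate.quad(math.erfc, 0.0, math.inf)
i2, _ = integrate.quad(lambda t: math.erfc(t) ** 2, 0.0, math.inf)
print(i1, 1.0 / math.sqrt(math.pi))                        # 1/sqrt(pi)
print(i2, (2.0 - math.sqrt(2.0)) / math.sqrt(math.pi))     # (2 - sqrt(2))/sqrt(pi)

# Inverse error functions recover the original argument
print(special.erfinv(math.erf(x)), special.erfcinv(math.erfc(x)))   # both ~0.8
```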
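The Maclaurin series (1.7) gives one simple route to computing erf directly, in the spirit of the Numerical Computation of Error Functions section listed in the outline. The following sketch (my own illustration in Python; the truncation length and sample arguments are arbitrary) sums the truncated series and compares it with the library value:

```python
import math

def erf_series(x, terms=30):
    """Truncated Maclaurin series (1.7):
    erf x = (2/sqrt(pi)) * sum_{n=0..} (-1)^n x^(2n+1) / (n! (2n+1)).
    The factorial in the denominator makes the series converge quickly for moderate |x|."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * total

for x in (0.25, 0.5, 1.0, 2.0):
    print(f"x = {x}: series = {erf_series(x):.10f}, math.erf = {math.erf(x):.10f}")
```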
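Finally, the transient-conduction solutions quoted under Potential Applications can be evaluated directly once erf and erfc are available. The sketch below is a hypothetical worked example: the thermal diffusivity, temperatures, depths and time are illustrative values chosen here, not data from the notes. It evaluates the constant-surface-temperature solution for a semi-infinite solid:

```python
import math

# Illustrative material properties and conditions (not from the notes):
alpha = 1.2e-5      # thermal diffusivity of the solid, m^2/s
T_i   = 20.0        # initial temperature, deg C
T_s   = 100.0       # constant surface temperature, deg C

def temperature(x, t):
    """Semi-infinite solid with constant surface temperature T_s:
    (T - T_s)/(T_i - T_s) = erf(x / (2*sqrt(alpha*t)))."""
    eta = x / (2.0 * math.sqrt(alpha * t))
    return T_s + (T_i - T_s) * math.erf(eta)

t = 600.0                            # elapsed time, s
for x in (0.0, 0.01, 0.05, 0.10):    # depth below the surface, m
    print(f"x = {x:5.2f} m: T = {temperature(x, t):7.2f} C")
# At x = 0 the result reproduces T_s; far from the surface it approaches T_i.
```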