
Selected Topics on Data Analysis in Particle Physics
Lecture Notes

V. Karimäki
HEP Summer School, Lammi Biological Station, March 1992
Printed version edited from hand-made transparencies, Spring 2010

Contents

1  Introduction
2  Propagation of measurement errors
   2.1  Error propagation in case of linear transformation
   2.2  Error propagation in case of non-linear transformation
3  Fitting a parameterized model to experimental data
   3.1  The method of maximum likelihood
   3.2  Least squares method as a special case of maximum likelihood
4  Least squares solutions with error estimates
   4.1  Linear χ² solution
   4.2  Non-linear least squares fit
        4.2.1  Convergence criteria
   4.3  Least squares fit with constraints
   4.4  Fit quality tests
        4.4.1  Pull or stretch values of fitted parameters
        4.4.2  Fit probability distribution
   4.5  Maximum likelihood and Poisson statistics
        4.5.1  Maximum likelihood and parameter errors
5  Introduction to Monte Carlo simulation
   5.1  Generation of non-uniform distributions by the inversion method
   5.2  Inversion method for discrete distributions
   5.3  Approximate inversion method using tabulation
   5.4  Hit-or-miss method
   5.5  Hit-or-miss method with a comparison function
   5.6  Composition method
6  Generation from common continuous distributions
   6.1  Uniform distribution
   6.2  Gaussian distribution
        6.2.1  Polar or Box-Muller method
        6.2.2  Faster variation of the polar method
        6.2.3  Gaussian generation by summation method
        6.2.4  Kahn's method
   6.3  Multidimensional Gaussian distribution
   6.4  Exponential distribution
   6.5  χ² distribution
   6.6  Cauchy or Breit-Wigner distribution
   6.7  Landau distribution
7  Monte Carlo integration
   7.1  Basics of MC integration
   7.2  Convergence and precision of MC integration
   7.3  Methods to accelerate the convergence and precision
        7.3.1  Stratified sampling
        7.3.2  Importance sampling
        7.3.3  Control variates
        7.3.4  Antithetic variates
   7.4  Limits of integration in the MC method
   7.5  Comparison of convergence with other methods
A  Basic statistics for simulation
   A.1  Introduction
   A.2  Definition of probability
   A.3  Combined probabilities
   A.4  Probability density function
   A.5  Cumulative distribution function
   A.6  Marginal and conditional distributions
   A.7  Expectation value, mean value and variance
   A.8  Covariance and correlation
   A.9  Independent variates, covariance matrix
   A.10 Change of variables
   A.11 Distribution of a function of random variates
   A.12 Moments of a distribution
   A.13 Characteristic function
   A.14 Cumulants of a distribution
   A.15 Probability generating function
B  Exercises
   B.1  Basic statistics exercises
   B.2  Uniform random number generation exercises
   B.3  Generation from arbitrary distributions exercises
   B.4  Well-known continuous distributions exercises
   B.5  Discrete distributions exercises
   B.6  Monte Carlo integration exercises
   B.7  Application examples

1 Introduction

BlaBla ...

2 Propagation of measurement errors

Error propagation is needed when the quantities to be studied must be calculated from directly measurable quantities with a given covariance matrix. Let us denote a set of measurements as follows:

• Set of measurements: x = (x_1, x_2, …, x_n)
• Covariance matrix: V_x, with v_ij = cov(x_i, x_j)

The components x_i represent individual measurements which may be correlated. In case of uncorrelated x_i the covariance matrix is diagonal. Furthermore, suppose that we are interested in a set of transformed variables derived from the measured variables x by some transformation formula:

    x′ = x′(x)                                          (1)

or:

    x′ = (x′_1, x′_2, …, x′_m),   m ≤ n                 (2)

where the components x′_i are functions of x. For example, the measured variables could be the Cartesian coordinates x = (x, y, z) and the transformed variables the spherical coordinates x′ = (r, φ, θ).

Now the question is: what is the covariance matrix V_x′ of the new variables x′? This is the problem of error propagation. The answer is derived by first recalling the definition of the covariance matrix:

    (V_x)_ij ≡ cov(x_i, x_j) ≡ cov(x)_ij ≡ σ_ij ≡ v_ij = ∫ (x_i − ⟨x_i⟩)(x_j − ⟨x_j⟩) f(x) dx    (3)

where f(x) is the probability density and where we have listed various notations for the covariance. From the definition it readily follows that

    cov(a x_i, b x_j) = ab cov(x_i, x_j)                (4)

or, more generally,

    cov(Σ_i a_i x_i, Σ_j b_j x_j) = Σ_{i,j} a_i b_j cov(x_i, x_j)    (5)

2.1 Error propagation in case of linear transformation

We first consider linear transformations x → x′.
A linear transformation can be written in matrix notation as

    x′ = J x                                            (6)

where J is an m × n matrix independent of x. According to the definition (3) we can derive the expression for the covariance matrix element (i, j):

    (V_x′)_ij = cov(Σ_k J_ik x_k, Σ_l J_jl x_l)
              = Σ_{k,l} J_ik J_jl cov(x_k, x_l)
              = (J V_x Jᵀ)_ij

where we have used equation (5). The above result implies the following transformation law for the covariance matrix:

    V_x′ = J V_x Jᵀ                                     (7)

This is the error propagation formula in the case of a linear transformation of variables x′ = J x.

Examples

Example 1: Error estimate of a sum of measured quantities s = Σ_{i=1}^n x_i. Error Δs = ?

In matrix notation we write s = J x where J = (1 1 ··· 1), and it follows that

    (Δs)² ≡ σ_s² = (1 1 ··· 1) V_x (1 1 ··· 1)ᵀ = Σ_{i=1}^n σ_ii + 2 Σ_{i>j} σ_ij    (8)

where we use the fact that the covariance matrix is symmetric and the notation (Δx_i)² = σ_ii. If V_x is diagonal (uncorrelated measurements x_i), we get:

    σ_s² = Σ_{i=1}^n σ_i²                               (9)

where our notation is σ_ii = σ_i².

Example 2: Error estimate of the weighted mean

    μ = (Σ_i w_i x_i) / (Σ_i w_i)

of N quantities. Error estimate σ_μ = ?

Here we assume that the x_i are independent of each other, so that cov(x_i, x_j) = 0 for i ≠ j and

    V_x = diag(σ_1², σ_2², …, σ_N²)                     (10)

where all the off-diagonal elements are zero. Now the J matrix is 1 × N:

    J = (Σ_i w_i)⁻¹ (w_1 w_2 ··· w_N)

and we get

    σ_μ² = (Σ_i w_i)⁻² (w_1 w_2 ··· w_N) V_x (w_1 w_2 ··· w_N)ᵀ = (Σ_i w_i)⁻² Σ_i (w_i σ_i)²    (11)

Normally the weights are inverse variances, i.e. w_i = σ_i⁻², and inserting this into the above result we obtain:

    1/σ_μ² = Σ_i 1/σ_i²                                 (12)

A special case is that all the weights are equal: σ_i = Δx for all i. In this case we get

    μ = (1/N) Σ_{i=1}^N x_i ;   σ_μ = Δx / √N           (13)

which are the formulae for a simple (unweighted) mean and its error estimate.

Example 3: Error estimate of simple sums of three quantities x_1, x_2 and x_3:

    u_1 = x_1 + x_2                                     (14)
    u_2 = x_2 + x_3                                     (15)

Here the transformation is from 3D to 2D, so that n = 3 and m = 2.
The transformation matrix is

    J = ( 1  1  0 )
        ( 0  1  1 )

so that the covariance matrix of u = (u_1, u_2) is

          ( 1  1  0 ) ( σ_11  σ_12  σ_13 ) ( 1  0 )
    V_u = ( 0  1  1 ) ( σ_12  σ_22  σ_23 ) ( 1  1 )    (16)
                      ( σ_13  σ_23  σ_33 ) ( 0  1 )

where the middle matrix is the covariance matrix V_x of the variables x = (x_1, x_2, x_3).

2.2 Error propagation in case of non-linear transformation

We define again a transformation of a set of measurements x:

    x′ = x′(x)
    x  = (x_1, …, x_n)
    x′ = (x′_1, …, x′_m)

where x′ is under-constrained, i.e. m ≤ n. Here x is again an array of measurements with known covariance matrix V_x, a symmetric n × n matrix. The problem is now how to calculate the covariance matrix of the quantities x′, i.e. the covariance matrix V_x′ of the m quantities x′_1, …, x′_m.

We expand x′ in a Taylor series around the expectation value x = ⟨x⟩. Each component x′_i is expanded as

    x′_i = x′_i(⟨x⟩) + ∇x′_i(x = ⟨x⟩) · (x − ⟨x⟩) + ···    (17)

so that the expansion in matrix form reads

    x′ = x′(⟨x⟩) + J (x − ⟨x⟩) + ···                    (18)

where J is now the Jacobian derivative matrix of the transformation:

        ( ∂x′_1/∂x_1  ···  ∂x′_1/∂x_n )
    J = (      ⋮       ⋱        ⋮     )                 (19)
        ( ∂x′_m/∂x_1  ···  ∂x′_m/∂x_n )

computed at x = ⟨x⟩. Neglecting higher order terms we have the expansion

    x′ = x′(⟨x⟩) + J x − J⟨x⟩.                          (20)

The first and the third terms are constant, because they are calculated at the fixed point x = ⟨x⟩, so that their covariances vanish and we have

    cov(x′) = cov(J x) = J cov(x) Jᵀ.

Using the V notation for the covariance matrix, the error propagation formula in the case of a non-linear transformation reads

    V_x′ = J V_x Jᵀ                                     (21)

which is the same formula as in the case of a linear transformation, except that the matrix J is now the Jacobian derivative matrix of the non-linear transformation computed at x = ⟨x⟩.

Examples

Example 1: Error estimate of a product u = x_1 x_2. Given the covariance matrix of (x_1, x_2), what is the error estimate Δu?

The Jacobian is

    J = ( ∂u/∂x_1  ∂u/∂x_2 ) = ( x_2  x_1 )

so that

    (Δu)² ≡ σ_u² = ( x_2  x_1 ) ( σ_1²  σ_12 ) ( x_2 )
                                ( σ_12  σ_2² ) ( x_1 )

                 = x_2² σ_1² + 2 x_1 x_2 σ_12 + x_1² σ_2².
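The linear propagation law (7) is easy to verify numerically. The following sketch is not part of the original notes; the standard deviations are invented for illustration. It reproduces Example 1 (sum of uncorrelated measurements) and Example 2 (inverse-variance weighted mean) by building the row matrix J explicitly and computing J V Jᵀ:

```python
# Numerical check of the linear error propagation law V' = J V J^T.
# The measurement errors below are hypothetical.
import numpy as np

def propagate(J, V):
    """Propagate covariance V through a linear map J: returns J V J^T."""
    J = np.atleast_2d(J)
    return J @ V @ J.T

# Three uncorrelated measurements with standard deviations 0.1, 0.2, 0.3
sigma = np.array([0.1, 0.2, 0.3])
V = np.diag(sigma**2)

# Example 1: sum s = x1 + x2 + x3, so J = (1 1 1);
# for diagonal V this gives sigma_1^2 + sigma_2^2 + sigma_3^2, eq. (9)
J_sum = np.ones((1, 3))
var_s = propagate(J_sum, V)[0, 0]

# Example 2: weighted mean with inverse-variance weights w_i = 1/sigma_i^2;
# J is the normalized weight row, and eq. (12) gives 1/sum(1/sigma_i^2)
w = 1.0 / sigma**2
J_mean = (w / w.sum()).reshape(1, 3)
var_mu = propagate(J_mean, V)[0, 0]

print(var_s, var_mu)
```

The same `propagate` call with the 2 × 3 matrix J of Example 3 would produce the 2 × 2 matrix V_u of equation (16) directly.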
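For the non-linear case, formula (21) can be cross-checked against a direct Monte Carlo estimate of the spread. The sketch below is again not from the notes; the central values and covariance matrix of (x_1, x_2) are invented. It evaluates the Jacobian of u = x_1 x_2 at the central values, applies J V Jᵀ, and compares with the sample variance of the product over many Gaussian samples (the two agree up to the higher-order terms neglected in the Taylor expansion):

```python
# Non-linear error propagation for u = x1*x2, cross-checked by Monte Carlo.
# Central values and covariance below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

m = np.array([2.0, 3.0])              # central values <x1>, <x2>
V = np.array([[0.01,  0.003],         # covariance matrix of (x1, x2)
              [0.003, 0.04 ]])

# Jacobian of u = x1*x2 evaluated at the central values: J = (x2, x1)
J = np.array([[m[1], m[0]]])

# Linearized variance: x2^2 s1^2 + 2 x1 x2 s12 + x1^2 s2^2
var_u = (J @ V @ J.T)[0, 0]

# Monte Carlo cross-check: sample (x1, x2) and take the variance of x1*x2
x = rng.multivariate_normal(m, V, size=200_000)
var_mc = np.var(x[:, 0] * x[:, 1])

print(var_u, var_mc)
```

Because the relative errors here are small, the neglected higher-order terms are tiny and the Monte Carlo variance lands very close to the linearized value; with large relative errors the two would visibly differ.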