
Best Linear Unbiased Estimation and Kriging in Stochastic Analysis and Inverse Modeling
Presented by Gordon A. Fenton

Introduction

We often want some way of estimating what is happening at unobserved locations given nearby observations. Common ways of doing so are as follows:

1. Linear regression: we fit a curve to the data and use that curve to interpolate/extrapolate. It uses only geometry and the data values to determine the curve (by minimizing the sum of squared errors).
2. Best Linear Unbiased Estimation (BLUE): similar to linear regression, except that correlation, rather than geometry, is used to obtain the best fit.
3. Kriging: similar to BLUE, except that the mean is estimated rather than assumed known.

Best Linear Unbiased Estimation (BLUE)

We express our estimate at the unobserved location, X_{n+1}, as a linear combination of our observations X_1, X_2, ..., X_n:

$$\hat{X}_{n+1} = \mu_{n+1} + \sum_{k=1}^{n} \beta_k \left( X_k - \mu_k \right)$$

Notice that position does not enter into this estimate at all: we will determine the unknown coefficients, β_k, using 1st and 2nd moment information only (i.e. means and covariances).

To determine the "best" estimator, we look at the estimator error, defined as the difference between the actual value and our estimate:

$$E = X_{n+1} - \hat{X}_{n+1} = X_{n+1} - \mu_{n+1} - \sum_{k=1}^{n} \beta_k \left( X_k - \mu_k \right)$$

Aside: if the covariances are known, then they include information both about the distances between observation points and about the effects of differing geological units. Linear regression considers only the distance between points. Thus, regression cannot properly account for two observations which are close together but lie in largely independent layers. The covariance between these two points will be small, so BLUE will properly reflect their effective influence on one another.

To make the estimator error as small as possible, its mean should be zero and its variance minimal. The mean is automatically zero by the selected form of the estimator:

$$\mathrm{E}\!\left[ X_{n+1} - \hat{X}_{n+1} \right] = \mathrm{E}\!\left[ X_{n+1} - \mu_{n+1} - \sum_{k=1}^{n} \beta_k \left( X_k - \mu_k \right) \right] = \mu_{n+1} - \mu_{n+1} - \sum_{k=1}^{n} \beta_k \,\mathrm{E}\!\left[ X_k - \mu_k \right] = 0$$

In other words, this estimator is unbiased.

Now we want to minimize the variance of the estimator error:

$$\mathrm{Var}\!\left[ X_{n+1} - \hat{X}_{n+1} \right] = \mathrm{E}\!\left[ \left( X_{n+1} - \hat{X}_{n+1} \right)^2 \right] = \mathrm{E}\!\left[ X_{n+1}^2 \right] - 2\,\mathrm{E}\!\left[ X_{n+1} \hat{X}_{n+1} \right] + \mathrm{E}\!\left[ \hat{X}_{n+1}^2 \right]$$

To simplify the algebra, we will assume that μ = 0 everywhere (this is no loss in generality). In this case, our estimator simplifies to

$$\hat{X}_{n+1} = \sum_{k=1}^{n} \beta_k X_k$$

and the estimator error variance becomes

$$\mathrm{Var}\!\left[ X_{n+1} - \hat{X}_{n+1} \right] = \mathrm{Var}\!\left[ X_{n+1} \right] - 2 \sum_{k=1}^{n} \beta_k \,\mathrm{Cov}\!\left[ X_{n+1}, X_k \right] + \sum_{k=1}^{n} \sum_{j=1}^{n} \beta_k \beta_j \,\mathrm{Cov}\!\left[ X_k, X_j \right]$$

We minimize this with respect to the unknown coefficients β_1, β_2, ..., β_n by setting the derivatives to zero:

$$\frac{\partial}{\partial \beta_l} \mathrm{Var}\!\left[ X_{n+1} - \hat{X}_{n+1} \right] = 0 \quad \text{for } l = 1, 2, \ldots, n$$

Term by term,

$$\frac{\partial}{\partial \beta_l} \mathrm{Var}\!\left[ X_{n+1} \right] = 0$$

$$\frac{\partial}{\partial \beta_l} \sum_{k=1}^{n} \beta_k \,\mathrm{Cov}\!\left[ X_{n+1}, X_k \right] = \mathrm{Cov}\!\left[ X_{n+1}, X_l \right] = b_l$$

$$\frac{\partial}{\partial \beta_l} \sum_{j=1}^{n} \sum_{k=1}^{n} \beta_j \beta_k \,\mathrm{Cov}\!\left[ X_j, X_k \right] = 2 \sum_{k=1}^{n} \beta_k \,\mathrm{Cov}\!\left[ X_l, X_k \right] = 2 \sum_{k=1}^{n} \beta_k C_{lk}$$

so that

$$\frac{\partial}{\partial \beta_l} \mathrm{Var}\!\left[ X_{n+1} - \hat{X}_{n+1} \right] = -2 b_l + 2 \sum_{k=1}^{n} \beta_k C_{lk} = 0$$

In other words, β is the solution to

$$C \beta = b \quad \rightarrow \quad \beta = C^{-1} b$$
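As a concrete illustration of the Cβ = b system above, the following is a minimal Python sketch (not part of the original slides). It assumes a stationary covariance function of separation distance only; the function names blue_weights and blue_estimate, and the use of NumPy, are illustrative choices rather than anything prescribed by the notes.

```python
import numpy as np

def blue_weights(cov, s_obs, s_pred):
    """Solve C beta = b for the BLUE weights.

    cov    : covariance function cov(tau) of the separation distance tau
    s_obs  : 1-D array of observation locations
    s_pred : location of the point to be estimated
    """
    s_obs = np.asarray(s_obs, dtype=float)
    # C[k, j] = Cov[X_k, X_j]: covariances between observation points only
    C = cov(np.abs(s_obs[:, None] - s_obs[None, :]))
    # b[k] = Cov[X_{n+1}, X_k]: covariances between prediction and observation points
    b = cov(np.abs(s_pred - s_obs))
    return np.linalg.solve(C, b)

def blue_estimate(x_obs, mu_obs, mu_pred, beta):
    """x_hat_{n+1} = mu_{n+1} + sum_k beta_k (x_k - mu_k)."""
    return mu_pred + beta @ (np.asarray(x_obs) - np.asarray(mu_obs))
```

Note that only the n-by-n matrix C of observation covariances needs to be factorized; the same solve can be reused with a new b vector for every additional prediction point, which is exactly the point made in the example below.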
BLUE: Example

Suppose that ground penetrating radar suggests that the mean depth to bedrock, μ, in metres, shows a slow increase with distance, s, in metres, along the proposed line of a roadway; the mean trend used in the computations below is μ(s) = 20 + 0.3s. However, because of various impediments, we are unable to measure beyond s = 20 m. The best estimate of the depth to bedrock at s = 30 m is desired.

Suppose that the following covariance function has been established regarding the variation of bedrock depth with distance:

$$C(\tau) = \sigma_X^2 \exp\!\left( -\frac{|\tau|}{40} \right)$$

where σ_X = 5 m and τ is the separation distance between points. We want to estimate the bedrock depth, X_3, at s = 30 m, given the following observations of X_1 and X_2 at s = 10 m and 20 m, respectively:

at s = 10 m, x_1 = 21.3 m
at s = 20 m, x_2 = 23.2 m

Solution: we start by finding the components of the covariance vector and matrix:

$$b = \begin{Bmatrix} \mathrm{Cov}[X_1, X_3] \\ \mathrm{Cov}[X_2, X_3] \end{Bmatrix} = \sigma_X^2 \begin{Bmatrix} e^{-20/40} \\ e^{-10/40} \end{Bmatrix}$$

$$C = \begin{bmatrix} \mathrm{Cov}[X_1, X_1] & \mathrm{Cov}[X_1, X_2] \\ \mathrm{Cov}[X_2, X_1] & \mathrm{Cov}[X_2, X_2] \end{bmatrix} = \sigma_X^2 \begin{bmatrix} 1 & e^{-10/40} \\ e^{-10/40} & 1 \end{bmatrix}$$

Note that b contains the covariances between the observation and prediction points, while C contains the covariances between observation points only. This means that it is simple to compute predictions at other points (C only needs to be inverted once).

Now we want to solve Cβ = b for the unknown coefficients β:

$$\sigma_X^2 \begin{bmatrix} 1 & e^{-10/40} \\ e^{-10/40} & 1 \end{bmatrix} \begin{Bmatrix} \beta_1 \\ \beta_2 \end{Bmatrix} = \sigma_X^2 \begin{Bmatrix} e^{-20/40} \\ e^{-10/40} \end{Bmatrix}$$

Notice that the variance cancels out; this is typical of stationary processes. Solving gives us

$$\begin{Bmatrix} \beta_1 \\ \beta_2 \end{Bmatrix} = \begin{bmatrix} 1 & e^{-10/40} \\ e^{-10/40} & 1 \end{bmatrix}^{-1} \begin{Bmatrix} e^{-20/40} \\ e^{-10/40} \end{Bmatrix} = \begin{Bmatrix} 0 \\ e^{-10/40} \end{Bmatrix}$$

Thus, β_1 = 0 and β_2 = e^{-10/40}. Note the Markov property: the "future" (X_3) depends only on the most recent past (X_2) and not on the more distant past (X_1).

The optimal linear estimate of X_3 is thus

$$\hat{x}_3 = \mu(30) + e^{-10/40} \left( x_2 - \mu(20) \right) = 20 + 0.3(30) + e^{-1/4} \left( 23.2 - 20 - 0.3(20) \right) = 29.0 - 2.8\, e^{-1/4} = 26.8 \text{ m}$$
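The numbers in this example can be reproduced with a short script. This is a minimal sketch (not part of the original slides) that assumes the exponential covariance C(τ) = 5² exp(−|τ|/40) and the mean trend μ(s) = 20 + 0.3s stated above; all variable names are illustrative.

```python
import numpy as np

sigma_X = 5.0
cov = lambda tau: sigma_X**2 * np.exp(-np.abs(tau) / 40.0)   # C(tau)
mu = lambda s: 20.0 + 0.3 * s                                # mean trend, in metres

s_obs = np.array([10.0, 20.0])    # observation locations (m)
x_obs = np.array([21.3, 23.2])    # observed bedrock depths (m)
s_pred = 30.0                     # prediction location (m)

# Covariance matrix between observations, and covariance vector to the prediction point
C = cov(np.abs(s_obs[:, None] - s_obs[None, :]))
b = cov(np.abs(s_pred - s_obs))

beta = np.linalg.solve(C, b)                      # [0, e^{-1/4}] = [0, 0.7788]
x_hat = mu(s_pred) + beta @ (x_obs - mu(s_obs))   # 29.0 - 2.8 e^{-1/4} = 26.8 m

print(beta, x_hat)
```

The zero first weight reproduces the Markov property noted above: the nearer observation at s = 20 m screens out the more distant one at s = 10 m.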
BLUE: Estimator Error

Once the best linear unbiased estimate has been determined, it is of interest to ask how confident we are in our estimate. If we reconsider our zero-mean process, then our estimator is given by

$$\hat{X}_{n+1} = \sum_{k=1}^{n} \beta_k X_k$$

which has variance

$$\sigma_{\hat{X}}^2 = \mathrm{Var}\!\left[ \hat{X}_{n+1} \right] = \sum_{k=1}^{n} \sum_{j=1}^{n} \beta_k \beta_j \,\mathrm{Cov}\!\left[ X_k, X_j \right] = \beta^T C \beta = \beta^T b$$

The estimator variance is often more of academic interest. We are typically more interested in asking questions such as: what is the probability that the true value of X_{n+1} exceeds our estimate by a certain amount? For example, we may want to compute

$$\mathrm{P}\!\left[ X_{n+1} > \hat{X}_{n+1} + b \right] = \mathrm{P}\!\left[ X_{n+1} - \hat{X}_{n+1} > b \right]$$

where b is some constant. To determine this, we need to know the distribution of the estimator error (X_{n+1} − X̂_{n+1}). If X is normally distributed, then our estimator error is also normally distributed with mean zero, since the estimator is unbiased:

$$\mu_E = \mathrm{E}\!\left[ X_{n+1} - \hat{X}_{n+1} \right] = 0$$

The variance of the estimator error is

$$\sigma_E^2 = \mathrm{Var}\!\left[ X_{n+1} \right] - 2 \sum_{k=1}^{n} \beta_k \,\mathrm{Cov}\!\left[ X_{n+1}, X_k \right] + \sum_{k=1}^{n} \sum_{j=1}^{n} \beta_k \beta_j \,\mathrm{Cov}\!\left[ X_k, X_j \right]$$

$$= \sigma_X^2 + \beta^T C \beta - 2 \beta^T b = \sigma_X^2 + \beta^T C \beta - \beta^T b - \beta^T b = \sigma_X^2 + \beta^T \left( C\beta - b \right) - \beta^T b = \sigma_X^2 - \beta^T b$$

where we made use of the fact that β is the solution to Cβ = b, or, equivalently, Cβ − b = 0.

BLUE: Conditional Distribution

The estimator X̂_{n+1} is also the conditional mean of X_{n+1} given the observations. That is,

$$\mathrm{E}\!\left[ X_{n+1} \mid X_1, X_2, \ldots, X_n \right] = \hat{X}_{n+1}$$

The conditional variance of X_{n+1} is just the variance of the estimator error:

$$\mathrm{Var}\!\left[ X_{n+1} \mid X_1, X_2, \ldots, X_n \right] = \sigma_E^2$$

In general, questions regarding the probability that X_{n+1} lies in some region should employ the conditional mean and variance of X_{n+1}, since these make use of all of the information.

BLUE: Example

Consider again the previous example.
1. Compute the variance of the estimator and of the estimator error.
2. Estimate the probability that X_3 exceeds X̂_3 by more than 4 m.

Solution: we had

$$C = \sigma_X^2 \begin{bmatrix} 1 & e^{-1/4} \\ e^{-1/4} & 1 \end{bmatrix}, \qquad b = \sigma_X^2 \begin{Bmatrix} e^{-2/4} \\ e^{-1/4} \end{Bmatrix}, \qquad \beta = \begin{Bmatrix} 0 \\ e^{-1/4} \end{Bmatrix}$$

so that

$$\sigma_{\hat{X}_3}^2 = \mathrm{Var}\!\left[ \hat{X}_3 \right] = \beta^T b = 5^2 \begin{Bmatrix} 0 & e^{-1/4} \end{Bmatrix} \begin{Bmatrix} e^{-2/4} \\ e^{-1/4} \end{Bmatrix} = 5^2 e^{-2/4}$$

which gives σ_X̂3 = 5 e^{-1/4} = 3.894 m.

The variance of the estimator error is then

$$\sigma_E^2 = \mathrm{Var}\!\left[ X_3 - \hat{X}_3 \right] = \sigma_X^2 - \beta^T b = \sigma_X^2 - \sigma_X^2 \begin{Bmatrix} 0 & e^{-1/4} \end{Bmatrix} \begin{Bmatrix} e^{-2/4} \\ e^{-1/4} \end{Bmatrix} = 5^2 \left( 1 - e^{-2/4} \right)$$

The standard deviation is thus

$$\sigma_E = 5 \sqrt{1 - e^{-2/4}} = 3.136 \text{ m}$$

This is less than the estimator variability (3.894 m) and significantly less than the variability of X itself (σ_X = 5 m). This is due to the restraining effect of correlation between points.

We now wish to compute the probability P[X_3 − X̂_3 > 4]. We first need to assume a distribution for (X_3 − X̂_3). Let us assume that X is normally distributed. Then, since the estimator X̂_3 is simply a sum of X's, it too must be normally distributed. This, in turn, implies that the quantity (X_3 − X̂_3) is normally distributed. Since X̂_3 is an unbiased estimate of X_3, μ_E = E[X_3 − X̂_3] = 0, and we have just computed σ_E = 3.136 m.
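The notes break off at this point. Under the normal assumption just stated, the remaining step is the standard-normal evaluation P[X_3 − X̂_3 > 4] = 1 − Φ(4/3.136) ≈ 0.10. The minimal sketch below (not part of the original slides) reproduces the estimator variance, the estimator error variance, and this exceedance probability; SciPy's norm.sf gives 1 − Φ(·), and the variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

sigma_X = 5.0
cov = lambda tau: sigma_X**2 * np.exp(-np.abs(tau) / 40.0)

s_obs = np.array([10.0, 20.0])
s_pred = 30.0

C = cov(np.abs(s_obs[:, None] - s_obs[None, :]))
b = cov(np.abs(s_pred - s_obs))
beta = np.linalg.solve(C, b)

var_est = beta @ b                  # sigma_Xhat^2 = beta^T b = 25 e^{-1/2}
var_err = sigma_X**2 - beta @ b     # sigma_E^2 = sigma_X^2 - beta^T b

print(np.sqrt(var_est))             # estimator std. dev., ~3.894 m
print(np.sqrt(var_err))             # estimator error std. dev., ~3.136 m

# P[X3 - X3_hat > 4], assuming a normal estimator error with mean zero
p_exceed = norm.sf(4.0, loc=0.0, scale=np.sqrt(var_err))
print(p_exceed)                     # ~0.10
```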