
Chapter 3 Optimal Interpolation

Optimal Interpolation, or OI, is a commonly used and fairly simple but powerful method of data assimilation. Most weather centers around the world used OI for operational numerical weather forecasts throughout the 1970s and 80s. In fact, Canada was the first to implement OI for operational weather forecasting, in the 1970s, with the others following suit. Only recently, in the last decade, has there been a shift away from OI and toward variational methods. OI is also a good place to start when studying data assimilation since only the spatial dimensions are used; later on we will introduce the time dimension as well. OI is so named because it is a minimum variance estimator, as we shall see. However, this is true only in theory. In practice it will never be optimal, so it is often referred to as "Statistical Interpolation" or SI. This chapter is primarily based on Daley (1991), chapter 4. For more details, the interested reader is referred to Daley (1991), chapters 3, 4 and 5.

3.1 A two city example

Let us return to our simple example in which we estimate temperature or ozone levels at Toronto and Montreal, but now based on observations and background estimates at both places. We also assume that the observation and background errors are unbiased and that we have access to their variances. The instrument type is assumed to be the same at both locations, with observation error variance $(\sigma^r)^2$, and the correlation of observation error between Toronto and Montreal is 0. The background error variance is $(\sigma_T^b)^2$ at Toronto and $(\sigma_M^b)^2$ at Montreal, and the correlation of background error between Toronto and Montreal is given by $\rho$.

Recall that we introduced an analysis equation:
\[
\mathbf{x}^a = \mathbf{x}^b + \mathbf{W}(\mathbf{x}^o - \mathbf{x}^b), \tag{3.1}
\]
where
\[
\mathbf{x}^b = (x_T^b, x_M^b)^T, \qquad
\mathbf{x}^o = (x_T^o, x_M^o)^T, \qquad
\mathbf{x}^a = (x_T^a, x_M^a)^T,
\]
the subscripts $T$ and $M$ refer to Toronto and Montreal, and the superscripts $b$, $a$ and $o$ refer to the background, analysis and observations, respectively. The weight matrix is 2 x 2 with components
\[
\mathbf{W} = \begin{bmatrix} w_{TT} & w_{TM} \\ w_{MT} & w_{MM} \end{bmatrix}. \tag{3.2}
\]

What we would like to do is determine the weight matrix using the knowledge we possess. To do this, let's follow the simple scalar example of section 1.5. First, assume the existence of a truth, although we don't need to know what it is. By doing so, we can rewrite the analysis equation in terms of errors. With the errors defined as
\[
\mathbf{e}^a = \mathbf{x}^a - \mathbf{x}^t, \qquad
\mathbf{e}^b = \mathbf{x}^b - \mathbf{x}^t, \qquad
\mathbf{e}^r = \mathbf{x}^o - \mathbf{x}^t,
\]
and the definitions
\[
\mathbf{P}^a = \langle \mathbf{e}^a (\mathbf{e}^a)^T \rangle, \qquad
\mathbf{P}^b = \langle \mathbf{e}^b (\mathbf{e}^b)^T \rangle, \qquad
\mathbf{R} = \langle \mathbf{e}^r (\mathbf{e}^r)^T \rangle, \tag{3.3}
\]
the analysis equation becomes
\[
\mathbf{e}^a = \mathbf{e}^b + \mathbf{W}(\mathbf{e}^r - \mathbf{e}^b). \tag{3.4}
\]

Now recall the assumption of unbiased errors. Because of this assumption we can easily see (by taking the expectation of (3.4)) that the analysis error will be unbiased also. This assumption is not critical, however, because if biases exist we can define new analysis variables that subtract the biases, as in problem 1.3.

To solve for the weights, let's use the information we have, namely the variances. Since the variances are the diagonals of the covariance matrix, we shall form the latter by multiplying (3.4) on the right by its own transpose and then applying the expectation operator. The result is
\[
\mathbf{P}^a = \mathbf{P}^b + \mathbf{W}(\mathbf{R} + \mathbf{P}^b)\mathbf{W}^T - \mathbf{W}\mathbf{P}^b - \mathbf{P}^b\mathbf{W}^T \tag{3.5}
\]
if we assume no correlation between background and observation errors. If the background comes from a model forecast, then it is a completely independent information source from the measurement, so this assumption is reasonable, to first order. In reality, both the forecast error and the observation error may be functions of the true atmospheric state and are therefore not independent. However, the error introduced by this assumption is much smaller than that of some approximations used in practice, so it is safe to make.
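Before solving for the optimal weights, it is worth convincing yourself that (3.5) holds for any weight matrix, not just the optimal one. The following Python sketch (not part of the original notes; the variances, correlation and weight values are illustrative assumptions) checks the formula by Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative values for the two-city example (Toronto, Montreal).
sigma_bT, sigma_bM = 1.5, 2.0   # background error standard deviations
rho = 0.6                       # background error correlation
sigma_r = 1.0                   # common observation error standard deviation

Pb = np.array([[sigma_bT**2,           rho*sigma_bT*sigma_bM],
               [rho*sigma_bT*sigma_bM, sigma_bM**2]])
R = sigma_r**2 * np.eye(2)

# Any weight matrix (not necessarily the optimal one) must satisfy (3.5).
W = np.array([[0.4, 0.1],
              [0.2, 0.5]])

# Draw zero-mean background and observation errors with covariances Pb and R.
n = 200_000
eb = rng.multivariate_normal(np.zeros(2), Pb, size=n)
er = rng.multivariate_normal(np.zeros(2), R, size=n)

# Analysis error from (3.4): e^a = e^b + W (e^r - e^b), applied row-wise.
ea = eb + (er - eb) @ W.T

Pa_sample = np.cov(ea.T)                                  # Monte Carlo estimate
Pa_theory = Pb + W @ (R + Pb) @ W.T - W @ Pb - Pb @ W.T   # equation (3.5)

print(np.round(Pa_sample, 3))   # the two should agree to sampling error
print(np.round(Pa_theory, 3))
```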
Now let's take the derivative of the trace of (3.5) with respect to the weight matrix and set it to zero:
\[
0 = 2\mathbf{W}(\mathbf{R} + \mathbf{P}^b) - 2\mathbf{P}^b. \tag{3.6}
\]
At this point it should be clear that it is necessary to become familiar with matrix algebra for this course. While the problem can be solved by writing out the components of the vectors, this is onerous. It is easier to use the rules found in Todling (1999), ch. 4, problem 4:
\[
\frac{d\,[\mathrm{Tr}(\mathbf{A}\mathbf{B})]}{d\mathbf{A}} = \mathbf{B}^T, \qquad
\frac{d\,[\mathrm{Tr}(\mathbf{B}\mathbf{A}^T)]}{d\mathbf{A}} = \mathbf{B}, \qquad
\frac{d\,[\mathrm{Tr}(\mathbf{A}\mathbf{C}\mathbf{A}^T)]}{d\mathbf{A}} = 2\mathbf{A}\mathbf{C}, \tag{3.7}
\]
where the product $\mathbf{A}\mathbf{B}$ is square and $\mathbf{C}$ is symmetric. These rules are easy to confirm by writing out the terms of each side. As an example, the first rule is derived in Appendix B. Try to derive the other rules in the same manner. Now we can rearrange (3.6) to solve for $\mathbf{W}$:
\[
\mathbf{W} = \mathbf{P}^b(\mathbf{R} + \mathbf{P}^b)^{-1}. \tag{3.8}
\]
With this weight we can obtain our minimum variance analysis. We can also substitute this expression into (3.5) to obtain an error estimate of our analysis:
\[
\begin{aligned}
\mathbf{P}^a &= \mathbf{P}^b(\mathbf{I} - \mathbf{W}^T) = (\mathbf{I} - \mathbf{W})\mathbf{P}^b \\
&= [\mathbf{I} - \mathbf{P}^b(\mathbf{R} + \mathbf{P}^b)^{-1}]\mathbf{P}^b \\
&= [(\mathbf{R} + \mathbf{P}^b)(\mathbf{R} + \mathbf{P}^b)^{-1} - \mathbf{P}^b(\mathbf{R} + \mathbf{P}^b)^{-1}]\mathbf{P}^b \\
&= \mathbf{R}(\mathbf{R} + \mathbf{P}^b)^{-1}\mathbf{P}^b. \tag{3.9}
\end{aligned}
\]
This can be inverted to get
\[
(\mathbf{P}^a)^{-1} = (\mathbf{P}^b)^{-1}(\mathbf{R} + \mathbf{P}^b)\mathbf{R}^{-1} = [(\mathbf{P}^b)^{-1}\mathbf{R} + \mathbf{I}]\mathbf{R}^{-1} = (\mathbf{P}^b)^{-1} + \mathbf{R}^{-1}.
\]
Thus we have solved our problem. The analysis equation is (3.1), with weights given by (3.8) and analysis error by (3.9). Note the similarity between (3.8) and (1.6), and between (3.9) and (1.7). Of course the vector case is more general and includes the scalar case when the dimensions of $\mathbf{x}$ and $\mathbf{x}^o$ equal 1.

We can do a similar analysis of the impact of observations on the analysis, as in section 1.5. If the observations are perfect or the background estimate is very poor, then $(\sigma^r)^2 \ll (\sigma_{T,M}^b)^2$, so $\mathbf{R} \ll \mathbf{P}^b$ and $\mathbf{W} = \mathbf{I}$. (Note that in this simple example both $\mathbf{P}^b$ and $\mathbf{R}$ are 2 x 2 matrices, so we can define $\mathbf{P}^b > \mathbf{R}$ as meaning that $\mathbf{P}^b - \mathbf{R}$ is positive definite.) The observations are given full weight, and the analyses at Toronto and Montreal equal the observations at Toronto and Montreal, respectively. In contrast, if the observations are of very poor quality or are not available, then $\mathbf{P}^b \ll \mathbf{R}$ and $(\sigma^r)^2 \to \infty$, so $\mathbf{W} = 0$ and $\mathbf{x}^a = \mathbf{x}^b$. In the absence of observations, the analysis reverts to the background estimates. Finally, if $\mathbf{P}^b = \mathbf{R}$, then $\mathbf{W} = 0.5\,\mathbf{I}$ and observations and background are combined with equal weights at both locations.
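These limiting cases are easy to verify numerically. Here is a minimal Python sketch (mine, not from the notes; the error statistics are illustrative assumptions) that computes the optimal weights (3.8), the analysis error covariance (3.9), and checks the information-form identity $(\mathbf{P}^a)^{-1} = (\mathbf{P}^b)^{-1} + \mathbf{R}^{-1}$:

```python
import numpy as np

def oi_weights(Pb, R):
    """Optimal weights from equation (3.8): W = Pb (R + Pb)^{-1}."""
    return Pb @ np.linalg.inv(R + Pb)

# Illustrative error statistics (assumed values, not from the notes).
sigma_bT, sigma_bM, rho, sigma_r = 1.5, 2.0, 0.6, 1.0
Pb = np.array([[sigma_bT**2,           rho*sigma_bT*sigma_bM],
               [rho*sigma_bT*sigma_bM, sigma_bM**2]])
R = sigma_r**2 * np.eye(2)

W = oi_weights(Pb, R)
Pa = (np.eye(2) - W) @ Pb     # equation (3.9)

# Information form: (Pa)^{-1} = (Pb)^{-1} + R^{-1}
assert np.allclose(np.linalg.inv(Pa), np.linalg.inv(Pb) + np.linalg.inv(R))

# Limiting cases discussed above:
print(oi_weights(Pb, 1e-9 * np.eye(2)))   # R << Pb  ->  W ~ I (perfect obs)
print(oi_weights(Pb, 1e+9 * np.eye(2)))   # R >> Pb  ->  W ~ 0 (useless obs)
print(oi_weights(R, R))                   # Pb = R   ->  W = 0.5 I
```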
We have discussed the solution to the data assimilation problem for some very special cases. What about the more general case? To consider it, let us first write everything back in component form. From (3.1), we have
\[
x_T^a = x_T^b + w_{TT}(x_T^o - x_T^b) + w_{TM}(x_M^o - x_M^b), \tag{3.10}
\]
\[
x_M^a = x_M^b + w_{MT}(x_T^o - x_T^b) + w_{MM}(x_M^o - x_M^b). \tag{3.11}
\]
The terms in round brackets are components of the innovation vector or observation increment. The innovations are simply the differences between the observation and background values, evaluated at the observation locations. Now we see that the observation at Toronto affects the analysis at both Toronto and Montreal. What is the weight given to the observation increment at Toronto? To see this, let's write out the components of the weight matrix according to (3.8). First note that
\[
\mathbf{P}^b = \begin{bmatrix} (\sigma_T^b)^2 & \rho\,\sigma_T^b \sigma_M^b \\ \rho\,\sigma_M^b \sigma_T^b & (\sigma_M^b)^2 \end{bmatrix}, \qquad
\mathbf{R} = \begin{bmatrix} (\sigma^r)^2 & 0 \\ 0 & (\sigma^r)^2 \end{bmatrix}, \tag{3.12}
\]
so that
\[
\mathbf{R} + \mathbf{P}^b = \begin{bmatrix} (\sigma^r)^2 + (\sigma_T^b)^2 & \rho\,\sigma_T^b \sigma_M^b \\ \rho\,\sigma_M^b \sigma_T^b & (\sigma^r)^2 + (\sigma_M^b)^2 \end{bmatrix} \tag{3.13}
\]
and
\[
(\mathbf{R} + \mathbf{P}^b)^{-1} = \frac{1}{D} \begin{bmatrix} (\sigma^r)^2 + (\sigma_M^b)^2 & -\rho\,\sigma_T^b \sigma_M^b \\ -\rho\,\sigma_M^b \sigma_T^b & (\sigma^r)^2 + (\sigma_T^b)^2 \end{bmatrix}, \tag{3.14}
\]
with
\[
D = |\mathbf{R} + \mathbf{P}^b| = \left[(\sigma^r)^2 + (\sigma_T^b)^2\right]\left[(\sigma^r)^2 + (\sigma_M^b)^2\right] - \rho^2 (\sigma_T^b)^2 (\sigma_M^b)^2. \tag{3.15}
\]
Combining (3.12), (3.14) and (3.15) according to (3.8) yields
\[
\mathbf{W} = \frac{1}{(1 + \alpha_M)(1 + \alpha_T) - \rho^2} \begin{bmatrix} (1 + \alpha_M) - \rho^2 & \rho\sqrt{\alpha_M}\sqrt{\alpha_T} \\ \rho\sqrt{\alpha_M}\sqrt{\alpha_T} & (1 + \alpha_T) - \rho^2 \end{bmatrix} \tag{3.16}
\]
with $\alpha_M = (\sigma^r)^2/(\sigma_M^b)^2$ and $\alpha_T = (\sigma^r)^2/(\sigma_T^b)^2$. If the correlation between the background (forecast) errors at Toronto and Montreal is zero, then the weight matrix simplifies to
\[
\mathbf{W} = \begin{bmatrix} (1 + \alpha_T)^{-1} & 0 \\ 0 & (1 + \alpha_M)^{-1} \end{bmatrix}. \tag{3.17}
\]
Thus only the observation at Toronto impacts the analysis at Toronto, and similarly for Montreal. As in the scalar case, the observation and background are combined according to their relative accuracies. As the correlation increases from zero, the off-diagonal terms in (3.16) increase in magnitude; thus the weight of an observation at Montreal on the analysis at Toronto increases, and vice versa. The maximum weight that can be given to an observation at Montreal for an analysis at Toronto occurs when $\rho = 1$:
\[
w_{TM}^{\max} = \frac{\sqrt{\alpha_M}\sqrt{\alpha_T}}{(1 + \alpha_M)(1 + \alpha_T) - 1}.
\]
When $\rho = 1$, the observation at Toronto receives a weight of
\[
w_{TT} = \frac{\alpha_M}{(1 + \alpha_M)(1 + \alpha_T) - 1},
\]
and $w_{TM}^{\max}/w_{TT} = \sqrt{\alpha_T}/\sqrt{\alpha_M}$.
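To see how the background error correlation spreads information between the two cities, the short sketch below (again mine, with assumed illustrative variances) evaluates the off-diagonal weight $w_{TM}$ of (3.16) for several values of $\rho$ and cross-checks it against a direct evaluation of $\mathbf{W} = \mathbf{P}^b(\mathbf{R} + \mathbf{P}^b)^{-1}$:

```python
import numpy as np

# Illustrative error statistics (assumed): common obs error, two backgrounds.
sigma_r, sigma_bT, sigma_bM = 1.0, 1.5, 2.0
alpha_T = sigma_r**2 / sigma_bT**2
alpha_M = sigma_r**2 / sigma_bM**2

def w_TM(rho):
    """Weight of Montreal's observation increment in Toronto's analysis, (3.16)."""
    denom = (1 + alpha_M) * (1 + alpha_T) - rho**2
    return rho * np.sqrt(alpha_M * alpha_T) / denom

for rho in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Cross-check the closed form against W = Pb (R + Pb)^{-1}.
    Pb = np.array([[sigma_bT**2,           rho*sigma_bT*sigma_bM],
                   [rho*sigma_bT*sigma_bM, sigma_bM**2]])
    R = sigma_r**2 * np.eye(2)
    W = Pb @ np.linalg.inv(R + Pb)
    print(f"rho={rho:4.2f}  w_TM={w_TM(rho):.4f}  (from matrix: {W[0, 1]:.4f})")

# At rho = 1 the off-diagonal weight reaches its maximum value:
print(np.sqrt(alpha_M * alpha_T) / ((1 + alpha_M) * (1 + alpha_T) - 1))
```

The monotonic growth of $w_{TM}$ with $\rho$ shows how a correlated background error lets an observation at one location improve the analysis at the other.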