APPENDIX 1

Existence of random processes with given finite-dimensional distributions

Let $X = X(t,\omega)$ be a stochastic process, in the notation of Chapter 1, section 1, with state-space $S$ and parameter set $T$. Let $\varphi_{t_1,\ldots,t_n}$ denote the mapping from $\Omega$ into $S^n$ which sends $\omega$ into the $n$-tuple $(X(t_1,\omega),\ldots,X(t_n,\omega))$. Then the finite-dimensional distributions of the process $X$ are the probability measures defined by

$$P_{t_1,\ldots,t_n}(C) = P(\varphi_{t_1,\ldots,t_n}^{-1} C) \qquad (1)$$

where $C$ is any measurable set in $S^n$; this is the same as

(1) on page 2 and makes sense since $\varphi_{t_1,\ldots,t_n}$ must be an $\mathscr{F}$-measurable mapping. Such a measure $P_{t_1,\ldots,t_n}$ is defined for each finite, ordered subset of $T$.

The measures defined by (1) must satisfy certain consistency conditions. Let $\pi$ denote any permutation of the integers $1,2,\ldots,n$ and let $f_\pi$ be the one-to-one mapping of $S^n$ onto itself defined by

$$f_\pi(x_1,\ldots,x_n) = (x_{\pi 1},\ldots,x_{\pi n})$$

for any $n$-tuple $(x_1,\ldots,x_n) \in S^n$. It is clear that

$$\varphi_{t_{\pi 1},\ldots,t_{\pi n}}(\omega) = f_\pi \circ \varphi_{t_1,\ldots,t_n}(\omega),$$

and from this it follows that the distributions defined in

(1) must satisfy

$$P_{t_{\pi 1},\ldots,t_{\pi n}}(C) = P_{t_1,\ldots,t_n}(f_\pi^{-1} C). \qquad (2)$$

This is the first consistency condition.

Now define $\sigma_{n+m,n}$ to be the projection mapping from $S^{n+m}$ to $S^n$ which sends $(x_1,\ldots,x_n,\ldots,x_{n+m})$ into $(x_1,\ldots,x_n)$; then $\sigma_{n+m,n}^{-1} C$ consists of all those points in $S^{n+m}$ whose first $n$ coordinates determine a point of the

set $C$ in $S^n$. For any ordered set of $n+m$ elements of $T$ we then must have

$$\varphi_{t_1,\ldots,t_n}(\omega) = \sigma_{n+m,n} \circ \varphi_{t_1,\ldots,t_{n+m}}(\omega),$$

and so from (1) it follows that

$$P_{t_1,\ldots,t_{n+m}}(\sigma_{n+m,n}^{-1} C) = P_{t_1,\ldots,t_n}(C) \qquad (3)$$

for all measurable $C \subset S^n$ and all $n$ and $m > 0$. This is the second condition. The finite-dimensional distributions of any stochastic process on $T$ with state-space $S$ must automatically satisfy (2) and (3).

The problem now is to prove a converse. That is, suppose we are given any set $T$, any measurable space $(S,\mathscr{S})$, and a family of measures $\{P_{t_1,\ldots,t_n}\}$ on the measurable sets of the products $S^n$ which satisfy both conditions (2) and (3). Does there exist a random process $X = X(t,\omega)$ on some probability space which has parameter set $T$, state-space $S$, and whose finite-dimensional distributions are the given family? The answer, in this generality, is "no," but when some assumptions are made about $(S,\mathscr{S})$ a positive result can be proved. We will sketch the proof with $S = R^1$ and then discuss briefly how far it can be generalized.
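For a concrete illustration (invented here, not from the text), the finite-dimensional distributions of a hypothetical i.i.d. Bernoulli process can be checked against conditions (2) and (3) directly; for an i.i.d. process every $P_{t_1,\ldots,t_n}$ is the same product measure, so both conditions reduce to properties of that measure. The names `fdd` and `measure` below are ad hoc:

```python
import itertools
import math

Q = 0.3  # P(X_t = 1) for a hypothetical i.i.d. Bernoulli process

def fdd(n):
    """Finite-dimensional distribution P_{t1,...,tn}: for an i.i.d. process
    it is the same product measure for every ordered n-tuple of parameters."""
    return {x: math.prod(Q if v else 1 - Q for v in x)
            for x in itertools.product((0, 1), repeat=n)}

def measure(dist, C):
    return sum(dist[x] for x in C)

# Condition (2): P_{t_pi(1),...,t_pi(n)}(C) = P_{t1,...,tn}(f_pi^{-1} C);
# here both sides use the same product measure fdd(3).
pi = (2, 0, 1)                                   # a permutation of three indices
f = lambda x: tuple(x[i] for i in pi)
C = {(1, 0, 0), (1, 1, 0)}
f_inv_C = {x for x in itertools.product((0, 1), repeat=3) if f(x) in C}
assert abs(measure(fdd(3), C) - measure(fdd(3), f_inv_C)) < 1e-12

# Condition (3): P_{t1,...,t_{n+m}}(sigma_{n+m,n}^{-1} C) = P_{t1,...,tn}(C),
# with n = 3 and m = 2: append all possible tails of length 2 to the base C.
sigma_inv_C = {x + tail for x in C
               for tail in itertools.product((0, 1), repeat=2)}
assert abs(measure(fdd(5), sigma_inv_C) - measure(fdd(3), C)) < 1e-12
print("consistency conditions (2) and (3) hold")
```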

Theorem (Kolmogorov). Given any set $T$ and any family of measures $\{P_{t_1,\ldots,t_n}\}$ on the Euclidean spaces $R^n$ which satisfy (2) and (3), there exists a probability space $(\Omega,\mathscr{F},P)$ and a real random process $\{X_t : t \in T\}$ defined on it such that condition (1) holds; i.e., the measures $\{P_{t_1,\ldots,t_n}\}$ are the finite-dimensional probability distributions of $\{X_t\}$.

Sketch of Proof. We use a direct product to define our measurable space $(\Omega,\mathscr{F})$, although of course $P$ will not be a product measure except in the special case when the random variables are to be independent. Accordingly we put

$$\Omega = \prod_{t \in T} R_t \qquad (\text{each } R_t \text{ a copy of the real line } R); \qquad (4)$$

a typical element of $\Omega$ will be a function $\omega(\cdot): T \to R^1$.

The field $\mathscr{F}$ will be the $\sigma$-field generated by all sets of the form

$$E = \{\omega \in \Omega : (\omega(t_1),\ldots,\omega(t_n)) \in C\} = \varphi_{t_1,\ldots,t_n}^{-1}(C), \qquad (5)$$

where $C$ is some Borel set in $R^n$ and $(t_1,\ldots,t_n)$ is any finite ordered subset of $T$; these are called (Borel) cylinder sets. The random variables of the process we are trying to construct will be the coordinate functions defined by

$$X(t,\omega) = \omega(t), \quad t \in T, \qquad (6)$$

and so equation (1) must be used to define the measure $P$ on the class of cylinder sets (5).

There are two obstacles to be overcome. First, we must show that $P$ is unambiguously defined on the cylinder sets. That is, if two different subsets of $T$ and/or Borel sets $C$ can be used in (5) to define the same cylinder set $E$, it must be verified that the proposed finite-dimensional

distributions, used in (1), assign the same measure to $E$ regardless of which representation is chosen. It is precisely here that the consistency conditions (2) and (3) come in; we omit the details.

Second, the measure $P$ which is now defined for cylinder sets must be extended to a true countably-additive probability measure on the $\sigma$-field $\mathscr{F}$. According to the basic "extension theorem" which is fundamental in measure theory,

this can be done, and uniquely, provided that whenever $E_n$ is a decreasing sequence of cylinder sets with $\bigcap_n E_n = \emptyset$ we have

$$\lim_{n \to \infty} P(E_n) = 0. \qquad (7)$$

To establish this "continuity condition," suppose that (7) does not hold, so that $P(E_n) \geq d > 0$ for all $n$; we will show that then the intersection of the $E_n$ can't be empty.

Each cylinder set $E_n$ is defined by (5), using a certain ordered finite subset $T_n$ of $T$ and a "base" $C_n \subset R^n$. It is easy to see there is no loss of generality in assuming that the sets $T_n$ are increasing. It is also possible, though less trivial, to assume that each $C_n$ is compact. The idea here is that if one of them is not compact, then because the measure $P_{t_1,\ldots,t_n}(\cdot)$ must be regular on $R^n$, that set $C_n$ can be replaced by a compact subset $C_n'$ whose measure differs arbitrarily little from $P_{t_1,\ldots,t_n}(C_n) = P(E_n)$. The corresponding new cylinder sets $E_n'$ can be made to satisfy the same conditions as the original $E_n$, and in particular can be chosen so that $P(E_n') \geq d/2$ for all $n$.

Now it is possible to show that $\bigcap_n E_n \neq \emptyset$; clearly it is enough to show $\bigcap_n E_n' \neq \emptyset$. For each $n$, choose a point

$\omega_n \in E_n'$. By the definition of $E_1'$ we have $\varphi_{T_1}(\omega_1) \in C_1'$; since $\omega_n \in E_n' \subset E_1'$ we also have $\varphi_{T_1}(\omega_n) \in C_1'$. Since $C_1'$ is compact, there is a subsequence $\{\omega_{n'}\}$ such that $\lim_{n' \to \infty} \omega_{n'}(t)$ exists for each $t \in T_1$, and these limits are the coordinates of a point in $C_1'$. Next, by a similar argument it is possible to choose a sub-subsequence $\{\omega_{n''}\}$ such that the sequences $\{\omega_{n''}(t)\}$ have limits for each $t \in T_2$, and the limits define a point in $C_2'$. Continue this process for all $n$, and then use Cantor's "diagonal argument." The result is a subsequence $\{\omega_{n^*}\}$ such that

$$\omega(t) = \lim_{n^* \to \infty} \omega_{n^*}(t)$$

exists for each $t \in \bigcup T_n$; moreover, if $(t_1,t_2,\ldots,t_k) = T_n$, then $(\omega(t_1),\ldots,\omega(t_k)) \in C_n'$. Finally, define $\omega(t) = 0$ (or any other fixed value) when $t \notin \bigcup T_n$. The function $\omega$ now belongs to every set $E_n'$, and the continuity condition is proved. The extension theorem then gives us the desired measure on $\mathscr{F}$, and Kolmogorov's theorem is established.

As noted, the theorem is not true when the real line is replaced by an arbitrary measurable space. The proof above, however, still works if a locally-compact metric space is used. The critical point is that any finite Borel measure on such a space must be regular, so that measurable sets can be approximated from within by compact sets. This generalization suffices for the processes studied in this book.

APPENDIX 2

Review of conditional probability

Let $(\Omega,\mathscr{F},P)$ be a probability space, $\mathscr{F}'$ a $\sigma$-additive subalgebra of $\mathscr{F}$, and $X \in L_1$ an integrable random variable. (The $L_1$ space may be real or complex.) We call $Y \in L_1$ a version of the conditional expectation of $X$ with respect to $\mathscr{F}'$, and write

$$Y = E(X \mid \mathscr{F}'), \qquad (1)$$

provided that (i) $Y$ is measurable with respect to $\mathscr{F}'$ and (ii) for every $A \in \mathscr{F}'$ we have

$$\int_A Y \, dP = \int_A X \, dP.$$
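On a finite probability space the definition can be carried out by hand: when $\mathscr{F}'$ is generated by a partition, a version of $E(X \mid \mathscr{F}')$ is obtained by averaging $X$ over each atom. A small sketch (the space, the random variable, and the partition are all invented for illustration):

```python
from fractions import Fraction

# Toy finite probability space (invented for illustration, not from the text).
Omega = range(6)
P = {w: Fraction(1, 6) for w in Omega}          # uniform probability
X = {w: w * w for w in Omega}                   # an integrable random variable
partition = [{0, 1}, {2, 3, 4}, {5}]            # atoms generating F'

# A version of E(X | F'): on each atom, the P-average of X over that atom.
Y = {}
for A in partition:
    avg = sum(P[w] * X[w] for w in A) / sum(P[w] for w in A)
    for w in A:
        Y[w] = avg                              # (i): Y is constant on atoms

# (ii): the integrals of Y and X agree on every generating set A in F'.
for A in partition:
    assert sum(P[w] * Y[w] for w in A) == sum(P[w] * X[w] for w in A)

print([float(Y[w]) for w in Omega])             # the atom averages of X
```

Since (ii) holds on the atoms and both sides are additive, it holds on every set in the generated field.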

When $X$ is the characteristic function $\chi_B$ of a set $B \in \mathscr{F}$, $Y$ is called the conditional probability of $B$ and written

$$Y = P(B \mid \mathscr{F}'). \qquad (2)$$

If $\mathscr{F}' = \mathscr{B}\{X_\alpha : \alpha \in Q\}$ is the $\sigma$-algebra generated by a set of random variables $\{X_\alpha,\ \alpha \in Q\}$, we can write

$$Y = E(X \mid X_\alpha : \alpha \in Q) \quad \text{or} \quad Y = P(B \mid X_\alpha : \alpha \in Q)$$

instead of (1) or (2), and speak of the conditional expectation (or probability) "given" the random variables $\{X_\alpha\}$. Some of the properties of conditional expectation (enough for the purposes of this book) are listed below:

Proposition. Let $X \in L_1$ and $\mathscr{F}' \subset \mathscr{F}$ be given as above.

(a) If $Y_1$ and $Y_2$ are both versions of $E(X \mid \mathscr{F}')$, then $Y_1 = Y_2$ a.s.

(b) If $X$ is measurable with respect to $\mathscr{F}'$, then

$E(X \mid \mathscr{F}') = X$; if $\mathscr{F}' = \{\emptyset,\Omega\}$, then $E(X \mid \mathscr{F}') = E(X)$.

(c) Conditional expectation is linear in $X$.

(d) If $X \geq 0$, then $E(X \mid \mathscr{F}') \geq 0$ (a.s.). Consequently,

$$|E(X \mid \mathscr{F}')| \leq E(|X| \mid \mathscr{F}') \quad \text{a.s.}$$

(e) If $X_n \to X \in L_1$, and if either $\{X_n\}$ increases or $\{X_n\}$ is dominated, then $E(X_n \mid \mathscr{F}') \to E(X \mid \mathscr{F}')$ a.s.

(f) If $X \in L_1$ and if $X$ is independent of every set in $\mathscr{F}'$, then $E(X \mid \mathscr{F}') = E(X)$ a.s.

(g) If $X$ and $XU \in L_1$ and $U$ is measurable with respect to $\mathscr{F}'$, then $E(XU \mid \mathscr{F}') = U\,E(X \mid \mathscr{F}')$ a.s.

(h) If $\mathscr{F}''$ and $\mathscr{F}'$ are subalgebras of $\mathscr{F}$ with $\mathscr{F}'' \subset \mathscr{F}'$, then

$$E(X \mid \mathscr{F}'') = E[E(X \mid \mathscr{F}'') \mid \mathscr{F}'] = E[E(X \mid \mathscr{F}') \mid \mathscr{F}''].$$
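Property (h), the "tower" rule, is transparent in the partition picture: averaging over the atoms of the finer field and then over the coarser field gives the same result as averaging over the coarser field at once. A toy numerical check (the space and the two fields are invented for illustration):

```python
from fractions import Fraction

# Toy space for checking (h); F'' (coarse) is a subalgebra of F' (fine).
Omega = range(8)
P = {w: Fraction(1, 8) for w in Omega}
X = {w: w for w in Omega}
fine = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]         # atoms of F'
coarse = [{0, 1, 2, 3}, {4, 5, 6, 7}]           # atoms of F''

def cond_exp(Z, partition):
    """Conditional expectation w.r.t. the field generated by a partition."""
    out = {}
    for A in partition:
        avg = sum(P[w] * Z[w] for w in A) / sum(P[w] for w in A)
        for w in A:
            out[w] = avg
    return out

lhs = cond_exp(X, coarse)                           # E(X | F'')
assert cond_exp(cond_exp(X, fine), coarse) == lhs   # E(E(X|F') | F'') = E(X|F'')
assert cond_exp(lhs, fine) == lhs                   # E(E(X|F'') | F') = E(X|F'')
print([float(lhs[w]) for w in Omega])  # [1.5, 1.5, 1.5, 1.5, 5.5, 5.5, 5.5, 5.5]
```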

We will not prove these here. All are quite simple to prove on the basis of the definition of conditional expectation and some elementary measure theory. Please think over anything which doesn't seem clear or familiar!

The existence of any version of $E(X \mid \mathscr{F}')$ has not yet been established. The usual proof consists in noting that the set-function

$$\mu(A) = \int_A X \, dP$$

defines a (signed) measure which is absolutely continuous

with respect to $P$. Let us consider $\mu$ restricted to sets $A \in \mathscr{F}'$, and apply the Radon-Nikodym theorem. Then the density $f$ such that $f$ is $\mathscr{F}'$-measurable and

$$\int_A f \, dP = \mu(A) \quad \text{for all } A \in \mathscr{F}',$$

whose existence is asserted by the theorem, is precisely the function $Y$ we are seeking.

When $X \in L_2$, however, another approach is possible which does not require the Radon-Nikodym theorem and which adds a little geometric insight. Consider the Hilbert space

$L_2(\Omega,\mathscr{F},P)$ and note that those functions in $L_2$ which are $\mathscr{F}'$-measurable form a closed subspace $M$. Let $P_M$ denote the operator which projects perpendicularly onto $M$. Then

$$E(X \mid \mathscr{F}') = P_M X \quad \text{a.s.} \qquad (3)$$
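A finite-dimensional sketch of what (3) says (all data below are invented for illustration): on a finite space, $M$ consists of the functions constant on the atoms of $\mathscr{F}'$, and the atom-average is exactly the orthogonal projection onto $M$, since the residual is orthogonal to $M$:

```python
import random

# Finite space; M = functions constant on the atoms of F' (a toy example).
Omega = range(4)
P = {w: 0.25 for w in Omega}
X = {0: 1.0, 1: 3.0, 2: 2.0, 3: 6.0}
atoms = [{0, 1}, {2, 3}]

def inner(U, V):
    """The L2(P) inner product."""
    return sum(P[w] * U[w] * V[w] for w in Omega)

# Candidate projection P_M X: the atom averages (= conditional expectation).
Y = {0: 2.0, 1: 2.0, 2: 4.0, 3: 4.0}

# X - Y is orthogonal to the indicator of every atom, hence to all of M:
for A in atoms:
    chi = {w: 1.0 if w in A else 0.0 for w in Omega}
    assert inner({w: X[w] - Y[w] for w in Omega}, chi) == 0.0

def dist2(U):
    """Squared L2 distance from X to U."""
    D = {w: X[w] - U[w] for w in Omega}
    return inner(D, D)

# Consequently Y is the closest element of M to X in the L2 norm:
random.seed(0)
for _ in range(100):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    Z = {0: a, 1: a, 2: b, 3: b}                # an arbitrary element of M
    assert dist2(Z) >= dist2(Y)
```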

The proof is very easy. $P_M X \in M$ and so by definition it is $\mathscr{F}'$-measurable. Let $A \in \mathscr{F}'$; then $\chi_A \in M$ and we have

$$\int_A P_M X \, dP = (P_M X, \chi_A) = (X, P_M \chi_A) = (X, \chi_A) = \int_A X \, dP,$$

which verifies condition (ii). Here we have only used the facts that a projection operator acts as the identity on its range and is self-adjoint.

Suppose that the real random variables $X_1,\ldots,X_n$

have a joint density function $p(x_1,\ldots,x_n)$, and assume for simplicity that $p > 0$ everywhere in $R^n$. (This can easily be generalized.) In elementary probability, one defines the conditional density of $X_n$ given $X_1 = x_1, \ldots, X_{n-1} = x_{n-1}$ to be the function

$$p(x_n \mid x_1,\ldots,x_{n-1}) = \frac{p(x_1,\ldots,x_{n-1},x_n)}{\int_{-\infty}^{\infty} p(x_1,\ldots,x_{n-1},x)\,dx}.$$

It is an important exercise to verify that if $E(|X_n|) < \infty$, then the conditional expectation of $X_n$ as defined above can be computed in this "elementary" way:

$$E(X_n \mid X_1,\ldots,X_{n-1}) = \frac{\int_{-\infty}^{\infty} x\,p(X_1,\ldots,X_{n-1},x)\,dx}{\int_{-\infty}^{\infty} p(X_1,\ldots,X_{n-1},x)\,dx} \quad \text{a.s.} \qquad (4)$$
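As a numerical illustration of (4) (invented here, not from the text), take $n = 2$ and let $p$ be the bivariate standard normal density with correlation $\rho$; the ratio of integrals in (4) can then be evaluated by a crude Riemann sum and compared with the known answer $\rho x_1$:

```python
import math

RHO = 0.5  # correlation; the density is positive everywhere, as required

def p(x, y, rho=RHO):
    """Bivariate standard normal density with correlation rho."""
    norm = 1.0 / (2 * math.pi * math.sqrt(1 - rho**2))
    return norm * math.exp(-(x*x - 2*rho*x*y + y*y) / (2 * (1 - rho**2)))

def cond_exp(x1, lo=-10.0, hi=10.0, n=4000):
    """The ratio of integrals in (4), computed by a midpoint Riemann sum;
    the truncation to [lo, hi] is harmless since the tails are negligible."""
    h = (hi - lo) / n
    num = den = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * h
        w = p(x1, y)
        num += y * w * h
        den += w * h
    return num / den

# For the bivariate normal, E(X2 | X1 = x1) equals rho * x1:
print(round(cond_exp(1.0), 3))   # 0.5
```

Note that the normalizing constant of $p$ cancels in the ratio, so only the exponential factor actually matters.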

The proof uses the fact that if $g: R^n \to R^1$ is a Borel function, one can compute

$$E(g(X_1,\ldots,X_n)) = \int_\Omega g(X_1(\omega),\ldots,X_n(\omega))\,dP(\omega)$$

equally well by integrating over $R^n$ instead of $\Omega$:

$$E(g(X_1,\ldots,X_n)) = \int_{R^n} g(x_1,\ldots,x_n)\,dP_X(x_1,\ldots,x_n),$$

where $P_X$ is the joint distribution of $(X_1,\ldots,X_n)$ defined by

$$P_X(C) = P((X_1,\ldots,X_n) \in C)$$

for Borel sets $C$ in $R^n$. Except for this clue, the verification of (4) is left for the reader.

If the random variables $(X_1,\ldots,X_n)$ have a non-degenerate Gaussian joint distribution with means equal to 0, they will have a density of the form

$$p(x_1,\ldots,x_n) = \text{constant} \cdot \exp\Bigl(-\tfrac{1}{2}\sum_{i,j} a_{ij} x_i x_j\Bigr), \qquad (5)$$

where $[a_{ij}]$ is a symmetric, positive-definite matrix.*)

*) For a nice short discussion of the multi-dimensional normal distribution see [F 2], Chapter 3, section 6.

In this case it follows easily from (4) and (5) that

$$E(X_n \mid X_1,\ldots,X_{n-1}) = -\sum_{k=1}^{n-1} \frac{a_{kn}}{a_{nn}} X_k, \qquad (6)$$

so that the conditional expectation is a linear function of the variables $X_1,\ldots,X_{n-1}$. In particular, if two random variables $X$ and $Y$ have a joint normal distribution with zero means, (6) becomes

$$E(X \mid Y) = \frac{E(XY)}{E(Y^2)}\,Y \quad \text{a.s.} \qquad (7)$$

Finally we observe that, for any $X \in L_2$ and any random variables $X_1,\ldots,X_m$, the conditional expectation $E(X \mid X_1,\ldots,X_m)$ can be thought of as the closest approximation to $X$ in the $L_2$ norm by means of functions of $X_1,\ldots,X_m$; this follows from (3), since any random variable which is measurable with respect to $(X_1,\ldots,X_m)$ is of the form $g(X_1,\ldots,X_m)$ for some measurable function $g: R^m \to R^1$. In general the best linear approximation, using only functions of the special form $\sum_{i=1}^m a_i X_i$, is not as close to $X$ as the conditional expectation; it is, however, often much simpler to evaluate and study. In the Gaussian case discussed above, linear approximation is the best possible. Since Gaussian processes and random variables occur very frequently in applications, this fact is of considerable importance.

BIBLIOGRAPHY

Books

[A] J. Allen (editor), March 4: Scientists, Students and Society, 1970 (M.I.T.).
[C] N. Chomsky, American Power and the New Mandarins, 1969.
[Do] J. L. Doob, Stochastic Processes, 1953 (Wiley).
[Dy] E. B. Dynkin, Markov Processes, vols. I and II, 1965 (Springer).
[DY] E. B. Dynkin and A. A. Yushkevich, Markov Processes: Theorems and Problems, 1969 (Plenum).

[F 1,2] W. Feller, An Introduction to Probability Theory and its Applications, vols. I and II, 1968 (3rd edition) and 1966 (Wiley).
[GS] U. Grenander and G. Szegö, Toeplitz Forms and Their Applications, 1958 (Univ. of California).

[H] K. Hoffman, Banach Spaces of Analytic Functions, 1962 (Prentice-Hall).
[HP] E. Hille and R. Phillips, Functional Analysis and Semi-Groups, 1957 (Amer. Math. Society).
[K] O. D. Kellogg, Foundations of Potential Theory, 1929 (Dover reprint 1953).

[L] J. Lamperti, Probability: A Survey of the Mathematical Theory, 1966 (Benjamin).

[R] W. Rudin, Real and Complex Analysis, 1966 (McGraw-Hill).

[S] F. Spitzer, Introduction aux Processus de Markov à Paramètres dans Z_ν, 1974 (in Springer lecture note volume 390).
[T] H. Totoki, 1969 (Aarhus University lecture note series).
[W] N. Wiener, Cybernetics, 2nd edition, 1961 (M.I.T.).
[Y] A. M. Yaglom, An Introduction to the Theory of Stationary Random Functions, 1962 (Prentice-Hall).

Articles Mentioned in the Text

Chapter 4:
F. and M. Riesz (1916): "Über die Randwerte einer analytischen Funktion," Skandinaviske Matematikerkongres 4, pp. 27-44.

Chapter 6:
K. L. Chung (1964): "The general theory of Markov processes according to Doeblin," Z. Wahrscheinlichkeitstheorie 2, pp. 230-254.
A. N. Kolmogorov (1931): "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung," Math. Ann. 104, pp. 415-458.
W. Feller (1936): "Zur Theorie der stochastischen Prozesse," Math. Ann. 113, pp. 113-160.

Chapter 7:
W. Feller (1952): "The parabolic differential equations and the associated semi-groups of transformations," Ann. of Math. 55, pp. 468-519.

Chapter 10:
J. L. Snell (1952): "Applications of martingale system theorems," Trans. Amer. Math. Soc. 73, pp. 293-312.