
Some facts on measure and integration theory

1. $\sigma$-algebra: A family $\mathcal{A}$ of subsets of a set $\Omega$ is called a $\sigma$-algebra if

(i) $\Omega \in \mathcal{A}$,

(ii) $A \in \mathcal{A} \Rightarrow A^c \in \mathcal{A}$ (where $A^c := \Omega \setminus A$),

(iii) $(A_n)_{n \ge 1}$ with $A_n \in \mathcal{A}$ for all $n$ $\Rightarrow$ $\bigcup_{n \ge 1} A_n \in \mathcal{A}$.

Examples of $\sigma$-algebras:

a) $\mathcal{A} = \{\emptyset, \Omega\}$;

b) $\mathcal{A} = \mathcal{P}(\Omega)$, the class of all subsets of $\Omega$;

c) $\mathcal{A} = \{\emptyset, \{1\}, \{2,3\}, \{1,2,3\}\}$, where $\Omega = \{1,2,3\}$;

d) Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$: $\Omega = \mathbb{R}$, $\mathcal{A} = \mathcal{B}(\mathbb{R})$, where $\mathcal{B}(\mathbb{R})$ := the smallest $\sigma$-algebra which contains the intervals $[a,b)$, $a \le b$.

2. measure: Let $\mathcal{A}$ be a $\sigma$-algebra on $\Omega$. A function $\mu : \mathcal{A} \to [0, \infty]$ is called a measure if

(i) $\mu(\emptyset) = 0$,

(ii) $(A_n)_{n \ge 1} \subseteq \mathcal{A}$ with $A_i \cap A_j = \emptyset$ for $i \ne j$ $\Rightarrow$ $\mu\left(\bigcup_{n \ge 1} A_n\right) = \sum_{n \ge 1} \mu(A_n)$ ($\sigma$-additivity).

If $\mu(\Omega) = 1$, then $\mu$ is called a probability measure on $(\Omega, \mathcal{A})$. In this case $(\Omega, \mathcal{A}, \mu)$ is referred to as a probability space.
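For instance, on $\Omega = \{1,2,3\}$ with $\mathcal{A} = \mathcal{P}(\Omega)$, setting
$$\mu(A) := \frac{\text{number of elements in } A}{3}, \quad A \in \mathcal{A},$$
defines a probability measure (the uniform distribution on $\Omega$): $\mu(\emptyset) = 0$, $\mu(\Omega) = 1$, and $\sigma$-additivity reduces to additivity over finitely many disjoint sets.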

Useful properties:

a) $A \subseteq B \Rightarrow \mu(A) \le \mu(B)$.

b) $\mu\left(\bigcup_{n \ge 1} A_n\right) \le \sum_{n \ge 1} \mu(A_n)$ for not necessarily disjoint $A_n$.

c) $\lim_{n\to\infty} \mu(A_n) = \mu(A)$ if $A_1 \subseteq A_2 \subseteq \dots$ and $A = \bigcup_{n \ge 1} A_n$.

Examples of measures:

a) $\Omega = \mathbb{N}$ (natural numbers), $\mathcal{A} = \mathcal{P}(\Omega)$ the class of all subsets of $\Omega$. Counting measure: $\mu(A) :=$ number of elements in $A$, e.g. $\mu(\{1,2,3\}) = 3$.

b) Dirac measure $\delta_x$:
$$\delta_x(A) := \begin{cases} 1, & x \in A, \\ 0, & x \notin A. \end{cases}$$
Since $\delta_x(\Omega) = 1$, the Dirac measure is a probability measure.

c) Lebesgue(-Borel) measure $\lambda$: $\Omega = \mathbb{R}$, $\mathcal{A} = \mathcal{B}(\mathbb{R})$ the Borel $\sigma$-algebra. One shows there exists a unique measure $\lambda : \mathcal{B}(\mathbb{R}) \to [0, \infty]$ such that
$$\lambda([a,b)) = b - a \quad \text{for all } a \le b.$$

d) More generally, Lebesgue-Stieltjes measures: $\Omega = \mathbb{R}$, $\mathcal{A} = \mathcal{B}(\mathbb{R})$. Let $f : \mathbb{R} \to \mathbb{R}$ be a non-decreasing continuous function. Then there exists a unique measure $\lambda_f$ on $\mathcal{B}(\mathbb{R})$ such that
$$\lambda_f([a,b)) = f(b) - f(a) \quad \text{for all } a \le b.$$
$f(x) = x$ gives the Lebesgue measure.
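For instance, the non-decreasing continuous function $f(x) = e^x$ yields the Lebesgue-Stieltjes measure with
$$\lambda_f([a,b)) = e^b - e^a, \quad a \le b,$$
so e.g. $\lambda_f([0,1)) = e - 1$.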

3. product $\sigma$-algebra $\mathcal{A}_1 \otimes \mathcal{A}_2$: Let $\mathcal{A}_1$ be a $\sigma$-algebra on $\Omega_1$ and $\mathcal{A}_2$ a $\sigma$-algebra on $\Omega_2$. The product $\sigma$-algebra $\mathcal{A}_1 \otimes \mathcal{A}_2$ of $\mathcal{A}_1, \mathcal{A}_2$ is a $\sigma$-algebra on $\Omega_1 \times \Omega_2$ defined as the smallest $\sigma$-algebra which contains all the sets of the form $A \times B$ for $A \in \mathcal{A}_1$, $B \in \mathcal{A}_2$. So $\mathcal{A}_1 \otimes \mathcal{A}_2$ is the intersection of all $\sigma$-algebras on $\Omega = \Omega_1 \times \Omega_2$ containing such sets.

4. product measure $\mu^1 \otimes \mu^2$: Let $\mu^i$ be a measure on $(\Omega_i, \mathcal{A}_i)$, $i = 1, 2$. The product measure is the unique measure $\mu$ on $\mathcal{A}_1 \otimes \mathcal{A}_2$ such that
$$\mu(A \times B) = \mu^1(A)\,\mu^2(B) \quad \text{for all } A \in \mathcal{A}_1,\ B \in \mathcal{A}_2.$$
The measure $\mu$ is denoted by $\mu^1 \otimes \mu^2$.

Example: Let $\lambda$ be the Lebesgue measure on $\mathcal{B}(\mathbb{R})$ and $\mathcal{A}_1 = \mathcal{A}_2 = \mathcal{B}(\mathbb{R})$. Then $\lambda \otimes \lambda$ is the unique measure on $\mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R})$ such that
$$(\lambda \otimes \lambda)(A \times B) = \lambda(A)\,\lambda(B) \quad \text{for all } A, B \in \mathcal{B}(\mathbb{R}).$$
The measure $\lambda \otimes \lambda$ is the 2-dimensional Lebesgue measure.

5. measurable function: Let $\mathcal{A}_1$ be a $\sigma$-algebra on $\Omega_1$ and $\mathcal{A}_2$ a $\sigma$-algebra on $\Omega_2$. A function $f : \Omega_1 \to \Omega_2$ is called $\mathcal{A}_1, \mathcal{A}_2$-measurable if
$$f^{-1}[B] := \{\omega_1 \in \Omega_1 : f(\omega_1) \in B\} \in \mathcal{A}_1 \quad \text{for all } B \in \mathcal{A}_2.$$
Useful criterion for $\mathcal{A}_1, \mathcal{A}_2$-measurable functions, where $\mathcal{A}_2 = \mathcal{B}(\mathbb{R})$: a function $f : \Omega_1 \to \mathbb{R}$ is $\mathcal{A}_1, \mathcal{B}(\mathbb{R})$-measurable if and only if
$$\{\omega_1 \in \Omega_1 : f(\omega_1) < \alpha\} \in \mathcal{A}_1 \quad \text{for all } \alpha \in \mathbb{R}.$$
Example: $\Omega_1 = \Omega_2 = \mathbb{R}$, $\mathcal{A}_1 = \mathcal{A}_2 = \mathcal{B}(\mathbb{R})$, $f$ continuous.

A random variable or random element is a measurable function on a probability space.
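For instance, for the product measure in 4 one gets $(\lambda \otimes \lambda)([0,1) \times [2,5)) = \lambda([0,1))\,\lambda([2,5)) = 1 \cdot 3 = 3$, the area of the rectangle. And for the criterion in 5, an indicator function $1_A$ with $A \in \mathcal{A}_1$ is $\mathcal{A}_1, \mathcal{B}(\mathbb{R})$-measurable, since $\{\omega_1 : 1_A(\omega_1) < \alpha\}$ equals $\emptyset$ for $\alpha \le 0$, $A^c$ for $0 < \alpha \le 1$, and $\Omega_1$ for $\alpha > 1$.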

6. integral with respect to general measures: Let $\mu$ be a measure on $(\Omega, \mathcal{A})$. Let $f : \Omega \to \mathbb{R}$ be $\mathcal{A}, \mathcal{B}(\mathbb{R})$-measurable and $f \ge 0$. Then
$$\int f(\omega)\,\mu(d\omega) := \lim_{n\to\infty} \sum_{i=1}^{n2^n} i\,2^{-n}\,\mu(A_{i,n}),$$
where
$$A_{i,n} := \begin{cases} \{\omega : (i+1)2^{-n} > f(\omega) \ge i\,2^{-n}\}, & i = 0, 1, \dots, n2^n - 1, \\ \{\omega : f(\omega) \ge n\}, & i = n2^n. \end{cases}$$
For not necessarily positive $f$ we define

$$\int f(\omega)\,\mu(d\omega) := \int f^+(\omega)\,\mu(d\omega) - \int f^-(\omega)\,\mu(d\omega),$$
provided $\int f^+(\omega)\,\mu(d\omega)$ and $\int f^-(\omega)\,\mu(d\omega)$ are real numbers ($f^+ = \max(f, 0)$, $f^- = \max(-f, 0)$). A measurable function $f : \Omega \to \mathbb{R}$ is called $\mu$-integrable if $\int f^+(\omega)\,\mu(d\omega)$ and $\int f^-(\omega)\,\mu(d\omega)$ are real numbers.

Properties:

a) $\int (\alpha f(\omega) + \beta g(\omega))\,\mu(d\omega) = \alpha \int f(\omega)\,\mu(d\omega) + \beta \int g(\omega)\,\mu(d\omega)$ (linearity),

b) $\int f(\omega)\,\mu(d\omega) \le \int g(\omega)\,\mu(d\omega)$ if $f \le g$.

We define

$$\int_A f(\omega)\,\mu(d\omega) := \int 1_A(\omega) f(\omega)\,\mu(d\omega), \quad A \in \mathcal{A}.$$
Then
$$\int_{A \cup B} f(\omega)\,\mu(d\omega) = \int_A f(\omega)\,\mu(d\omega) + \int_B f(\omega)\,\mu(d\omega), \quad \text{if } A \cap B = \emptyset.$$
Example: $\delta_x$ Dirac measure. Then $\int f(\omega)\,\delta_x(d\omega) = f(x)$.

7. $L^p$ spaces and $L^p$ convergence: We define for $1 \le p < \infty$
$$L^p(\Omega, \mathcal{A}, \mu) := \{f : |f|^p \text{ is } \mu\text{-integrable}\}.$$
We say: a sequence of functions $f_n$ converges to $f$ in $L^p(\Omega, \mathcal{A}, \mu)$ if $f_n, f \in L^p(\Omega, \mathcal{A}, \mu)$ and
$$\int |f_n(\omega) - f(\omega)|^p\,\mu(d\omega) \to 0 \quad \text{as } n \to \infty.$$

8. $\mu$-a.e. convergence: We say a sequence of measurable functions $f_n$ converges to $f$ $\mu$-a.e. if there exists a set $N \in \mathcal{A}$ with $\mu(N) = 0$ (i.e. a null set) such that for all $\omega \in N^c$ (i.e. $N^c := \Omega \setminus N$):

$$f_n(\omega) \to f(\omega) \quad \text{as } n \to \infty.$$
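Note that $\mu$-a.e. convergence does not imply $L^p$ convergence: on $([0,1], \mathcal{B}([0,1]), \lambda)$ the functions $f_n := n\,1_{(0,1/n)}$ converge to $0$ pointwise, hence $\lambda$-a.e., but
$$\int |f_n(\omega) - 0|\,\lambda(d\omega) = n \cdot \tfrac{1}{n} = 1 \quad \text{for all } n,$$
so $f_n$ does not converge to $0$ in $L^1([0,1], \mathcal{B}([0,1]), \lambda)$.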

Some important results in measure and integration theory:

Theorem 1 (Riesz-Fischer) Let $f_n$ be a sequence of functions converging to $f$ in $L^p(\Omega, \mathcal{A}, \mu)$. Then there exists a subsequence of $f_n$ which converges to $f$ $\mu$-a.e., that is, there exist $n_k \nearrow \infty$ and a set $N \in \mathcal{A}$ with $\mu(N) = 0$ (i.e. a null set) such that for all $\omega \in N^c$ (i.e. $N^c := \Omega \setminus N$):

$$f_{n_k}(\omega) \to f(\omega) \quad \text{as } k \to \infty.$$

Theorem 2 (Lemma of Fatou) Let $f_n \ge 0$ be measurable. Then

$$\int \liminf_{n\to\infty} f_n(\omega)\,\mu(d\omega) \le \liminf_{n\to\infty} \int f_n(\omega)\,\mu(d\omega) \qquad (\liminf = \text{limes inferior}).$$
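The inequality can be strict: on $(\mathbb{R}, \mathcal{B}(\mathbb{R}), \lambda)$ take $f_n := 1_{[n,n+1)}$. Then $\liminf_{n\to\infty} f_n(\omega) = 0$ for every $\omega$, so the left-hand side equals $0$, while $\int f_n(\omega)\,\lambda(d\omega) = 1$ for all $n$, so the right-hand side equals $1$.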

Theorem 3 (Dominated convergence) Let $f_n$ be a sequence of measurable functions converging to $f$ $\mu$-a.e. Assume that $f_n \in L^p(\Omega, \mathcal{A}, \mu)$ for all $n$ and

$$|f_n| \le g$$
for a function $g \ge 0$ in $L^p(\Omega, \mathcal{A}, \mu)$. Then we have $L^p$ convergence, that is,
$$\int |f_n(\omega) - f(\omega)|^p\,\mu(d\omega) \to 0 \quad \text{as } n \to \infty.$$
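For instance, on $([0,1], \mathcal{B}([0,1]), \lambda)$ with $p = 1$ the functions $f_n(\omega) := \omega^n$ converge to $0$ $\lambda$-a.e. (the exceptional null set is $N = \{1\}$) and are dominated by $g \equiv 1 \in L^1([0,1], \mathcal{B}([0,1]), \lambda)$; the theorem applies, and indeed
$$\int |f_n(\omega) - 0|\,\lambda(d\omega) = \frac{1}{n+1} \to 0 \quad \text{as } n \to \infty.$$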

Theorem 4 (Fubini) Let $\mu^1$ resp. $\mu^2$ be probability measures on $(\Omega_1, \mathcal{A}_1)$ resp. $(\Omega_2, \mathcal{A}_2)$. Let $f : \Omega_1 \times \Omega_2 \to \mathbb{R}$ be a measurable function and assume that $f$ is integrable with respect to the product measure $\mu = \mu^1 \otimes \mu^2$ on $\Omega = \Omega_1 \times \Omega_2$ (see 4). Then

$$\int f(\omega)\,\mu(d\omega) = \int_{\Omega_1} \left( \int_{\Omega_2} f(\omega_1, \omega_2)\,\mu^2(d\omega_2) \right) \mu^1(d\omega_1) = \int_{\Omega_2} \left( \int_{\Omega_1} f(\omega_1, \omega_2)\,\mu^1(d\omega_1) \right) \mu^2(d\omega_2).$$
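For instance, let $\mu^1 = \mu^2$ be the Lebesgue measure restricted to $[0,1]$ (a probability measure) and $f(\omega_1, \omega_2) = \omega_1 \omega_2$. Then both iterated integrals give
$$\int_0^1 \left( \int_0^1 \omega_1 \omega_2 \, d\omega_2 \right) d\omega_1 = \int_0^1 \frac{\omega_1}{2}\, d\omega_1 = \frac{1}{4}.$$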

Theorem 5 (Hölder inequality) Let $f, g$ be measurable functions and $p, q > 1$ such that $\frac{1}{p} + \frac{1}{q} = 1$. Then

$$\int |f(\omega) g(\omega)|\,\mu(d\omega) \le \left( \int |f(\omega)|^p\,\mu(d\omega) \right)^{1/p} \left( \int |g(\omega)|^q\,\mu(d\omega) \right)^{1/q}.$$
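The special case $p = q = 2$ is the Cauchy-Schwarz inequality:
$$\int |f(\omega) g(\omega)|\,\mu(d\omega) \le \left( \int |f(\omega)|^2\,\mu(d\omega) \right)^{1/2} \left( \int |g(\omega)|^2\,\mu(d\omega) \right)^{1/2}.$$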

Links to probability theory: Let $X$ be a random variable on the probability space $(\Omega, \mathcal{A}, \mu)$. We use the notation
$$E[X] := \int X(\omega)\,\mu(d\omega).$$
The integral $E[X]$ is called the expectation of $X$.

1. distribution of a random variable: By the distribution of a random variable $X$ we understand the image measure $\mu^X$ on $\mathcal{B}(\mathbb{R})$ given by

$$\mu^X(A) := \mu(\{\omega : X(\omega) \in A\}), \quad A \in \mathcal{B}(\mathbb{R}).$$
Example: A random variable $X$ has a normal distribution with mean $a$ and variance $\sigma^2 > 0$ if

$$\mu^X(A) = \frac{1}{\sqrt{2\pi\sigma^2}} \int_A \exp\left( -\frac{(x-a)^2}{2\sigma^2} \right) dx, \quad A \in \mathcal{B}(\mathbb{R}).$$

2. characteristic function: Let $X$ be a random variable. The expectation

$$\varphi_X(\theta) := E\left[ e^{i\theta X} \right] := E[\cos(\theta X)] + i\,E[\sin(\theta X)], \quad \theta \in \mathbb{R},\ i = \sqrt{-1},$$
is called the characteristic function of $X$.
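For instance, if $X$ is Bernoulli distributed with $\mu(\{X = 1\}) = p$ and $\mu(\{X = 0\}) = 1 - p$, then
$$\varphi_X(\theta) = (1-p)\,e^{i\theta \cdot 0} + p\,e^{i\theta \cdot 1} = 1 - p + p\,e^{i\theta}.$$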

One can show: two random variables $X$ and $Y$ have the same distribution if and only if

$$\varphi_X(\theta) = \varphi_Y(\theta) \quad \text{for all } \theta.$$

Example: Let $X$ have a normal distribution with mean $a$ and variance $\sigma^2 > 0$. Then

$$\varphi_X(\theta) = e^{i\theta a}\, e^{-\theta^2 \sigma^2 / 2}.$$
So a random variable is normal if and only if it has such a characteristic function. A multidimensional extension of the characteristic function for random vectors $X = (X_1, \dots, X_n)$ is defined as

$$\varphi_X(\theta) = E\left[ e^{i(\theta_1 X_1 + \dots + \theta_n X_n)} \right], \quad \theta = (\theta_1, \dots, \theta_n),\ \theta_1, \dots, \theta_n \in \mathbb{R}.$$

Other comments:

(i) Let $X$ be a random variable which is independent of a $\sigma$-algebra $\mathcal{C} \subseteq \mathcal{A}$. If a random variable $Y$ is $\mathcal{C}$-measurable, then $X$ and $Y$ are independent.

(ii) Let $X$ and $Y$ be independent square integrable random variables. Then

$$E[XY] = E[X]\,E[Y].$$
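The independence assumption is essential: for a standard normal $X$ and $Y := X$ one has $E[XY] = E[X^2] = 1$, while $E[X]\,E[Y] = 0$.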

(iii) Let $X$ and $Y$ be independent random variables and $f, g$ measurable functions. Then

$f(X)$ and $g(Y)$ are independent.

In particular, we conclude from (ii) and (iii) for $f(x) = e^{i\theta_1 x}$ and $g(x) = e^{i\theta_2 x}$ that

$$\varphi_{(X,Y)}(\theta_1, \theta_2) = \varphi_X(\theta_1)\,\varphi_Y(\theta_2).$$
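For instance, if $X$ and $Y$ are independent standard normal random variables ($a = 0$, $\sigma^2 = 1$), then
$$\varphi_{(X,Y)}(\theta_1, \theta_2) = e^{-\theta_1^2/2}\, e^{-\theta_2^2/2} = e^{-(\theta_1^2 + \theta_2^2)/2},$$
the characteristic function of a two-dimensional standard Gaussian vector.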
