Lecture 2. Measure and Integration

There are several ways of presenting the definition of integration with respect to a measure. We will follow, more or less, the approach of Rudin's "Real and Complex Analysis" - this is probably the fastest route.

Conventions concerning $\pm\infty$. In measure theory it is often essential to use $\infty$ and $-\infty$. We shall view these as two objects and define the following relationships with real numbers (the set of all real numbers will be denoted by $\mathbb{R}$):

$-\infty < c < \infty$ for every $c \in \mathbb{R}$;

$c + \infty = \infty$ if $-\infty < c \le \infty$, and $c + (-\infty) = -\infty$ if $-\infty \le c < \infty$;

$0 \cdot \infty = 0$;

$c(\pm\infty) = \pm\infty$ if $0 < c \le \infty$, and $c(\pm\infty) = \mp\infty$ if $-\infty \le c < 0$.

Note that $\infty - \infty$ is not defined! The convention $0 \cdot \infty = 0$ might seem like an arbitrary choice, but it is really the right one: it will be useful in integration, and there it will appear as the natural choice. These operations have the usual properties on $[0, \infty]$: they are commutative, associative and distributive.

Throughout our discussion $(X, \mathcal{B}, \mu)$ will be a measure space. This means that $X$ is a set, $\mathcal{B}$ is a collection of subsets of $X$ forming a $\sigma$-algebra, and $\mu$ is a measure. Recall that to say that $\mathcal{B}$ is a $\sigma$-algebra means that:

(S1) $\emptyset \in \mathcal{B}$;

(S2) if $A \in \mathcal{B}$ then the complement $A^c \in \mathcal{B}$;

(S3) if $A_1, A_2, A_3, \ldots$ is a countable collection of sets with each $A_i \in \mathcal{B}$ then $\bigcup_{i=1}^{\infty} A_i \in \mathcal{B}$.

Sets in $\mathcal{B}$ will be called $\mathcal{B}$-measurable sets, or simply measurable sets.

Observations. (i) The intersection of a countable collection $A_1, A_2, \ldots$ of measurable sets is also measurable (because $\bigcap_{i=1}^{\infty} A_i = \big(\bigcup_i A_i^c\big)^c$ is measurable by (S2) and (S3)).

(ii) If $A_1, \ldots, A_m \in \mathcal{B}$ then $\bigcup_{i=1}^{m} A_i \in \mathcal{B}$; this follows from (S1) and (S3) by taking $A_n = \emptyset$ for all $n$ bigger than $m$. Using complements we obtain also that $\bigcap_{i=1}^{m} A_i \in \mathcal{B}$.

(iii) Notice also that if $A, B \in \mathcal{B}$ then $A \setminus B$, being equal to $A \cap B^c$, is also in $\mathcal{B}$.

To say that $\mu$ is a measure on $(X, \mathcal{B})$ means that $\mu$ is a mapping $\mathcal{B} \to [0, \infty]$ such that:

(M1) $\mu(\emptyset) = 0$;

(M2) if $A_1, A_2, \ldots$ is a countable collection of mutually disjoint sets in $\mathcal{B}$ (i.e. each $A_i \in \mathcal{B}$ and $A_i \cap A_j = \emptyset$ for every $i \ne j$) then

$$\mu\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} \mu(A_i).$$

Taking $A_i = \emptyset$ for $i$ bigger than some $m$, we see (from (M1) and (M2)) that $\mu$ is finitely additive:

$$\mu\Big(\bigcup_{i=1}^{m} A_i\Big) = \sum_{i=1}^{m} \mu(A_i),$$

provided, of course, that the $A_i$ are disjoint measurable sets.

(Comment: It might seem that finite additivity implies the condition (M1): we have $\mu(\emptyset) = \mu(\emptyset \cup \emptyset) = \mu(\emptyset) + \mu(\emptyset)$, "and so" $\mu(\emptyset) = 0$. What is wrong with this argument? Under what additional condition on $\mu$ would finite additivity imply (M1)?)

Exercise. If $A$ and $B$ are measurable sets with $A \subset B$, show that $\mu(A) \le \mu(B)$. (Hint: write $B$ as the union of $A$ and $B \setminus A$; check that $B \setminus A$ (which is $B \cap A^c$) is measurable and use additivity.)

The simplest meaningful example of a measure is counting measure: $\mu(A)$ = the number of elements in $A$ (taken to be $\infty$ if $A$ is infinite).

2.1. Definition. A function $f : X \to [-\infty, \infty]$ is measurable with respect to the $\sigma$-algebra $\mathcal{B}$ if the set $f^{-1}[a, b]$ (i.e. the set $\{x \in X : f(x) \in [a, b]\}$) is in $\mathcal{B}$ for every $a, b \in [-\infty, \infty]$. Note that, in particular, if $f$ is measurable then (taking $b = a$) every set of the type $f^{-1}(a)$ is measurable ($a \in [-\infty, \infty]$).

2.2. Terminology. (i) If $A \subset X$, then the indicator function of $A$ is the function $1_A : X \to \mathbb{R}$ defined by

$$1_A(x) = \begin{cases} 1, & \text{if } x \in A; \\ 0, & \text{otherwise.} \end{cases}$$

(ii) A simple function is a function $s : X \to \mathbb{R}$ whose range consists of a finite set of points (i.e. the image $s(X)$ is a finite subset of $\mathbb{R}$).
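Since the rest of the lecture works with these objects, it may help to see them in a tiny concrete setting first. The following is a minimal Python sketch (illustrative only; the set X and the names mu and indicator are ours, not the lecture's) using counting measure on a finite set, where every subset may be taken to be measurable.

```python
# Minimal sketch (illustrative, not from the lecture): counting measure on a
# small finite set X.  On a finite set, the sigma-algebra can be taken to be
# the collection of all subsets of X.

X = {1, 2, 3, 4, 5}

def mu(A):
    """Counting measure: mu(A) = number of elements of A (X is finite, so no infinities arise)."""
    return len(A)

def indicator(A):
    """The indicator function 1_A of a subset A of X."""
    return lambda x: 1 if x in A else 0

A = {1, 2}
B = {1, 2, 3}          # here A and B are subsets of X, with A contained in B

# Finite additivity on disjoint sets: mu(A union (B \ A)) = mu(A) + mu(B \ A).
assert mu(A | (B - A)) == mu(A) + mu(B - A)

# The exercise: A contained in B implies mu(A) <= mu(B), since B = A union (B \ A) is a disjoint union.
assert mu(A) <= mu(B)

# An indicator function takes only the two values 0 and 1, so it is a simple function.
assert {indicator(A)(x) for x in X} == {0, 1}
```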
If $a_1, \ldots, a_n$ are all the distinct values of the simple function $s$ then (verify that)

$$s = \sum_{i=1}^{n} a_i 1_{A_i},$$

where $A_i = \{x : s(x) = a_i\}$. Notice (verify) that the $A_i$ are disjoint sets and $\bigcup_{i=1}^{n} A_i = X$. Notice also that a simple function can be written in different ways in the form $\sum_j c_j 1_{C_j}$. For instance, $1_X$ is also equal to $1_A + 1_{A^c}$, for any $A \subset X$. The representation described above in terms of the distinct values of $s$ is, of course, unique, which is why we use it in the definition (2.4) of $\int s \, d\mu$ below.

2.3. Exercise. Verify that $s$ is measurable if and only if each $A_i$ is.

2.4. Definition. (Integrating simple functions) If $s : X \to [0, \infty)$ is a measurable simple function of the form

$$s = \sum_{i=1}^{n} a_i 1_{A_i},$$

where $a_1, \ldots, a_n$ are all the distinct values of $s$ (and $A_i = s^{-1}(a_i)$), then

$$\int s \, d\mu = \sum_{i=1}^{n} a_i \mu(A_i).$$

Note that a term of the form $0 \cdot \infty$ might occur in the sum on the right side of the above equation; recall, for this purpose, that $0 \cdot \infty = 0$.

2.5. Question. What is the relevance of Problem 2.3 to Definition 2.4?

2.6. Note. If $s = \sum_{j=1}^{m} c_j 1_{C_j}$ is a non-negative measurable simple function then a fact that is not immediately obvious from Definition 2.4 is that $\int s \, d\mu = \sum_{j=1}^{m} c_j \mu(C_j)$ (provided this sum makes sense). Definition 2.4 says that this is true only when the $c_j$ are all the distinct values of $s$.

2.7. Proposition. If $s$ and $t$ are non-negative measurable simple functions then so is $s + t$, and

$$\int (s + t) \, d\mu = \int s \, d\mu + \int t \, d\mu.$$

If $c \in [0, \infty)$ is a constant then

$$\int cs \, d\mu = c \int s \, d\mu.$$

Proof. Write $s = \sum_{i=1}^{n} a_i 1_{A_i}$ and $t = \sum_{j=1}^{m} b_j 1_{B_j}$, where $a_1, \ldots, a_n$ are all the distinct values of $s$ and $b_1, \ldots, b_m$ are all the distinct values of $t$. The function $s + t$ takes the value $a_i + b_j$ on the set $A_i \cap B_j$ (because on this set $s$ takes the value $a_i$ and $t$ takes the value $b_j$). Moreover, (verify that) the sets $A_i \cap B_j$ are disjoint and cover all of $X$ (i.e. $(A_i \cap B_j) \cap (A_{i'} \cap B_{j'}) = \emptyset$ unless $i = i'$ and $j = j'$, and $\bigcup_{i,j} (A_i \cap B_j) = X$). This is because the $A_i$'s are disjoint and cover $X$, and the $B_j$'s are disjoint and cover $X$. Therefore,

$$s + t = \sum_{1 \le i \le n, \; 1 \le j \le m} (a_i + b_j) \, 1_{A_i \cap B_j}.$$

(To see this, consider any point $x \in X$. It must belong to exactly one of the sets $A_i \cap B_j$; say $x \in A_1 \cap B_2$. Then the function defined by the sum on the right hand side of the above equation takes the value $a_1 + b_2$ at $x$. But this is also the value of $s(x) + t(x)$.)

By Problem 2.3 we see that $s + t$ is measurable. It is also clear that $s + t$ is simple and non-negative. We would like to conclude, by Definition 2.4, that it follows that

$$\int (s + t) \, d\mu = \sum_{1 \le i \le n, \; 1 \le j \le m} (a_i + b_j) \, \mu(A_i \cap B_j). \qquad (2.1)$$

However, there is no guarantee that the values $a_i + b_j$ are distinct, and therefore Definition 2.4 cannot be applied directly. Fortunately, equation (2.1) is still true because the sets $A_i \cap B_j$ are disjoint (this is the content of Lemma 2.8 below). So we accept equation (2.1).

The right side of equation (2.1) can be split up into the sum of two sums:

$$\sum_{1 \le i \le n, \; 1 \le j \le m} (a_i + b_j) \, \mu(A_i \cap B_j) = \sum_{i=1}^{n} a_i \Big( \sum_{j=1}^{m} \mu(A_i \cap B_j) \Big) + \sum_{j=1}^{m} b_j \Big( \sum_{i=1}^{n} \mu(A_i \cap B_j) \Big).$$

Now the sets $A_i \cap B_j$ are disjoint and, for fixed $i$ and variable $j$, they cover $A_i$ (for $\bigcup_j (A_i \cap B_j) = A_i \cap \bigcup_j B_j = A_i \cap X = A_i$). Therefore

$$\sum_{j=1}^{m} \mu(A_i \cap B_j) = \mu(A_i).$$

Similarly,

$$\sum_{i=1}^{n} \mu(A_i \cap B_j) = \mu(B_j).$$

Putting all this together we obtain

$$\int (s + t) \, d\mu = \sum_{i=1}^{n} a_i \mu(A_i) + \sum_{j=1}^{m} b_j \mu(B_j),$$

and we recognize the two sums on the right hand side to be $\int s \, d\mu$ and $\int t \, d\mu$. Thus $\int (s + t) \, d\mu = \int s \, d\mu + \int t \, d\mu$.

That $\int cs \, d\mu = c \int s \, d\mu$, for $c$ any constant in $[0, \infty)$, is verified easily by writing out $s$ as a linear combination of indicator functions and using Definition 2.4.
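Before stating Lemma 2.8, here is a short Python sketch (again illustrative only, reusing counting measure on a small finite set as in the earlier sketch) that computes $\int s \, d\mu$ exactly as in Definition 2.4, by grouping points according to the distinct values of $s$, and checks the two identities of Proposition 2.7 numerically.

```python
# Illustrative sketch: integrating non-negative simple functions against
# counting measure on a finite set X, following Definition 2.4.

X = {1, 2, 3, 4, 5}

def mu(A):
    """Counting measure on subsets of X."""
    return len(A)

def integral(s):
    """Integral of a non-negative simple function s, per Definition 2.4:
    the sum of a * mu(s^{-1}(a)) over the distinct values a of s."""
    distinct_values = {s(x) for x in X}
    return sum(a * mu({x for x in X if s(x) == a}) for a in distinct_values)

s = lambda x: 2 if x in {1, 2} else 0        # simple function with values {0, 2}
t = lambda x: 3 if x in {2, 3, 4} else 1     # simple function with values {1, 3}

# Proposition 2.7: additivity of the integral on non-negative simple functions.
assert integral(lambda x: s(x) + t(x)) == integral(s) + integral(t)

# Proposition 2.7: constant multiples, integral(c*s) == c * integral(s) for c >= 0.
c = 4
assert integral(lambda x: c * s(x)) == c * integral(s)
```

Note that the helper computes the integral from the canonical representation (distinct values of $s$); that any representation over disjoint sets gives the same value is exactly what Lemma 2.8 establishes.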
2.8. Lemma. If $C_1, \ldots, C_n$ are disjoint measurable sets and $c_1, \ldots, c_n$ are non-negative real numbers then

$$\int \Big( \sum_{i=1}^{n} c_i 1_{C_i} \Big) \, d\mu = \sum_{i=1}^{n} c_i \mu(C_i).$$

Proof. We shall assume that $\bigcup_{i=1}^{n} C_i = X$; for if this is not so then we can always throw in another set $C_{n+1} = X \setminus \bigcup_{i=1}^{n} C_i$, and use $c_{n+1} = 0$, and this would not alter the value of either side of the above equation (note: here we need to use $0 \cdot \infty = 0$ in case $\mu(C_{n+1}) = \infty$). Let $s = \sum_{i=1}^{n} c_i 1_{C_i}$, and let $a_1, \ldots, a_m$ be all the distinct values of $s$. Note that $s$ is a non-negative measurable simple function. Moreover, (verify that) $c_1, \ldots, c_n$ constitute all the values of $s$ (i.e.