Modes of Convergence in Probability Theory

David Mandel

November 5, 2015

Below, fix a probability space $(\Omega, \mathcal{F}, P)$ on which all random variables $\{X_n\}$ and $X$ are defined. All random variables are assumed to take values in $\mathbb{R}$. Propositions marked with "$\star$" denote results that rely on our finite measure space; that is, the marked results may not hold on a non-finite measure space. Since we already know uniform convergence $\Rightarrow$ pointwise convergence, this proof is omitted, but we include a proof that pointwise convergence $\Rightarrow$ almost sure convergence, and hence uniform convergence $\Rightarrow$ almost sure convergence. The hierarchy we will show is diagrammed in Fig. 1, where some famous theorems that demonstrate each type of convergence are given in parentheses: (SLLN) = strong law of large numbers, (WLLN) = weak law of large numbers, (CLT) = central limit theorem. In parameter estimation, $\hat\theta_n$ is said to be a consistent estimator of $\theta$ if $\hat\theta_n \to \theta$ in probability. For example, by the SLLN, $\bar X_n \to \mu$ a.s., and hence $\bar X_n \to \mu$ in probability. Therefore the sample mean is a consistent estimator of the population mean.

Figure 1: Hierarchy of modes of convergence in probability.

1 Definitions of Convergence

1.1 Modes from Calculus

Definition. $X_n \to X$ pointwise if $\forall \omega \in \Omega$, $\forall \varepsilon > 0$, $\exists N \in \mathbb{N}$ such that $\forall n \ge N$, $|X_n(\omega) - X(\omega)| < \varepsilon$.

Definition. $X_n \to X$ uniformly if $\forall \varepsilon > 0$, $\exists N \in \mathbb{N}$ such that $\forall \omega \in \Omega$ and $\forall n \ge N$, $|X_n(\omega) - X(\omega)| < \varepsilon$.

1.2 Modes Unique to Measure Theory

Definition. $X_n \to X$ in probability if $\forall \varepsilon > 0$, $\forall \delta > 0$, $\exists N \in \mathbb{N}$ such that $\forall n \ge N$,
\[ P(|X_n - X| \ge \varepsilon) < \delta. \]
Equivalently, $X_n \to X$ in probability if $\forall \varepsilon > 0$,
\[ \lim_{n \to \infty} P(|X_n - X| \ge \varepsilon) = 0. \]
The explicit epsilon-delta form of convergence in probability is useful for proving a.s. $\Rightarrow$ prob.

Definition. $X_n \to X$ in $L^p$, $p \ge 1$, if $\forall \varepsilon > 0$, $\exists N \in \mathbb{N}$ such that $\forall n \ge N$,
\[ \int_\Omega |X_n(\omega) - X(\omega)|^p \, dP(\omega) =: \|X_n - X\|_{L^p}^p < \varepsilon. \]
Equivalently, $X_n \to X$ in $L^p$ if
\[ \lim_{n \to \infty} \|X_n - X\|_{L^p}^p = 0. \]

Definition. $X_n \to X$ almost surely (a.s.) if $\exists E \in \mathcal{F}$ with $P(E) = 0$ such that $\forall \omega \in E^c$ and $\forall \varepsilon > 0$, $\exists N \in \mathbb{N}$ such that $\forall n \ge N$, $|X_n(\omega) - X(\omega)| < \varepsilon$. Equivalently, $X_n \to X$ a.s. if $\exists E \in \mathcal{F}$ with $P(E) = 0$ such that $X_n \to X$ pointwise on $E^c$.

1.3 Mode Unique to Probability Theory

Definition. $X_n \to X$ in distribution if the distribution functions $F_n$ of the $X_n$ converge pointwise to the distribution function $F$ of $X$ at all points $x$ where $F$ is continuous. That is, $X_n \to X$ in distribution if for every $x \in \mathbb{R}$ at which $F$ is continuous, $\forall \varepsilon > 0$, $\exists N \in \mathbb{N}$ such that $\forall n \ge N$, $|F_n(x) - F(x)| < \varepsilon$. Equivalently, $X_n \to X$ in distribution if for every $x \in \mathbb{R}$ at which $F$ is continuous,
\[ \lim_{n \to \infty} F_n(x) = F(x). \]

2 Convergence Results

Proposition. Pointwise convergence $\Rightarrow$ almost sure convergence.

Proof. Let $\omega \in \Omega$, $\varepsilon > 0$ and assume $X_n \to X$ pointwise. Then $\exists N \in \mathbb{N}$ such that $\forall n \ge N$, $|X_n(\omega) - X(\omega)| < \varepsilon$. Hence $X_n \to X$ almost surely: in the definition we may take $E = \emptyset$, since the convergence takes place at every $\omega \in \Omega = E^c$.

Proposition. Uniform convergence $\Rightarrow$ convergence in probability.

Proof. Let $\varepsilon > 0$ and assume $X_n \to X$ uniformly. Then $\exists N \in \mathbb{N}$ such that $\forall \omega \in \Omega$ and $\forall n \ge N$, $|X_n(\omega) - X(\omega)| < \varepsilon$. Let $n \ge N$. Then $P(|X_n - X| \ge \varepsilon) = P(\emptyset) = 0$, so of course $P(|X_n - X| \ge \varepsilon) \to 0$ as $n \to \infty$.

Proposition. $L^p$ convergence $\Rightarrow$ convergence in probability.

Proof. Let $\varepsilon > 0$ and assume $X_n \to X$ in $L^p$, $p \ge 1$. Then
\[ P(|X_n - X| \ge \varepsilon) \le \frac{\|X_n - X\|_{L^p}^p}{\varepsilon^p} \to 0 \quad \text{as } n \to \infty, \]
where the first inequality is Chebyshev's inequality.

Proposition ($\star$). Uniform convergence $\Rightarrow$ $L^p$ convergence.

Proof. Let $\varepsilon > 0$ and assume $X_n \to X$ uniformly. Then $\exists N \in \mathbb{N}$ such that $\forall \omega \in \Omega$ and $\forall n \ge N$, $|X_n(\omega) - X(\omega)| < \varepsilon$. Let $n \ge N$. Then
\[ \|X_n - X\|_{L^p}^p = \int_\Omega |X_n(\omega) - X(\omega)|^p \, dP(\omega) < \varepsilon^p P(\Omega) = \varepsilon^p. \]
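The Chebyshev bound used in the proof of $L^p$ $\Rightarrow$ probability, and the consistency example from the introduction, can both be checked numerically. The following Python sketch is an illustration only, not part of the original notes: the Exponential(1) distribution, the tolerance $\varepsilon$, and the sample sizes are arbitrary choices. It estimates $P(|\bar X_n - \mu| \ge \varepsilon)$ and $E|\bar X_n - \mu|^2$ by Monte Carlo and compares the former with the Chebyshev bound $E|\bar X_n - \mu|^2 / \varepsilon^2$ (the case $p = 2$).

```python
import numpy as np

# Illustration (not from the notes): X_1, X_2, ... i.i.d. Exponential(1), so mu = 1.
# The sample mean Xbar_n converges to mu in L^2, hence in probability; we estimate
# P(|Xbar_n - mu| >= eps) and E|Xbar_n - mu|^2 by Monte Carlo and compare the
# probability with the Chebyshev bound E|Xbar_n - mu|^2 / eps^2 used in the proof.
rng = np.random.default_rng(0)
mu, eps = 1.0, 0.1
reps = 2000                      # independent copies of Xbar_n per sample size

for n in [10, 100, 1000, 5000]:
    xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    prob = np.mean(np.abs(xbar - mu) >= eps)    # estimate of P(|Xbar_n - mu| >= eps)
    l2 = np.mean((xbar - mu) ** 2)              # estimate of E|Xbar_n - mu|^2
    print(f"n={n:5d}  P(|Xbar-mu|>=eps) ~ {prob:.4f}  "
          f"E|Xbar-mu|^2 ~ {l2:.5f}  Chebyshev bound ~ {l2 / eps**2:.4f}")
```

Both estimates shrink as $n$ grows, and the estimated probability stays below the bound, mirroring the inequality above.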
Proposition ($\star$). Almost sure convergence $\Rightarrow$ convergence in probability.

This result relies on Egoroff's theorem, which we state now without proof.

Theorem 2.1 (Egoroff). Let $P(\Omega) < \infty$ and assume $X_n \to X$ a.s. Then $\forall \delta > 0$, $\exists E \in \mathcal{F}$ with $P(E) < \delta$ such that $X_n \to X$ uniformly on $E^c$.

Proof. Let $\varepsilon > 0$. Also let $\delta > 0$ and let $E \in \mathcal{F}$ be a set as in Egoroff's theorem. Since $X_n \to X$ uniformly on $E^c$, $\exists N \in \mathbb{N}$ such that $\forall \omega \in E^c$ and $\forall n \ge N$, $|X_n(\omega) - X(\omega)| < \varepsilon$. Let $n \ge N$. Then
\[ P(|X_n - X| \ge \varepsilon) = P(\{|X_n - X| \ge \varepsilon\} \cap E) + P(\{|X_n - X| \ge \varepsilon\} \cap E^c) < \delta + P(\emptyset) = \delta. \]

Proposition. Convergence in probability $\Rightarrow$ convergence in distribution.

Proof. First, a lemma.

Lemma 2.2. Let $X_n$ and $X$ be random variables. Then $\forall \varepsilon > 0$ and $x \in \mathbb{R}$,
\begin{align*}
P(X_n \le x) &\le P(X \le x + \varepsilon) + P(|X_n - X| > \varepsilon), \\
P(X \le x - \varepsilon) &\le P(X_n \le x) + P(|X_n - X| > \varepsilon).
\end{align*}

Proof (lemma). For the first line,
\begin{align*}
P(X_n \le x) &= P(X_n \le x,\ X \le x + \varepsilon) + P(X_n \le x,\ X > x + \varepsilon) \\
&\le P(X \le x + \varepsilon) + P(X_n - X \le x - X,\ x - X < -\varepsilon) \\
&\le P(X \le x + \varepsilon) + P(X_n - X < -\varepsilon) \\
&\le P(X \le x + \varepsilon) + P(X_n - X < -\varepsilon) + P(X_n - X > \varepsilon) \\
&= P(X \le x + \varepsilon) + P(|X_n - X| > \varepsilon).
\end{align*}
For the second line,
\begin{align*}
P(X \le x - \varepsilon) &= P(X \le x - \varepsilon,\ X_n \le x) + P(X \le x - \varepsilon,\ X_n > x) \\
&\le P(X_n \le x) + P(x - X \ge \varepsilon,\ X_n - X > x - X) \\
&\le P(X_n \le x) + P(X_n - X > \varepsilon) \\
&\le P(X_n \le x) + P(X_n - X > \varepsilon) + P(X_n - X < -\varepsilon) \\
&= P(X_n \le x) + P(|X_n - X| > \varepsilon).
\end{align*}

Let $x \in \mathbb{R}$ be a continuity point of $F$ and let $\varepsilon > 0$. The lemma then tells us that
\begin{align*}
F_n(x) &\le F(x + \varepsilon) + P(|X_n - X| > \varepsilon), \\
F(x - \varepsilon) &\le F_n(x) + P(|X_n - X| > \varepsilon).
\end{align*}
Hence
\[ F(x - \varepsilon) - P(|X_n - X| > \varepsilon) \le F_n(x) \le F(x + \varepsilon) + P(|X_n - X| > \varepsilon). \]
Now let $n \to \infty$, in which case $P(|X_n - X| > \varepsilon) \to 0$:
\[ F(x - \varepsilon) \le \liminf_{n \to \infty} F_n(x) \le \limsup_{n \to \infty} F_n(x) \le F(x + \varepsilon). \]
Finally let $\varepsilon \to 0^+$ and use continuity of $F$ at $x$:
\[ F(x) \le \liminf_{n \to \infty} F_n(x) \le \limsup_{n \to \infty} F_n(x) \le F(x), \]
hence $\lim_{n \to \infty} F_n(x) = F(x)$.
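Convergence in distribution can likewise be inspected empirically through the distribution functions $F_n$. The Python sketch below is an illustration only, not part of the original notes: the Exponential(1) summands, the sample sizes, and the evaluation points are arbitrary choices. It estimates $F_n(x)$ for the standardized sum $\sqrt{n}(\bar X_n - \mu)/\sigma$ at a few points $x$ and compares it with the standard normal distribution function $\Phi(x)$, the limit asserted by the CLT mentioned in the introduction; since $\Phi$ is continuous everywhere, every $x$ is a continuity point.

```python
import math
import numpy as np

def std_normal_cdf(x: float) -> float:
    """Standard normal distribution function Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustration (not from the notes): X_i i.i.d. Exponential(1), so mu = sigma = 1.
# By the CLT, sqrt(n) * (Xbar_n - mu) / sigma converges in distribution to N(0, 1),
# i.e. F_n(x) -> Phi(x) at every x.
rng = np.random.default_rng(1)
mu, sigma = 1.0, 1.0
reps = 20000                            # Monte Carlo copies of the standardized sum

for n in [5, 50, 500]:
    xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    s_star = math.sqrt(n) * (xbar - mu) / sigma
    for x in [-1.5, -0.5, 0.0, 0.5, 1.5]:
        fn_x = np.mean(s_star <= x)     # empirical estimate of F_n(x)
        print(f"n={n:3d}  x={x:+.1f}  F_n(x) ~ {fn_x:.4f}  Phi(x) = {std_normal_cdf(x):.4f}")
    print()
```

Already for moderate $n$ the estimated values of $F_n(x)$ line up with $\Phi(x)$ to within Monte Carlo error.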