Introduction to Stochastic Calculus with Applications


Chapter 2: Concepts of Probability Theory

In this chapter we give fundamental definitions of probabilistic concepts. Since the theory is more transparent in the discrete case, it is presented first. The most important concepts not met in elementary courses are the models for information, its flow, and conditional expectation. This is only a brief description; a more detailed treatment can be found in many books on Probability Theory, for example, Breiman (1968), Loève (1978), Durrett (1991). Conditional expectation with its properties is central for further development, but some of the material in this chapter may be treated as an appendix.

2.1 Discrete Probability Model

A probability model consists of a filtered probability space on which variables of interest are defined. In this section we introduce a discrete probability model by using an example of discrete trading in stock.

Filtered Probability Space

A filtered probability space consists of: a sample space of elementary events, a field of events, a probability defined on that field, and a filtration of increasing subfields.

Sample Space

Consider a single stock with price S_t at time t = 1, 2, ..., T. Denote by Ω the set of all possible values of the stock during these times:

    Ω = {ω : ω = (S_1, S_2, ..., S_T)}.

If we assume that the stock price can go up by a factor u and down by a factor d, then the relevant information reduces to the knowledge of the movements at each time:

    Ω = {ω : ω = (a_1, a_2, ..., a_T)},  a_t = u or d.

To model uncertainty about the price in the future, we "list" all possible future prices, and call them possible states of the world. The unknown future is just one of many possible outcomes, called the true state of the world. As time passes, more and more information is revealed about the true state of the world. At time t = 1 we know the prices S_0 and S_1. Thus the true state of the world lies in a smaller set, a subset A ⊂ Ω. After observing S_1 we know which prices did not happen at time 1. Therefore we know that the true state of the world is in A and not in Ω \ A = Ā.

Fields of Events

Denote by F_t the information available to investors at time t, which consists of the stock prices before and at time t. For example, when T = 2, at t = 0 we have no information about S_1 and S_2, so F_0 = {∅, Ω}: all we know is that the true state of the world is in Ω.

Consider the situation at t = 1, and suppose the stock went up by u. Then we know that the true state of the world is in A, and not in its complement Ā, where

    A = {(u, S_2) : S_2 = u or d} = {(u, u), (u, d)}.

Thus our information at time t = 1 is

    F_1 = {∅, Ω, A, Ā}.

Note that F_0 ⊂ F_1, since we don't forget the previous information. At time t investors know which part of Ω contains the true state of the world. F_t is called a field or an algebra of sets.

F is a field if
1. ∅, Ω ∈ F;
2. if A ∈ F and B ∈ F, then A ∪ B ∈ F, A ∩ B ∈ F, and A \ B ∈ F.

Example 2.1: (Examples of fields.) It is easy to verify that any of the following is a field of sets.
1. {Ω, ∅} is called the trivial field F_0.
2. {∅, Ω, A, Ā} is called the field generated by the set A, and is denoted by F_A.
3. {A : A ⊆ Ω}, the field of all subsets of Ω, denoted by 2^Ω.

A partition of Ω is a collection of exhaustive and mutually exclusive subsets {D_1, ..., D_k}, such that D_i ∩ D_j = ∅ for i ≠ j, and ∪_i D_i = Ω. The field generated by the partition is the collection of all finite unions of the D_j's and their complements.
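The construction of the field generated by a partition is easy to make concrete. The following Python sketch (not part of the original text; all names are illustrative) forms all unions of partition blocks, and for the partition {A, Ā} of the two-period model it recovers F_A = {∅, Ω, A, Ā}:

```python
from itertools import combinations, product

def field_from_partition(partition):
    """Field generated by a partition of a finite sample space:
    the collection of all unions of partition blocks.  The empty
    union gives the empty set; the union of all blocks gives Omega."""
    field = set()
    for r in range(len(partition) + 1):
        for blocks in combinations(partition, r):
            field.add(frozenset().union(*blocks))
    return field

# Two-period up/down model: Omega = {(u,u), (u,d), (d,u), (d,d)}
omega = frozenset(product("ud", repeat=2))
A = frozenset(w for w in omega if w[0] == "u")      # stock goes up at t = 1
F_A = field_from_partition([A, omega - A])

print(len(F_A))                                          # 4
print(frozenset() in F_A and omega in F_A and A in F_A)  # True
```

Note that because the generating sets form a partition, taking complements adds nothing new: the complement of a union of blocks is the union of the remaining blocks.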
These sets serve as the basic building blocks of the field. If Ω is finite, then any field is generated by a partition.

If one field is included in another, F_1 ⊂ F_2, then any set from F_1 is also in F_2; in other words, any set from F_1 is either a block or a union of blocks of the partition generating F_2. This means that the partition generating F_2 consists of "finer" sets than the one generating F_1.

Filtration

A filtration IF is a collection of fields,

    IF = {F_0, F_1, ..., F_t, ..., F_T},  F_t ⊂ F_{t+1}.

IF is used to model a flow of information. As time passes, an observer knows more and more detailed information, that is, finer and finer partitions of Ω. In the example of the stock price, IF describes how the information about prices is revealed to investors.

Example 2.2: IF = {F_0, F_A, 2^Ω} is an example of a filtration.

Stochastic Processes

If Ω is a finite sample space, then a function X defined on Ω attaches numerical values to each ω ∈ Ω. Since Ω is finite, X takes only finitely many values x_i, i = 1, ..., k.

If a field of events F is specified, then any set in it is called a measurable set. If F = 2^Ω, then any subset of Ω is measurable.

A function X on Ω is called F-measurable, or a random variable on (Ω, F), if all the sets {X = x_i}, i = 1, ..., k, are members of F. This means that if we have the information described by F, that is, we know which event in F has occurred, then we know which value of X has occurred. Note that if F = 2^Ω, then any function on Ω is a random variable.

Example 2.3: Consider the model for trading in stock, t = 1, 2, where at each time the stock can go up by the factor u or down by the factor d:

    Ω = {ω_1 = (u, u), ω_2 = (u, d), ω_3 = (d, u), ω_4 = (d, d)}.

Take A = {ω_1, ω_2}, which is the event that the stock goes up at t = 1. Then F_1 = {∅, Ω, A, Ā}, and F_2 = 2^Ω contains all 16 subsets of Ω. Consider the following functions on Ω: X(ω_1) = X(ω_2) = 1.5, X(ω_3) = X(ω_4) = 0.5. X is a random variable on F_1. Indeed, the set {ω : X(ω) = 1.5} = {ω_1, ω_2} = A ∈ F_1, and {ω : X(ω) = 0.5} = Ā ∈ F_1. If Y(ω_1) = 1.5² = 2.25, Y(ω_2) = 0.75, Y(ω_3) = 0.75, and Y(ω_4) = 0.5² = 0.25, then Y is not a random variable on F_1; it is not F_1-measurable. Indeed, {ω : Y(ω) = 0.75} = {ω_2, ω_3} ∉ F_1. Y is F_2-measurable.

Definition 2.1: A stochastic process is a collection of random variables {X(t)}. For any fixed t, t = 0, 1, ..., T, X(t) is a random variable on (Ω, F_T). A stochastic process is called adapted to the filtration IF if for all t = 0, 1, ..., T, X(t) is a random variable on F_t, that is, if X(t) is F_t-measurable.

Example 2.4: (Example 2.3 continued.) X_1 = X, X_2 = Y is a stochastic process adapted to IF = {F_1, F_2}. This process represents stock prices at time t under the assumption that the stock can appreciate or depreciate by 50% in a unit of time.

Field Generated by a Random Variable

Let (Ω, 2^Ω) be a sample space with the field of all events, and let X be a random variable with values x_i, i = 1, 2, ..., k. Consider the sets

    A_i = {ω : X(ω) = x_i} ⊆ Ω.

These sets form a partition of Ω, and the field generated by this partition is called the field generated by X. It is the smallest field that contains all the sets of the form A_i = {X = x_i}, and it is denoted by F_X or σ(X). The field generated by X represents the information we can extract about the true state ω by observing X.

Example 2.5: (Example 2.3 continued.) {ω : X(ω) = 1.5} = {ω_1, ω_2} = A, and {ω : X(ω) = 0.5} = {ω_3, ω_4} = Ā. Thus F_X = F_1 = {∅, Ω, A, Ā}.
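Measurability as defined above is a finite check, so it can be verified mechanically. Here is a small Python sketch (again illustrative, not from the book) that tests whether a function on a finite Ω is a random variable on (Ω, F), reproducing the conclusions of Example 2.3:

```python
def is_measurable(X, field):
    """Return True iff every level set {omega : X(omega) = x} belongs
    to `field` (a collection of frozensets) -- the definition of an
    F-measurable function on a finite sample space."""
    omega = frozenset().union(*field)        # the largest set in the field
    return all(
        frozenset(w for w in omega if X[w] == x) in field
        for x in set(X.values())
    )

w1, w2, w3, w4 = ("u","u"), ("u","d"), ("d","u"), ("d","d")
omega = frozenset({w1, w2, w3, w4})
A = frozenset({w1, w2})                      # stock goes up at t = 1
F1 = {frozenset(), omega, A, omega - A}

X = {w1: 1.5, w2: 1.5, w3: 0.5, w4: 0.5}
Y = {w1: 2.25, w2: 0.75, w3: 0.75, w4: 0.25}

print(is_measurable(X, F1))   # True:  {X = 1.5} = A and {X = 0.5} = A-bar
print(is_measurable(Y, F1))   # False: {Y = 0.75} = {w2, w3} is not in F1
```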
Filtration Generated by a Stochastic Process

Given (Ω, F) and a stochastic process {X(t)}, let F_t = σ({X_s, 0 ≤ s ≤ t}) be the field generated by the random variables X_s, s = 0, ..., t. It is all the information available from observation of the process up to time t. Clearly, F_t ⊆ F_{t+1}, so these fields form a filtration. This filtration is called the natural filtration of the process {X(t)}.

If A ∈ F_t, then by observing the process from 0 to t we know at time t whether the true state of the world is in A or not. We illustrate this with our financial example.

Example 2.6: Take T = 3, and assume that at each trading time the stock can go up by the factor u or down by d. The eight possible paths can be arranged as

    Ω =   uuu   duu  }
          uud   dud  }  B̄
          udu   ddu  }
          udd   ddd  }  B
           A     Ā

Look at the sets generated by information about S_1: this is the partition {A, Ā} of Ω (the two columns above). Together with the empty set and the whole set, it gives the field F_1. The sets generated by information about S_2 are B and B̄ (the two row groups). Thus the sets formed by knowledge of S_1 and S_2 constitute the partition of Ω consisting of all intersections of the above sets; together with the empty set and the whole set, this gives the field F_2. Clearly any set in F_1 is also in F_2, for example A = (A ∩ B) ∪ (A ∩ B̄). Similarly, if we add information about S_3, we obtain all the elementary sets {ω}, and hence all subsets of Ω: F_3 = 2^Ω. In particular, we will know the true state of the world at time T = 3.
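The refinement of the natural filtration in Example 2.6 can also be computed directly. The sketch below (illustrative, not from the book) partitions the eight paths by their first t movements; these partitions generate F_0 ⊂ F_1 ⊂ F_2 ⊂ F_3:

```python
from itertools import product

def natural_partition(omega, t):
    """Partition of Omega generated by observing the first t movements:
    two paths fall in the same block iff they agree up to time t."""
    blocks = {}
    for path in omega:
        blocks.setdefault(path[:t], set()).add(path)
    return list(blocks.values())

omega = list(product("ud", repeat=3))    # the 8 paths of Example 2.6

for t in range(4):
    part = natural_partition(omega, t)
    print(t, sorted(len(b) for b in part))
# 0 [8]          : one block, the trivial field F_0
# 1 [4, 4]       : the blocks A and A-bar
# 2 [2, 2, 2, 2] : all intersections with B and B-bar
# 3 [1, ..., 1]  : eight singletons, so F_3 = 2^Omega
```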