Probability: a Brief Introduction (Dana Longcope 1/18/05)


Probability is a vast sub-topic within mathematics with numerous applications in physics, quantum mechanics being only one. Mathematical treatments can appear quite daunting, but fortunately most of us have experience with random processes in life, through games of chance and such things.

The key concept in probability theory is the random variable. A random variable x is one which will assume a different value each time we measure it (sometimes we say each time it is "realized"). If we measure it N times we find N different values; we refer to the ith measurement as x_i. There are basically two different kinds of random variable: discrete variables and continuous variables. We discuss these separately below.

Discrete random variables

A discrete random variable is one which can take only discrete values, let's say integers. For example, d is the number of spots showing on a 6-sided die after I roll it (i.e. it is the roll of a die). d may therefore take on the values 1, 2, ..., 6, and no others; I cannot roll the values d = π/2 or d = √2. I might, for example, roll my die 10 different times and obtain the 10 realizations

  d1 = 5, d2 = 1, d3 = 3, d4 = 4, d5 = 4, d6 = 3, d7 = 6, d8 = 2, d9 = 3, d10 = 5,

but the next time I rolled 10 times I would get 10 different values. Since d is a random variable we don't know its value prior to rolling (at least that is the basic hypothesis of random variables).

We characterize a random variable by listing the probabilities of its various outcomes. We denote by P_d the probability that a given realization will assume the value d. A probability P_j = 0 means that the outcome d = j is completely impossible (so P_7 = 0, since the die doesn't have a 7-spotted side); P_j = 1 means that that particular outcome is a certainty. These are the two extremes in probability, and every probability must always be within the range 0 ≤ P_j ≤ 1.
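A set of realizations like the one above can be produced by a short simulation. This is a sketch (not part of the original note), using only Python's standard library; the seed is arbitrary, so the particular sequence of rolls will differ from the one in the text.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

# Ten realizations of the discrete random variable d (a die roll).
# Each new run of the experiment yields a different sequence of values.
rolls = [random.randint(1, 6) for _ in range(10)]
print(rolls)

# Every realization lies in the allowed set {1, ..., 6}.
assert all(1 <= d <= 6 for d in rolls)
```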
There is no such thing as a negative probability, or a probability more certain than perfect certainty. In the case of a fair 6-sided die we know that all 6 possible outcomes are equally likely. Furthermore, since the sum of all probabilities must be one (more on that below), the value of each one must be 1/6:

  P1 = 1/6,  P2 = 1/6,  P3 = 1/6,  P4 = 1/6,  P5 = 1/6,  P6 = 1/6.

If I want to know the probability that d will take one value from a set of possibilities, I sum the probabilities of each outcome in the set. For example, the probability that d will be an even number is

  P(d is even) = P2 + P4 + P6 = 1/6 + 1/6 + 1/6 = 1/2.

A simple consequence of this fact is that if I sum up the probabilities of all possible outcomes I must get 1: we are perfectly certain that d will assume some value. In probability this is called normalization:

  Σ_j P_j = 1.     (1)

Let's consider taking a function of our random variable: f(d). Since d will take on only integer values, f need only be defined for integers. Perhaps I am playing a game where a die roll d wins me f(d) dollars, from the following payoff table (a negative value of f means that I lose |f| dollars):

  d     1    2   3    4    5    6
  f(d)  -1   0   -1   0.5  0.5  0.5

The natural thing to ask is whether I should play this game. To answer this we compute the mean value, or expectation, of the function f(d). The mean is defined as a sum over all possibilities:

  ⟨f⟩ = Σ_d P_d f(d).     (2)

From the payoff table above we find ⟨f⟩ = -1/12. This means I lose, on average, $0.08 each time I roll the die. Of course, I never lose $0.08 on a particular roll; that is just the mean value.

One should be careful not to confuse the mean with the experimental average. The mean ⟨f⟩ is found from knowledge of the probabilities. It is a precise number which is always the same. We will always use means in Quantum Mechanics I.
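The normalization (1), the even-roll probability, and the mean payoff (2) can all be checked with exact rational arithmetic. This is a sketch, not part of the original note; the payoff values are those of the table as reconstructed here.

```python
from fractions import Fraction

# Probabilities for a fair six-sided die: P_d = 1/6 for d = 1, ..., 6.
P = {d: Fraction(1, 6) for d in range(1, 7)}

# Normalization (eq. 1): the probabilities of all outcomes sum to 1.
total = sum(P.values())

# Probability of a set of outcomes: sum the probabilities in the set.
P_even = sum(P[d] for d in (2, 4, 6))

# Payoff table (values as reconstructed from the text):
f = {1: Fraction(-1), 2: Fraction(0), 3: Fraction(-1),
     4: Fraction(1, 2), 5: Fraction(1, 2), 6: Fraction(1, 2)}

# Mean (eq. 2): <f> = sum over d of P_d f(d).
mean_f = sum(P[d] * f[d] for d in P)

print(total)   # 1
print(P_even)  # 1/2
print(mean_f)  # -1/12
```

Exact fractions avoid any floating-point rounding in the check that ⟨f⟩ = −1/12.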
The average, for which I'll use the notation f̄, comes from a set of N experimental realizations d_i:

  f̄ = (1/N) Σ_{i=1}^{N} f(d_i).

This is what you compute in the laboratory. Using the 10 realizations from earlier I get f̄ = -0.15. (This time, at least, I seem to have lost even more than I "expected".) The average will be different every time you perform a new set of experiments, and will almost never be the same as the mean. The useful relationship is that f̄ will be approximately equal to ⟨f⟩ as long as N is "large enough". This little tidbit goes by the name of the law of large numbers. The trick is to know what "large enough" really means, but we cannot get into that here. This is (probably) the last we'll say of averages in this class.

Definition (2) gives the recipe for computing the mean of any function f(d). It is worth making note of a few properties of the mean.

1. The mean is linear. If my function can be expressed as the sum of two functions, f(d) = g(d) + h(d), then the mean of the sum is the sum of the means:

  ⟨g + h⟩ = ⟨g⟩ + ⟨h⟩.

If α is a constant (i.e. it does not depend on d and is not otherwise random) then I can take it outside the mean:

  ⟨αf⟩ = α⟨f⟩.

2. The mean of a number is that number:

  ⟨3⟩ = Σ_j 3 P_j = 3 Σ_j P_j = 3,

where I've used the fact that P_j is normalized (i.e. eq. [1]). Since the mean of a function, ⟨f⟩, is not itself a random variable, we consider it to be a number. This means that the mean of a mean is that mean:

  ⟨⟨f⟩⟩ = ⟨f⟩.

This looks somewhat puzzling at first, but we will run into its likeness often in the future.

3. The mean of a product is NOT the product of the means. It is usually a bad idea to discuss something that is not true. But this case appears so often, and can cause so much harm if it is mistakenly used, that I felt it worth stating up front. In mathematical terms,

  ⟨gh⟩ ≠ ⟨g⟩⟨h⟩.

Please note that there is a not-equals sign in this expression. Among many other things this means that ⟨f²⟩ is different from ⟨f⟩².
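The law of large numbers can be watched in action: for small N the experimental average f̄ scatters around the mean, while for large N it settles near ⟨f⟩ = −1/12. This is an illustrative sketch, not part of the original note; the payoff values are those of the table as reconstructed here, and the seed is arbitrary.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

# Payoff table from the game above.
f = {1: -1.0, 2: 0.0, 3: -1.0, 4: 0.5, 5: 0.5, 6: 0.5}

# The exact mean <f> = sum over d of P_d f(d) = -1/12.
mean_f = sum(f[d] / 6 for d in f)

def average_payoff(N):
    """Experimental average f-bar over N simulated die rolls."""
    rolls = (random.randint(1, 6) for _ in range(N))
    return sum(f[d] for d in rolls) / N

# The average differs on every run, but approaches <f> as N grows.
for N in (10, 1000, 100000):
    print(N, average_payoff(N))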
These are two different things.

It is common to take means of the random variable itself, and of various powers of it. For example, ⟨d⟩ = 3.5 for the 6-sided die. This tells us that the mean roll is a 3.5, although it's not easy to know what that means. One way to state it is to say that ⟨d⟩ is the centroid of the distribution P_d.

A given roll will differ from the mean by an amount Δd = d − ⟨d⟩. If I use this to find the mean departure from the mean I find

  ⟨Δd⟩ = ⟨d − ⟨d⟩⟩ = ⟨d⟩ − ⟨⟨d⟩⟩ = 0.

(This was done laboriously on purpose; please check that you understand each step.) The trivial result came because d goes above the mean by as much as it goes below the mean (Δd is positive as much as it is negative). We can obtain a more informative result by calculating the mean of the square of the departure:

  ⟨(Δd)²⟩ = ⟨[d − ⟨d⟩]²⟩ = ⟨d² − 2d⟨d⟩ + ⟨d⟩²⟩
          = ⟨d²⟩ − 2⟨d⟩⟨d⟩ + ⟨d⟩²
          = ⟨d²⟩ − ⟨d⟩².

Note that this too would be trivial if ⟨d²⟩ were the same as ⟨d⟩²; but it is not. In fact ⟨(Δd)²⟩ ≥ 0, since it is a sum of non-negative numbers, so this exercise proves that ⟨d²⟩ ≥ ⟨d⟩² for any random variable.

The expression above, which characterizes how far a given roll is expected to differ from the mean roll, is called the variance of the random variable d:

  Var(d) = ⟨(Δd)²⟩ = ⟨d²⟩ − ⟨d⟩².

It is common to discuss the square root of the variance, called the standard deviation,

  σ_d = √⟨(Δd)²⟩,

which tells, in some sense, how far from the mean the value is likely to be: it is the "width" of the distribution. For the case of die rolls we find that ⟨d²⟩ = 91/6, so σ_d = 1.71. A roll will be, on average, within about 1.71 of the mean value 3.5. This statement will appear puzzling, knowing what you do about dice.
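The variance calculation for the die can be verified both directly from the definition ⟨(Δd)²⟩ and from the shortcut ⟨d²⟩ − ⟨d⟩²; the two must agree. A sketch in exact arithmetic (not part of the original note):

```python
from fractions import Fraction
from math import sqrt

P = Fraction(1, 6)          # each face of a fair die has probability 1/6
outcomes = range(1, 7)

mean_d  = sum(P * d for d in outcomes)      # <d>   = 7/2 = 3.5
mean_d2 = sum(P * d * d for d in outcomes)  # <d^2> = 91/6

# The mean departure from the mean vanishes: <Δd> = <d> - <d> = 0.
mean_dev = sum(P * (d - mean_d) for d in outcomes)

# Variance two ways: directly from the definition, and via the shortcut.
var_direct  = sum(P * (d - mean_d) ** 2 for d in outcomes)
var_formula = mean_d2 - mean_d ** 2         # 91/6 - 49/4 = 35/12

# Standard deviation: sigma_d = sqrt(Var(d)) ~ 1.71.
sigma = sqrt(var_formula)

print(mean_d, mean_d2, mean_dev, var_direct, sigma)
```

Both routes give Var(d) = 35/12 ≈ 2.92, whose square root is the 1.71 quoted in the text.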