Probability Distributions

Connexions module m43336
Zdzisław (Gustav) Meglicki, Jr
Office of the VP for Information Technology, Indiana University
RCS: Section-1.tex,v 1.78 2012/12/17 16:29:57 gustav Exp
Copyright © 2012 by Zdzisław Meglicki
December 17, 2012

Abstract

We introduce the concept of a probability distribution and its characterizations in terms of moments and averages. We present examples and discuss probability distributions on multidimensional spaces; this also includes marginal and conditional probabilities. We discuss and prove some fundamental theorems about probability distributions. Finally, we illustrate how random variables associated with various probability distributions can be generated on a computer.

Contents

1 Random Variables and Probability Distributions
2 Characteristics of Probability Distributions: Averages and Moments
3 Examples of Probability Distributions
  3.1 Uniform Distribution
  3.2 Exponential Distribution
  3.3 Normal (Gaussian) Distribution
  3.4 Cauchy-Lorentz Distribution
4 Probability Distributions on R^n: Marginal and Conditional Distributions
5 Variable Transformations
  5.1 Application to Gaussian Distributions
  5.2 Application to Cauchy-Lorentz Distributions
  5.3 Cumulative Probability Distribution Theorem
  5.4 Linear Combination of Random Variables
  5.5 Covariance and Correlation
  5.6 Central Limit Theorem
  5.7 Computer Generation of Random Variables

1 Random Variables and Probability Distributions

Variables in mathematics

A variable in mathematics is an argument of a function. The variable may assume various values (hence the name) within its domain, to which the function responds by producing the corresponding values, which usually reside in a set different from the domain of the function's argument. Using a formal notation, we may describe this as follows:

    X \ni x \mapsto f(x) = y \in Y.    (1)

Here X is the function's domain, x ∈ X is the function's variable, f is the function itself, and y is the value that the function returns for a given x; it belongs to Y, the set of function values. Another way to describe this is

    f : X \to Y.    (2)

Neither of the above specifies what the function f actually does. The formulas merely state that f maps elements of X onto elements of Y. For a mapping to be called a function, the mapping from x to y must be unique. But this requirement is not adhered to too strictly, and we do work with multivalued functions too.

Variables in physics

A variable in physics is something that can be measured, for example, the position of a material point, or temperature, or mass. How does a physics variable relate to a variable in mathematics? If the material point, located at x, is endowed with an electric charge q_e and some externally applied electromagnetic field E is present, then the force that acts on the point is a vector-valued function of its position:

    F(x) = q_e E(x).    (3)

We read this as follows: the material point is endowed with electric charge q_e and is located at x. The position of the point is the variable here (it is actually a vector variable in this case, but we may also think of it as three scalar variables). The electric field E happens to have the vector value E(x) at this point. It couples to the point's charge, and in effect the force F(x), which is also a vector, is exerted upon the point.

The variable x may itself be the value of another function, perhaps a function of time t. We may then write x = f(t), or just x(t) for short, in which case

    F(x(t)) = q_e E(x(t)).    (4)
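To make the composition in (4) concrete, here is a minimal Python sketch; it is not part of the original module, and the particular field E(x) and trajectory x(t) below are made-up examples chosen only to illustrate how the force inherits its time dependence from the position.

    # A minimal sketch (not from the original text) illustrating equation (4):
    # the force on a charged point is the composition F(x(t)) = q_e * E(x(t)).
    # The field E and the trajectory x_of_t below are invented examples.

    import numpy as np

    q_e = 1.602e-19  # charge of the material point, in coulombs

    def E(x):
        """A hypothetical electric field, growing linearly along the z axis (V/m)."""
        return np.array([0.0, 0.0, 1.0e3 * x[2]])

    def x_of_t(t):
        """A hypothetical trajectory of the point: uniform motion along z (metres)."""
        return np.array([0.0, 0.0, 2.0 * t])

    def F(t):
        """The force exerted on the point at time t, obtained by composition as in (4)."""
        return q_e * E(x_of_t(t))

    print(F(1.5))  # force vector at t = 1.5 s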
Random variable

A random variable is a physics variable that may assume different values when measured, with a certain probability assigned to each possible value. We may think of it as an ordered pair

    \left( X,\; P : X \ni x \mapsto P(x) \in [0, 1] \right),    (5)

where X is a domain and P(x) is the probability of the specific value x ∈ X occurring. The probability is restricted to real values within the line segment between 0 and 1, which is what [0, 1] means. This should not be confused with {0, 1}, which is a set of two elements, 0 and 1, that does not contain anything in between. We will often refer to a specific instance of (5), (x, P(x)) for short, also calling it a random variable. Otherwise, x can be used like a normal physics variable in physics expressions. However, its association with probability carries forward to anything the variable touches, meaning that when used as an argument in successive functions, it makes the functions' outcomes random too. And so, if, say, (x, P(x)) is a random variable, then, for example, (E(x), P(x)) also becomes a random variable, although the resulting probability in (E, P_E(E)) is not the same as P(x). There are ways to evaluate P_E(E), about which we'll learn more in Section 5.

Random variables and probability distributions

A set of all pairs (x, P(x)) is the same as P : X \ni x \mapsto P(x) \in [0, 1], because we can understand a function as a subset of a certain relation, and a relation is a set of pairs; this is one of the definitions of a function. So a theory of random variables is essentially a theory of probability distributions P(x), their possible transformations, evolutions, and ways to evaluate them, and random variables themselves are merely arguments of such probability distributions. The notation used in the theory of Markov processes, as well as the concepts, can sometimes be quite convoluted, and it can get even worse when mathematicians lay their hands on it, so it will help us at times to get down to earth and remember that a random variable is simply a measurable quantity x that is associated with a certain probability of occurring, P(x).

Random variables and mathematicians

The formal mathematical definition of a random variable is that it is a function between a probability space and a measurable space, the probability space being a triple (Ω, E, P), where Ω is a sample space, E is a set of subsets of Ω called events (it has to satisfy a certain list of properties to be such), and P is a probability measure (P(x) dx can be thought of as a measure in our context, assuming that X is a continuous domain). A space is said to be measurable if there exists a collection of its subsets with certain properties. The reason why this formal mathematical definition is a bit opaque is that the process of repetitive measurements that yields probability densities for various values of a measured quantity is quite complex. It is intuitively easy to understand, once you've carried out such measurements yourself, but not so easy to capture in the well-defined mathematical structures that mathematicians like to work with. In particular, mathematicians do not like the frequency interpretation of probability and prefer to work with the Bayesian concept of it, which is easier to formalize.
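Section 5 shows how to evaluate an induced density such as P_E(E) analytically. As a purely numerical illustration, not taken from the original module, the following sketch samples x from an example density (a standard normal), pushes the samples through an example function g(x) = x^2, and histograms the results to approximate the induced density of the new random variable.

    # A minimal numerical illustration (not from the original text) of how a
    # function of a random variable becomes a random variable itself: sample x
    # from a chosen density P(x), apply a function g, and histogram the results
    # to approximate the induced density of g. Section 5 treats this analytically.

    import numpy as np

    rng = np.random.default_rng(0)

    # Samples of (x, P(x)) where P is a standard normal density (an example choice).
    x = rng.normal(loc=0.0, scale=1.0, size=100_000)

    # Any function of x inherits the randomness of x.
    g = x**2

    # Approximate the induced density by a normalized histogram.
    density, edges = np.histogram(g, bins=100, range=(0.0, 9.0), density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])

    # For x standard normal, g = x^2 is chi-squared distributed with one degree
    # of freedom; the histogram above approximates that density.
    print(centres[:3], density[:3])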
Probability density (or distribution)

In the following we will be interested in variables that are sampled from a continuous domain, x ∈ \bar{R} = [−∞, +∞] (the bar over R means that we have compactified R by adding the infinities to it; sometimes we may be interested in what happens when x → ∞, and in particular our integrals will normally run from −∞ to +∞). The associated probabilities will then be described in terms of probability densities such that

    P(x) \ge 0 \quad \text{everywhere}    (6)

and

    \int_{-\infty}^{+\infty} P(x)\, dx = 1,    (7)

the probability of finding x between, say, x_1 and x_2 being

    P(x \in [x_1, x_2]) = \int_{x_1}^{x_2} P(x)\, dx \in [0, 1].    (8)

Equation (7) then means that x has to be somewhere between −∞ and +∞, which is a trivial observation. Nevertheless, the resulting condition, as imposed on P(x), is not so trivial and has important consequences. Conversely, a function that satisfies (6) and (7) can always be thought of as a probability density. If P_1(x) and P_2(x) are two probability densities and they differ on more than a set of measure zero, then (x, P_1(x)) and (x, P_2(x)) are two different random variables. Here we assume that P_1 and P_2 are normal, well-defined functions, not the so-called generalized functions, about which we'll say more later.

Cumulative probability distribution

Once we have a P(x), we can construct another function out of it, namely

    D(x) = \int_{-\infty}^{x} P(x')\, dx'.    (9)

This function is called the distribution of x or, more formally, the distribution of (x, P(x)), this emphasizing that it is a function of x and a functional of P, hence a property of the random variable (x, P(x)). Oftentimes people call P(x) a distribution (or a continuous distribution) too, physicists especially, and there is nothing that can be done about this. This nomenclature is entrenched. In this situation D(x) would be called a cumulative distribution function.

The cumulative distribution D(x) is used, for example, in the computer generation of random numbers with arbitrary (not necessarily uniform) distributions, so it is a useful and important function. Still, in the following I will call P(x) a probability distribution too, and this will tie in nicely with the Schwartz theory of distributions, about which I'll say more later.
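Ahead of the full treatment in Section 5.7, here is a minimal sketch, not from the original module, of how D(x) is used to generate random numbers with a prescribed density: the exponential density P(x) = λ exp(−λx) for x ≥ 0 is an example choice, because its D(x) = 1 − exp(−λx) can be inverted in closed form, so uniform samples pushed through the inverse of D are distributed with density P.

    # A minimal sketch (ahead of Section 5.7, not from the original text) of
    # inverse-transform sampling with the cumulative distribution D(x) of (9).
    # The exponential density P(x) = lam * exp(-lam * x), x >= 0, is an example.

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 2.0  # rate parameter of the example density

    def D_inverse(u):
        """Invert D(x) = 1 - exp(-lam * x): x = -ln(1 - u) / lam for u in [0, 1)."""
        return -np.log1p(-u) / lam

    # Uniform samples on [0, 1) pushed through D^{-1} follow the density P(x).
    u = rng.uniform(size=100_000)
    x = D_inverse(u)

    # Quick checks: the density integrates to one, as in (7), and the sample
    # mean approaches the exponential mean 1/lam.
    grid = np.linspace(0.0, 20.0, 10_001)
    print(np.trapz(lam * np.exp(-lam * grid), grid))  # close to 1
    print(x.mean())                                   # close to 1/lam = 0.5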
