
Randomness As an Equilibrium. Potential and Probability Density¹

Marian Grendar, Jr.∗ and Marian Grendar†

∗Institute of Measurement Science, Slovak Academy of Sciences, Dúbravská cesta 9, Bratislava, 842 19, Slovakia. [email protected] – †[email protected]

Abstract. Randomness is viewed through an analogy between a physical quantity, the density of a gas, and a mathematical construct, the probability density. Boltzmann's deduction of the equilibrium distribution of an ideal gas placed in an external potential field then provides a way of viewing probability density from the perspective of the forces/potentials hidden behind it.

¹ Presented at the 21st MaxEnt workshop, Baltimore, Aug 4th – 9th, 2001. M. Grendár, Jr. and M. Grendár, "Randomness as an Equilibrium. Potential and Probability Density," in Bayesian Inference and Maximum Entropy Methods in Science and Engineering: 21st International Workshop, edited by R. L. Fry, pp. 405–410, American Institute of Physics, Melville, New York, 2002, vol. CP617.

INTRODUCTION

That Probability and Mathematical Statistics can gain a new inspiration from Statistical Physics was already recognized and stressed by E. T. Jaynes, whose lifelong work was devoted in large extent to the interplay of these three areas (see [3]).

In this peek we explore an analogy between gas density and probability density, mentioned at [1]. Boltzmann's deduction of the equilibrium gas distribution in an external potential field then serves as a ground for viewing randomness from an equilibrium perspective. Among the gains of such a weird effort one can find:

1) a clear splitting of any random effect into its stochastic and deterministic parts/potentials;

2) a physical interpretation of any probability distribution, through its relationship to a potential. For instance, the normal distribution is, from the equilibrium perspective, the distribution of the linear harmonic oscillator (LHO). The ubiquity of the LHO in the world of Physics and the ubiquity of the normal distribution in the world of Probability and Statistics are thus related;

3) a guide for the search for causes/forces in the study of a random effect. Revealing the forces behind a random effect can help to understand the effect, and consequently help to predict (or even govern/regulate) its future outcomes;

4) an interpretation of important yet (almost) meaningless statistical quantities, like Shannon's entropy or Fisher's information. Also, MaxEnt has a nice interpretation in the presented context.

RANDOMNESS AS AN EQUILIBRIUM

The fundamental equivalence

Probability mass/density is a mathematical construct. Statistical Physics offers a physical model of the construct: an ideal gas, placed in an external potential field. The probability density function is materialized by the density function of the gas, since the probability of finding a molecule of the gas in an elementary volume is identical with the relative number of molecules of the gas in that volume.

The equivalence of the gas and probability densities lies at the ground of the presented view of randomness. Its equilibrium attribute then comes from Boltzmann's deduction of the equilibrium distribution of an ideal gas placed in an external potential field.

Boltzmann's formula

Boltzmann's deduction of the equilibrium distribution of an ideal gas placed in an external potential field, in a textbook exposition (see for instance [4]), can be for our purposes given as follows.

Molecules of the gas, enclosed in an infinitesimally small cube with sides $dx, dy, dz$, are subjected to two forces: a force $F_1$, represented by a potential function $U(x, y, z)$, and a force $F_2$, induced by the spatial difference of the density $n(x, y, z)$ of the gas. The two forces are given (restricting to the one-dimensional case only) by

$$dF_1 = -n(x)\,\frac{dU(x)}{dx}\,dx$$

and

$$dF_2 = -\frac{dn(x)}{dx}\,dx$$

In the equilibrium the two forces should compensate each other, so

$$dF_1 + dF_2 = 0$$

hence

$$\frac{1}{n(x)}\,\frac{dn(x)}{dx} = -\frac{dU(x)}{dx}$$

The equilibrium condition is a differential equation, solved by

$$n(x) = k\,e^{-U(x)}$$

– the famous Boltzmann formula. The formula expresses the distribution of an ideal gas placed in a potential field, in the equilibrium.
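As a sanity check, the formula can be verified symbolically. The following minimal sketch (sympy assumed; the LHO potential $U(x) = bx^2/2$ is chosen purely for illustration) confirms that $n(x) = k\,e^{-U(x)}$ satisfies the equilibrium condition:

```python
# Minimal symbolic check that n(x) = k*exp(-U(x)) solves the equilibrium
# condition (1/n) dn/dx = -dU/dx; U(x) = b*x**2/2 is an assumed example.
import sympy as sp

x = sp.symbols('x', real=True)
k, b = sp.symbols('k b', positive=True)

U = b * x**2 / 2                 # external potential (example choice)
n = k * sp.exp(-U)               # Boltzmann's formula

lhs = sp.diff(n, x) / n          # (1/n) dn/dx
rhs = -sp.diff(U, x)             # -dU/dx
print(sp.simplify(lhs - rhs))    # prints 0: the equilibrium condition holds
```

The same check goes through for any differentiable $U(x)$, since $d\ln n/dx = -dU/dx$ identically.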

Basics

Recalling the above mentioned fundamental equivalence of the gas and probability densities, Boltzmann's deduction can be transferred into pure probability, and put in the ground of its equilibrium view.

Principles of the physicalization of randomness, admittedly metaphysical, can be formulated as follows:

P1: Randomness has its own, intrinsic field.

P2: Any randomness takes place also in an external field.

Fundamental terms of the vocabulary of the equilibrium view of randomness are built up in analogy with the above mentioned Boltzmann's deduction.

Definition 1. Let X be a random variable, with pdf/pmf $f_X(x)$ defined over a support S. The intrinsic intensity $E^s$ of its potential field is defined as

$$E^s(x) = -\frac{1}{f_X(x)}\,\frac{df_X(x)}{dx} \qquad (1)$$

$E^s$ will also be called the stochastic component of randomness, or stochastic intensity. The intensity of the external field is

$$E^c(x) = -\frac{dU(x)}{dx} \qquad (2)$$

where $U(x)$ is the potential function of the external field. $E^c$ will also be called the causal component of randomness, or causal intensity.

Definition 2. If the intensities of the internal and external fields compensate each other, the random variable attains its equilibrium distribution in the potential $U(x)$:

$$f_X(x) = k\,e^{-U(x)} \qquad (3)$$

where $k = 1/\Omega$ is the normalizing constant and $\Omega = \sum e^{-U(x)}$ is the statistical sum.

Definition 3. The normalized potential $\tilde{U}(x)$ of the random variable X is

$$\tilde{U}(x) = U(x) - \ln(k) \qquad (4)$$

Note 1. According to (3) and (4) there is a unique relationship between the normalized potential and the equilibrium distribution:

$$\tilde{U}(x) = -\ln(f_X(x)) \qquad (5)$$
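A short sketch of Definitions 1 and 3 in code may help; the normal density $n(\mu, \sigma)$ below is an assumed example. It computes the stochastic intensity (1) and the normalized potential (5), and checks that $E^s(x)$ is exactly the gradient of $\tilde{U}(x)$, as (1) and (5) imply:

```python
# Stochastic intensity (eq. 1) and normalized potential (eq. 5) for an
# assumed example pdf, the normal density n(mu, sigma).
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
sigma = sp.symbols('sigma', positive=True)

f = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / (sp.sqrt(2 * sp.pi) * sigma)

E_s = sp.simplify(-sp.diff(f, x) / f)       # eq. (1): gives (x - mu)/sigma**2
U_t = -sp.log(f)                            # eq. (5): LHO potential + constant

print(E_s)
print(sp.simplify(sp.diff(U_t, x) - E_s))   # 0: E^s is the gradient of U~
```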

Causal Intensity and Equilibrium Probability

According to the defining formulas (2), (3), (4) and (5), the equilibrium probability distribution generated by a causal intensity $E^c(x)$, which in the equilibrium is in absolute value identical with the intensity $E^s(x)$, is

$$f_X(x) = e^{\int E^c(x)\,dx + c}$$

where c is the integration/normalization constant.

In this section we discuss the simplest forms of causal intensities and their corresponding equilibrium distributions, together with their normalized potentials.

1) Zero causal intensity $E^c(x) = 0$ generates the uniform distribution. Equivalently: the uniform distribution reveals the absence of a causal component of randomness. The uniform distribution is the intrinsic distribution of randomness, left alone. The normalized potential of the uniform N-element discrete distribution is $\tilde{U}(x) = \ln(N)$.

2) Constant causal intensity $E^c(x) = -a$ generates the exponential distribution Exp(a). Equivalently: the exponential distribution reveals the presence of a constant causal intensity in a studied effect. The normalized potential of Exp(a) is $\tilde{U}(x) = ax - \ln(a)$.

3) Linear causal intensity $E^c(x) = -bx$ generates the normal distribution $n(0, \sqrt{1/b})$. Equivalently: the normal distribution reveals a linear causal intensity. The normalized potential of $n(\mu, \sigma)$ is the linear harmonic oscillator potential $\tilde{U}(x) = \frac{(x-\mu)^2}{2\sigma^2} + \ln(\sqrt{2\pi}\,\sigma)$.

4) A superposition of linear and constant causal intensities $E^c(x) = -a - bx$ generates the distribution $f_X(x) = k\,e^{-ax - b\frac{x^2}{2}}$, where k is a normalizing constant.

5) Causal intensity of the form $E^c(x) = -\frac{\Gamma'(x+1)}{\Gamma(x+1)} + \ln(\lambda)$ generates the Poisson distribution Poi(λ). Equivalently: the Poisson distribution reveals the presence of the above force. The normalized potential of the Poisson distribution is $\tilde{U}(x) = \lambda - \ln(\lambda)\,x + \ln(\Gamma(x+1))$.

6) Causal intensity $E^c(x) = -\frac{a}{x} - b$ generates the gamma distribution $\Gamma(1-a, \frac{1}{b})$. Equivalently: the gamma distribution reveals the presence of a superposition of constant and reciprocal causal intensities. The normalized potential of the $\Gamma(\alpha, \beta)$ distribution is $\tilde{U}(x) = (1-\alpha)\ln(x) + \frac{x}{\beta} + \ln(\Gamma(\alpha)\,\beta^{\alpha})$.

7) Pearson's system of distributions, an appreciated tool for the approximation of various densities (see [5]), is defined by the differential equation

$$\frac{1}{f(x)}\,\frac{\partial f(x)}{\partial x} = \frac{x - a}{b_0 + b_1 x + b_2 x^2}$$

As can be seen from (2), (4) and (5), the right-hand side of the differential equation expresses the causal intensity $E^c(x)$ which generates Pearson's system.
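The correspondence between causal intensities and equilibrium distributions can be replayed mechanically from $f_X(x) = e^{\int E^c(x)\,dx + c}$. The sketch below (sympy assumed; items 2 and 3 of the list serve as examples, with supports chosen accordingly) recovers the exponential and normal densities from their intensities:

```python
# Recover equilibrium densities from causal intensities via
# f(x) ~ exp(int E^c dx), normalized over the support.
import sympy as sp

x = sp.symbols('x', real=True)
a, b = sp.symbols('a b', positive=True)

def equilibrium(Ec, support):
    """Unnormalized density exp(int Ec dx); normalization absorbs c."""
    g = sp.exp(sp.integrate(Ec, x))
    Z = sp.integrate(g, (x, *support))
    return sp.simplify(g / Z)

# item 2: constant intensity -a on (0, oo)  ->  Exp(a): a*exp(-a*x)
print(equilibrium(-a, (0, sp.oo)))
# item 3: linear intensity -b*x on (-oo, oo)  ->  n(0, sqrt(1/b))
print(equilibrium(-b * x, (-sp.oo, sp.oo)))
```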

MaxEnt: equilibrium of causal and stochastic intensities

First, we recall a definition of the MaxEnt task (see [1]), simplified, without loss of generality, to the one-potential-function case.

Definition 4. ME task with u(x): Given a random sample and a known potential function u(x), the maximum entropy task is to find the most entropic distribution p consistent with the u-moment consistency condition.

There, we made use of the u-moment consistency condition:

Definition 5. Let $\mu(u)$, $m(u)$ be the u-moment and the sample u-moment, respectively. Then the requirement of their equality

$$\mu(u) = m(u)$$

is called the u-moment consistency condition.

Also, the following well-known result (see for instance [1]) should be recalled: the most entropic distribution p satisfying the u-moment consistency condition is of the simple exponential form, with pmf/pdf $f_X(x \mid \lambda) = k(\lambda)\,e^{-\lambda u(x)}$, where λ is such that the u-moment consistency condition is satisfied.

The MaxEnt task with the u(x) potential is thus solved by the equilibrium distribution which has the normalized potential $\tilde{U}(x) = \lambda u(x) - \ln(k)$.

Thus, the presented view of randomness offers an interpretation of Shannon's entropy maximization as a search for such a probability distribution which balances the intrinsic field (the stochastic component of randomness) with a given external potential field u(x) (the causal component of randomness), multiplied by a multiplicative constant λ. The presence of λ can appear at first glance disturbing, but – as we claim at [2] – it is just the distinctive feature of Shannon's entropy maximization, and as such it can give an intuitive answer to the eternal 'Why MaxEnt?' question (for the argument, please see [2]).

It is also worth noting that Shannon's entropy itself can be interpreted in the proposed context as a normalized potential. Fisher's information number of a simple exponential form distribution also has a very nice meaning – it is the mean squared gradient of the potential.
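A numerical illustration of Definitions 4 and 5 may be useful. In the sketch below (an assumption for illustration, not the authors' computation), the potential is $u(x) = x$ on $(0, \infty)$ with toy data; λ is found so that the u-moment of $f_X(x \mid \lambda) = k(\lambda)\,e^{-\lambda u(x)}$ matches the sample u-moment. For this u the solution is known in closed form, λ = 1/mean, which serves as a check:

```python
# MaxEnt task, numerically: match the u-moment of k(lam)*exp(-lam*u(x))
# to the sample u-moment. Assumed example: u(x) = x on (0, inf).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

u = lambda x: x
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=1000)   # toy data
m_u = np.mean(u(sample))                         # sample u-moment

def u_moment(lam):
    """u-moment of the simple exponential form density on (0, inf)."""
    Z, _ = quad(lambda x: np.exp(-lam * u(x)), 0, np.inf)
    num, _ = quad(lambda x: u(x) * np.exp(-lam * u(x)), 0, np.inf)
    return num / Z

lam = brentq(lambda l: u_moment(l) - m_u, 1e-3, 1e3)
print(lam, 1.0 / m_u)    # the two agree: for u(x) = x, lambda = 1/mean
```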
ACKNOWLEDGEMENTS

It is a pleasure to thank Aleš Gottvald, George Judge and Viktor Witkovský for valuable discussions and/or comments on an earlier version of this work. The thanks extend to the participants of the MaxEnt workshop, especially to Ariel Caticha, Peter Cheeseman, Robert Fry, Samuel Kotz, Carlos Rodríguez and Alberto Solana, for their interest and/or comments on the presented work. Carlos Rodríguez pointed us to an interesting work by Kenneth Hanson and Gregory Cunningham (see [6]). The work was in part supported by the grant VEGA 1/7295/20 from the Scientific Grant Agency of the Slovak Republic.

REFERENCES

1. M. Grendár, Jr., and M. Grendár, "MiniMax Entropy and Maximum Likelihood: Complementarity of Tasks, Identity of Solutions," in Bayesian Inference and Maximum Entropy Methods in Science and Engineering, edited by A. Mohammad-Djafari, pp. 49–61, AIP Press, New York, 2001. Available on-line at http://xxx.lanl.gov/abs/math.PR/0009129.

2. M. Grendár and M. Grendár, Jr., "Why Maximum Entropy? A Non-axiomatic Approach," in Bayesian Inference and Maximum Entropy Methods in Science and Engineering, edited by R. Fry, AIP Press, New York; this volume.

3. E. T. Jaynes, Papers on Probability, Statistics and Statistical Physics, edited by R. D. Rosenkrantz, D. Reidel Publishing Co., Dordrecht, 1983. Also, visit bayes.wustl.edu, a web-page maintained by Larry Bretthorst.

4. A. N. Matveyev, Molecular Physics, Vysshaya Shkola, Moscow, 1987 (in Russian).

5. A. Stuart and J. K. Ord, Kendall's Advanced Theory of Statistics, Vol. 1: Distribution Theory, Edward Arnold, London, 1994.

6. K. M. Hanson and G. S. Cunningham, "The Hard Truth," in Maximum Entropy and Bayesian Methods, edited by J. Skilling and S. S. Sibisi, pp. 157–164, Kluwer Academic, Dordrecht, 1996.