A short tutorial on the path integral formalism for stochastic differential equations

Dani Martí
Group for Neural Theory, École Normale Supérieure, Paris, France

This is a very short and incomplete introduction to the formalism of path integrals for stochastic systems. The tutorial is strongly inspired by the very nice introduction by Chow and Buice (2015), which is far more complete than this text and whose reading I highly recommend if you are interested in the subject.

Goal: Provide a general formalism to compute moments, correlations, and transition probabilities for any stochastic process, regardless of its nature. The formalism should allow for a systematic perturbative treatment.

Preliminaries: Moment generating functionals

Single variable

Consider a single random variable $X$ with probability density function $p(x)$, not necessarily normalized. The $n$-th moment of $X$ is defined as

    \langle X^n \rangle \equiv \frac{\int x^n p(x)\, dx}{\int p(x)\, dx}.

Instead of computing the moments one by one, we can make use of the moment-generating function, defined as

    Z(\lambda) \equiv \int e^{\lambda x} p(x)\, dx.    (1)

With this definition, moments can be computed by simply taking derivatives of $Z(\lambda)$ and dividing the result by $Z(0)$:

    \langle X^n \rangle = \frac{1}{Z(0)} \frac{d^n}{d\lambda^n} Z(\lambda) \Big|_{\lambda=0}.    (2)

The expansion of $Z(\lambda)$ in a power series then reads

    Z(\lambda) = Z(0) \sum_{m=0}^{\infty} \frac{\lambda^m}{m!} \langle X^m \rangle.    (3)

• Example: moment-generating function for a Gaussian random variable $X$ with mean $\mu$ and variance $\sigma^2$. Plugging the (unnormalized) Gaussian probability density $p(x) = \exp[-(x-\mu)^2/(2\sigma^2)]$ into Eq. (1) gives

    Z(\lambda) = \int_{-\infty}^{\infty} \exp\left[ -\frac{(x-\mu)^2}{2\sigma^2} + \lambda x \right] dx,    (4)

which can be evaluated by `completing the square' in the exponent,

    -\frac{1}{2\sigma^2}(x-\mu)^2 + \lambda x = -\frac{1}{2\sigma^2}(x - \mu - \lambda\sigma^2)^2 + \lambda\mu + \frac{\lambda^2\sigma^2}{2}.

Carrying out the integral over $x$ yields

    Z(\lambda) = Z(0) \exp\left( \lambda\mu + \tfrac{1}{2}\lambda^2\sigma^2 \right),

where $Z(0) = \int_{-\infty}^{\infty} e^{-x^2/(2\sigma^2)}\, dx = \sqrt{2\pi\sigma^2}$ is the normalization factor. Once we have $Z(\lambda)$ we can easily compute the moments using Eq. (2). Thus, the first and second moments of $X$ are

    \langle X \rangle = \frac{d}{d\lambda} \exp\left( \lambda\mu + \tfrac{1}{2}\lambda^2\sigma^2 \right) \Big|_{\lambda=0} = \mu,

    \langle X^2 \rangle = \frac{d^2}{d\lambda^2} \exp\left( \lambda\mu + \tfrac{1}{2}\lambda^2\sigma^2 \right) \Big|_{\lambda=0} = \mu^2 + \sigma^2.

The cumulants of the random variable $X$, denoted by $\langle\langle X^n \rangle\rangle$, are defined by

    \langle\langle X^n \rangle\rangle = \frac{d^n}{d\lambda^n} \ln \frac{Z(\lambda)}{Z(0)} \Big|_{\lambda=0}.    (5)

Equivalently, the expansion of $\ln Z(\lambda)$ in a power series is

    \ln Z(\lambda) = \ln Z(0) + \sum_{n=1}^{\infty} \frac{\lambda^n}{n!} \langle\langle X^n \rangle\rangle.    (6)

The relation between cumulants and moments follows from equating the expansion (6) to the logarithm of the expansion (3),

    \sum_{n=1}^{\infty} \frac{\lambda^n}{n!} \langle\langle X^n \rangle\rangle = \ln \sum_{n=0}^{\infty} \frac{\lambda^n}{n!} \langle X^n \rangle = \ln\left( 1 + \sum_{n=1}^{\infty} \frac{\lambda^n}{n!} \langle X^n \rangle \right),    (7)

expanding the right-hand side of Eq. (7), and equating coefficients of equal powers of $\lambda$. The first three cumulants are $\langle\langle X \rangle\rangle = \langle X \rangle$, $\langle\langle X^2 \rangle\rangle = \langle X^2 \rangle - \langle X \rangle^2$, and $\langle\langle X^3 \rangle\rangle = \langle X^3 \rangle - 3\langle X^2 \rangle\langle X \rangle + 2\langle X \rangle^3$.

• Example: cumulant-generating function for a Gaussian random variable $X$ with mean $\mu$ and variance $\sigma^2$. We just compute the logarithm of the moment-generating function found in the previous example,

    \ln Z(\lambda) = \ln Z(0) + \lambda\mu + \tfrac{1}{2}\lambda^2\sigma^2,

and identify the cumulants through relation (6). We have $\langle\langle X \rangle\rangle = \mu$, $\langle\langle X^2 \rangle\rangle = \sigma^2$, and $\langle\langle X^n \rangle\rangle = 0$ for $n > 2$.
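Before moving to several variables, Eqs. (2) and (5) can be checked symbolically for the Gaussian example. The following is a minimal sketch in Python using sympy; the variable names (lam, mu, sigma) are illustrative choices, not notation from the text above.

```python
import sympy as sp

lam, mu, sigma = sp.symbols('lambda mu sigma', real=True, positive=True)

# Normalized MGF of a Gaussian: Z(lambda)/Z(0) = exp(lambda*mu + lambda^2*sigma^2/2)
Z = sp.exp(lam * mu + sp.Rational(1, 2) * lam**2 * sigma**2)

# Moments via Eq. (2): n-th derivative of Z at lambda = 0
m1 = sp.diff(Z, lam).subs(lam, 0)        # -> mu
m2 = sp.diff(Z, lam, 2).subs(lam, 0)     # -> mu**2 + sigma**2

# Cumulants via Eq. (5): n-th derivative of ln Z at lambda = 0
c1 = sp.simplify(sp.diff(sp.log(Z), lam).subs(lam, 0))     # -> mu
c2 = sp.simplify(sp.diff(sp.log(Z), lam, 2).subs(lam, 0))  # -> sigma**2
c3 = sp.simplify(sp.diff(sp.log(Z), lam, 3).subs(lam, 0))  # -> 0 for a Gaussian

print(m1, sp.expand(m2), c1, c2, c3)
```

The output reproduces the results derived above: $\mu$, $\mu^2+\sigma^2$ for the first two moments, and $\mu$, $\sigma^2$, $0$ for the first three cumulants.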
Multidimensional variables

Let $X$ be a random variable with $r$ components $X_1, X_2, \ldots, X_r$, and with joint probability density $p(x) \equiv p(x_1, x_2, \ldots, x_r)$. The moments of $X$ are

    \langle X_1^{m_1} X_2^{m_2} \cdots X_r^{m_r} \rangle = \frac{\int x_1^{m_1} x_2^{m_2} \cdots x_r^{m_r}\, p(x)\, dx}{\int p(x)\, dx},

where $dx \equiv dx_1 dx_2 \cdots dx_r$. The moment-generating function of $X$ is a straightforward generalization of the one-dimensional version,

    Z(\lambda) \equiv \int \exp(\lambda^T x)\, p(x)\, dx,    (8)

where $\lambda$ is now an $r$-dimensional vector and $\lambda^T x$ is just the dot product between $\lambda$ and $x$. Moments are generated with

    \langle X_1^{m_1} X_2^{m_2} \cdots X_r^{m_r} \rangle = \frac{1}{Z(0)} \frac{\partial^{m_1}}{\partial\lambda_1^{m_1}} \frac{\partial^{m_2}}{\partial\lambda_2^{m_2}} \cdots \frac{\partial^{m_r}}{\partial\lambda_r^{m_r}} Z(\lambda) \Big|_{\lambda=0}.    (9)

• Example: moment-generating function for a random Gaussian vector $X$ with zero mean and covariance $\Sigma = K$, that is, $X \sim \mathcal{N}(0, K)$. If we drop, as usual, the normalization factor of the probability density, we have

    Z(\lambda) = \int \exp\left( -\tfrac{1}{2} x^T K^{-1} x + \lambda^T x \right) dx.    (10)

The matrix $K$ is symmetric and therefore diagonalizable (if $K$ happens to be non-symmetric, the contribution of its antisymmetric part to the quadratic form $x^T K^{-1} x$ vanishes). Let us denote by $\{u_i\}$ the basis of unit eigenvectors of $K$, and by $U$ the matrix that results from arranging the set $\{u_i\}$ in columns. Because $\{u_i\}$ is an orthonormal set, $U^T U = 1$. The diagonalization condition can be summarized by $KU = UD$, where $D$ is the diagonal matrix with the eigenvalue $\sigma_i^2$ at its $i$-th diagonal entry. In terms of the new basis, $K^{-1} = U D^{-1} U^T$, $x = \sum_i c_i u_i = Uc$, and $\lambda = \sum_i d_i u_i = Ud$, where $c$ and $d$ are coefficient vectors. We have

    \exp\left( -\tfrac{1}{2} x^T K^{-1} x + \lambda^T x \right) = \exp\left( -\tfrac{1}{2} c^T D^{-1} c + d^T c \right) = \prod_i \exp\left( -\frac{c_i^2}{2\sigma_i^2} + c_i d_i \right),

and therefore

    Z(\lambda) = \int \prod_i \exp\left( -\frac{c_i^2}{2\sigma_i^2} + c_i d_i \right) |U|\, dc = \prod_i \int \exp\left( -\frac{c_i^2}{2\sigma_i^2} + c_i d_i \right) dc_i
               = \prod_i \left( 2\pi\sigma_i^2 \right)^{1/2} \exp\left( \frac{d_i^2 \sigma_i^2}{2} \right) = |2\pi D|^{1/2} \exp\left( \tfrac{1}{2} d^T D d \right)
               = |2\pi K|^{1/2} \exp\left( \tfrac{1}{2} \lambda^T K \lambda \right),    (11)

where in the first line we used that the Jacobian $|U|$ is 1 for orthogonal transformations, and where in the second line we completed the square in each of the decoupled factors.

Although the Gaussian example is just a particular case, it is an important one, because it can be computed in closed form and because it is the basis for many perturbative schemes. The moment-generating function $Z(\lambda)$ is the exponential of a quadratic form in $\lambda$, which implies that only moments of even order survive (see Eq. (9)). In particular, we have

    \langle x_a x_b \rangle = \frac{1}{Z(0)} \frac{\partial}{\partial\lambda_a} \frac{\partial}{\partial\lambda_b} Z(\lambda) \Big|_{\lambda=0} = K_{ab},

where $a, b$ are any two indices of the components of $x$. Moreover, it is not difficult to convince oneself that for Gaussian vectors any moment of even order can be obtained as a sum over all possible pairings between variables, for example:

    \langle x_a x_b x_c x_d \rangle = \langle x_a x_b \rangle \langle x_c x_d \rangle + \langle x_a x_c \rangle \langle x_b x_d \rangle + \langle x_a x_d \rangle \langle x_b x_c \rangle.

This property is usually referred to as Wick's theorem, or Isserlis's theorem.
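Wick's theorem is easy to test numerically. The following Monte Carlo sketch compares a fourth-order moment with the sum over pairings; the covariance matrix and the index choice below are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[1.0, 0.3, 0.1],
              [0.3, 2.0, 0.5],
              [0.1, 0.5, 1.5]])

# Draw samples x ~ N(0, K)
x = rng.multivariate_normal(mean=np.zeros(3), cov=K, size=2_000_000)

# Fourth-order moment <x_a x_b x_c x_d> (repeated indices are allowed)
a, b, c, d = 0, 1, 2, 1
empirical = np.mean(x[:, a] * x[:, b] * x[:, c] * x[:, d])

# Sum over the three pairings: <ab><cd> + <ac><bd> + <ad><bc>
wick = K[a, b] * K[c, d] + K[a, c] * K[b, d] + K[a, d] * K[b, c]

print(empirical, wick)  # the two values agree up to sampling error
```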
Fields

We can extend the formalism from finite-dimensional systems to functions defined on some domain $[0,T]$. The basic idea is to discretize the domain interval into $n$ segments of length $h$ such that $T = nh$, and then take the limits $n \to \infty$ and $h \to 0$ while keeping $T = nh$ constant. Under this limit the originally discrete set $\{t_0, t_1, \ldots, t_n = T\}$ becomes dense on the interval $[0,T]$. Any $t$ can then be approximated as $t = jh$, for some $j = 0, 1, \ldots, n$, and we can identify

    x_j \longrightarrow x(t), \qquad \lambda_j \longrightarrow \lambda(t), \qquad K_{ij} \longrightarrow K(s,t).

Now instead of a moment-generating function we'll have a moment-generating functional, an object that maps functions into real numbers. For the Gaussian case we have the functional version of Eqs. (10) and (11),

    Z[\lambda(t)] = \int \mathcal{D}x(s) \exp\left( -\frac{1}{2} \iint x(u) K^{-1}(u,v)\, x(v)\, du\, dv + \int \lambda(u) x(u)\, du \right)
                  = Z[0] \exp\left( \frac{1}{2} \iint \lambda(u) K(u,v) \lambda(v)\, du\, dv \right),    (12)

where we use square brackets to denote a functional, and where $Z[0]$ is the value of the generating functional with the external source $\lambda(t)$ set to 0. We also defined the measure for integration over functions as

    \mathcal{D}x(t) \equiv \lim_{n\to\infty} \prod_{i=0}^{n} dx_i.

The integral in Eq. (12) is an example of a path integral, or functional integral (Cartier and DeWitt-Morette, 2010).

The presentation here is obviously very handwavy, and there are several ill-defined quantities lying around, like, for instance, the prefactor $Z[0] = \lim_{n\to\infty} |2\pi K|^{1/2}$, which is formally infinite. Fortunately, the moments we derive from the moment-generating functional are well defined.

Now we need to compute the moments, for which we need the notion of functional derivative. Although functional derivatives can be defined rigorously, here we only need to remember the following axioms:

    \frac{\delta \lambda(t)}{\delta \lambda(s)} = \delta(t-s) \qquad \left( \text{equivalent to } \frac{\partial \lambda_j}{\partial \lambda_i} = \delta_{ij} \right),

    \frac{\delta}{\delta \lambda(t)} \int \lambda(s) x(s)\, ds = x(t) \qquad \left( \text{equivalent to } \frac{\partial}{\partial \lambda_i} \sum_j \lambda_j x_j = x_i \right),

where $\delta(\cdot)$ is the Dirac delta, or point-mass functional, and $\delta_{ij}$ is the Kronecker delta: $\delta_{ij} = 1$ if $i = j$, and 0 otherwise. As usual, moments are obtained by taking functional derivatives of the moment-generating functional:

    \left\langle \prod_i x(t_i) \right\rangle = \frac{1}{Z[0]} \prod_i \frac{\delta}{\delta \lambda(t_i)} Z[\lambda] \Big|_{\lambda(t)=0}.

• Example: given the moment-generating functional of a Gaussian process, Eq. (12), the two-point correlation function is

    \langle x(t) x(s) \rangle = \frac{1}{Z[0]} \frac{\delta}{\delta \lambda(t)} \frac{\delta}{\delta \lambda(s)} Z[\lambda] \Big|_{\lambda(u)=0} = K(t,s).

More generally, any (even-order) moment will be of the form

    \left\langle \prod_i x(t_i) \right\rangle = \frac{1}{Z[0]} \prod_i \frac{\delta}{\delta \lambda(t_i)} Z[\lambda] \Big|_{\lambda(t)=0} = \sum_{\text{all possible pairings}} K(t_{i_1}, t_{i_2}) \cdots K(t_{i_{2s-1}}, t_{i_{2s}}),

because of Wick's theorem.
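In the discretized picture used to define the path integral, the two-point result $\langle x(t) x(s) \rangle = K(t,s)$ reduces to an ordinary covariance identity, which can be checked by sampling. Here is a minimal sketch assuming an illustrative squared-exponential kernel and grid, neither of which comes from the text above.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100
t = np.linspace(0.0, T, n)

# Discretized kernel K(t, s), with a small jitter for numerical stability
K = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / 0.1**2) + 1e-9 * np.eye(n)

# Draw zero-mean Gaussian "paths" x(t) with covariance K
paths = rng.multivariate_normal(np.zeros(n), K, size=50_000)

# Empirical two-point function <x(t)x(s)> compared against K(t, s)
emp = paths.T @ paths / paths.shape[0]
print(np.max(np.abs(emp - K)))  # small, up to sampling error
```

As the grid is refined ($n \to \infty$ with $T$ fixed), the sampled vectors play the role of the paths $x(t)$ in Eq. (12), which is precisely the limiting construction described in the Fields section.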
