A. Operator Relations
A.1 Theorem 1
Let A and B be two non-commuting operators, then [1]
exp(αA) B exp(−αA) = B + α[A, B] + (α²/2!)[A, [A, B]] + ...   (A.1)

Proof. Let

f1(α) = exp(αA) B exp(−αA) ,   (A.2)
then one can expand f1 in a Taylor series about the origin. We first evaluate the derivatives:

f1′(α) = exp(αA)(AB − BA) exp(−αA) ,  so  f1′(0) = [A, B] .   (A.3)

Similarly,

f1″(α) = exp(αA)(A[A, B] − [A, B]A) exp(−αA) ,  so that  f1″(0) = [A, [A, B]] .   (A.4)

Now, we write the Taylor expansion

f1(α) = f1(0) + α f1′(0) + (α²/2!) f1″(0) + ... ,   (A.5)

or

exp(αA) B exp(−αA) = B + α[A, B] + (α²/2!)[A, [A, B]] + ...   (A.6)

A particular case is when [A, B] = c, where c is a c-number; then
exp(αA) B exp(−αA) = B + αc ,   (A.7)
in which case exp(αA) acts as a displacement operator.
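As a quick numerical illustration (not part of the original text), one can check Theorem 1 on finite matrices; the sketch below, assuming NumPy and SciPy are available, uses the Pauli matrices σz and σx as stand-ins for A and B and compares exp(αA) B exp(−αA) with partial sums of the series (A.1).

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z, stands in for A
    B = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x, stands in for B
    alpha = 0.1

    def comm(X, Y):
        return X @ Y - Y @ X

    lhs = expm(alpha * A) @ B @ expm(-alpha * A)

    # partial sums of B + alpha[A,B] + (alpha^2/2!)[A,[A,B]] + ...
    rhs, nested, fact = B.copy(), B.copy(), 1.0
    for k in range(1, 12):
        nested = comm(A, nested)     # k-fold nested commutator [A,[A,...,[A,B]]]
        fact *= k
        rhs = rhs + (alpha**k / fact) * nested

    print(np.max(np.abs(lhs - rhs)))  # ~1e-16: the series and the exact result agree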
A.2 Theorem 2: The Baker–Campbell–Hausdorff Relation

Let A and B be two non-commuting operators such that
[A, [A, B]] = [B, [A, B]] = 0 ,   (A.8)
then

exp[α(A + B)] = exp(αA) exp(αB) exp(−(α²/2)[A, B])   (A.9)
             = exp(αB) exp(αA) exp((α²/2)[A, B]) .
Proof. Define

f2(α) ≡ exp(αA) exp(αB) .   (A.10)

Then

df2(α)/dα = [A + exp(αA) B exp(−αA)] f2(α)   (A.11)
          = (A + B + α[A, B]) f2(α) ,
where in the last step we used (A.6). Also, from the definition of f2(α), we can write
df2(α)/dα = exp(αA) A exp(αB) + exp(αA) exp(αB) B   (A.12)
          = exp(αA) exp(αB)[exp(−αB) A exp(αB) + B]
          = f2(α)(A + B + α[A, B]) .
By comparing (A.11) and (A.12), we see that f2(α) commutes with (A + B + α[A, B]); thus one can integrate as a c-number differential equation, getting

f2(α) = exp[(A + B)α + (α²/2)[A, B]] = exp[α(A + B)] exp[(α²/2)[A, B]] ,   (A.13)
thus obtaining the desired result.
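A minimal numerical sketch of (A.9), again assuming NumPy/SciPy: the 3×3 nilpotent matrices below satisfy condition (A.8) exactly, so the two sides of the relation agree to machine precision.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
    B = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
    C = A @ B - B @ A        # [A, B]; here it commutes with both A and B
    alpha = 0.7

    lhs = expm(alpha * (A + B))
    rhs = expm(alpha * A) @ expm(alpha * B) @ expm(-(alpha**2 / 2) * C)
    print(np.max(np.abs(lhs - rhs)))   # ~1e-16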
Another application of Theorem 1 is obtained by taking

A = a†a ≡ n ,  B = a or a† .   (A.14)
As

[n, a] = −a   (A.15)
and the higher-order commutators also give a with alternating signs, we get

exp(αn) a exp(−αn) = a − αa + (α²/2!)a − ... = exp(−α) a .   (A.16)

Similarly,

exp(αn) a† exp(−αn) = exp(α) a† .   (A.17)
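These rotation-like relations can also be checked numerically. In the sketch below (an illustration, not from the text), a and n are oscillator matrices truncated to N levels; since [n, a] = −a holds exactly even after truncation, (A.16) is exact for the truncated matrices too.

    import numpy as np
    from scipy.linalg import expm

    N = 12
    a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # truncated annihilation operator
    n = a.T @ a                                    # number operator, diag(0,...,N-1)
    alpha = 0.3

    lhs = expm(alpha * n) @ a @ expm(-alpha * n)
    print(np.max(np.abs(lhs - np.exp(-alpha) * a)))   # ~1e-15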
A.3 Theorem 3: Similarity Transformation
exp(αA) f(B) exp(−αA) = f(exp(αA) B exp(−αA)) .   (A.18)

Proof. We start with the following identity:

[exp(αA) B exp(−αA)]^n = exp(αA) B exp(−αA) exp(αA) B exp(−αA) ... = exp(αA) B^n exp(−αA) .

Then, for any function f(B) that can be expanded in a power series, Theorem 3 follows.

As an interesting application, let us calculate

exp(−αa† + α*a) f(a, a†) exp(αa† − α*a)
  = f[exp(−αa† + α*a) a exp(αa† − α*a), exp(−αa† + α*a) a† exp(αa† − α*a)]
  = f(a + α, a† + α*) .

Also

exp(−αa†) f(a, a†) exp(αa†) = f(a + α, a†) ,   (A.19)
exp(α*a) f(a, a†) exp(−α*a) = f(a, a† + α*) ,   (A.20)
exp(αn) f(a, a†) exp(−αn) = f[a exp(−α), a† exp(α)] .   (A.21)

Other properties: One can easily show that

[a, (a†)^l] = l (a†)^(l−1) = d(a†)^l/da† ,   (A.22)
[a†, a^l] = −l a^(l−1) = −d(a^l)/da .

A more general version of the above relations holds for a function f(a, a†) that may be expanded in a power series of a and a†:

[a, f(a, a†)] = ∂f(a, a†)/∂a† ,   (A.23)
[a†, f(a, a†)] = −∂f(a, a†)/∂a .   (A.24)
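A short numerical sketch of Theorem 3, with f chosen as the matrix exponential and random Hermitian matrices standing in for A and B (both choices are assumptions made purely for this illustration):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    M = rng.normal(size=(4, 4)); A = (M + M.T) / 2    # random Hermitian stand-in for A
    M = rng.normal(size=(4, 4)); B = (M + M.T) / 2    # random Hermitian stand-in for B
    alpha = 0.5

    U, Uinv = expm(alpha * A), expm(-alpha * A)
    f = expm                                          # choose f(X) = exp(X)

    lhs = U @ f(B) @ Uinv
    rhs = f(U @ B @ Uinv)
    print(np.max(np.abs(lhs - rhs)))                  # ~1e-13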
Reference

1. Louisell, W.H.: Quantum Statistical Properties of Radiation. John Wiley, New York (1973)

B. The Method of Characteristics
We have a first-order partial differential equation [1]
P p + Q q = R ,   (B.1)
where P = P(x, y, z), Q = Q(x, y, z), R = R(x, y, z), and
p ≡ ∂z/∂x ,  q ≡ ∂z/∂y ,   (B.2)
and we wish to find a solution of (B.1) of the form
z = f(x, y) . (B.3)
The general solution of (B.1) is
F(u, v) = 0 ,   (B.4)
where F is an arbitrary function, and
u(x, y, z) = c1 ,   (B.5)
v(x, y, z) = c2 ,
is a solution of the equations

dx/P = dy/Q = dz/R .   (B.6)

Proof. If (B.5) are solutions of (B.6), then the equations
(∂u/∂x) dx + (∂u/∂y) dy + (∂u/∂z) dz = 0 ,   (B.7)

and

dx/P = dy/Q = dz/R ,

must be compatible; thus, we must have
P ux + Q uy + R uz = 0 ,   (B.8)
and similarly for v,

P vx + Q vy + R vz = 0 .   (B.9)

On the other hand, if x and y are independent variables and z = z(x, y), then from (B.5) we get
ux + uz (∂z/∂x) = 0 ,   (B.10)
uy + uz (∂z/∂y) = 0 ,
and substituting (B.10) into (B.8), we get

(−P ∂z/∂x − Q ∂z/∂y + R) ∂u/∂z = 0 ,
and (B.1) is satisfied.

The second part of the proof is to show that the general solution of (B.1) is

F(u, v) = 0 .   (B.11)

From (B.11), one writes

∂F/∂x = (∂F/∂u)(∂u/∂x + (∂u/∂z)(∂z/∂x)) + (∂F/∂v)(∂v/∂x + (∂v/∂z)(∂z/∂x)) = 0 ,   (B.12)
∂F/∂y = (∂F/∂u)(∂u/∂y + (∂u/∂z)(∂z/∂y)) + (∂F/∂v)(∂v/∂y + (∂v/∂z)(∂z/∂y)) = 0 .   (B.13)

We finally notice that (B.12) and (B.13) are satisfied considering (B.10) and the analogous relations for v.

Example. Find the general solution of the equation

x² ∂z/∂x + y² ∂z/∂y = (x + y) z .   (B.14)

In this case
P = x² ,  Q = y² ,  R = (x + y) z ,   (B.15)
and we have to find the solution of

dx/x² = dy/y² = dz/((x + y) z) .   (B.16)
Integrating first

dx/x² = dy/y² ,

we get

−x⁻¹ + y⁻¹ = c1 .   (B.17)

On the other hand,
(dx − dy)/(x² − y²) = ((x²/y² − 1) dy)/(x² − y²) = dy/y² = dz/((x + y) z) ,
from where we get

(x − y)/z = c2 = v .   (B.18)

Combining (B.17) and (B.18), we get

xy/z = c1 = u ,   (B.19)

so the general solution can be put as

F(xy/z, (x − y)/z) = 0 ,   (B.20)
or if we write (B.20) in the equivalent form
u = g(v) , (B.21)
then the solution is

xy/z = g((x − y)/z) .   (B.22)
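This solution is easy to verify symbolically. The sketch below, assuming SymPy is available, checks the hypothetical particular choice g(v) = v + 1, for which xy/z = (x − y)/z + 1 gives z = xy − x + y, against the original equation (B.14); any other smooth g can be tested the same way.

    import sympy as sp

    x, y = sp.symbols('x y')
    z = x*y - x + y            # candidate solution from g(v) = v + 1
    pde = x**2 * sp.diff(z, x) + y**2 * sp.diff(z, y) - (x + y) * z
    print(sp.simplify(pde))    # prints 0, so (B.14) is satisfied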
Reference
1. Sneddon, I.: Elements of Partial Differential Equations. McGraw-Hill, New York (1957)

C. Proof
In this Appendix, we show the equation

[⟨Σ_{j≠k} δ(t − tj) δ(t′ − tk)⟩_S − R²] ρaa² = −pR δ(t − t′) ρaa² .   (C.1)
For regular pumping, one can put tj = t0 + jτ, where τ is the constant time interval between the atoms and t0 some arbitrary time origin [1]. In this case, there are no pumping fluctuations, and therefore there are no correlations between the products of delta functions, that is,

⟨Σ_{j,k} δ(t − tj) δ(t′ − tk)⟩_S = Σ_j ⟨δ(t − tj)⟩_S Σ_k ⟨δ(t′ − tk)⟩_S = R² .   (C.2)

Now, we split the l.h.s. of the above equation in two parts:

⟨Σ_{j=k} δ(t − tj) δ(t′ − tk)⟩_S + ⟨Σ_{j≠k} δ(t − tj) δ(t′ − tk)⟩_S = R² ,   (C.3)
⟨Σ_{j≠k} δ(t − tj) δ(t′ − tk)⟩_S + R δ(t − t′) = R² ,

thus proving the relation

[⟨Σ_{j≠k} δ(t − tj) δ(t′ − tk)⟩_S − R²] ρaa² = −pR δ(t − t′) ρaa²

for p = 1.

In the Poissonian case, tj is totally uncorrelated from tk (j ≠ k), so

⟨Σ_{j≠k} δ(t − tj) δ(t′ − tk)⟩_S = Σ_j ⟨δ(t − tj)⟩_S Σ_k ⟨δ(t′ − tk)⟩_S = R² ,   (C.4)

which proves (C.1) for p = 0. Notice that in the above result we are missing an atom in the second summation, so the result is approximate, the approximation being very good when R ≫ 1 (the error is of order R compared to R²). A more general proof is found in reference [2].
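A Monte Carlo illustration (not part of the original proof): integrating the correlation functions over a counting window T, and taking ρaa = 1 for simplicity, (C.1) together with the j = k term gives Var(N)/⟨N⟩ = 1 − p for the number N of atoms injected during T. Regular injection should therefore yield p ≈ 1 and Poissonian injection p ≈ 0; the rate and window below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    R, T, trials = 50.0, 10.0, 4000            # injection rate, window, realizations

    # Poissonian pumping: the count in a window is Poisson distributed
    N_poiss = rng.poisson(R * T, size=trials)

    # regular pumping: t_j = t0 + j/R with a random time origin t0
    t0 = rng.uniform(0.0, 1.0 / R, size=trials)
    N_reg = np.floor((T - t0) * R).astype(int) + 1

    for name, N in (("Poissonian", N_poiss), ("regular", N_reg)):
        p = 1.0 - N.var() / N.mean()
        print(name, "p ~", round(p, 3))        # ~0.0 and ~1.0, respectively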
References

1. An excellent discussion on this point, as well as on noise suppression in quantum optical systems, is found in: Davidovich, L.: Rev. Mod. Phys. 68, 127 (1996)
2. Benkert, C., Scully, M.O., Bergou, J., Davidovich, L., Hillery, M., Orszag, M.: Phys. Rev. A 41, 2756 (1990)

D. Stochastic Processes in a Nutshell
D.1 Introduction
Classical Mechanics gives a deterministic view of the dynamical variables of a system. This, of course, is true when one is not in a chaotic regime.

On the other hand, in many cases the system under study is only described by the time evolution of probability distributions. To show these ideas with an example, we take a look at the random walk in one dimension, by now a classical problem [3].

A person moves on a line, taking random steps forward or backward, with equal probability, at fixed time intervals τ. Calling the position xn = na, the probability that the walker occupies the site xn at time t is P(xn | t), and it obeys the equation

P(xn | t + τ) = (1/2) P(xn−1 | t) + (1/2) P(xn+1 | t) .   (D.1)

Now, we go to the continuum limit, letting τ and a become small, but with a²/τ finite. Then

P(x | t + τ) = P(x | t) + τ (∂/∂t) P(x | t) + ... ,   (D.2)
P(xn±1 | t) = P(x ± a | t) = P(x | t) ± a (∂/∂x) P(x | t) + (a²/2)(∂²/∂x²) P(x | t) + ... ,

and inserting the above expansions in (D.1), we get

τ (∂/∂t) P(x | t) + O(τ²) = (a²/2)(∂²/∂x²) P(x | t) + O(a⁴) + ...   (D.3)

Now, letting τ, a → 0 with

D ≡ a²/τ ,   (D.4)

D being the diffusion coefficient, we get a diffusion or Fokker–Planck equation:

(∂/∂t) P(x | t) = (D/2)(∂²/∂x²) P(x | t) .   (D.5)
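As a quick check of this limit (an illustration, not from the text), one can simulate the walk and compare the position variance with D t; after m = t/τ steps of size ±a the variance is m a² = D t.

    import numpy as np

    rng = np.random.default_rng(2)
    a, tau = 0.1, 0.01                 # step length and time interval
    D = a**2 / tau                     # diffusion coefficient of (D.4)
    steps, walkers = 1000, 5000
    t = steps * tau

    # each walker takes 'steps' independent +/- a steps
    x = a * rng.choice([-1, 1], size=(walkers, steps)).sum(axis=1)
    print(x.var(), "vs D*t =", D * t)  # both ~ 10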
D.2 Probability Concepts

Let us call ω an event and let A describe a set of events; thus

ω ∈ A ,   (D.6)

meaning that the event ω belongs to the set of events A [2]. Also, we call Ω the set of all the events and Φ the set of no events. We now introduce the probability of A, P(A), satisfying the following axioms:

i) P(A) ≥ 0 for all A.
ii) P(Ω) = 1.
iii) If Ai (i = 1, 2, 3, ...) is a countable collection of non-overlapping sets, such that
Ai ∩ Aj = Φ ,  i ≠ j ,   (D.7)

then

P(∪i Ai) = Σi P(Ai) .   (D.8)

Now, we are ready to define the joint and conditional probabilities.

Joint probability:

P(A ∩ B) = P{ω ∈ A and ω ∈ B} .   (D.9)

Conditional probability:

P(A | B) = P(A ∩ B)/P(B) ,   (D.10)

which expresses the intuitive idea that the probability that ω ∈ A, given that we know that ω ∈ B, is the joint probability of A and B divided by the probability of B.

Now, suppose we have a collection of sets Bi, such that
Bi ∩ Bj = Φ ,  (i ≠ j) ,   (D.11)
∪i (A ∩ Bi) = A ∩ (∪i Bi) = A .   (D.12)

Now, by axiom iii),

Σi P(A ∩ Bi) = P(∪i (A ∩ Bi)) = P(A) ,   (D.13)

thus

Σi P(A, Bi) = Σi P(A | Bi) P(Bi) = P(A) ,   (D.14)

or, to put it in words, if we sum the joint probability over the mutually exclusive events Bi, it eliminates that variable. These ideas will be useful later to derive the Chapman–Kolmogorov equation.
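A tiny numerical sketch of (D.14); the 3 × 4 joint-probability table is made-up data used only for illustration.

    import numpy as np

    # rows: events A_k; columns: a partition {B_i} of the sample space
    P_joint = np.array([[0.10, 0.05, 0.20, 0.05],
                        [0.05, 0.10, 0.05, 0.10],
                        [0.10, 0.05, 0.10, 0.05]])
    P_B = P_joint.sum(axis=0)                  # P(B_i)
    P_A_given_B = P_joint / P_B                # P(A_k | B_i), as in (D.10)
    print((P_A_given_B * P_B).sum(axis=1))     # recovers P(A_k) = [0.4, 0.3, 0.3]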
D.3 Stochastic Processes

We have a time-dependent random variable X(t) and measure the values x1, x2, x3, ... at times t1, t2, t3, ...; then the joint probability densities
P(x1, t1; x2, t2; ...)
completely describe the system, which is referred to as a stochastic process. One can also define the conditional probability densities
P(x1, t1; x2, t2; ... | y1, τ1; y2, τ2; ...)   (D.15)
  = P(x1, t1; x2, t2; ...; y1, τ1; y2, τ2; ...)/P(y1, τ1; y2, τ2; ...) ,
where the times are ordered as
t1 ≥ t2 ≥ ... ≥ τ1 ≥ τ2 ≥ ...
Some simple examples:

a) Complete independence. In this case, X(t) is completely independent of past and future, or
P(x1, t1; x2, t2; ...) = Πi P(xi, ti) .   (D.16)

b) The next simplest case is the Markov process, where the conditional probability is entirely determined by the knowledge of the most recent condition, that is,
P(x1, t1; x2, t2; ... | y1, τ1; y2, τ2; ...) = P(x1, t1; x2, t2; ... | y1, τ1) .   (D.17)
It is simple to show that, for the Markovian case, an arbitrary joint probability can be written as
P(x1, t1; x2, t2; ...; xn, tn) = Π_{i=1}^{n−1} P(xi, ti | xi+1, ti+1) · P(xn, tn) .   (D.18)
D.3.1 The Chapman–Kolmogorov Equation
As we saw in the previous section, summing over all mutually exclusive variables eliminates that variable; in other words,

Σ_B P(A ∩ B ∩ C ...) = P(A ∩ C ...) .   (D.19)

Now, we apply this idea to a stochastic process:
P(x, t | x0, t0) = ∫ dy P(x, t; y, s | x0, t0)   (D.20)
               = ∫ dy P(x, t | y, s; x0, t0) P(y, s | x0, t0) .
Next, we apply the Markov condition, getting the Chapman–Kolmogorov Equation
P(x, t | x0, t0) = ∫ dy P(x, t | y, s) P(y, s | x0, t0) .   (D.21)
In the above analysis, t0 is any initial time for which x(t0) = x0, and s is an intermediate time, t0 ≤ s ≤ t, with x(s) = y. At this point, we observe that P(x, t | x0, t0) is a probability density satisfying the initial condition

P(x, t | x0, t0)|_{t=t0} = δ(x − x0) ,   (D.22)
and the normalization condition
∫ dx P(x, t | x0, t0) = 1 .   (D.23)
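As an aside (an illustration, not in the text), the Gaussian Green function of the diffusion equation (D.5), whose variance is D(t − s), provides a concrete check of (D.21): the chained propagator, integrated numerically over the intermediate point y, reproduces the direct one.

    import numpy as np

    D, t0, s, t, x0 = 1.0, 0.0, 0.5, 1.2, 0.3

    def kernel(x, y, dt):
        # Green function of (D.5): Gaussian with variance D*dt
        return np.exp(-(x - y)**2 / (2 * D * dt)) / np.sqrt(2 * np.pi * D * dt)

    y = np.linspace(-20.0, 20.0, 4001)
    dy = y[1] - y[0]
    for x in (0.0, 0.5, 1.0):
        direct = kernel(x, x0, t - t0)
        chained = np.sum(kernel(x, y, t - s) * kernel(y, x0, s - t0)) * dy
        print(direct, chained)   # the two columns agree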
Now, going back to (D.21), we write t = s + Δt and expand in Δt:

P(x, s + Δt | x0, t0) = ∫ dy [P(x, s | y, s) + Δt (∂P(x, t | y, s)/∂t)|_{t=s}] P(y, s | x0, t0) ,

or
P(x, s + Δt | x0, t0) = P(x, s | x0, t0) + Δt ∫ dy W(x | y) P(y, s | x0, t0) ,   (D.24)

where W(x | y) is the transition rate, defined as

W(x | y) = (∂P(x, t | y, s)/∂t)|_{t=s} .   (D.25)

Letting Δt → 0, (D.24) becomes

∂P(x, t | x0, t0)/∂t = ∫ dy W(x | y) P(y, t | x0, t0) .   (D.26)

This is the forward Chapman–Kolmogorov equation. By integrating (D.26) over x, one can easily verify that

∫ dx W(x | y) = 0 .   (D.27)
The transition probability can be split into two parts, one that does not change plus the change, that is,

W(x | y) = W0(x) δ(x − y) + W1(x | y) ,   (D.28)
and integrating the above equation in x and using (D.27), we get
W0(y) = −∫ W1(x | y) dx ,
so the forward Chapman–Kolmogorov equation now reads

∂P(x, t | x0, t0)/∂t = ∫ dy W1(x | y) P(y, t | x0, t0)   (D.29)
                   − ∫ dy W1(y | x) P(x, t | x0, t0) ,
which has the form of a rate equation. If the random variable X can take discrete values, the forward Chapman–Kolmogorov equation can be written as
∂P(xi, t)/∂t = Σj [Wij P(xj, t) − Wji P(xi, t)] .   (D.30)
This equation is known as the Master equation. Many stochastic processes are of a special type called ‘birth and death’ or one-step processes [4]. They correspond to
Wij = rj δi,j−1 + gj δi,j+1 ,  (i ≠ j) ,   (D.31)
which permits jumps to adjacent sites. Also, for the diagonal part
Wnn = −(rn + gn) ,   (D.32)
so the Master equation reads

Ṗn = rn+1 Pn+1 + gn−1 Pn−1 − (rn + gn) Pn ,   (D.33)
where rn represents the probability per unit time to jump from n → n − 1, and gn the probability per unit time to go from n → n + 1. Typically, one-step processes occur in atomic transitions via one photon (emission and absorption), nuclear excitation and de-excitation, fission, etc. An interesting example is the Poisson process, defined as
rn = 0 ,   (D.34)
gn = q ,
Pn(0) = δn,0 ,
and the Master equation is

Ṗn = q(Pn−1 − Pn) .   (D.35)

This is a one-sided random walk.
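Before solving (D.35) analytically, a numerical sketch (assuming NumPy): a simple Euler integration of the master equation, compared against the Poisson distribution Pn(t) = e^(−qt)(qt)^n/n!, the known solution of this process.

    import numpy as np
    from math import factorial

    q, dt, steps, nmax = 1.0, 1e-3, 5000, 60
    T = steps * dt                              # total time, so q*T = 5
    P = np.zeros(nmax)
    P[0] = 1.0                                  # initial condition P_n(0) = delta_{n,0}
    for _ in range(steps):
        Pm1 = np.roll(P, 1)
        Pm1[0] = 0.0                            # P_{n-1}, with P_{-1} = 0
        P = P + dt * q * (Pm1 - P)              # Euler step of (D.35)

    exact = np.array([np.exp(-q*T) * (q*T)**n / factorial(n) for n in range(nmax)])
    print(np.max(np.abs(P - exact)))            # small, O(dt)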
To solve it, we use the characteristic function

G(s, t) = ⟨exp(ins)⟩ = Σn Pn(t) exp(ins) ,   (D.36)
with boundary condition G(s, 0) = 1. Multiplying the master equation by exp(ins) and summing over n, we get