<<

Bruce K. Driver

Heat kernel weighted L2 – spaces

July 16, 2010 File:Cornell.tex

Contents

Part I Preliminaries

1 Where we are going ...... 3

2 Preliminaries ...... 5 2.1 Measurability and Density Facts...... 5 2.2 Holomorphic Functions ...... 6

3 Basic Gaussian Measure Concepts ...... 9

Part II Finite Dimensional Gaussian Measures

4 The Gaussian Basics I ...... 17

5 The Interpretation ...... 21

6 Fock Spaces ...... 27

7 Segal Bargmann Transforms ...... 31 7.1 Three key identities ...... 31 7.2 The Segal-Bargmann Transform ...... 33 7.2.1 Examples ...... 34

8 The Kakutani-Itˆo-Fock space isomorphism ...... 37 8.1 TheRealCase ...... 37 8.2 The Complex Case ...... 38 8.3 The Segal-Bargmann Action on the Fock Expansions ...... 40 4 Contents

Part III Infinite Dimensional Gaussian Measures

9 The Gaussian Basics II ...... 43 9.1 Gaussian Structures ...... 43 9.2 Cameron-Martin Theorem ...... 46 9.3 The Heat Interpretation ...... 48

10 Gaussian Process as Gaussian Measures ...... 51 10.1 Reproducing Kernel Hilbert Spaces ...... 51 10.2 The Example of Brownian Motion ...... 53

References ...... 55

Page: 4 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 Part I

Preliminaries

1 Where we are going

These lectures are devoted to the following two vague questions. 3. Put items 1. and 2. together and explain Segal - Bargmann for Gaussian measure on Banach spaces and make contact with multiple Itoˆintegrals – Question 1.1. Given a real W equipped with a measure µ. Does there ItˆoChaos on W exist a complexificaiton W of W, a measure µ on W , and a unitary map C C C 4. Quantize Yang-Mills – Explain the paper I wrote with Brian Hall on YM 2. U : L2 (W, µ) → HL2 (W , µ ) where HL2 (W , µ ) denotes the holomorphic C C C C 5. This should lead into Hall’s theorem in finite dimension and the finite di- L2 – functions on W . C mensional extensions of the Taylor map to Lie groups. Question 1.2. Given a pointed complex manifold (G, o) equipped with a mea- 6. Segal Bargmann on Lie Groups sure λ. Let 7. Extensions of the Theory – Give a survey of some or our results in the D := {“derivatives” of f at o : f ∈ H (G)} infinite dimensional Heisenberg groups. be the derivative space associated to H (G) (the holomorphic functions on G) and let T : H (G) → D be the “Taylor map;”

T f := {“derivatives” of f at o} .

Can we; 1. characterize the derivative space, D?

2. Find the , k·kD , on D such that Z 2 2 |f| dλ = kT fkD for all f ∈ H (G) . G We will only be able to provide some partial answers to these questions by way of a few examples where W and G are Lie groups or homogenous spaces equipped with “heat kernel measures.” The prototypical example we have in mind here goes under the names of the Segal-Bargmann transform and the Itˆochaos expansion. The original context of this theorem was for the case where W is a Banach space equipped with a Gaussian measure – for example W = C ([0,T ] , R) equipped with Wiener measure µ, i.e. the law of a Brownian motion. In this classical case everyone knows that Brownian motion is intimately related to the heat equation on R. However are point of view will be to exploit the less widely appreciated fact that µ is related to a heat equation on W. Here is the outline of these lectures; 1. Review Gaussian measure on Banach spaces. 2. Introduce the Segal-Bargman transform on flat finite dimensional spaces. (Masha does this?)

2 Banach Space Preliminaries

Given a real Banach space W, we let W ∗ denote the continuous dual of W 1. By the separability of W along with the Hahn - Banach theorem one may ∞ ∗ and BW be the Borel σ – algebra on W. Given a probability measure, µ, on BW find {αn}n=1 ⊂ W such that kxkW = supn |αn (x)| . The result now follows we letµ ˆ : W ∗ → C be its Fourier transform defined by from Exercise 2.1 below. Z 2. Let pW and pV denote the projections of W × V to W and V respectively. µˆ (α) = eiα(x)dµ (x) . Then W ∗ ∗ BW ⊗ BV = σ (α : α ∈ W ) ⊗ σ (β : β ∈ V ) ∗ ∗ = σ ({α ◦ pV : α ∈ W } ∪ {β ◦ pW : β ∈ V }) 2.1 Measurability and Density Facts ∗ ∗ = σ ({α ◦ pV + β ◦ pW :(α, β) ∈ W × V }) ∗  ∗  Definition 2.1 (Cylinder functions). To any non-empty subset, L, of W = σ ψ : ψ ∈ (W × V ) = BW ×V . ∞ we let FCc (L) denote those f : W → R of the form 3. The vector space operations are continuous and hence measurable by item

f = F (α1, α2, . . . , αn) (2.1) 2. 4. This is a consequence of the fact that BW is countably generated say by for some n ∈ ,F ∈ C∞( n), and {α }n ⊂ L. Similarly let FP(L) denote ∞ N c R i i=1 {B (xn, r): n ∈ N and r ∈ Q+} where {xn}n=1 is a countable dense subset the polynomial cylinder functions f as in Eq. (2.1) where now F : Rn → R is a of W. polynomial function on Rn. 5-7. These are applications of the multiplicative systems theorem. p L p p ∗ We will use the following standard results through out these notes. 8. If FP(W ) 6⊆ L then by the Hahn Banach theorem there exists λ ∈ (L ) such that λ 6= 0 while λ(FP(W )) = {0}. Under these assumptions it can Theorem 2.2. If W and V be real separable Banach spaces, 1 ≤ p < ∞, and be shown that λ eiα = 0 for all α ∈ L. From item 6. it now follows that µ and ν be probability measures on (W, BW ) and (V, BV ) respectively, then; λ ≡ 0 which is a contradiction.

∗ 1. BW = σ (W ) . 2. B ⊗ B = B . W V (W ×V ) Exercise 2.1. Suppose that L ⊂ W ∗. Show that σ (L) = B iff k·k is σ (L) 3. The vector space operations are measurable on W. W W p – measurable. 4. L (W, BW , µ) is separable. ∞ p 5. If σ (L) = BW then FCc (L) is dense in L (W, BW , µ) . The following is a typical example for W and L. ∗  iα 6. If L is a subspace of W such that σ (L) = BW then e : α ∈ L is total p Example 2.3 (Canonical Continuous Stochastic Processes). Suppose that T ∈ in L (W, BW , µ) . ∗ (0, ∞) is given and let W := C ([0,T ] , ) and 7. If L is a subspace of W such that σ (L) = BW and µˆ|L =ν ˆ|L, then µ = ν. R 8. If L is a subspace of W ∗ such that for all α ∈ L there exists ε = ε (α) > 0 ε|α| 1 kxkW := max |x (t)| . such that e ∈ L (W, BW , µ) , then FP (L) is a dense subspace of t∈[0,T ] p L (W, BW , µ) . By the Stone – Weierstrass theorem we know that (W, k·kW ) is a separable ∗ Proof. For a full proof of these results of [7, Chapter ??]. Here I will just Banach space. For t ∈ [0,T ] let αt ∈ W be the evaluation maps, αt (x) = x (t) . give a brief hint at the proofs. Since 6 2 Banach Space Preliminaries

k·kW = sup |αt| The next theorem gathers together a number of basic properties of holo- t∈Q∩[0,T ] morphic functions which may be found in [21]. (Also see [20].) One of the key it follows from Exercise 2.1 that σ (L) = BW when L = {αt : 0 ≤ t ≤ T } or ingredients to all of these results is Hartog’s theorem, see [21, Theorem 3.15.1]. L = span {α : 0 ≤ t ≤ T } . In particular if µ is a probability measure on R t Theorem 2.6. If u : D → Y is holomorphic, then there exists a function u0 : (W, B ) , then µ is completely determined by its finite dimensional distributions W D → Hom (X,Y ), the space of bounded complex linear operators from X to or equivalently byµ ˆ restricted to L = span {α : 0 ≤ t ≤ T } . R t Y , satisfying Assumption 1 Through out these notes W will be a separable Banach space 1. If a ∈ D, x ∈ B (a, r /2), and h ∈ B (0, r /2), then which is taken to be real unless otherwise specified. X a X a

0 4Ma 2 ku (x + h) − u (x) − u (x) hkY 6 khkX . (2.3) ra (ra − 2 khk ) 2.2 Holomorphic Functions X In particular, u is continuous and Frech´etdifferentiable on D. The following material is taken directly from [8, Section 5.1]. Let X and Y be 2. The function u0 : D → Hom (X,Y ) is holomorphic. two complex Banach space and for a ∈ X and δ > 0 let Remark 2.7. By applying Theorem 2.6 repeatedly, it follows that any holomor- phic function, u : D → Y is Frech´et differentiable to all orders and each of the BX (a, δ) := {x ∈ X : kx − akX < δ} Frech´etdifferentials are again holomorphic functions on D. be the open ball in X with center a and radius δ. Proof. By [21, Theorem 26.3.2 on p. 766.], for each a ∈ D there is a lin- 0 0 Definition 2.4. Let D be an open subset of X. A function u : D → Y is said ear operator, u (a): X → Y such that du (a + λh) /dλ|λ=0 = u (a) h. The to be holomorphic (or analytic) if the following two conditions hold. Cauchy estimate in Theorem 3.16.3 (with n = 1) of [21] implies that if a ∈ D, x ∈ BX (a, ra/2) and h ∈ BX (0, ra/2) (so that x + h ∈ BX (a, ra)), then 1. u is locally bounded, namely for all a ∈ D there exists an ra > 0 such that 0 ku (x) hkY 6 Ma. It follows from this estimate that

Ma := sup {ku (x)kY : x ∈ BX (a, ra)} < ∞. n 0 o sup ku (x)kHom(X,Y ) : x ∈ BX (a, ra/2) 6 2Ma/ra. (2.4) 2. The function u is complex Gˆateaux differentiable on D, i.e. for each a ∈ D and h ∈ X, the function λ → u (a + λh) is complex differentiable at λ = and hence that u0 : D → Hom (X,Y ) is a locally bounded function. The estimate 0 ∈ C. in Eq. (2.3) appears in the proof of the Theorem 3.17.1 in [21] which completes the proof of item 1. (Holomorphic and analytic will be considered to be synonymous terms for To prove item 2. we must show u0 is Gˆateauxdifferentiable on D. We will the purposes of this paper.) in fact show more, namely, that u0 is Frech´etdifferentiable on D. Given h ∈ X, let F : D → Y be defined by F (x) := u0 (x) h. According to [21, Theorem Typically the easiest way to check that λ → u (a + λh) is holomorphic in a h h 26.3.6], F is holomorphic on D as well. Moreover, if a ∈ D and x ∈ B (a, r /2) neighborhood of zero is to use Morera’s theorem which I recall for the reader’s h a we have by Eq. (2.4) that convenience.

kFh (x)kY 6 2Ma khkX /ra. Theorem 2.5 (Morera’s Theorem). Suppose that Ω is an open subset of C and f ∈ C(Ω) is a complex function such that So applying the estimate in Eq. (2.3) to Fh, we learn that Z 4 (2M khk /r ) f(z)dz = 0 for all solid triangles T ⊂ Ω, (2.2) kF (x + k) − F (x) − F 0 (x) kk a X a · kkk2 (2.5) h h h Y 6 ra ra  X ∂T 2 2 − 2 kkkX then f is holomorphic on Ω. for x ∈ B (a, ra/4) and kkkX < ra/4, where

Page: 6 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 2.2 Holomorphic Functions 7

0 d d 0 2  Therefore, an L2– convergent sequence of holomorphic functions is uniformly Fh (x) k = |0Fh (x + λk) = |0u (x + λk) h =: δ u (x; h, k) . dλ dλ convergent on compact subsets of D1 and so the limit is also holomorphic. Since the derivatives of uniformly convergent holomorphic functions are uniformly Again by [21, Theorem 26.3.6], for each fixed x ∈ D, δ2u (x; h, k) is a contin- convergent, it follows that L2 convergence also implies uniform convergence of uous symmetric in (h, k) ∈ X × X. Taking the supremum of Eq. the differentials on compact sets. (2.5) over those h ∈ X with khkX = 1, we may conclude that

0 0 2 u (x + k) − u (x) − δ u (x; ·, k) Hom(X,Y ) 0 = sup kFh (x + k) − Fh (x) − Fh (x) kkY khkX =1 4 (2M /r ) a a kkk2 . 6 ra ra  X 2 2 − 2 kkkX This estimate shows u0 is Frech´et differentiable with u00 (x) ∈ Hom (X, Hom (X,Y )) being given by u00 (x) k = δ2u (x; ·, k) ∈ Hom (X,Y ) for all k ∈ X and x ∈ D. The following well known fact has been taken directly from [5, Lemma 3.2].

Lemma 2.8 (Holomorphic L2). Let M be a finite dimensional complex an- alytic manifold and ρ be a smooth positive measure on M. Let H(M) denote the holomorphic functions on M. Then H(M) ∩ L2(ρ) is a closed subspace of 2 2 L (ρ). Moreover, if fn → f in L (ρ) as n → ∞, then fn and dfn converges to f and df respectively uniformly on compact subsets of M.

Proof. Since the property that a function on M is holomorphic is local, it . suffices to prove the lemma in the case that M = D1 and ρ = 1, where for any R > 0, . d DR = {z ∈ C : |zi| < R ∀i = 1, 2, . . . d}. (2.6)

Let f be a holomorphic function on D1, 0 < α < 1, and z ∈ Dα. By the mean value theorem for holomorphic functions;

Z √ d −d −1θj d Y f(z) = (2π) fn({zj + rje }j=1) dθj, (2.7) d [0,2π] j=1

d . where r = (r1, r2, . . . , rd) ∈ R such that 0 ≤ ri <  = 1−α for all i = 1, 2, . . . , d. Multiplying (2.7) by r1 ··· rd and integrating each ri over [0, ) shows Z f(z) = (π2)−d f(z + ξ)λ(dξ), D where λ denotes Lebesgue measure on Cd. In particular, for each α < 1,

2 −d sup |f(z)| ≤ (π(1 − α) ) kfkL2 . z∈Dα

Page: 7 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09

3 Basic Gaussian Measure Concepts

Definition 3.1. A measure µ on (W, BW ) is called a (mean zero) Gaussian In particular, ∗ h i  1  measure provided that every element α ∈ W is Gaussian random variable. i(Y −Yn) 2 E e = exp − E (Y − Yn) We will refer to (W, BW , µ) as a Gaussian space. 2 A word of warning: I am going to drop the adjective, “mean zero” and which along with the DCT shows simply refer to a Gaussian measure or Gaussian probability and intend for you h i 2 i(Y −Yn) to understand that it is a mean zero. δn := E (Y − Yn) = −2 ln E e → 0 as n → ∞. The condition that every α ∈ W ∗ is Gaussian relative to µ is equivalent to assuming that W ∗ ⊂ L2 (µ) and d √ If N is a standard normal random variable then Y − Yn = δnN and therefore, − 1 q (α) ∗ µˆ(α) := e 2 µ for all α ∈ W (3.1) p p/2 p E (Y − Yn) = δn E [N ] → 0 as n → ∞ where q (α) := q (α, α) and µ µ which is the first assertion. The second assertion now follows by the following Z ∗ simple computation; qµ (α, β) = α (x) β (x) dµ (x) for all α, β ∈ W . (3.2) W  2 Y Yn 2Y 2Yn Yn+Y  n Pn ∗ E e − e =E e + e − 2e For any finite subset {αk}k=1 and ak ∈ R we have k=1 akαk ∈ W and is 1 4 Y 2 1 4 Y 2 1 (Y +Y )2 therefore Gaussian and in particular =e 2 E + e 2 E n − 2e 2 E n   2 Y 2 2 Y 2 1 (2Y )2 Z n → e E + e E − 2e 2 E = 0. i Pn a α − 1 q Pn a α 1 X e k=1 k k dµ = e 2 µ( k=1 k k) = exp − a a q (α , α ) .  2 k l µ k l  W k,l=1

n ∞ Hence we see that {αk}k=1 are jointly Gaussian random variables. Exercise 3.1. Suppose that {Yn}n=1 is a sequence of random variables such that (Y ,Y ) is a mean zero Gaussian random vector for all m 6= n and Y Definition 3.2. We say that a Gaussian measure µ on (W, B ) is non- n m n W converges to some Y in probability. Show that Y is Gaussian and that (Y,Y ) degenerate if q is positive definite on W ∗, i.e. (W ∗, q ) is an inner product n µ µ is again a mean zero Gaussian random vector for each n ∈ . space. (This condition turns out to be equivalent to the of µ being all of N W.) Corollary 3.4. The quadratic form q : W ∗ × W ∗ → R in Eq. (3.2) is continu- ∞ ous, i.e. there exists C2 ∈ (0, ∞) such that Lemma 3.3. If for {Yn}n=1 ∪ {Y } are random variables such that {Yn,Y } is P a mean zero Gaussian vector for each n and Yn → Y as n → ∞, then Yn → Y ∗ |q (α, β)| ≤ C2 kαkW · kβkW for all α, β ∈ W . (3.3) in Lp (P ) for all 1 ≤ p < ∞ and eYn → eY in L2 (P ) . Proof. Because of the Cauchy–Schwarz inequality Proof. By assumption, aYn + bY is Gaussian for all a, b ∈ R and therefore p p (aY +bY ) 1 (aY +bY )2 |q (α, β)| ≤ q(α) q(β) Ee n = e 2 E n and i(aY +bY ) − 1 (aY +bY )2 Ee n = e 2 E n . and so it suffices to show 10 3 Basic Gaussian Measure Concepts |q (α, α)| Proof. Since C2 := sup 2 < ∞. (3.4) ∗ α∈W kαkW ∗ Z z Z z y y e = 1 + e dy = 1 + 10≤y≤ze dy, (By convention we will typically define 0/0 = 0 in this type of ratio.) If C2 = ∞ 0 R ∞ ∗ in Eq. (3.4) we could find {αn}n=1 ⊂ W such that q (αn, αn) = 1 while limn→∞ kαnk ∗ = 0. However this is not possible since, Z Z  Z  W εkxk2 y e dµ(x) = 1 + 10≤y≤εkxk2 e dy dµ(x) Z W W iαn(x) R q (αn, αn) = −2 ln[ˆµ(αn)] = −2 ln e dµ (x) → −2 ln (1) = 0 Z ∞ W = 1 + dy eyµ(εkxk2 ≥ y). (3.8) 0 by the DCT. while q (αn, αn) → ∞ Because of this formula it suffices to show that there are constants, C ∈ (0, ∞) Exercise 3.2. Suppose that µ is a Gaussian measure on (W, B) and for θ ∈ R, and β ∈ (1, ∞) such that 1 let Rθ : W × W → W × W be the “rotation” map given by µ(εkxk2 ≥ y) ≤ Ce−βy for all y ≥ 0. (3.9) Rθ(x, y) = (x cos θ − y sin θ, y cos θ + x sin θ). (3.5) For if we use Eq. (3.9) in Eq. (3.8), we will have Then for all f ∈ (B ) = (B ⊗ B ) and θ ∈ , (W ×W ) b W W b R Z Z ∞ εkxk2 y −βy C Z Z e dµ(x) ≤ 1 + C dy e e = 1 + < ∞. (3.10) W 0 β − 1 f(x, y)dµ(x)dµ(y) = f(Rθ(x, y))dµ(x)dµ(y), (3.6) W ×W W ×W We will now prove Eq. (3.9). By replacing y with εt2, Eq. (3.9) is equivalent to i.e. µ × µ is invariant under the rotations Rθ for all θ ∈ . (See Theorem 3.6 showing R 2 2 for a strong converse to this exercise.) µ(kxk ≥ t) ≤ Ce−βεt = Ce−γt for all t ≥ 0, (3.11) Hint: compute both sides of Eq. (3.6) when f (x, y) = eiψ(x,y) and ψ ∈ where γ := βε. Because we are free to choose ε > 0 as small as we like, it suffices (W × W )∗ . to prove that Eq. (3.11) for some γ > 0. Let P = µ × µ on W × W and let Theorem 3.5 (Ferniques Theorem). Suppose that µ is any measure on t ≥ s ≥ 0. Then by the Rπ/4 invariance of P, (W, B ) such that µ×µ is invariant under rotation by 45o, i.e. (µ × µ)◦R−1 = W µ(kxk ≤ s)µ(kxk ≥ t) = P (kxk ≤ s and kyk ≥ t) µ × µ where   1 x + y x − y R(x, y) = √ (x − y, y + x). = P √ ≤ s and √ ≥ t 2 2 2 √ √ Then there exists ε = ε (µ) > 0 such that ≤ P (|kx|| − ky||| ≤ 2s and kxk + kyk ≥ 2t). Z √ √ εkxk2 e W dµ (x) < ∞. (3.7) Let a = kxk and b = kyk and notice that if |a − b| ≤ 2s and a + b ≥ 2t then W √ √ √ 2t ≤ a + b ≤ b + 2s + b = 2b + 2s and In particular µ has moments to all orders. (For the proof of this theorem see [7, √ √ √ Theorem ??] 
(also see [4, Theorem 2.8.5] or [23, Theorem 3.1]).) 2t ≤ a + b ≤ a + a + 2s = 2a + 2s

1 th If pi : W ×W → W is projection onto the i – factor for i = 1, 2, then pi◦Rθ (x, y) is from which it follows that linear combination of x and y. As the vector space operations are measurable it t − s t − s follows that pi ◦ Rθ is measurable for i = 1 and 2 and therefore Rθ is BW ⊗ BW – a ≥ √ and b ≥ √ , 2 2 measurable. Alternatively one observes that Rθ : W × W → W × W is continuous and B(W ×W ) = BW ⊗ BW which again shows that Rθ is measurable. as seen in Figure 3.1 below. Combining these expressions shows

Page: 10 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 3 Basic Gaussian Measure Concepts 11 Then by Eq. (3.12)

 2 2 1 tn+1 − s [µ(kxk ≥ tn)] µ(kxk ≥ tn+1) ≤ µ(kxk ≥ √ ) = µ(kxk ≤ s) 2 µ(kxk ≤ s) or equivalently

µ(kxk ≥ t ) µ(kxk ≥ t )2 α (s) := n+1 ≤ n =: α2 (s). n+1 µ(kxk ≤ s) µ(kxk ≤ s) n

Iterating this inequality implies

n µ(kxk ≥ s) α (s) ≤ α2 (s) with α (s) = , (3.16) n 0 0 µ(kxk ≤ s)

i.e. 2n µ(kxk ≥ tn) ≤ µ(kxk ≤ s)(α0(s)) for all n. (3.17)

We now fix an s > 0 sufficiently large so that√ α0(s) < 1 and suppose t ≥ 2s is √given. Choose n so that tn ≤ t ≤ tn+1 = s + 2tn in which case (as 0 ≤ t − s ≤ Fig. 3.1. The region R is contained in the region a ≥ t√−s and b ≥ t√−s . 2tn) 2 2  2 2 t − s 2 2s n √ ≤ tn ≤ 2 2 , 2 21/2 − 1   t − s t − s i.e. µ(kxk ≤ s)µ(kxk ≥ t) ≤ P kxk ≥ √ and kyk ≥ √ 2 2 2 21/2 − 1 2n ≥ (t − s)2 .  t − s 2 4s2 = µ(kxk ≥ √ ) , 2 Combining this with Eq. (3.17), using α0(s) < 1, shows which is to say for t ≥ s ≥ 0, µ(kxk ≥ t) ≤ µ(kxk ≥ tn) 2 1  t − s  2n (t−s)2 µ(kxk ≥ t) ≤ µ(kxk ≥ √ ) . (3.12) ≤ µ(kxk ≤ s)(α0(s)) ≤ µ(kxk ≤ s)ρ µ(kxk ≤ s) 2 where 2 We will now complete the proof by iterating Eq. (3.12). Define t0 = s and then (21/2−1) ∞ 4s2 define {tn}n=1 inductively so that ρ := (α0(s)) ∈ (0, 1). 2 2 tn+1 − s Since t ≥ 2s is equivalent to (t − s) ≥ t/2, we have (t − s) ≥ (t/2) and √ = tn for all n (3.13) 2 therefore − |ln ρ| t2 µ(kxk ≥ t) ≤ µ(kxk ≤ s)e 4 i.e. √ |ln ρ| tn+1 = s + 2tn (3.14) which is sufficient to prove Eq. (3.11) with γ = 4 and C chosen to be a and (by a simple induction argument) sufficiently large constant. As stated Theorem 3.5 seems to say something about measures more general n n+1 n+1 X 2 2 − 1 2 2 than Gaussian measures but as is well know this not the case. t = s 2i/2 = s ≤ s . (3.15) n 21/2 − 1 21/2 − 1 i=0

Page: 11 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 12 3 Basic Gaussian Measure Concepts Theorem 3.6. If µ is a probability measure such that µ × µ is invariant under Let f (t) :=µ ˆ (tβ) so that f (0) = 1, f˙ (0) = 0 (since f (t) is odd), and R , then µ is a Gaussian measure on (W, B ) . As usual we will have µˆ = π/4 W 2 − 1 q  d  Z Z e 2 where q (α) := q (α, α) and f¨(0) = eitβ(x)dµ (x) = − β2 (x) dµ (x) := −q (β) . dt Z t=0 W W ∗ q (α, β) := α (x) β (x) dµ (x) for all α, β ∈ W . So by Taylor’s theorem; W 1 (See Feller [11, Section III.4, pp. 77-80] for more general theorems along these f (t) = 1 − q (β) t2 + O t3 lines.) 2 Proof. By Fernique’s Theorem 3.5 bound in Eq. (3.7), µ has moments to while by Eq. (3.21); ∗ all orders andµ ˆ(α) is infinitely differentiable in α. In fact for any α ∈ W , the n 2n  2 function, h  −n/2i 1 −n  −3n/2 Z f (1) = f 2 = 1 − q (β) 2 + O 2 . izα(x) 2 C 3z → “ˆµ (zα) ” := e dµ (x) W A simple calculus exercise now shows is holomorphic. The invariance of µ × µ under rotation by π/4 and the formula,   µ × µ(α ◦ p + β ◦ p ) =µ ˆ(α)ˆµ(β), implies 1   1 \ 1 1 ln f (1) = 2n · ln 1 − q (β) 2−n + O 2−3n/2 → − q (β) as n → ∞. 2 2  1   1  µˆ(α)ˆµ(β) =µ ˆ √ (α + β) µˆ √ (−α + β) ∀ α, β ∈ W ∗. (3.18) 2 2 Thus we have shown − 1 q(β) − 1 Var (β) Taking α = 0 and then β = 0 in this equation implies, µˆ (β) =µ ˆ (β) = f (1) = e 2 = e 2 µ ,

 1   1  −q/2 µˆ(β) =µ ˆ √ β µˆ √ β ∀ β ∈ W ∗ (3.19) i.e.µ ˆ = e and so µ is a (possibly degenerate) Gaussian measure. 2 2 ∗     Corollary 3.7. Suppose that L is a linear subspace of W such that σ (L) = BW 1 1 ∗ µˆ(α) =µ ˆ √ α µˆ −√ α ∀ α ∈ W (3.20) and µ is a probability measure on (W, BW ) such that every element α ∈ L is a 2 2 mean - zero Gaussian random variable. Then µ is a Gaussian measure. and in particular we learn Proof. Let         1 1 1 1 ∗ µˆ √ α µˆ −√ α =µ ˆ √ α µˆ √ α ∀ α ∈ W . L ⊕ L = ψ ∈ (W × W )∗ : α := ψ (·, 0) ∈ L and β := ψ (0, ·) ∈ L 2 2 2 2  ∗ = α ◦ p1 + β ◦ p2 ∈ (W × W ) : α, β ∈ L  t  For t ∈ R near zero we know thatµ ˆ √ α 6= 0 and hence we may conclude 2 th that where pi : W × W → W is projection onto the i – factor. We then have that  t   t  µˆ −√ α =µ ˆ √ α σ (L ⊕ L) = σ ({α ◦ p : α ∈ L} ∪ {β ◦ p : β ∈ L}) 2 2 1 2 = σ (L) ⊗ σ (L) = BW ⊗ BW = B(W ×W ). for t near zero and hence by the principle of analytic continuation√ this equation in fact holds for all t ∈ C and in particular for t = 2 from which we learn So in order to show (µ × µ) ◦ R−1 = µ × µ it suffices to show see item 7 of thatµ ˆ (−α) =µ ˆ (α) for all α ∈ W ∗. Since µˆ (α) =µ ˆ (−α) =µ ˆ (α) we may now π/4 h i conclude thatµ ˆ is real. −1 b Theorem 2.2) (µ × µ) ◦ Rπ/4 = µ\× µ on L ⊕ L which we now do. Iterating Eq. (3.19) implies, Let ψ (x, y) = α (x) + β (y) with α, β ∈ L. Then 2n  1 n  1 1 µˆ (β) =µ ˆ √ β for all n ∈ N0. (3.21) ψ ◦ Rπ/4 (x, y) = √ ψ (x − y, x + y) = √ [α (x) + β (x) + β (y) − α (y)] 2 2 2

Page: 12 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 so that

h −1 ib (µ × µ) ◦ Rπ/4 (ψ)  = µ\× µ ψ ◦ Rπ/4 Z  1  = exp i√ [α (x) + β (x) + β (y) − α (y)] dµ (x) dµ (y) W ×W 2 − 1 [q(α+β)+q(β−α)] − 1 [2q(α)+2q(β)] = e 4 = e 4 = µ\× µ (ψ) .

Example 3.8 (Gaussian processes and Gaussian measures). Suppose that {Yt}0≤t≤T is a stochastic process with continuous sample paths, W := C ([0,T ] , R) , and αt (x) = x (t) for all x ∈ W and 0 ≤ t ≤ T. Recall from   Example 2.3 we know that σ {αt} = BW . Therefore we may view Y t∈Q∩[0,T ] as a W – valued random variable and define µ := Law (Y·) as a measure on (W, BW ) . If we further assume that {Yt}0≤t≤T is a mean zero Gaussian process, then (W, BW , µ) is a Gaussian measure space. To prove this apply Corollary 3.7 with Pn L := span {αt : 0 ≤ t ≤ T } . Note that for α = i=1 aiαti in L we have that d Pn Pn Pn α = i=1 aiYti , i.e. Lawµ (α) = LawP ( i=1 aiYti ) . By assumption i=1 aiYti is Gaussian and therefore so is α.

Example 3.9 (Brownian Motion). Suppose that {Bt}t≥0 is a Brownian motion. Then B induces a Gaussian measure, µ on W := {x ∈ C ([0,T ] , R): x (0) = 0} which is uniquely specified by its covariances; Z x (s) x (t) dµ (x) = s ∧ t for all 0 ≤ s, t ≤ T. W

Part II

Finite Dimensional Gaussian Measures

4 The Gaussian Basics I

In this chapter suppose that W is a finite dimensional real Banach space is a linear isomorphism of vectors spaces.1 Using this isometry,q ¯ induces an ∗ ∗ ∗ ∗ and q : W × W → R is a non-negative symmetric quadratic form on W . inner product, (·, ·)H∗ , on H and hence, by the Riesz theorem, an inner prod- m uct, (·, ·)H , on H. Suppose that {hk}k=1 is any orthonormal basis of (H, (·, ·)H ) Notation 4.1 Let and α, β ∈ W ∗. Then Nul (q) := {α ∈ W ∗ : q(α) = 0} (4.1) m be the null space of q and X q (α, β) =q ¯(α + Nul (q) , β + Nul (q)) = (α|H , β|H )H∗ = hα, hkihβ, hki. 0 k=1 H = Hq = Nul (q) := {ξ ∈ W : hα, ξi = 0 ∀ α ∈ Nul (q)} (4.2) If x∈ / H there exists α ∈ Nul (q) such that α (x) 6= 0 and therefore from Eq. be the backwards annihilator of Nul (q) . (If q is non-degenerate, i.e. Nul (q) = (4.5), {0} , then H = W.) We call H the “horizontal space” associated to q. We |α (x)| |α (x)| may also refer to H as the Cameron-Martin space associated to q. kxkH ≥ = = ∞. pq (α) 0 ∗ Pm Lemma 4.2. There is a unique inner product, (·, ·)H , on H such that for any For h ∈ H we have and α ∈ W we have h = (h , h) h and so m k=1 k H k orthonormal base {hk}k=1 (m := dim (H)) of H we have 2 m m m m 2 X X 2 X 2 X ∗ |α (h)| = (hk, h)H α (hk) ≤ (hk, h)H · α (hk) = (h, h)H q (α) q (α, β) = hα, hkihβ, hki for all α, β ∈ W . (4.3) k=1 k=1 k=1 k=1 with equality if we choose α ∈ W ∗ such that α (h ) = (h , h) for all k = In particular k k m 1, . . . , m. From these observations it follows that X 2 q (α) = (α, α)q = |hα, hki| . (4.4) 2 k=1 2 |α (h)| khkH := sup = (h, h)H for all h ∈ H. Moreover, let α∈W ∗ q (α) |α (x)| kxkH := sup p (with 0/0 := 0), (4.5) α∈W ∗ q (α) 1 Here is the argument. Let N := dim W and εi N be a basis for W ∗ such that i=1 then  i N ε : m < i ≤ N is a basis for K and let {ei}i=1 be the corresponding dual basis. H = {h ∈ W : khkH < ∞} (4.6) PN i Since for any x ∈ W we have x = i=1 ε , x ei and 2 and khk = (h, h) for all h ∈ H. n D E o H H H = x ∈ W : εi, x = 0 for m < i ≤ N , Proof. The form q descends to a strictly positive definite quadratic form, m ∗ ∗ ∗ q,¯ on W / Nul (q) and the map it follows that H = span {ei}i=1 . So letting R : W → H be the restriction map, Ra = a|H , it follows that Ra = 0 iff ha, eii = 0 for 1 ≤ i ≤ n iff a ∈ ∗ ∗ span εi : m < i ≤ N iff a ∈ K. Thus it follows that Eq. (4.7) indeed defines an W / Nul (q) 3 (α + Nul (q)) → α|H ∈ H (4.7) isomorphism of vector spaces. 18 4 The Gaussian Basics I Z Z ∗ ∗ εkxk2 εkxk2 Theorem 4.3. As above assume that dim W < ∞ and q : W × W → R is W W m e dµ (x) = e dµ (x) a non-negative symmetric quadratic form and let {hk}k=1 be an orthonormal W H m m/2 basis for H ⊂ W as in Lemma 4.2. If {Nk} is any i.i.d. sequence of standard Z   Z   k=1 Cεkhk2 1 Cεkhk2 1 2 normal random variables, then µ := Law (Pm N h )is a Gaussian measure ≤ e H dµ (h) = e H exp − khk dh k=1 k k 2π 2 H with µˆ = e−q/2, i.e. Eq. (3.1) holds. Moreover, for every bounded measurable H H  m/2 Z   function, f : W → we have 1 Cεkhk2 1 2 R = e H exp − (1 − 2Cε) khk dh 2π 2 H Z Z Z   H 1 1 2 fdµ = fdµ = f (h) exp − khk dh (4.8)  1 m/2  2π m/2  1 m/2 Z 2 H = = < ∞ W H H 2π 1 − 2Cε 1 − 2Cε m/2 where dh denotes Lebesgue measure on H and Z = (2π) is the normalization which is valid provided that 2Cε < 1. constant so that 1 Z  1  2 Theorem 4.5 (Characterization of H). Suppose that (W, µ) is a Gaussian exp − khkH dh = 1. Z 2 2 H measure space with dim W < ∞ and define J = Jµ : L (µ) → W by Pm ∗ Proof. Let S := k=1 Nkhk and α ∈ W . Then Z Jf := f (x) x dµ (x) . 
Z W iα(x) h iα(S)i h i Pm N α(h )i µˆ (α) = e dµ (x) = E e = E e k=1 k k W Further let K be the subspace of L2 (µ) defined by m !! m ! 1 X 1 X 2 − 1 q(α) = exp − Var N α (h ) = exp − α (h ) = e 2 .  2 ∗ 2 k k 2 k K := [α] ∈ L (µ): α ∈ W . k=1 k=1 Then J L2 (µ) = H and J| : K → H is a unitary map such that (h, Jα) = Similarly, K H α (h) for all α ∈ W ∗ and h ∈ H. In particular, Z Z m ! 1 X − 1 Pm x2 Z  Z  fdµ = [f (S)] = f xkhk e 2 k=1 k dx 2 E m/2 Hµ = f (x) x dµ (x): f ∈ K = f (x) x dµ (x): f ∈ L (µ) . W m R (2π) k=1 W W from which Eq. (4.8) easily follows by making the orthogonal change of variables, The adjoint map for J ∗ : H → L2 (µ) is given by Pm h = k=1 xkhk. ∗ (J h)(x) = 1H (x)(h, x)H for µ – a.e. x. Theorem 4.4 (Baby Fernique’s Theorem). Suppose that (W, µ) is a Gaus- ∗ sian measure space with dim W < ∞, then there exists ε > 0 such that Notice that JJ = idH . Z εkxk2 Proof. Since µ (W \ H) = 0 we also have e W dµ (x) < ∞. W Z Jf = f (x) x dµ (x) ∈ H (The general version of this theorem removes the restriction that dim W < ∞.) H

2 2 2  ∗ Proof. Recall that q (α) ≤ C kαkW for some C < ∞ and therefore, which shows that J L (µ) ⊂ H. Moreover if α ∈ W , then using the notation in the proof of Theorem 4.3 we find |α (h)| |α (h)| 1 khkH = sup ≥ sup = khkW .   ∗ p ∗ C kαk C m m α∈W q (α) α∈W W X X Jα = E [α (S) S] = E  NkNlα (hk) hl = α (hk) hk. Thus it follows that k,l=1 k1

Page: 18 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 4 The Gaussian Basics I 19

∗ As we may choose α ∈ W such that (α (h1) , . . . , α (hm)) is any m – tuple we Proof. This is a simple matter of using the change of variables formula. Let please, we may now conclude that JK = H. Moreover J is unitary since h ∈ H and f : W → R be a bounded measurable function. Then m Z Z Z 2 X 2 2 f (x) dµ (x) = f (x + h) dµ (x) = f (x + h) dµ (x) kJαk = α (hk) = q (α) = kαk 2 . h H L (µ) W W H k=1 Z   1 1 2 ∗ = f (x + h) exp − kxkH dx Finally for α ∈ W and h = Jf ∈ H we have Z H 2 1 Z  1  (h, Jα) = (Jf, Jα) = (f, α) = f (x) exp − kx − hk2 dx H H L2(µ) Z 2 H Z H = f (x) α (x) dµ (x) Z  1  = f (x) exp (h, x) − khk2 dµ (x) W H 2 H Z  H = α f (x) x dµ (x) = α (Jf) = α (h) . W from which Eq. (4.9) follows. If h∈ / H, then µh (H + h) = µ (H + h − h) = µ (H) = 1 while µ (H) = 1 and H ∩ (H + h) = ∅. m ∗ ∗ Let {αn}n=1 ⊂ W be chosen to be an orthonormal basis of K. Then J : H → L2 (µ) is given by

m ∗ X (J h, f)L2(µ) = (h, Jf)H = (h, Jαn)H (Jαn, Jf)H n=1 m m ! X X = αn (h)(αn, f)L2(µ) = αn (h) αn, f n=1 n=1 L2(µ) from which it follows that

m ∗ X 2 J h = αn (h) αn ∈ L (µ) . n=1 Notice that

m m ∗ X X (J h)(k) = αn (h) αn (k) = (Jαn, h)H (Jαn, k)H = (h, k)H n=1 n=1 for all k ∈ H while the value of J ∗h on W \ H is not uniquely determined as this is a set of µ – measure zero.

∗ Notation 4.6 We will often suggestively write (h, x)H for (J h)(x) . Theorem 4.7 (Baby Cameron Martin Theorem). For h ∈ W let µh (A) := µ (A − h) . The µh  µ iff h ∈ H and if h ∈ H then

dµ  1  h = exp J ∗h − khk2 . (4.9) dµ 2 H

Page: 19 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09

5 The Heat Equation Interpretation

For this section, suppose that (W, BW , µ) is a Gaussian measure space with Definition 5.3. Associated to µ is the second order differential operator L = −q/2 ∞ ∗ µˆ = e and H = Hµ. Lµ acting on FC (W ) defined by either;

Lemma 5.1. Suppose α ∈ W ∗ for 1 ≤ i ≤ n, F ∈ C∞ ( n) , and X 2 ∞ ∗ i R Lµf = ∂hf for all f ∈ FC (W ) ∞ ∞ ∗ h∈S f = F (α1, . . . , αn) ∈ FC (α1, . . . , αn) ⊂ FC (W ) . (5.1) or by n Then for any choice of orthonormal basis S ⊂ H we have X Lµ [F (α1, . . . , αn)] = q (αi, αj)(∂j∂iF )(α1, . . . , αn) n i,j=1 X 2 X ∂ f = q (αi, αj)(∂j∂iF )(α1, . . . , αn) . (5.2) h where S is any orthonormal basis for H and {α }n ⊂ W ∗ and F ∈ C∞ ( n) h∈S i,j=1 i i=1 R are arbitrary. where ∂ f (ω) := d | f (ω + th) . h dt 0 Exercise 5.1. Show that Proof. The proof consists of the chain rule along with the fact that Z 2  q (α, β) = P α (h) β (h) . Indeed, Lµf (x) = ∂y f (x) dµ (y) for all x ∈ W. h∈S W n X This gives another proof that Lµ is well defined. ∂hf = (∂iF )(α1, . . . , αn) · αi (h) ∗ i=1 Remark 5.4. We may recover q = qµ from L := Lµ since for α1, α2 ∈ W we n have with F (x1, x2) = x1x2 that 2 X ∂hf = (∂j∂iF )(α1, . . . , αn) · αi (h) αj (h) 2 i,j=1 X L (α1 · α2) = q (αi, αj)(∂j∂iF )(α1, . . . , αn) = 2q (α1, α2) . and therefore and i,j=1

n q X 2 X X In this way we see that the map q → L is injective. ∂hf = (∂j∂iF )(α1, . . . , αn) · αi (h) αj (h) h∈S i,j=1 h∈S Lemma 5.5. Suppose that N = dim W < ∞ and L is a constant coefficient n purely second order semi-elliptic differential operators, i.e. L = P gij∂ ∂ for X ei ej = q (αi, αj)(∂j∂iF )(α1, . . . , αn) . N ij some basis {ei}i=1 of W and g ≥ 0. Then L = Lµ where µ is the unique i,j=1 Gaussian measure on W such that µˆ = e−q/2 where

1 X ij q (α, β) := L (αβ) = g hα, eii hβ, eji . (5.3) Remark 5.2. Equation (5.2) demonstrates that its left member is independent of 2 the choice of orthonormal basis, S, for H while its right member is independent This allows us to conclude that the map q → Lq is a one to one correspondence of how f is represented in the form of Eq. (5.1). For this reason the following between the non-negative quadratic forms on W ∗ and the the constant coefficient definition makes sense. (Also see Exercise 5.1.) purely second order semi-elliptic differential operators acting on C2 (W ) . 22 5 The Heat Equation Interpretation Proof. It is clear that q defined by Eq. (5.3) is a non-negative quadratic while ∗ form on W and therefore Z √ Z √ iα(x+ ty) iα(x) i tα(y) m e dµ (y) = e e dµ (y) X W W √ q (α, β) = hα, hki hβ, hki (5.4) 1 iα(x) − 2 q( tα) −tq(α)/2 iα(x) k=1 = e e = e e .

 i N So we have shown for some linearly independent subset of W. Letting ε i=1 be the dual basis N i j   Z √ to {ei}i=1 it follows by comparing Eqs. (5.3) and (5.4) with α = ε and β = ε etL/2eiα (x) = eiα(x+ ty)dµ (y) . that W m ij X i j ∗ g = ε , hk ε , hk . Differentiating this equation in α relative to α1, . . . , αn ∈ W we find, k=1 etL/2 (iα . . . iα ) eiα = ∂ . . . ∂ etL/2eiα Since 1 n α1 αn X i Z √ √ √ ε , hk ∂e = ∂P hεi,h ie = ∂h     iα(·+ ty) i i k i k = iα1 · + ty . . . iαn · + ty e dµ (y) . i W we may now conclude, Evaluating this equation at α = 0 ∈ W ∗ then shows m m X X X X Z  √   √  L = gij∂ ∂ = εi, h εj, h ∂ ∂ = ∂2 = L . L/2 ei ej k k ei ej hk q e (α1 . . . αn) = α1 · + ty . . . αn · + ty dµ (y) . i,j k=1 i,j k=1 W You are asked to justify these computations in Exercise 5.3 and 5.6 below.

∗ Exercise 5.2. Show Definition 5.6. Given L ⊂ W let P (L) = R [L] denote the space polynomial functions on W based on L. Thus f ∈ P (L) iff there exists n ∈ N, αi ∈ L for tL/2  iα tL/2 iα n e (iα1 . . . iαn) e = ∂α1 . . . ∂αn e e . (5.7) 1 ≤ i ≤ n, and a polynomial function p : R → R such that f = p (α1, . . . , αn) . Hint: you might use the Cauchy estimates to simplify your life. Theorem 5.7 (Heat Interpretation). Let (W, BW , µ) be a Gaussian space and L = Lµ. Then Exercise 5.3. Use the following outline in order to prove Eq. (5.5) with t = 1. Let α, α , . . . , α ∈ W ∗ for some k ∈ . Z  √    1 k N f x + ty dµ (y) = etL/2f (x) for all f ∈ P (W ∗) (5.5) W 1. Show for all z ∈ R that Z where k izα L  k izα ∞ n α e dµ = e 2 (iα) e (0) (5.8) X 1 tL etL/2f := f. (finite sum) (5.6) W n! 2 n=0 by differentiating the identity ∗ iα iα Proof. For α ∈ W and h ∈ W we have, ∂he = iα (h) e and therefore it Z ∞  n 2 X 1 q (α) follows that eizαdµ = e−z q(α)/2 = − z2n (5.9) X 2 n! 2 Leiα = (iα (h)) · eiα = −q (α) eiα W n=0 h∈S k – times in z making use of the identity and therefore ∞ n n n X tn L  q (α) L  etL/2eiα = eiα = e−tq(α)/2eiα − = eizα (0) (5.10) n! 2 2 2 n=0

Page: 22 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 5 The Heat Equation Interpretation 23 2. Taking z = 0 in Eq. (5.8) shows Exercise 5.6. Let f ∈ C2 (W ) such that f and its first and second derivatives Z grow (for example) at most exponentially at infinity. Show that k L k α dµ = e 2 α (0) . Z  √  W F (t, x) := f x + ty dµ (y) ∀ t > 0 and x ∈ W. (5.14) W Polarize this identity by computing ∂α1 . . . ∂αk of both sides in order to conclude satisfies the heat equation, Z h L i α1 . . . αk dµ = e 2 (α1 . . . αk) (0) . (5.11) W ∂ 1 F (t, x) = (LF )(t, x) . (5.15) 3. Use Eq. (5.11) along with linearity and the translation invariance of L (i.e. ∂t 2 L [f (x + ·)] = (Lf)(x + ·)) to prove Eq. (5.5). Hint: use the fact that µ (W \ H) = 0 and the results of Exercise 5.5. Corollary 5.8. For all g ∈ P (W ∗) , x ∈ W, and t ≥ 0, Definition 5.9 (Convolutions). The convolution µ∗ν of two probability mea- Z  √    sures, µ and ν on (W, B ) is the probability measure defined by g x + ty dµ (y) = etL/2g (x) . (5.12) W W Z Proof. When t = 0 both sides of Eq. (5.12) are equal to g (x) and so we µ ∗ ν (A) := 1A (x + y) dµ (x) dν (y) √ W ×W may assume that t > 0. Applying Eq. (5.5) with f (x) = g tx and using √ (Lf)(x) = t (Lg) tx shows, for all A ∈ BW . The convolution may also be written as; Z √ √      √  Z Z g tx + ty dµ (y) = eL/2f (x) = etL/2g tx . µ ∗ ν (A) = ν (A − x) dµ (x) = µ (A − y) dν (y) . W W W

The proof is then complete by replacing x by √1 x in this equation. If f : W → is a bounded (or non-negative) measurable function we define t C

d−1 Z Exercise 5.4 (Integration by Parts). Suppose that (x, y) ∈ R × R → µ ∗ f (x) = f (x − y) dµ (y) . d−1 f(x, y) ∈ C and (x, y) ∈ R × R → g(x, y) ∈ C are measurable functions W such that for each fixed y ∈ d, x → f(x, y) and x → g(x, y) are continuously R It is a simple matter to check that µ ∗ ν is the unique probability measure differentiable. Also assume f · g, ∂xf · g and f · ∂xg are integrable relative to d−1 d on (W, BW ) such that Lebesgue measure on R × R , where ∂xf(x, y) := dt f(x + t, y)|t=0. Show Z Z Z Z fd (µ ∗ ν) = f (x + y) dµ (x) dν (y) ∂xf(x, y) · g(x, y)dxdy = − f(x, y) · ∂xg(x, y)dxdy. (5.13) W W ×W d−1 d−1 R×R R×R ∞ for all bounded measurable functions f : W → C. We also use below that Hints: Let ψ ∈ Cc (R) be a function which is 1 in a neighborhood of 0 ∈ and set ψε(x) = ψ(εx). First verify Eq. (5.13) with f(x, y) replaced by Z Z Z R r ψε(x)f(x, y) by doing the x – integral first. Then use the dominated convergence (f ∗ µ) dν = f (x − y) dµ (y) dν (x) = fd (µ ∗ ν) theorem to prove Eq. (5.13) by passing to the limit, ε ↓ 0. W W ×W W r Exercise 5.5 (Gaussian Integration by parts). If h ∈ H and f ∈ P (W ∗) , where µ (A) := µ (−A) for all A ∈ BW . then Z Z Exercise 5.7. Suppose that µ and ν are two Gaussian measures on W. Show ∂hf (x) dµ (x) = (x, h)H f (x) dµ (x) . µ ∗ ν is again Gaussian with qµ∗ν = qµ + qν and Lµ∗ν = Lµ + Lν . (Hint: you W W might use Exercise 5.1 for the last assertion.) 1 This formula actually holds for any f ∈ C (W ) such that f, ∂hf, and (·, h)H f are µ – integrable.

Page: 23 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 24 5 The Heat Equation Interpretation

p p− Definition 5.10 (Dilating µ). If (W, BW , µ) is a Gaussian probability space 1. For all f ∈ L (µt) and h ∈ H := Hµ, f (h + ·) ∈ L (µt) and and t > 0 let Z Z √ µt ∗ f (h) = f (y + h) dµt (y)    −1/2  µt (A) := 1A tx dµ (x) = µ t A for all A ∈ BW . W W Z   ∗ 1 2 = f exp Jt h − khkH dµt When µt ∗ f is defined we write H 2t

tL/2 e f = µt ∗ f. is well defined. 2. The resulting function, µt ∗ f : H → C admits an analytic continuation It is easily verified that to HC := H + iH – the complexification of H. In particular µt ∗ f is real analytic on H. Z Z √  fdµt = f ty dµ (y) 3. The derivatives may be computed as; W H Z n  1 (y,h) − 1 khk2  D (µ ∗ f)(h , . . . , h ) = f (y) ∂ . . . ∂ h → e t [ H 2 H ] dµ (y) . for all bounded and measurable functions f : W → C. This result then implies 0 t 1 n h1 hn t −tq/2 H µˆt = e so that µt is again a Gaussian measure with qµt = tqµ. When dim W < ∞, the change of variables formula shows that (See Lemma ?? below for the infinite dimensional version.)

1 1 2 2 − kyk Proof. The first technical point here is that an element of L (µt) is an dµt (y) = 1H (y) m e 2t H dy (2πt) equivalence class of functions rather than a fixed function so we need to know that the integral is independent of the choice of function in this equivalence where this last equation is short hand for class. However by the Cameron-Martin theorem we know that µt and µt (· − h) Z have the same null sets so this technical point is OK. Moreover, by the Cameron 1 − 1 kyk2 µt (A) = m e 2t H dy for all A ∈ BW . Martin theorem we learn that A∩H (2πt) Z Z   ∗ 1 2 Exercise 5.8. Let (W, BW , µ) be a Gaussian probability space and {µt} be (µ ∗ f)(h) = f (h + x) dµ (x) = f exp J h − khk dµ . t>0 t t t 2t H t as in Definition 5.10. Show µt ∗ µs = µt+s for all s, t > 0. In particular conclude W W that µ ∗ µ = µ2.  ∗ 1 2  ∞− As exp Jt h − 2t khkH ∈ L (µt) it follows by H¨older’sinequality that the Notation 5.11 (Derivative operators) If f : W → C is a smooth function latter integral is well defined and hence so is the first. We may easily extend n n th near some point x ∈ W and n ∈ N, let Dx f : W → C be the n – order the latter expression to H by setting derivatives of f at x defined by C Z   ∗ 1 n (µt ∗ f)(z) = f exp J z − B (z, z) dµt D f (h1, . . . , hn) := (∂h . . . ∂h f)(x) , t x 1 n W 2t where for any h ∈ W we let ∗ ∗ ∗ where if z = h + ik ∈ HC,Jt z := Jt h + iJt k and similarly B : HC × HC → C is the complex bilinear extension of the inner product on H. If w ∈ H as well d C (∂ f)(x) = | f (x + th) and λ∈ we have h dt 0 C Z ∗ ∗ 1 2 1 J z+λJ w− [B(z,z)+2λB(z,w)+λ B(w,w)] be the directional derivative of f at x in the “direction” h. (µt ∗ f)(z + λw) = f · e t t 2t dµt W Lemma 5.12 (Smoothing Lemma). If p > 1 and t > 0, then; which is a holomorphic function of λ by Morera’s and Fubini’s theorem. (See 1 The term directional derivative is a bit of a misnomer here since the derivative the proof of Lemma 8.7 below for more details on this type of argument.) Hence

depends on both the direction and the length of h and not just the direction of h. we have shown µt ∗ f has an analytic continuation to HC.

Page: 24 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 5 The Heat Equation Interpretation 25 Z − 1 x2 Z − 1 x2 Moreover, from this expression we learn that we may differentiate µt ∗ f (h) p e 2t p e 2t ∞ > f (x, y) √ dx δ0 (dy) = f (x, 0) √ dx relative to h as many times as we like to find 2 2 R 2πt R 2πt

(∂k1 . . . ∂kn µt ∗ f)(h) and so there is no control of f (x, y) for y not equal to zero. (Notice that f = g Z   1  γ ⊗ δ0 – a.e. iff f (·, 0) = g (·, 0) γ – a.e.) Hence it is clear that we can only = f · ∂ . . . ∂ h → exp J ∗h − khk2 dµ k1 kn t 2t H t expect µt ∗ f (a, b) to make sense for b = 0. It his case it follows that for a ∈ C W we have Z   1  − 1 (x−a)2 ∗ ∗ 2 Z e 2t = f · pn (Jt∗ k1,...,J kn) · exp J h − khk dµt. t t H (µt ∗ f)(a, 0) = f (x, 0) √ dx W 2t R 2πt ∗ ∗ ∗ A key point here is that ∂k [h → Jt h] = Jt k which is clear and Jt is a linear which is analytic in a. ∗ map. Moreover by Lemma 3.3 we may conclude that ∂k (h → exp (Jt h)) = ∗ ∗ p Jt k · exp (Jt h) where the derivative holds in all L for all 1 ≤ p < ∞. Remark 5.13. All of the above results reflect the fact that  m/2 1 − 1 kxk2 p (x) := e 2t H t 2πt is the to the heat equation (5.15) on H ⊂ W.

Example 5.14. Let W = R2 and

− 1 x2 e 2 dµ (x, y) = √ dx δ0 (dy) 2π

1+ and f ∈ L (µt) . Then H = R × {0} ,

− 1 x2 e 2t dµt (x, y) = √ dx δ0 (dy) 2πt and

− 1 x2 Z e 2t (µt ∗ f)(a, b) = f ((a, b) + (x, y)) √ dx δ0 (dy) 2 R 2πt − 1 x2 Z e 2t = f ((a, b) + (x, 0)) √ dx R 2πt − 1 x2 Z e 2t = f (x + a, b) √ dx R 2πt − 1 (x−a)2 Z e 2t = f (x, b) √ dx. R 2πt By assumption we have

Page: 25 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09

6 Fock Spaces

2 X 2 Suppose that H and K are two Hilbert spaces and α : H × K → C is a kαk := |α (h1, . . . , hn)| multilinear form which is continuous in each variable separately. Then for each h1∈S1,...,hn∈Sn k ∈ K we have supkhk=1 |α (h, k)| = kα (·, k)kH∗ < ∞ by assumption. Therefore is independent of the choices of orthonormal bases, Si, for Hi. {α (h, ·)}khk=1 are point wise bounded collection of continuous linear functionals on K and so by the uniform boundedness principle they are uniformly bounded, Proof. The best way do this is as follows. Let Λi be other bases for Hi, then i.e. for any ul ∈ Hl and 1 ≤ i ≤ n we have h → α (u1, . . . , ui−1, h, ui+1, . . . , un) is sup sup |α (h, k)| = sup kα (h, ·)kK∗ =: C < ∞. a continuous linear functional on Hi and therefore khk=1 kkk=1 khk=1 X 2 2 Thus α : H × K → is continuous. |α (u1, . . . , ui−1, h, ui+1, . . . , un)| = kα (u1, . . . , ui−1, ·, ui+1, . . . , un)k ∗ C Hi h∈Si Lemma 6.1. Suppose that α : H × K → R is a continuous bilinear form, then which is independent of the choice of basis. Thus we find that 2 X 2 kαk := |α (h, k)| X 2 h∈S,k∈Λ |α (k1, . . . , ki−1, hi, . . . , hn)|

(k1,...,ki−1)∈Λ1×···×Λi−1 is independent of choice of orthonormal bases S for H and Λ for K. (hi,...,hn)∈Si×···×Sn X X 2 Proof. Letα ˜ : H → K be the unique linear operator such that = |α (k1, . . . , ki−1, hi, . . . , hn)|

(k1,...,ki−1)∈Λ1×···×Λi−1 hi∈Si α (h, k) = (˜αh, k)K for all h ∈ H and k ∈ K. (hi+1,...,hn)∈Si+1×···×Sn ∗ ∗ X X 2 That is it β : K → K is the inverse or the map K 3 k → (k, ·) ∈ K , then = |α (k1, . . . , ki, hi+1, . . . , hn)|

αh˜ = β ◦ α (h, ·) . Notice that (k1,...,ki−1)∈Λ1×···×Λi−1 ki∈Λi (hi+1,...,hn)∈Si+1×···×Sn

kαh˜ kK = sup |(˜αh, k)K | = sup |α (h, k)| ≤ C khkH X 2 kkk=1 kkk=1 = |α (k1, . . . , ki, hi+1, . . . , hn)|

(k1,...,ki)∈Λ1×···×Λi so thatα ˜ is a . Moreover we have (hi+1,...,hn)∈Si+1×···×Sn X 2 X 2 X 2 2 and so it follows by induction that |α (h, k)| = |(˜αh, k)K | = |αh˜ | = kα˜kHS h∈S,k∈Λ h∈S,k∈Λ h∈S X 2 X 2 |α (k1, . . . , kn)| = |α (h1, . . . , hn)| . and the latter quantity is known to be basis independent. (k1,...,kn)∈Λ1×···×Λn h1∈S1,...,hn∈Sn 1 Proposition 6.2. Suppose that α : H1 × · · · × Hn → R is a continuous multi- Even better suppose that α : H1 × · · · × Hn → H is another . Then is multi-linear form which is continuous in each of its variables. Then 1 The continuity assertion is equivalent to the existstence of a C < ∞ such that X 2 kα (u1, . . . , ui−1, h, ui+1, . . . , un)kH |α (h1, . . . , hn)| ≤ C kh1k ... khnk h∈Si H1 Hn = kα (u1, . . . , ui−1, ·, ui+1, . . . , un)k for all hi ∈ Hi. HS(Hi,H) 28 6 Fock Spaces 2 X 2 which is basis independent. The same argument as above now allows us to kρ − ρkk = |ρ (h1, . . . , hn) − ρk (h1, . . . , hn)| change the basis one slot at a time to see that the whole thing is basis indepen- (h1,...hn)∈S1×···×Sn dent. X 2 = lim inf |ρl (h1, . . . , hn) − ρk (h1, . . . , hn)| l→∞ n Definition 6.3. If ρ : H → C be a multilinear form which is continuous in (h1,...hn)∈S1×···×Sn each of its variables and we let X 2 ≤ lim inf |ρl (h1, . . . , hn) − ρk (h1, . . . , hn)| l→∞ 2 X 2 (h1,...hn)∈S1×···×Sn kρk := |ρ (h1, . . . , hn)| (6.1) Multn(H,C) 2 = lim inf kρl − ρkk → 0 as k → ∞. (h1,...hn)∈S1×···×Sn l→∞ where S ⊂ H is any orthonormal basis of H. Further let Multn (H, C) denote 2 those ρ such that kρk < ∞ where by convention, Mult0 (H, ) = Multn(H,C) C C Example 6.5. Suppose that α , . . . , α ∈ H∗, then we may define an element with inner product, (z, w) := zw.¯ As we saw above these definitions 1 n Mult0(H,C) α1 ⊗ · · · ⊗ αn ∈ Multn (H, C) by are well defined. We further let n Y Symn (H, C) = {α ∈ Multn (H, C): α is symmetric} . α1 ⊗ · · · ⊗ αn (h1, . . . , hn) = αi (hi) . i=1 Below we will usually just write kρk for kρk as it should be clear from Multn(H,C) P context which norm we mean. Similarly α1 ∨ · · · ∨ αn := σ ασ1 ⊗ · · · ⊗ ασn ∈ Symn (H, C) where the sum is over all permutations of {1, 2, . . . , n} . Observe that Lemma 6.4. Multn (H, C) is a complex Hilbert space in when equipped with the n inner product, 2 Y 2 kα1 ⊗ · · · ⊗ αnk = kαik ∗ Multn(H,C) H X i=1 (ρ1, ρ2) = ρ1 (h1, . . . , hn) · ρ2 (h1, . . . , hn). (6.2) Multn(H,C) h1,...hn∈S and that Proof. The sum defining the inner product in Eq. (6.2) converges by ! 2 X X kα1 ∨ · · · ∨ αnk = ασ1 ⊗ · · · ⊗ ασn, ατ1 ⊗ · · · ⊗ ατn the Cauchy - Schwarz inequality and clearly defines an inner product on Multn(H,C) σ τ Multn (H, C) whose associated norm is given by Eq. (6.1). Since the inner prod- Multn(H,C) n n uct may be recovered from the norm by polarization it must be basis indepen- X Y X Y = (α , α ) = (α , α ) dent. So it only remains to show Multn (H, C) is a complete space. σi τi H∗ στi τi H∗ ∞ σ,τ i=1 σ,τ i=1 To simplify notation let kρk := kρkMult (H, ) . So let {ρk}k=1 be a Cauchy n C n n sequence in Multn (H, C) . As any unit vectors h ∈ H is part of an orthonormal X Y X Y ∞ = (ασi, αi)H∗ = n! (ασi, αi)H∗ . basis for H it easily follows that {ρk (h1, . . . , hn)}k=1 is a Cauchy sequence for all n σ,τ i=1 σ i=1 (h1, . . . , hn) ∈ H . Thus we know that ρ (h1, . . . , hn) := limk→∞ ρk (h1, . . . , hn) n exists and the resulting function, ρ : H → C is still multi-linear. By the uniform Example 6.6. 
Suppose that f : H → C is a smooth function near x ∈ H, boundedness principle it is continuous in each of its variables as well. We now n n then Dx f : H → C (see Notation 5.11) defines a multi-linear symmetric use Fatou’s lemma to learn, function on H since mixed partial derivatives commute. If we further assume ∞ ∗ that f ∈ FC (H ) so that f = F ((k1, ·) ,..., (km, ·)) for some ki ∈ H, then n Dx f ∈ Symn (H, C) and

m n X Dx f = (∂i1 . . . ∂in F ) ((k1, ·) ,..., (km, ·)) (ki1 , ·) ⊗ · · · ⊗ (kin , ·) . i1,...,in=1

Page: 28 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 ∗ Remark 6.7. Let Pn (H ) be the space of homogeneous polynomials of degree n on H. When dim H < ∞. the map ∗ Symn (H, C) 3 α → (x → α (x, x, . . . , x)) ∈ Pn (H ) (6.3) is a linear isomorphism with inverse map given by 1 P (H∗) 3 p → Dnp ∈ Sym (H, ) . (6.4) n n! 0 n C ∗ When dim H = ∞ it is no longer true that Pn (H ) and Symn (H, C) are alg isomorphic. To describe better what is going on in this case let Multn (H, C) ∗ denote those α ∈ Multn (H, C) such that α = P α for some finite rank or- ∗ thonormal projection on H where P α (h1, . . . , hn) := α (P h1, . . . , P hn) . We alg alg also let Symn (H, C) = Multn (H, C) ∩ Symn (H, C) . ∗ Proposition 6.8. The map in Eq. (6.4) is a linear isomorphism from Pn (H ) alg alg onto Symn (H, C) and Symn (H, C) is a dense subspace in Symn (H, C) . ∞ Proof. Let S be an orthonormal basis for H and {Sl}l=1 ⊂ S such that S ↑ S with # (S ) < ∞ for all l. Further let P x := P (x, h) h be orthogonal l l l h∈Sl ∗ projection onto Hl := span Sl. Then give α ∈ Symn (H, C) let αl := Pl α ∈ alg 1 n Symn (H, C) and pl (x) := αl (x, . . . , x) in Pn (H) with n! D0 pl = αl. So to finish the proof of the assertion it suffices to show αl → α in Symn (H, C) . This however follows from the DCT for sums. Indeed, for h1, . . . , hn ∈ S, we have 2 2 |(α − αl)(h1, . . . , hn)| ≤ 2 |α (h1, . . . , hn)| while 2 lim |(α − αl)(h1, . . . , hn)| = 0. l→∞

Definition 6.9 (Fock spaces). Given a real Hilbert space, H and t > 0, let

n ∞ 2 o T (H; t) := α = (αn)n=0 : αn ∈ Multn (H, C) and kαkt < ∞ and

F (H; t) := {α ∈ T (H; t): αn ∈ Symn (H, C)} where n 2 X t 2 kαk := kαnk . t n! Multn(H,C) We call T (H; t) the full Fock space over H and F (H; t) the Bosonic Fock space over H. The full Fock space T (H; t) is a Hilbert space when given the inner product, X tn (α, β) := (αn, βn) t n! Multn(H,C) for all α, β ∈ T (H; t) and F (H; t) is a Hilbert subspace of T (H; t) .

7 Segal Bargmann Transforms

dn   X h i 7.1 Three key identities etB/2 F · G¯  = etB/2 e−tB/2∂ . . . ∂ f · e−tB/2∂ . . . ∂ g¯ . dtn t t h1 hn h1 hn h1,...,hn∈S Let (W, BW , ν) be a real Gaussian probability space with Cameron-Martin space H := H and B := L := P ∂2 where S is an orthonormal basis for H. Because etB/2 e−tB/2f · e−tB/2g¯ is a polynomial in t, Taylor’s theorem implies ν ν h∈Sν h ν (We assume dim W < ∞ for the moment but it is not really needed). Given ∞ ∗ h i X tn dn   f, g ∈ P (W ) , n ∈ N, and h1, . . . , hn ∈ H, let etB/2 e−tB/2f · e−tB/2g¯ = etB/2 F · G¯  | n! dtn t t t=0 n n=0 D f (h1, . . . , hn) := (∂h . . . ∂h f)(x) . x 1 n ∞ n X t n n n = (D f, D g¯)Mult (H, ) As mixed partial derivatives commute it follows that Dx f ∈ Symn (H, C) . n! n C f n ∞ n=0 Further let αx := (Dx f)n=0 ∈ F (H; t) for all t > 0. as claimed. Equation (7.2) follows immediately from Eq. (7.1) with t = 1 and ∗ Theorem 7.1 (Key identity 1). Let f, g ∈ P (W ) , x ∈ W, and t > 0, then x = 0. Recall from Theorem 7.1 that for all f, g ∈ P (W ∗); ∞ n tB/2 h −tB/2 −tB/2 i f g  X t n n e e f · e g¯ (x) = α , α = (D f, D g¯) ∞ x x t n! x x Multn(H,C) h i X 1 n=0 eB/2 e−B/2f · e−B/2g¯ = (Dnf, Dng) . (7.3) n! Multn(Hν ,C) (7.1) n=0 and in particular Notation 7.2 Let W = W + iW denote the complexification of W, W ∗ = C C ∞ †    f g X 1 Hom (W, C) be the continuous complex linear functionals on W, and W := e−B/2f · e−B/2g = α , α = (Dnf, Dng¯) . (7.2) C C 0 0 0 0 Multn(H, ) ˜ L2(ν) 1 n! C HomR (W, C) be the continuous real linear functionals on WC. Further let B be n=0  †  the operator acting on P W given by Proof. Below we will suppress x from the notation with the understanding C that all formulas are to evaluated at x. To further simplify the notion let ˜ X 2 B := ∂ih. −tB/2 −tB/2 h∈Sν Ft := e f and Gt := e g.¯ We will also abuse notation and view B = P ∂2 as on operator on both h∈Sν ih By simple calculus we have,  †  P (W ) and on P W . C d   1 etB/2 F · G¯  = etB/2 B F G¯  − BF · G¯ − F · BG¯  ∗ dt t t 2 t t t t t t Theorem 7.3 (Key identity 2). If f, g ∈ P (W ) (the holomorphic polyno- 1 C X tB/2  mial functions on WC), then = e ∂hFt · ∂hG¯t ∞ h∈S ˜ h i X 1 h i eB/2 eB/2f · eB/2g¯ = (Dnf, Dng) . (7.4) X tB/2 −tB/2 −tB/2 n! Multn(Hν ,C) = e e ∂hf · e ∂hg¯ . n=0 h∈S 1 The identity in this theorem is not as key as the other two key identities. We will It now follows by induction that in fact only use it in the proof of the third key identity. 32 7 Segal Bargmann Transforms ∞ Proof. Let B/˜ 2 h B/2 B/2 i X 1 (n) ˜ h i e e f · e g¯ = u (1) = u (0) u (t) := etB/2 etB/2f · etB/2g¯ n! n=0 ∞ which is a polynomial in t and therefore as usual, X 1 X = [∂h1 . . . ∂hn f · ∂h1 . . . ∂hn g¯] ∞ n! n=0 h1,...,hn∈Sν B/˜ 2 h B/2 B/2 i X 1 (n) e e f · e g¯ = u (1) = u (0) . ∞ n! X 1 n=0 = (Dnf, Dng) . n! Multn(Hν ,C) n=0 Moreover, we have for f, g ∈ P (W ∗) and h ∈ H ; C ν ∂ f = i∂ f, ∂ g¯ = ∂ g = i∂ g = −i∂ g,¯ ih h ih ih h h Notation 7.4 Associated to a probability measure µ on (W, BW ) are two the two probability measures µ × δ and δ × µ on W = W + iW =∼ W × W. We Bf˜ = −Bf, B˜g¯ = −Bg,¯ and 0 0 C will continue to denote µ × δ0 by µ and we will denote δ0 × µ by µ.˜ Thus the measures µ and µ˜ satisfy; ˜ ˜ ˜ X B (fg¯) = Bf · g¯ + fBg¯ + 2 ∂ihf · ∂ihg¯ Z Z Z Z

h∈Sν fdµ = fdµ and fdµ˜ = f (ix) dµ (x) W W W W X C C = −Bf · g¯ − fBg¯ + 2 ∂hf · ∂hg.¯ (7.5) for all bounded measurable functions f on W . h∈Sν C

∗ Lemma 7.5. Suppose that µ and ν are two probability measures on (W, BW ) . If f, g ∈ P (α1, . . . , αn) with {α1, . . . , αn} ⊂ W , we can choose Sν so that h ⊥ C n n 2 n {Jν α1|W ,...,Jν αn|W } for all but at most n element of Sν . This is equivalent 1. If µ (k·kW ) + ν (k·kW ) < ∞ for some n ∈ N then µ ∗ ν (k·kW ) < ∞. εk·k  εk·k  to 2. If there exists ε > 0 such that µ e W + ν e W < ∞ then µ ∗ εk·k  # {h ∈ Sν : α1 (h) = ··· = αn (h) = 0} ≤ n. ν e W < ∞ as well. 3. If µ (k·kn ) < ∞ and f : W → satisfies |f| ≤ C (1 + k·kn ) then there With such a choice the sum appearing in Eq. (7.5) is really a finite sum. W R W exists C0 < ∞ such that |µ ∗ f| ≤ C0 (1 + k·kn ) . The computations now go as before, namely W Exercise 7.1. Prove Lemma 7.5. 1 h h i i u˙ (t) = etB/˜ 2 B˜ etB/2f · etB/2g¯ + BetB/2f · etB/2g¯ + etB/2f · BetB/2g¯ 2 Corollary 7.6 (Key identity 3). Continuing the notation in Theorem 7.3, " # for all f, g ∈ P (W ∗) , tB/˜ 2 X tB/2 tB/2 C = e ∂he f · ∂he g¯ eB/2 [f · g¯] = eB/˜ 2 eBf · eBg¯ . (7.6) h∈Sν X tB/˜ 2 h tB/2 tB/2 i = e e ∂hf · e ∂hg¯ . Alternatively we may write this as,

h∈Sν ν ∗ [f · g¯] =ν ˜ ∗ [ν2 ∗ f · ν2 ∗ g¯] (7.7) Hence by induction we learn that where ν2 := ν ∗ ν and for a measurable function u : WC → C with polynomial (n) X tB/˜ 2 h tB/2 tB/2 i growth, u (t) = e e ∂h . . . ∂h f · e ∂h . . . ∂h g¯ Z Z 1 n 1 n (ν ∗ u)(z) := u (z − x) dν (x) = u (z + x) dν (x) h1,...,hn∈Sν W W and and therefore, Z Z (˜ν ∗ u)(z) := u (z − ix) dν (x) = u (z + ix) dν (x) W W 2 R I will often write µ (f) for W fdµ.

Page: 32 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 7.2 The Segal-Bargmann Transform 33 First proof. By Theorem 7.1 and 7.3 we know that 7.2 The Segal-Bargmann Transform

B/2 h −B/2 −B/2 i B/˜ 2 h B/2 B/2 i e e f · e g¯ = e e f · e g¯ . Definition 7.8. Suppose that W is a complex Banach space and µ is any prob- ability measure on B such that µ (k·kn ) < ∞ for all n ∈ . We let H˙ L2 (µ) Replacing f and g by eB/2f and eB/2g respectively in this equality proves Eq. W W N denote the L2 (µ) – closure of P (W ∗) – the holomorphic polynomial functions (7.6). on W. Second proof. In this direct proof we will adapt the argument in Hall [18, p.  †  820]. For h ∈ H we define two differential operators on P W ; ν C Remark 7.9. If dim W < ∞ and µ is a non-degenerate (i.e. Hµ = W ) Gaussian probability measure on (W, BW ) , then we will see in Theorem 8.10 below that 1 1 ∗ 2 Zh := [∂h − i∂ih] and Z¯h := [∂h + i∂ih] . P (W ) is a dense subspace of the Hilbert space (see Lemma 2.8) HL (W, µ) . 2 2 Thus H˙ L2 (W, µ) = HL2 (W, µ) in this case. For f ∈ P (W ∗) we have Z f = ∂ f, Z¯ f = 0,Z f¯ = 0, Z¯ f¯ = Z f = ∂ f,¯ C h h h h h h h and Theorem 7.10 (Generalized Segal-Bargmann transform). Let µ be any 1 εk·k  Z2 + Z¯2 = ∂2 − ∂2  . measure on (W, BW ) such that µ e W < ∞ for some ε > 0. Then there h h 2 h ih exists a unique unitary map From these observations it follows that 2 2 B B P Z2 P Z¯2 Sµ,ν : L (W, µ ∗ ν) → H˙ L (W , µ × ν) e f · e g¯ = e h∈Sν h f · e h∈Sν h g¯ C P (Z2 +Z¯2 ) 1 P (∂2 −∂2 ) ∗ = e h∈Sν h h [f · g¯] = e 2 h∈Sν h ih [f · g¯] such that for all p ∈ P (W ) , 1 ˜ 2 (B−B) B  = e [fg¯] . Sµ,ν p = ν2 ∗ p = e p . C C Applying eB/2 to both sides of this identity completes the proof. Proof. In light of Lemma 7.5, Fernique’s Theorem 3.5, and Theorem 2.2, we n ∗ 2 Corollary 7.7. Let µ be any measure on (W, BW ) such that µ (k·kW ) < ∞ for know that P (W ) is a dense subspace of L (µ ∗ ν) . Therefore it follows that all n ∈ . Then for all f, g ∈ P (W ∗) we have N C the isometric map in Corollary 7.7 extends uniquely to a unitary map from 2 ˙ 2 Z Z L (W, µ ∗ ν) to HL (WC, µ × ν) . f · g¯ d (µ ∗ ν) = [ν2 ∗ f · ν2 ∗ g¯] d (µ × ν) . In the case where µ and ν are both non-degenerate Gaussian measures we W W C can compute Sµ,ν f more explicitly. Proof. Integrate Eq. (7.7) relative to µ. In doing so we make use the fact that ν is symmetric and therefore, Corollary 7.11. If µ and ν are both Gaussian measures on W with full support 2 Z Z Z (i.e. Hµ = W = Hν ) and f ∈ L (µ ∗ ν) , then; [ν ∗ F ] dµ = [ν ∗ F ] dµ = F (x − y) dν (y) dµ (x) W W W ×W 1. for all x ∈ W the following integral exists, C Z Z = F (x + y) dν (y) dµ (x) = F d (µ ∗ ν) Z W ×W W (ν2 ∗ f)(x) = f (x − y) dν2 (y) . for F : W → [0, ∞] measurable and W Z Z 2. ν2 ∗ f : Hµ∗ν = W → C is smooth and even admits a unique analytic ν˜ ∗ F dµ = (˜ν ∗ F )(x) dµ (x) continuation, (ν2 ∗ f) to all of W . W W C C C 3. Sµ,ν f = (ν2 ∗ f) – µ × ν a.s. Z Z  C = F (x − iy) dν (y) dµ (x) 4. The resulting Segal - Bargmann map, W W 2 2 Z Z Sµ,ν : L (W, µ ∗ ν) → HL (W , µ × ν) , = F (x + iy) dµ (x) dν (y) = F d (µ × ν) . C W ×W W ×W is unitary.


Proof. 1. Let us write H for Hν and x · y for (x, y) . Further let C be where Hν   the unique positive operator (matrix) on H such that (x, y) = Cx · y for all 1 1 1 Hµ K (z, y) := exp z · y − y · y + (I − D) y · y x, y ∈ H. Further let λ := µ ∗ ν and observe that 2 4 2 The fact that dν (x)  1  Z 2 ∝ exp − x · x and dx 4 z → f (y) K (z, y) dλ (y) W dλ (x)  1   1  ∝ exp − (I + C)−1 x · x = exp − (I − D) x · x is analytic follows either by Morera’s Theorem 2.5 or by differentiating past the dx 2 2 integral. (See the proof of Lemma 8.7 below for more details on this type of argument.) The details are left to the reader. where D = C (I + C)−1 and 0 < D < I as can be seen by the spectral theorem. 3. As µ × ν is a positive smooth measure on W it follows from Lemma 2.8 Using this notation it follows that C that the evaluation maps, Z 2 f (x − y) dν2 (y) HL (W , µ × ν) 3 F → F (z) ∈ C W C Z   1 are continuous linear functionals on HL2 (W , µ × ν) . Therefore if p ∈ P (W ∗) ∝ f (y) exp − (x − y) · (x − y) dy C n 2 W 4 with pn → f in L (λ) then Z 1  exp − 4 (x − y) · (x − y) ∝ f (y) dλ (y) (Sµ,ν f)(z) = lim (Sµ,ν pn)(z) . 1  n→∞ W exp − 2 (I − D) y · y Z   2 − 1 x·x 1 1 1 On the other hand since K (z, ·) ∈ L (λ) it also follows that = e 4 f (y) exp x · y − y · y + (I − D) y · y dλ (y) . W 2 4 2 Z Z To verify the latter integral is well defined it suffices to show lim pn (y) K (z, y) dλ (y) = f (y) K (z, y) dλ (y) . n→∞ W W 1 1 1  exp x · y − y · y + (I − D) y · y ∈ L2 (dλ (y)) Putting these two observations together allows us to conclude that 2 4 2 Z − 1 z·z (Sµ,ν f)(z) = (const.) · e 4 f (y) K (z, y) dλ (y) = (ν2 ∗ f) (z) . which is the case since, C W   1 1 1 1 2 2 − (I − D) x · x + 2 x · y − y · y + (I − D) y · y 4. The unitarity of Sµ,ν from L (W, µ ∗ ν) to HL (WC, µ × ν) now follows 2 2 4 2 from Theorem 7.10 and Remark 7.9. 1 1 1 = x · y − y · y + (I − D) y · y = x · y − Dy · y 2 2 2 and therefore, 7.2.1 Examples Z   2 1 1 1 Bargmann [3] introduces three forms of the Segal-Bargmann transform. I will exp x · y − y · y + (I − D) y · y dλ (y) W 2 4 2 describe the two most interesting forms of the transform in the next two exam- Z  1  ples – the third form is rather trivially obtained from one of these forms. ∝ exp x · y − Dy · y dy < ∞ 2 d W Example 7.12 (Standard form 1). In Corollary 7.11 take W = R and µ = ν = t∆/2 as D > 0. Pt/2 where Pt = e δ0, i.e. R 2. The analytic continuation, F (z) , of x → f (x − y) dν2 (y) is given by W −d/2 − 1 x·x P (dx) = p (x) dx := (2πt) e 2t dx. (7.8) Z t t − 1 z·z F (z) = (const.) · e 4 f (y) K (z, y) dλ (y) Then we have shown W

Page: 34 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 7.2 The Segal-Bargmann Transform 35 Z Z 2 2  t∆/2  Using this result in Eq. (7.11) then gives, |f (x)| dPt (x) = e f (x + iy) dPt/2 (x) dPt/2 (y) d d R C C Z Z 2    d Z 2 2 1 z2 t∆  1 2 1   1 |f (x)| dx = C e 2t e f (2z) exp − |z| dxdy t∆/2 − t (x·x+y·y) t = e f (x + iy) e dxdy d d C t πt d R C C C Z   2 1 Re z2 1 2  d Z   t∆  t ( ) 1 2 1 = Ct e f (2z) e exp − |z| dxdy d C t = |(Pt ∗ f) (z)| exp − z · z¯ dxdy. (7.9) C πt d C t C where where  1 d  1 d/2  1 d/2 Z   d d −d/2 1 Ct = (4πt) = 4 . (Pt ∗ f) (z) = (2πt) f (x) exp − (x − z) · (x − z) dx. πt 2πt 2πt C d 2t R Now d 2 2 2 2 2 2 2 Example 7.13 (Standard form 2). In Corollary 7.11 take W = R and ν = Pt/2 Re z − |z| = x − y − x + y = −2y and µ = m, where m is Lebesgue measure. Technically this is not allowed since m is not a probability measure and certainly does not integrate polynomials. and so we have shown Nevertheless blindly going ahead using m ∗ Pt = m suggests that we should Z  d/2 Z   2 d 1 t∆  2 2 2 expect |f (x)| dx = 4 e f (2z) exp − y dxdy. d 2πt d C t Z  d/2 Z   R C 2 1 2 1 2 |f (x)| dm (x) = |(Pt ∗ f) (z)| exp − y dxdy (7.10) Making the change of variables, z → 1 z then implies, d πt d C t 2 R C where we are now writing y2 for y ·y. As it turns out we may derive this formula Z  d/2 Z   2 1 t∆  2 1 2 rigorously from Eq. (7.9). |f (x)| dx = e f (z) exp − y dxdy d 2πt d C 2t 2 √ R C Following Hall [15, p. 149], for f ∈ L (m) apply Eq. (7.9) with f by f/ pt ∈ Z t∆  2 L in Eq. (7.9) shows, = e f (z) dxPt (dy) d C Z  d Z   2   C 2 1 f 1 2 |f (x)| dx = Pt ∗ √ (z) exp − |z| dxdy (7.11) which is Eq. (7.10). d πt d pt t R C C where Example 7.14 (Interpolating forms). In this example we wish to “interpolate” d/4 − 1 (z−x)2  f   1  Z e 2t between the two standard forms. In order to do so we apply Corollary 7.11 with P ∗ √ (z) = f (x) dx. d t − 1 x2 W = and µ = Pσ and ν = Ps to arrive at the identity, pt 2πt d e 4t R C R Z Using the identity, 2 |f (x)| dPσ+s (x) 1 2 1 2 1 2 1 2 d − (z − x) + x = − (x − 2z) + z R Z 2t 4t 4t 2t 2 = |(P2s ∗ f) (x + iy)| dPσ (x) dPs (y) d C we learn that C  d Z   2 2     d/4 Z   1 2 x y f 1 1 z2 1 2 √ P ∗ (z) = e 2t exp − (x − 2z) f (x) dx = |(P2s ∗ f) (x + iy)| exp − + dxdy. (7.12) t √ 2π σs d C 2σ 2s pt 2πt d 4t C C R  d/4 Z 1 1 z2 d/2 This gives the results in Example 7.12 when σ = s = t/2. Moreover for f ∈ = e 2t (4πt) p2t (x − 2z) f (x) dx 2 d/2 2πt d L (m) we may multiply Eq. (7.12) by (2πσ) and then let σ → ∞ in order R  d/4 to formally arrive at Eq. (7.10) with t = s. In a little more detail, 1 d/2 1 z2 = (4πt) e 2t (P2t ∗ f) (2z) . 2πt C

Page: 35 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 36 7 Segal Bargmann Transforms Z  d/2 Z " 2 #! 2 d/2 2 σ 2 |x| for all f ∈ L (P1) . Similarly to Example 7.15 we may easily conclude that the (2πσ) |f (x)| dPσ+s (x) = |f (x)| exp − dx d σ + s d 2 (σ + s) map, R R 2 d  ˙ 2 d  Z L R ,Pt 3 f → (z → f (i Im z)) ∈ HL C , δ0 × Pt 2 → |f (x)| dx as σ → ∞ is unitary. d R If f a function on Rd with (for example) at most exponential growth one and similarly easily shows that Sf (z) is given explicitly as, (2πσ)d/2 ·RHS (7.12) Z  1  Sf (z) = (4πt)−d/2 f (x) exp − (x − z)2 dx. Z   2 2  d 4t −d/2 2 x y R = (2πs) |(P2s ∗ f) (x + iy)| exp − + dxdy √ d C 2σ 2s d  C So if g ∈ Cc R , C we may take f := g/ pt in the above formulas in order to Z  2  −d/2 2 y find, → (2πs) |(P2s ∗ f) (x + iy)| exp − dxdy d C 2s Z C √ −d/2 g (x) − 1 (x−z)2 Z S (g/ pt)(z) = (4πt) p e 4t dx 2 d pt (x) = |(P2s ∗ f) (x + iy)| dxPs (dy) R d C Z C 1 x·x − 1 (x−z)2 = Ct g (x) e 4t e 4t dx In the next two examples we explore what happens if µ or ν degenerate to d R Z δ0. − 1 z2 1 x·z = Cte 4t g (x) e 2t dx, d d d Example 7.15. Let W = R , WC = C , ν = δ0 and µ be a probability measure R on (W, BW ) as in Theorem 7.10. We then have , µ ∗ ν = δ0, ν2 = δ0 ∗ δ0 = δ0and where d/4 −d/2 ν2 ∗ f = f. Therefore Theorem 7.10 states that Ct := (2πt) (4πt) . Z Z 2 Taking z = iy then implies, |p (x)| dµ (x) = |pC (x + iy)| dµ (x) δ0 (dy) W W √ 1 y2 C S (g/ p )(iy) = C e 4t gˆ (y/2t) Z t t = |pC (x)| dµ (x) where W Z gˆ (y) := g (x) eiy·xdx which gives no new information. d R Incidentally, notice that √ is the Fourier transform of g. Therefore Eq. (7.13) with f := g/ pt becomes 2 2 L (µ⊗δ0) L (µ) Z Z ˙ 2 ∼ 2 2 2 1 2 HL (µ ⊗ δ0) := HP (W ) = P (W ) = L (W, µ) 2 2t y C |g (x)| dx = Ct |gˆ (y/2t)| e Pt (dy) d d d since for any holomorphic polynomial p on C , R R Z −d 2 Z Z = (4πt) |gˆ (y/2t)| dy 2 2 2 d kpk 2 = |p (x + iy)| µ (dx) δ (dy) = |p (x)| µ (dx) . R L (µ⊗δ0) 0 W W  d C 1 Z Z = |gˆ (y)|2 dy = |gˆ (2πy)|2 dy The following example can essentially be found in Hall [18, Theorem 2.2]. 2π d d R R d d which is the isometry property of the Fourier transform. Because the maps Example 7.16 (Fourier Wiener Transform). Let W = R ,WC = C , µ = δ0, ν = Pt (see Eq. (7.8)), and S = Sδ0,Pt . By Theorem 7.10, 2 d  ˙ 2 d  ∼ 2 d  S : L R ,Pt → HL C , δ0 × Pt = L R ,Pt and Z Z √ 2 2 2 d  2 d  |f (x)| pt (x) dx = |Sf (z)| δ0 (dx) Pt (dy) L R , dx 3 g → g/ pt ∈ L R ,Pt d d R ZC are unitary we have actually given a proof of the fact that the Fourier transform, 2 2 d  = |Sf (iy)| Pt (dy) (7.13) g → gˆ (2π (·)) is a unitary map on L R , dx . d R Some where around here see the Ham- burg moment prob- Page: 36 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 lem section ??. 8 The Kakutani-Itˆo-Fock space isomorphism

(Some old stuff has now been moved to the bone yard ??. BRUCE: see Chapter ?? below for some QM interpretation of this stuff.)

Proposition 8.3 (Hermite/Chaos Expansion). Let (W, B_W, µ) be a non-degenerate Gaussian probability space. Then;

1. Fµ [Fn (µ)] = Symn (Hµ, C) and for any α ∈ Symn (Hµ, C) , 8.1 The Real Case 1 F −1α = e−Lµ/2 (x → α (x, . . . , x)) . (8.1) µ n! As usual let (W, BW , µ) be a Gaussian probability space. If H := Hµ is a proper subspace of W it is not true that the restriction map, 2 2. L (W, µ) is the orthogonal Hilbert space direct sum of the subspaces Fn (µ) ∗ ∗ for n = 0, 1, 2 ..., i.e. every f ∈ L2 (W, µ) has a unique orthogonal direct P (W ) 3 p → p|H ∈ P (H ) , sum decomposition of the form is one to one. which is a bit annoying. However notice that if p, q ∈ P (W ∗) with ∞ p = q (µ a.s.) then p = q on H the conversely. Thus P (W ∗) / ∼ and P (H∗) X are isomorphic where p ∼ g iff p = q (µ a.s.) iff p = q on H. For the sake of f = fn with fn ∈ Fn (µ) . (8.2) simplicity and with no real loss in generality let us assume in this section that n=0

H = W, i.e. µ is non-degenerate. Lµ/2 3. Writing e f for µ ∗ f, the fn in Eq. (8.2) are computed via In what follows we will simply write F (H) for F (H; 1) – see Definition 6.9.

2 1 −L/2 n L/2 Theorem 8.1 (Fock Space Isomorphism I). For f ∈ L (µ) let Fµf := fn = e (x → ∂x e f)(0). (8.3) n ∞ 2 n! (D0 (µ ∗ f))n=0 , then Fµf ∈ F (H) and Fµ : L (µ) → F (H) is unitary. 4. Fn (µ) is the set of all polynomials on W of degree n which are orthogonal Proof. First off from Lemma 5.12 we know that µ∗f is smooth on Hµ = W n to all polynomials of degree at most n − 1. so that it makes sense to even write D0 (µ ∗ f) . Secondly, from Proposition 6.8 and Theorem 7.1 (with ν = µ) we know that F | ∗ is an isometry µ P(W ) Proof. Let us write H for H ,F for F , and L for L in this proof. from P (W ∗) onto a dense subset of F (H) and hence extends uniquely to a µ µ µ 2 −L/2 ∗ unitary transformation, Fµ : L (µ) → F (H) . It only remains to show that 1. If f = e p ∈ Fn (µ) for some p ∈ Pn (W ) , then F f := (Dn (µ ∗ f))∞ . µ 0 n=0 h i h i Let h1, . . . , hn ∈ H and define l (f) := [∂h . . . ∂h (µ ∗ f)] (0) for all f ∈ k −L/2 k L/2 −L/2 1 n (F f)k = D0 µ ∗ e p = D0 e e p L2 (µ) . According to Lemma 5.12 this is a bounded linear functional. Thus 2 ∗ 2 k n if f ∈ L (µ) and pk ∈ P (W ) such that pk → f in L (µ) we will have = D0 p = δk,nD0 p n n l (pk) → l (f) . We also have D (µ ∗ pk) → D (µ ∗ f) and therefore, 0 0 where the last equality is a consequence of the fact that p is homogeneous n n D0 (µ ∗ f)(h1, . . . , hn) = lim D0 (µ ∗ pk)(h1, . . . , hn) = lim l (pk) = l (f) of degree n. This shows that F [Fn (µ)] ⊂ Sym (Hµ, ) . k→∞ k→∞ n C Conversely if f ∈ F (H) with F f = α ∈ Symn (H, C) , let which completes the proof. 1 th −Lµ/2 Definition 8.2. The n level Hermite (or homogeneous chaos) sub- g := e (x → α (x, . . . , x)) ∈ Fn (µ) . n! 2 −Lµ/2 ∗ ∗ space of L (W, µ) is the space Fn (µ) = e Pn (W ) , where Pn (W ) de- notes the space of homogeneous polynomials of degree n on W. Then 38 8 The Kakutani-Itˆo-Fock space isomorphism 1 h i 2 L/2 n −Lµ/2 In words, to find the Hermite decomposition of f ∈ L (µ) apply e to f, then F g = D0 µ ∗ e (x → α (x, . . . , x)) n! compute the Taylor expansion of the result, then apply e−L/2 to each term in 1 h i = Dn eLµ/2e−Lµ/2 (x → α (x, . . . , x)) this expansion. So formally the theorem represents the assertion that n! 0 −L/2 L/2 1 IdL2(µ) = e ◦ Taylor0 ◦ e . = Dn [α (x, . . . , x)] = α. n! 0 By Theorem 8.1, F is unitary and therefore f = g a.s. and we proved Eq. 8.2 The Complex Case (8.1) and Symn (Hµ, C) ⊂ F [Fn (µ)] . 2. & 3. These items follow directly from item 1. and Theorem 8.1 as fn = Now suppose that W is a complex vector space. We now wish to assume that −1 H = H is a complex subspace of W. (This need not be the case in general, F [(F f)n] . µ L/2 ∗ ∗ 4. Noting that e : P (W ) → P (W ) is degree preserving we see that just take W = C with µ = γ ⊗ δ0 where γ is the standard normal distribution. −L/2 ∗ Fn (µ) = e Pn (W ) is contained inside the degree n – polynomials In this case Hµ = R ⊂ C which is a real but not complex subspace.) ∗ n in P (W ) . Moreover ⊕ Fk (µ) is equal to the degree n polynomials in k=0 Lemma 8.5. Let µ be a Gaussian measure on W, then iHµ = Hµ is equivalent P (W ∗) . This is because if p ∈ P (W ∗) is a degree n – polynomial, then by to the condition on q that q (α) = 0 iff q (α ◦ Mi) = 0. items 2. and 3., Proof. Recall that ∞ X 1 ⊥ p = e−L/2(x → ∂keL/2p)(0) H = Nul (q) := {x ∈ W : α (x) = 0 whenever q (α) = 0} . k! x k=0 ∞ Thus ix ∈ H iff X 1 = e−L/2(x → ∂keL/2p)(0) ∈ ⊕n F (µ) . α (ix) = 0 whenever q (α) = 0. k! 
x k=0 k k=0 Thus if q (α) = 0 iff q (α ◦ Mi) = 0 and x ∈ H, then α (x) = 0 when q (α) = 0 implies α ◦ Mi (x) = 0 when q (α ◦ Mi) = 0 which is equivalent to α (ix) = 0 n−1 Hence we may conclude that Fn (µ) is perpendicular to ⊕k=0 Fk (µ) which when q (α) = 0 showing that ix ∈ H. ∗ ∗ is precisely the degree n−1 polynomials in P (W ) . Moreover if p ∈ P (W ) Conversely suppose that iH = H. Then using H = Nul (q)⊥ or equivalently n is a degree n polynomial (i.e. p ∈ ⊕k=0Fk (µ)) which is orthogonal to the 0 0 n−1 Nul (q) = H it follows that Nul (q) = (iH) . Therefore α ∈ Nul (q) iff α (H) = degree n − 1 polynomials (i.e. p ⊥ ⊕ Fk (µ)) we must have p ∈ Fn (µ) . k=0 {0} iff α (iH) = {0} iff α ◦ Mi (H) = {0} iff α ◦ Mi ∈ Nul (q) . Definition 8.6. If W is a finite dimensional complex vector space, we say that a Gaussian measure µ on W is compatible with the complex structure iff Remark 8.4. Combining the results of items 2. and 3. of Proposition 8.3, if ∗ Nul (M tr) q = Nul (q) iff iH = H , i.e. iff H is a complex subspace of W. f ∈ L2 (µ) then i µ µ µ If µ is non-degenerate1 then µ is of course compatible with the complex ∞ X 1 structure. Conversely if µ is compatible with the complex structure we may f = e−tL/2 (x → ∂n (µ ∗ f) (0)) (orthogonal terms). n! x replace W by H and assume that µ is non-degenerate with out any real loss of n=0 generality. (If µ were not compatible with the complex structure, the measure We will write this succinctly as µ would be supported on a real but not complex subspace of W.) Ignoring the complex structures on H and W we still have Fock and Hermite expansion ∞ X 1 results in Theorem 8.1 and Proposition 8.3. Our goal here is to describe these f(x) = e−L/2(x → ∂neL/2f)(0) (8.4) n! x expansions on the Hilbert subspace of holomorphic functions inside of L2 (µ) . n=0 ∞ The key new ingredient is contained in the next lemma. X 1 “ = ” e−L/2 (x → ∂neL/2f)(0). (8.5) 1 It should now be clear that µ is non-degenerate if any one of the following equivalent n! x n=0 conditions hold; 1) qµ is positive definite, 2) Hµ = W, or 3) the support of µ is all of W.
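In the simplest case W = R with µ the standard Gaussian (so that, in this normalization, L = d²/dx² and µ ∗ f = e^{L/2}f), the chaos spaces F_n(µ) = e^{-L/2}P_n(W*) are spanned by the probabilists' Hermite polynomials He_n = e^{-L/2}x^n, and the orthogonality asserted in Proposition 8.3 is the classical identity E[He_m(X) He_n(X)] = n! δ_{mn}. The following sketch is only a numerical sanity check of that identity (the helper he() and the quadrature order are my own choices, not notation from the text).

    import numpy as np
    from numpy.polynomial import hermite_e as He
    from math import factorial, sqrt, pi

    def he(n, x):
        # probabilists' Hermite polynomial He_n(x) = (e^{-D^2/2} x^n)
        c = np.zeros(n + 1)
        c[n] = 1.0
        return He.hermeval(x, c)

    # Gauss-HermiteE nodes/weights: sum_i w_i g(x_i) ~ int g(x) e^{-x^2/2} dx.
    x, w = He.hermegauss(60)

    for m in range(6):
        for n in range(6):
            val = np.sum(w * he(m, x) * he(n, x)) / sqrt(2 * pi)   # = E[He_m(X) He_n(X)]
            target = factorial(n) if m == n else 0.0
            assert abs(val - target) < 1e-8, (m, n, val)
    print("E[He_m He_n] = n! delta_{mn} verified for m, n < 6")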


Lemma 8.7. Suppose that W is a complex vector space and µ is a non- and n o C C degenerate Gaussian measure on W. Then µ ∗ f is a holomorphic function on F (H) := α ∈ F (H): αn ∈ Symn (H, C) ∀ n ∈ N0 . 2 Hµ = W for all f ∈ L (µ) . (So α ∈ Symn (H, C) iff α ∈ Symn (H, C) and Proof. Let f ∈ HL2 (µ) . The first thing we want to prove is that µ∗f is still α (ih , h , . . . , h ) = iα (h , h , . . . , h ) for all (h , . . . , h ) ∈ Hn.) holomorphic. So we have to show for each x, y ∈ W that F (λ) := µ∗f (x + λy) is 1 2 n 1 2 n 1 n holomorphic for λ near 0 in C. Recall, using the baby Cameron-Martin Theorem Example 8.9. If dim H < ∞ and f : H → C is a function which is holomorphic n C 4.7, that near 0 ∈ H, then D0 f ∈ Symn (H, C) for all n ∈ N0, see Theorem 2.6. Z Theorem 8.10 (Fock/Hermite Expansions II). Suppose that W is a com- µ ∗ f (x) = f (x − z) µ (dz) (8.6) plex finite dimensional vector space and µ is a non-degenerate Gaussian measure W 2 Z on W. Let HFn (µ) = Fn (µ) ∩ HL (µ) – the holomorphic polynomials inside = f (x + z) µ (dz) of Fn (µ) . Then; W Z  2  C (x,z) − 1 kxk2 1. Fµ HL (µ) = F (H) . = f (z) e H 2 H µ (dz) . (8.7) C −L/2 (n) (n) W 2. Fµ [HFn (µ)] = Symn (H, C) = e H where H denotes the homoge- neous holomorphic polynomials of degree n. Replacing f by |f| in this equation and using 2 (x, ·) under µ is Gaussian with 2 ∞ H 3. HL (µ) = ⊕n=0HFn (µ) (orthogonal direct sum). variance 4 kxk2 along with the Cauchy-Schwarz inequality shows 2 P∞ H 4. if f ∈ HL (µ) and f = n=0 fnis the Hermite expansion from Proposition 8.3, then fn ∈ HFn (µ) for all n. Z 1 2 − 2 kxkH (x,·)H µ ∗ |f| (x) = |f (x − z)| µ (dz) ≤ e kfk 2 · e 5. HF n (µ) is the set of all holomorphic polynomials on W of degree n which L (µ) 2 W L (µ) are orthogonal to all holomorphic polynomials on W of degree n − 1 or less. − 1 kxk2 p 1 2 1 kxk2 2 2 H 4kxkH 2 H 6. The holomorphic polynomials on W are dense in HL (W, µ) . = kfkL2(µ) · e · e 2 = kfkL2(µ) · e . Proof. To simplify notation let H = Hµ,L = Lµ, and F = Fµ. Thus if T is a solid triangle contained in C we have 1. & 2. By Lemma 8.7, if f ∈ HL2 (µ) then µ ∗ f is still holomorphic and Z Z Z 1 kx+λyk2 n C 2 H therefore [F f] = D0 [µ ∗ f] ∈ Symn (H, ) for all n ∈ . This shows |f (x + λy − z)| µ (dz) |dλ| ≤ kfkL2(µ) · e |dλ| < ∞. n C N ∂T W ∂T  2  C C that F HL (µ) ⊂ F (H) and that F [HFn (µ)] ⊂ Symn (H, C) . If C Since λ → f (x + λy − z) is holomorphic it follows by Fubini’s theorem along α ∈ Symn (H, C) , then p (x) := α (x, . . . , x) is complex differentiable and with Morera’s Theorem 2.5 that therefore a holomorphic polynomial. Since partial derivations preserve the class of holomorphic functions we may conclude that F −1α = e−L/2 [p] is Z Z h i −1 C F (λ) dλ = µ ∗ f (x + λy − z) dλ holomorphic and in Fn (µ) . Therefore F Symn (H, C) ⊂ HFn (µ) and ∂T ∂T C Z Z  so Symn (H, C) ⊂ F [HFn (µ)] and we have proved the first equality in item ∞ C −1 P −1 2 = f (x + λy − z) dλ µ (dz) = 0. 2. If α = (αn) ∈ F (H) , then F α = n=0 F αn is in HL (µ) as each W ∂T term is in HL2 (µ) and HL2 (µ) is a closed subspace of L2 (µ) . This shows −1   2  2  Since T was arbitrary, another application of Morera’s Theorem 2.5 implies F that F F C (H) ⊂ HL (µ) , i.e. F C (H) ⊂ F HL (µ) which completes is holomorphic. the proof of item 1. Since n o (n) C Definition 8.8. Given a complex Hilbert space, H, let H = x → α (x, . . . , x): α ∈ Symn (H, C)

C it follows from Eq. (8.1) that Multn (H, C) := {α ∈ Multn (H, C): α is complex multi-linear} h i h i −1 C −L/2 (n) C C HFn (µ) = F Sym (H, ) = e H . Symn (H, C) := Symn (H, C) ∩ Multn (H, C) n C

Page: 39 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 40 8 The Kakutani-Itˆo-Fock space isomorphism −1 ˜ 3. & 4. Item 3. and 4. follows directly from items 1. and 2. and the fact that Proof. Let Tµ,ν := Fµ×ν Sµ,ν Fµ∗ν . Since Lµ∗ν = Lµ + Lν ,Lµ×ν = Lµ + Lν , C ∞ C  −Lν  ∗ F (H) = ⊕n=0 Symn (H, C) (orthogonal direct sum). and Sµ,ν p = e p for p ∈ P (W ) we see that n (k) C 5. Let Hn = ⊕k=0H be the holomorphic polynomials of degree less than or −L/2 h i h 1 i h 1 i equal to n. Since e Hn = Hn we see that −Lµ∗ν /2 −Lν − (Lµ+Lν ) − (Lµ−Lν ) Sµ,ν e p = e e 2 p = e 2 p n −L/2 (k) n C C Hn = ⊕ e H = ⊕ HFk (µ) . 1 1 ˜ k=0 k=0 − 2 (Lµ−Lν ) − 2 (Lµ+Lν ) −Lµ×ν /2 = e pC = e pC = e pC. n Therefore if p ∈ Hn is orthogonal to Hn−1 then p ∈ ⊕k=0HFk (µ) and n−1 This proves Eq. (8.8). Hence if α ∈ Sym (H) and p (x) := 1 α (x, . . . , x) , then p ⊥ ⊕k=0 HFk (µ) which implies p ∈ HF n (µ) . Conversely if p ∈ HF n (µ) , n n! then p ∈ F (µ) and therefore perpendicular to all polynomials of degree n h i −Lµ∗ν /2 −Lµ×ν /2 less than n and in particular to the holomorphic polynomials of degree less Tµ,ν α = Fµ×ν Sµ,ν e p = Fµ×ν e pC than n. h i 2 P∞ = Dn eLµ×ν /2e−Lµ×ν /2p 6. If f ∈ HL (µ) and f = n=0 fn is its Hermite expansion, then pN := 0 C PN fn is a holomorphic polynomial for each N ∈ . Moreover, pN → f n C ∼ n=0 N = D p = α ∈ Sym (Hµ×ν = Hµ + iHν ) in L2 (µ) . 0 C C n

Remark 8.13. The theory described above is particularly nice in the special case where µ = ν. In this case the following properties hold; 8.3 The Segal-Bargmann Action on the Fock Expansions tr 1. qµ×µ (α ◦ Mi) = qµ×µ (α) so that qµ×µ is invariant under Mi . Notation 8.11 Let H be a real Hilbert space and H = H +iH be its complex- ∼ C 2. The inner product on Hµ ×Hµ = Hµ +iHµ is now the real part of a complex ification. Given a real multilinear form, α : Hn → , we let α be the unique C C inner product and with this complex inner product Hµ + iHµ = (Hµ) as complex multi-linear form on Hn such that α = α on Hn. C C C Hilbert spaces. The next theorem shows that the Segal-Bargmann transform acts on the 3. If f is holomorphic then Fock space by this very simple complexification operation.   Lµ×µf = Lµ + L˜µ f = (Lµ − Lµ) f = 0, Theorem 8.12. Let W be a finite dimensional real Banach space and µ and

ν be non-degenerate Gaussian measures on (W, BW ) so that Hµ∗ν = Hµ = −Lµ×µ/2 2 i.e. holomorphic functions are now harmonic. Consequently, e p = p Hν = W as sets. Then the Segal-Bargmann transform Sµ,ν : L (W, µ ∗ ν) → 2 whenever p is a holomorphic polynomial. HL (WC, µ × ν) (see Corollary 7.11) satisfies Fµ×ν Sµ,ν f = (Fµ∗ν f) for all 2 2 C 4. The Fock-Itˆo-Kakutaniisometry for an element f ∈ HL (WC, µ × µ) is f ∈ L (W, µ ∗ ν) , i.e. the following diagram commutes, now simply given by n ∞ Sµ,ν Fµ×µf = (D0 f)n=0 L2 (W, µ ∗ ν) / HL2 (W , µ × ν) C which we will simply refer to as a Taylor map.

$F_{\mu*\nu} \big\downarrow \qquad \big\downarrow F_{\mu\times\nu}$

$F(H_{\mu*\nu}) \ni \alpha \longmapsto \alpha_{\mathbb{C}} \in F(H_\mu + iH_\nu)$

Moreover the action of $S_{\mu,\nu} : F_n(\mu*\nu) \to HF_n(\mu\times\nu)$ is given by

$$S_{\mu,\nu}\big[e^{-L_{\mu*\nu}/2}\,p\big] = e^{-L_{\mu\times\nu}/2}\,p_{\mathbb{C}}. \qquad (8.8)$$

(We write Hµ + iHν rather than HC to indicate that as a real Hilbert space Hµ + iHν = Hµ × Hν .)
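The content of Theorem 8.12 and Remark 8.13 can be seen concretely when W = R and µ = ν = P_{t/2}: the degree-n chaos in L²(P_t) is spanned by the Hermite polynomial He_n^{(t)} := e^{-t∂²/2}x^n, and Eq. (8.8) then says that the Segal-Bargmann transform straightens this polynomial into the monomial z^n (holomorphic polynomials being harmonic for L_{µ×µ}). The sketch below is only an illustrative Monte Carlo check that (P_t ∗ He_n^{(t)})(z) = z^n at one complex point; the values of t, n and z, and the helper hermite_t(), are my own choices.

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(1)
    t, n, N = 0.5, 4, 4_000_000

    def hermite_t(x, n, t):
        # He_n^{(t)}(x) = (e^{-t D^2 / 2} x^n), written out as a finite sum
        return sum((-t / 2) ** k * factorial(n) / (factorial(k) * factorial(n - 2 * k))
                   * x ** (n - 2 * k) for k in range(n // 2 + 1))

    z = 0.8 + 0.3j
    xs = rng.normal(0.0, np.sqrt(t), N)        # samples distributed as P_t
    lhs = np.mean(hermite_t(z + xs, n, t))     # (P_t * He_n^{(t)})(z) by Monte Carlo
    print(lhs, z ** n)                          # agree up to Monte Carlo error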

Part III

Infinite Dimensional Gaussian Measures

9 The Gaussian Basics II

Most of the finite dimensional statements above hold in infinite dimensions 2. For f ∈ Re L2 (µ) , let with a few exceptions. The most notable exception is that the Cameron-Martin Z space, Hµ in Theorem 9.1, has measure zero whenever dim Hµ = ∞. In what Jf = Jµf := hf := x f(x) dµ(x) ∈ W, (9.6) follows we frequently make use of the fact that W Z p 1 Cp := kxkW dµ (x) < ∞ for all 1 6 p < ∞ (9.1) where the integral is to be interpreted as a . . Then Jf ∈ H W and J : Re L2 (µ) → H is a contraction. ∗ 2 which certainly holds in light of Fernique’s Theorem 3.5. (Also see Skorohod’s 3. Now suppose that K is the closure of W in Re L (µ). Then JK := J|K : inequality, K → H is an isometry. Z λkxk 4. Moreover, J(K) = H and therefore JK : K → H is an isometric isomor- e W dµ (x) < ∞ for all λ < ∞; (9.2) phism of real Banach spaces. Since K is a real Hilbert space it follows that W k·k is a Hilbertian norm on H. see for example [23, Theorem 3.2].) H 5. H is a separable Hilbert space and

(Ju, h)_H = u(h) for all u ∈ W* and h ∈ H.  (9.7)

9.1 Gaussian Structures

6. The quadratic form q may be computed as

The next theorem forms the natural extension of Lemma 4.2 and Theorem 4.5 ∞ to the infinite dimensional setting. This material is well known and may be X q(u, v) = u(hk)v(hk) (9.8) (mostly) found in the books [23] and [4]. In particular, the following theorem is k=1 based in part on [4, Lemma 2.4.1 on p. 59] and [4, Theorem 3.9.6 on p. 138]. ∞ where {hk}k=1 is any orthonormal basis for H. Theorem 9.1. Let W be a real separable Banach space and (W, BW , µ) be a 7. If q is non-degenerate, the Cameron-Martin space, H, is dense in W. Gaussian measure space as in Definition 3.1 For x ∈ W let i itr Notice that by Item 1. H ,→ W is continuous and hence so is W ∗ ,→ H∗ =∼ |u(x)| H = (·, ·) ∗ . Eq. (9.8 asserts that kxk := sup (with 0/0 := 0) (9.3) H Hµ p u∈W ∗ q(u, u) ∗ q = (·, ·)H W ∗×W ∗ . and define the Cameron-Martin subspace, H = Hµ ⊂ W, by ∞ ∞ 8. If {hk}k=1 is any orthonormal basis for H and {Nk}k=1 are a sequence of H = {h ∈ W : khkH < ∞} . (9.4) i.i.d. standard normal random variables, then; a) S := P∞ N h converges in W a.s. and in Lp (µ; W ) for all 1 ≤ p < Then; k=1 k k ∞. 1. (H, k·kH ) is a normed space such that 1 Notice that Z √ p kxf(x)k dµ(x) ≤ C2 kfkL2(µ) < ∞, khkW 6 C2 khkH for all h ∈ H, (9.5) X where C2 is as in (9.1. so the integrand is indeed Bochner integrable. 44 9 The Gaussian Basics II

∗ 2 b) Law (S) = µ. 3. Let f ∈ K and choose un ∈ W such that L (µ) − limn→∞ un = f. Then c) Jα = P∞ α (h ) h for all α ∈ W ∗. k=1 k k d) If f ∈ L2 (µ) and h ∈ H, then R un(x)f(x)dµ(x) 2 |un (Jf)| W kfkL2(µ) −1  lim = lim = = kfkL2(µ) (Jf, h) = f, J h 2 . (9.9) n→∞ p n→∞ H K L (µ) q(un) kunkL2(µ) kfkL2(µ)

∗ −1 ∗ 2 Alternatively stated J = JK where J : H → L (µ) is the adjoint of from which it follows kJfkH = kfkK . So we have shown that J : K → H J. (Incidentally, as we will see later, J ∗h is the Wiener integral of h in is an isometry. the Brownian motion setting.) 4. We now wish to show that JK := J|K : K → H is surjective, i.e. given 2 P∞ −1  e) If f ∈ L (µ) then Jf = k=1 f, JK hk hk. h ∈ H we are looking for an f ∈ K such that ¯ W 9. Let W0 := H be the closure of H inside of W. Then W0 is again a Z separable Banach space and µ (W ) = 1. If we let µ := µ| then h = Jf = xf (x) dµ (x) . 0 0 BW0 W (W0, BW0 , µ0) is a non-degenerate Gaussian measure space. Moreover, ∗ q (u, v) = q0 (u|W0 , v|W0 ) for all u, v ∈ W where q0 = qµ0 , i.e. This will be the case iff Z Z ˆ ∗ q0 (u, v) := u (x) v (x) dµ0 (x) . h(u) := u (h) = u (Jf) = u (x) f (x) dµ (x) = (u, f)K for all u ∈ W . W0 W Proof. See Theorem 9.1 in [7]. We will prove each item in turn. In order to see that this equation has a solution f, notice that

1. Using Eq. (3.3) we find ˆ p h(u) = |u(h)| 6 q(u) khkH = kukL2(µ) khkH = kukK khkH |u (h)| |u (h)| p ∗ ˆ khkW = sup 6 sup p 6 C2 khkH , for all u ∈ W which is dense in K. Therefore h extends continuously to K ∗ kuk ∗ u∈W \{0} W ∗ u∈W \{0} q (u, u) /C2 and so by the Riesz representation theorem for Hilbert spaces, there exists an f ∈ K such that hˆ (u) = (u, f) for all u ∈ W ∗ ⊂ K. and hence if khk = 0 then khk = 0 and so h = 0. If h, k ∈ H, then for K H W 5. H is a separable since it is unitarily equivalent to K ⊂ L2(W, B, µ) and all u ∈ W ∗, |u(h)| khk pq(u) and |u(k)| kkk pq(u) so that 6 H 6 H L2(W, B, µ) is separable. Suppose that u ∈ W ∗, f ∈ K and h = Jf ∈ H. p Then |u(h + k)| 6 |u(h)| + |u(k)| 6 (khkH + kkkH ) q(u). (Ju, h) = (Ju, Jf) = (u, f) This shows h + k ∈ H and kh + kk khk + kkk . Similarly, if λ ∈ H H K H 6 H H R Z Z  and h ∈ H, then λh ∈ H and kλhkH = |λ| khkH . Therefore H is a subspace = u(x)f(x)dµ(x) = u xf(x)dµ(x) of W and (H, k·kH ) is a normed space. W W 2. For f ∈ Re L2 (µ) and u ∈ W ∗ = u(Jf) = u(h).

  ∞ ∗ Z Z 6. Let {ei}i=1 be an orthonormal basis for H, then for u, v ∈ W , u (Jf) = u xf(x)dµ(x) = u(x)f(x)dµ(x) = (u, f) 2 (9.10)   L (µ) ∞ W W X q(u, v) = (u, v)K = (Ju, Jv)H = (Ju, ei)H (ei, Jv)H and hence i=1 ∞ X p = u(ei)v(ei) |u (Jf)| 6 kukL2(µ) kfkL2(µ) = q(u) kfkL2(µ) i=1

which shows that Jf ∈ H and ‖Jf‖_H ≤ ‖f‖_{L²(µ)}.

where in the last equality we have again used Eq. (9.7).


∗ 7. If u ∈ W such that u|H = 0, then by Eq. (9.8) it follows that q (u, u) = 0 Exercise 9.1. Suppose that (W, BW , µ) and (V, BV , ν) are two Gaussian mea- and since q is an inner product we must have u = 0. Alternatively this last sure spaces. Show; assertion follows from Eq. (9.7); 1. (W × V, BW ×V , µ × ν) is a Gaussian measure space. ∗ q(u, u) = (Ju, Ju)H = u(Ju) = 0. 2. qµ×ν (ψ) = qµ (ψ (·, 0)) + qν (ψ (0, ·)) for all ψ ∈ (W × V ) . 3. Hµ×ν = Hµ × Hν as Hilbert spaces. It now follows as a consequence of the Hahn–Banach theorem that H must be a dense subspace of W. Exercise√ 9.2. Let t > 0 and Dt : W → W be the dilation operator given by D (x) = tx and let µ := µ◦D−1. In more detail we have µ (A) := µ t−1/2A a) I will omit the proof of 8a. which relies on basic Martingale theory for t t t t for all A ∈ B and Gaussian measures on Banach spaces which may be found in [7, Part W VIII] and [28]. Z Z √  Z b) As simple computation shows that fdµt = f tx dµ (x) = f ◦ Dt dµ (x) (9.11) W W W ∞ !   h i 1 X 2 1 for all bounded measurable functions on W. Let Ttf := f ◦ Dt and Jt := Jµt eiα(S) = exp − α (h ) = exp − q (α, α) E 2 k 2 with J = J1. Show; k=1 2 2 1. kxk = t kxk for all x ∈ W and hence Hµ = Hµ as sets with (h, k) = which is enough to show Law (S) = µ. H Ht t Hµ t (h, k) for all h, k ∈ Hµ. Hµt c) Making use of 8b. we have p p 2. Tt : L (µt) → L (µ) is isometric isomorphism of Banach spaces for all " N ! N # 1 ≤ p < ∞. X X 2 Jα = E [α (S) S] = lim E α Nkhk Nlhl 3. Jt = Dt|H JTt on L (µt) . N→∞ ∗ ∗ k=1 l=1 4. J = TtJt Dt, i.e. for h ∈ H and x ∈ W we have N ∞ √ X X ∗ 1 ∗   = lim E [NkNl] α (hk) hl = α (hk) hk. (J h)(x) = √ (J h) tx (µ – a.e. x). N→∞ t k,l=1 k=1 t ∗ ∗ ∗ ∗ 2 5. If α ∈ W and h = Jtα ∈ H show that Jt h and J h have a unique d) For α ∈ W and f ∈ L (µ) we have ∗ 1 ∗ continuous version and for these versions we have Jt h = t J h. (Jf, Jα) = α (Jf) = (f, α) . H L2(µ) Hint: if you get stuck this exercise is mostly a special case of Proposition 9.3 below. By continuity it then follows that (Jf, Jg)L2(µ) = (f, g)L2(µ) for all −1 g ∈ K. Taking g = JK h then implies Eq. (9.9). Remark 9.2. By Lemma 3.3 and Exercise 3.1 it follows that each f ∈ K is a e) For f ∈ L2 (µ) we have mean zero Gaussian random variable. As K is a subspace it follow that K ⊂ L2 (µ) consists of jointly Gaussian random variables. ∞ ∞ ∞ X X ∗ X −1  Jf = (Jf, hk)H hk = (f, J hk)L2(µ) hk = f, JK hk L2(µ) hk Proposition 9.3 (Pushing forward (may omit)). Suppose that (W, BW , µ) k=1 k=1 k=1 is a Gaussian probability space and let T : W → W0 be a bounded lin- ear transformation to a separable Banach space W0. Let µ0 := T∗µ and for wherein we have used Eq. (9.9) for the last equality. 2 ∗L (µ0) f0 ∈ K0 := W let 8. As S in part 8a takes values in W0 a.s., it follows that µ = Law (S) is 0
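Item 8. of Theorem 9.1 is worth keeping in mind in a concrete case. For the Wiener measure example taken up in Chapter 10, the functions h_k(t) = √2 sin((k − ½)πt)/((k − ½)π) form an orthonormal basis of the classical Cameron-Martin space of Example 10.5 (their derivatives are orthonormal in L²[0,1]), and S = Σ_k N_k h_k is then a Brownian motion. The sketch below (my own illustration; T = 1, the grid, and the truncation level are arbitrary) checks numerically that Σ_k h_k(s) h_k(t) approaches the Brownian covariance s ∧ t.

    import numpy as np

    # Orthonormal basis of the classical Cameron-Martin space on [0,1]:
    # h_k(t) = sqrt(2) sin((k - 1/2) pi t) / ((k - 1/2) pi); the h_k' are orthonormal in L^2[0,1].
    K = 2000                                               # truncation level (arbitrary)
    t = np.linspace(0.0, 1.0, 201)
    lam = (np.arange(1, K + 1) - 0.5) * np.pi
    H = np.sqrt(2.0) * np.sin(np.outer(t, lam)) / lam      # H[i, k] = h_k(t_i)

    # Covariance of S = sum_k N_k h_k is sum_k h_k(s) h_k(t), which should be min(s, t).
    cov = H @ H.T
    err = np.max(np.abs(cov - np.minimum.outer(t, t)))
    print("max deviation from min(s,t):", err)             # small truncation error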

concentrated on W_0. The remaining assertions are all easy and will be left to the reader.

$$J_0 f_0 := \int_{W_0} f_0(x_0)\, x_0\, d\mu_0(x_0) \quad \text{for } f_0 \in K_0 := \overline{W_0^*}^{\,L^2(\mu_0)}.$$

Then;

Page: 45 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 46 9 The Gaussian Basics II ∞ ⊥ 1. µ0 is a Gaussian measure on (W0, BW0 ) with q0 (α0) = q (α0 ◦ T ) , Better way: Let {hn}n=1 ⊂ Nul (T |H ) ⊂ H be an orthonormal basis 2. H = T (H) , ⊥ ∞ 0 for Nul (T |H ) and {kn}n=1 be an orthonormal basis for Nul (T |H ) . It then 3. TJT tr = J and T |∗ J = J T tr, P∞ P∞ 0 0 ∞ 0 H 0 follows that Law ( Nnhn + Nnkn) = µ where {Nn,Nn} are i.i.d. ∗ n=1 n=1 n=1 4. TT |H = IdH0 , and standard normal random variables and the sums are convergent in W. Therefore ⊥ 5. T | ⊥ : Nul (T | ) → H is unitary. Nul(T |H ) H 0 ∞ ∞ !! X X 0 ∗ µ0 = T∗µ = Law T Nnhn + Nnkn Proof. Let α0 ∈ W0 , then n=1 n=1 Z Z ∞ ! µˆ (α ) = eiα(x0)d (T µ)(x ) = eiα◦T dµ = e−q(α◦T )/2 X 0 0 ∗ 0 = Law NnT hn W0 W n=1 tr which shows that µ0 is a Gaussian measure with q0 = q ◦ T . Given x ∈ W we ∗ d where the latter sum is convergent in W0. Thus if α0 ∈ W0 then α0 = have P∞ 2 2 Nnα0 (T hn)(α0 distributed by µ0 = T∗µ) and hence is Gaussian. More- 2 |α0 (T x)| |α0 ◦ T (x)| 2 n=1 kT xkH = sup = sup ≤ kxkH over it follows that 0 ∗ q (α ) ∗ q (α ◦ T ) α0∈W0 0 0 α0∈W0 0 " ∞ #2 ∞ X X 2 which shows that T (H) ⊂ H0 and T |H : H → H0 is a contraction. q (α ) = N α (T h ) = [α (T h )] . ∗ 0 0 E  n 0 n  0 n Now suppose that α0 ∈ W0 , then n=1 n=1 Z Z ∞ TJ [α ◦ T ] = T α ◦ T (x) xdµ (x) = α ◦ T (x) T x dµ (x) As the {T hn}n=1 are linearly independent, it follows that H0 is the Hilbert space 0 0 0 ∞ 2 W W with {T hn}n=1 being an orthonormal basis for H0. Notice that if {an} ⊂ ` , Z P∞ then n=1 anhn converges in H and hence in W and therefore = α0 (y) y dµ0 (y) = J0α0. W ∞ ! ∞ 0 X X T a h = a T h By a simple limiting argument it now follows that TJT tr = J and from this n n n n 0 n=1 n=1 identity we learn that showing the latter sum is convergent in W0. Thus we have show that H0 = tr  ⊥ ⊥ H0 = J0K0 = TJT K0 ⊂ TJK = TH T Nul (T | ) = TH and that T | ⊥ : Nul (T | ) → H is a unitary H Nul(T |H ) H 0 map. from which we may now conclude that H0 = T (H) . ∗ ∗ ∗ ∗ Let us simply write T for T |H . Then for α ∈ W0 and β ∈ W we have (T ∗J α, Jβ) = (J α, T Jβ) = (α ◦ T )(Jβ) = (J (α ◦ T ) , Jβ) 9.2 Cameron-Martin Theorem 0 H 0 H0 H Lemma 9.4. Let (W, B, µ) be a non-degenerate (for simplicity) Gaussian mea- from which it follows that T ∗J = JT tr. Using this identity and item 2. we 0 sure space . Then there exists {u }∞ ⊂ W ∗ ⊂ K which is an orthonormal learn that TT ∗J = TJT tr = J which implies TT ∗ = Id . k k=1 0 0 H0 basis for K and satisfies The last identity implies that T ∗ is an isometry since ∞ ∗ ∗ ∗ 2 X 2 (T h ,T k ) = (h ,TT k ) = (h , Id k ) = (h , k ) . kxk = |uk (x)| for all x ∈ W. (9.12) 0 0 H 0 0 H0 0 H0 0 H0 0 0 H0 H k=1 ∗ ∗ ∗ ⊥ Since T is an isometry Ran (T ) is closed and hence Ran (T ) = Nul (T ) . It In particular, now follows that ( ∞ ) X 2 ∗ −1 ⊥ ∗ H := Hµ = x ∈ W : |uk (x)| < ∞ ∈ BW . (9.13) T |Nul(T )⊥ = (T ) : Nul (T ) = Ran (T ) → H0 k=1 ⊥ is also an isometry and therefore T | ⊥ : Nul (T | ) → H is unitary. Moreover, if dim W = ∞ then µ (H) = 0. Nul(T |H ) H 0


∞ ∗ n Proof. If {uk}k=1 ⊂ W is any orthonormal basis for K, then for any 1 X 2 2 lim |uk (x)| = E |u1| = 1 (µ – a.e. x). (9.16) x ∈ H ⊂ W we will have n→∞ n k=1 ∞ ∞ 1 Pn 2 2 X 2 X 2 On the other hand if x ∈ H, then limn→∞ |uk (x)| = 0 and so µ (H) = kxk = (x, Juk) = |uk (x)| . (9.14) n k=1 H H 0. k=1 k=1

∞ To get this identity to hold for all x ∈ W we will choose the {uk}k=1 more Theorem 9.5 (Cameron-Martin). Let (H, W, µ) be a Gaussian measure ∞ ∗ carefully. To this first choose {αn}n=1 ⊂ W such that kxkW = supn |αn (x)| ∞ space as above and for h ∈ W let µh(A) = µ(A − h) for all A ∈ BW . Then for all x ∈ W as mentioned in the proof of Theorem 2.2. Now from the {uk}k=1 µ  µ iff h ∈ H and if h ∈ H then ∗ 2 h by applying the Graham-Schmidt process to the {αn} ⊂ W ⊂ K ⊂ L (µ) . ∞   dµh ∗ 1 2 I claim the resulting sequence {uk}k=1 is complete and hence an orthonormal = exp J h − khk . ∞ dµ 2 H basis for K. To see this suppose that f ∈ K is perpendicular to {uk}k=1 , then ∗ −1 Moreover if h ∈ W \ H (i.e. khkH = ∞), then µh ⊥ µ. (Since J = JK , if 0 = (f, uk)K = (Jf, Juk)H = uk (Jf) for all k. (9.15) ∗ h = Jα, then J h = α where α (k) = (Jα, k)H = (h, k) for all k ∈ H. For this reason it is often customary to abuse notation and write J ∗h = (h, ·) .) Now for each n ∈ N we know that (αn, uk) = 0 for all k > n so that αn = H Pn k=1 (αn, uk) uk as elements of K which implies Proof. I will only prove here that µh  µ when h ∈ H. See, for example, [7, Proposition ??] for a proof of the orthogonality assertions. n ! n 2 X X We must show q α − (α , u ) u = α − (α , u ) u = 0. Z Z n n k k n n k k ∗ 1 2 J h− 2 khk k=1 k=1 L2(µ) f(x + h)dµ(x) = e f(x)dµ(x) W W Because q is non-degenerate we may now conclude that α = Pn (α , u ) u n k=1 n k k for all f ∈ (B ) . It suffices to show that as element of W ∗. So we may now conclude from this remark and Eq. (9.15) W b Z Z that α (Jf) = 0 for all n and therefore kJfk = sup |α (Jf)| = 0. Having iϕ(x+h) iϕ(x) J ∗h− 1 khk2 n W n n e dµ(x) = e e 2 H dµ(x) (9.17) shown Jf = 0 also shows f = 0 (J : K → H is isometric) which proves the W ∞ ∗ assertion that {uk}k=1 is complete. We now fix this choice for {uk} for the rest for all ϕ ∈ W . of the proof. ∗ ∗ ∞ 2 We will start by verifying Eq. (9.17) when h = Jψ ∈ JW for some ψ ∈ W . P ∗ If x ∈ W satisfies k=1 |uk (x)| < ∞ we may define h := For h of this form J h = ψ a.s. and P∞ uk (x) Juk ∈ H. The sum converges in H and hence also in W k=1 ϕ (h) = ϕ (Jψ) = (Jϕ, Jψ) = q (ϕ, ψ) . and therefore for all m ∈ N, H

∞ ∞ Therefore the left side of Eq. (9.17) is given by X X iϕ(h) − 1 q(ϕ,ϕ) iq(ϕ,ψ)− 1 q(ϕ,ϕ) um (h) = uk (x) um (Juk) = uk (x)(Juk, Jum)H = um (x) . e e 2 = e 2 k=1 k=1 while the right side by; Since uk (x − h) = 0 for all k it follows from the argument above that x = h ∈ H. Z Z iϕ(x) J ∗h− 1 khk2 iϕ(x) ψ(x)− 1 q(ψ,ψ) P∞ 2 e e 2 H dµ(x) = e e 2 dµ(x) Thus we have shown k=1 |uk (x)| < ∞ implies x ∈ H which combined with ∞ 2 W P W Eq. (9.14) shows that k=1 |uk (x)| < ∞ iff x ∈ H. As Eq. (9.12) holds on H 2 P∞ 2  Z  and both kxk = ∞ and |uk (x)| = ∞ for x∈ / H we can conclude that − 1 q(ψ,ψ) 1 2 H k=1 = e 2 exp (ψ + iϕ) dµ Eq. (9.12) holds for all x ∈ W. 2 W ∞ ∗ ∞ Since {u } ⊂ W is an orthonormal basis for K, the {u } are i.i.d. − 1 q(ϕ,ϕ)+iq(ϕ,ψ)+ 1 q(ψ,ψ)− 1 q(ψ,ψ) k k=1 k k=1 = e 2 2 2 standard normal random variables. Therefore by an application of the e strong iq(ϕ,ψ)− 1 q(ϕ,ϕ) law of large numbers, = e 2 .


∗ W For general h ∈ H we may choose ψn ∈ W so that hn := Jψn → h in H Second Proof.. By replacing W by W0 = H¯ if necessary we may assume ∗ 2 or equivalently so that ψn → J h in K ⊂ L (µ) . It then follows from Lemma that (W, BW , µ) is a non-degenerate Gaussian probability space. Given a cylin- ∗ ψn J h 2 3.3 that e → e in L (µ) . Using this remark, it is easy to pass to the limit der function f = F (α1, . . . , αn) we may assume (if not apply Gram–Schmidt to n n ∗ in Eq. (9.17) with h replaced by hn in order to show Eq. (9.17) holds for all the {αk}k=1) that the {αk}k=1 form an orthonormal subset of (W , q) . Extend ∞ h ∈ H. this set to an orthonormal basis, {αk}k=1 , for K. Then for any N ≥ n, by finite dimensional integration by parts, Remark 9.6. Despite the fact that µ (H) = 0 and infinite dimensional Lebesgue n measure does not exists and that Z below should be (2π)dim H/2 = ∞, one Z Z X ∂ fdµ = (D F )(α , . . . , α ) α (h) dµ should still morally think that µ is the measure given by h k 1 n k W W k=1   n n/2 1 1 Z   1 2 2 X 1 − |a| n “dµ (x) = 1H (x) exp − kxkH Dx.” = αk (h) DkF (a1, . . . , an) e 2 R da Z 2 n 2π k=1 R n n/2 The Cameron-Martin theorem is easy to understand with this heuristic; namely Z   1 2 X 1 − |a| n by making the formal change of variables, x → x − h we “find,” = αk (h) akF (a1, . . . , an) e 2 R da n 2π k=1 R 1  1  n Z dµ (x) = exp − kx − hk2 D (x − h) X h Z 2 H = αk (h) αk · F (α1, . . . , αn) dµ   k=1 W 1 1 2 1 2 = exp − kxk − khk + (x, h) Dx N Z Z 2 H 2 H H X = αk (h) αk · F (α1, . . . , αn) dµ (9.18)  1  W = exp (h, x) − khk2 dµ (x) . k=1 H 2 H wherein we have used We have use the formal translation invariance of Dx in the second line. Z Z Z αk · F (α1, . . . , αn) dµ = αkdµ · F (α1, . . . , αn) dµ = 0 for all k > n W W W Exercise 9.3. Let (W, BW , µ) be a non-degenerate Guassian probability space. Show that µ (B (x, ε)) > 0 for all x ∈ W and ε > 0 where B (x, ε) is the open because αk is independent of {α1, . . . , αn} . Since ball in W of radius ε centered at x. N ∞ ∞ ∗ X X Theorem 9.7 (Integration by Parts). Let h ∈ H and f ∈ FC (W ) such lim αk (h) Jαk = (Jαk, h) Jαk = h N→∞ H that f and ∂hf does not grow too fast at infinity. Then k=1 k=1 Z Z it follows from Theorem 9.1 that ∗ ∂hfdµ = J h · fdµ. W W N X −1 ∗ 2 αk (h) αk → JK h = J h in L (µ) . First Proof.. Assuming enough regularity on f to justify the interchange k=1 of the derivatives involved with the integral we have, using the Cameron-Martin theorem, that Thus letting N → ∞ in Eq. (9.18) completes the second proof. Z Z d d Z ∂hfdµ = |0f (x + th) dµ (x) = |0 f (x + th) dµ (x) W W dt dt W d Z  t2  Z 9.3 The Heat Interpretation = | f exp tJ ∗h − khk2 dµ = f · J ∗h dµ (x) . dt 0 2 H W W The heat interpretation of a Gaussian measure remains essentially unchanged It is not really necessary in this proof that f be a cylinder function. when going to the infinite dimensional setting. In fact, when acting on cylinder

functions the results really come back to the finite dimensional case – see the second proof of integration by parts in Theorem 9.7.
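When W = H = R, µ = N(0,1), and h = 1, the integration by parts formula of Theorem 9.7 is the familiar identity E[f'(X)] = E[X f(X)] (here J*h is the function x ↦ x). A quick numerical sanity check, with an arbitrarily chosen test function (not one appearing in the text):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=4_000_000)

    f  = lambda x: np.sin(x) + x ** 2          # any smooth f of moderate growth
    fp = lambda x: np.cos(x) + 2 * x           # its derivative

    print(np.mean(fp(X)), np.mean(X * f(X)))   # both approximate E[f'(X)] = E[X f(X)]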

Exercise 9.4 (Compare with Exercise 5.6). Let (W, B_W, µ) be a Gaussian probability space (dim W = ∞ permissible). Suppose f ∈ FC²(W*) is such that f and its first and second derivatives grow (for example) at most exponentially at infinity. Show that

$$F(t, x) := \int_W f\big(x + \sqrt{t}\, y\big)\, d\mu(y), \qquad t > 0,\ x \in W, \qquad (9.19)$$

satisfies the heat equation,

$$\frac{\partial}{\partial t} F(t, x) = \frac{1}{2}\,(LF)(t, x). \qquad (9.20)$$
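In one dimension (W = R, µ = N(0,1), L = d²/dx²) the assertion of Exercise 9.4 can be checked numerically. The sketch below (the test function, evaluation point, and quadrature order are my own arbitrary choices) computes F(t,x) = E[f(x + √t Y)] by Gauss-Hermite quadrature and compares finite-difference approximations of ∂F/∂t and ½ ∂²F/∂x².

    import numpy as np

    nodes, weights = np.polynomial.hermite_e.hermegauss(80)   # weight e^{-y^2/2} on R
    weights = weights / np.sqrt(2.0 * np.pi)                   # normalize to N(0,1)

    f = lambda x: np.cos(3.0 * x) + x ** 3                     # a smooth test function

    def F(t, x):                                               # F(t,x) = E[f(x + sqrt(t) Y)]
        return np.sum(weights * f(x + np.sqrt(t) * nodes))

    t0, x0, h = 0.4, 0.7, 1e-3
    dF_dt   = (F(t0 + h, x0) - F(t0 - h, x0)) / (2 * h)
    d2F_dx2 = (F(t0, x0 + h) - 2 * F(t0, x0) + F(t0, x0 - h)) / h ** 2

    print(dF_dt, 0.5 * d2F_dx2)                                # agree to finite-difference accuracy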

10 Gaussian Process as Gaussian Measures

In this chapter let {Yt}0≤t≤T be a mean zero Gaussian random process having continuous sample paths. As in Example 2.3, let W := C ([0,T ] , R) and Theorem 10.2 (The Cameron-Martin Space for Y ). Let K˜ = µ := Law (Y·) be the associated Gaussian measure on (W, BW ) . Recall that we 2 have also defined α ∈ W ∗ to be the evaluation map α (x) = x (t) for all x ∈ W n oL (P ) t t span {Y } . Then the Cameron-Martin space H associate to and 0 ≤ t ≤ T. Our immediate goal in this chapter is to better understand t 0≤t≤T µ n ˜ ˜ o ˜ ˜ (W, BW , µ) . In particular we want to describe the associate Cameron-Martin µ is given by hZ˜ : Z ∈ K where for Z ∈ K; spaces for such process. Of course the case where {Yt} is a Brownian motion holds special interest for us. h ˜i hZ˜ (t) = E YtZ for 0 ≤ t ≤ T.

Moreover if Z˜ , Z˜ ∈ K,˜ then 10.1 Reproducing Kernel Hilbert Spaces 1 2  h i h ˜ , h ˜ = Z˜1 · Z˜2 .  k Z1 Z2 Hµ E Lemma 10.1. To each n ∈ N let Λn := n T : 0 ≤ k ≤ n and let πn : W → W 00 be the projection map defined by πn (x) = y where y = x on Λn and y (t) = 0 2 ∗L (µ) for t∈ / Λn. (So πn (x) is a piecewise linear approximation to x.) Then; Proof. Letting K := W we know from Theorem 9.1 that Hµ = 2  Jµ (K) = Jµ L (µ) . For Z˜ ∈ K˜ we can find a Z ∈ K such that Z˜ = Z ◦ Y· 1. πn (x) → x in W as n → ∞ for each x ∈ W. Therefore ∗ 2. If α ∈ W , then αn := α ◦ πn converges to α as n → ∞ both pointwise on 2 Z W and in L (µ) . h (t) = [Y Z ◦ Y ] = α (x) Z (x) dµ (x) = α (J Z) 2 Z˜ E t t t µ ∗L (µ) W 3. If K := W , then span {αt : 0 ≤ t ≤ T } is a dense subspace of K. and hence h = J Z ∈ H . Similarly if Z˜ , Z˜ ∈ K˜ there exists Z ,Z ∈ K Proof. We prove each item in turn. Z˜ µ µ 1 2 1 2 such that Z˜i = Zi ◦ Y and we have 1. The convergence is a simple consequence of the fact that every x ∈ W is Z  h i uniformly continuous. h ˜ , h ˜ = (JµZ1,JµZ2) = [Z1Z2] dµ = Z˜1 · Z˜2 . Z1 Z2 H Hµ E 2. The pointwise convergence follows directly from item 1. The L2 (µ) – con- µ W vergence may be deduced from Lemma 3.3 or using the DCT along with 2 the uniform estimate; By Lemma 3.3 we know that [0,T ] 3 t → Yt ∈ L (P ) is continuous. Since 2 2 L (P ) × L (P ) 3 (f, g) → (f, g) 2 ∈ R is a continuous function it follows |α (x)| ≤ kαk kπ (x)k ≤ kαk kxk . L (P ) n W ∗ n W W ∗ W h ˜i directly that hZ˜ (t) = E YtZ is a continuous function of t. Of course this is a 2 Notice that k·kW ∈ L (µ) by Fernique’s theorem. consequence of the general theory as well. 3. Because πn (x) is completely determined by the values of x on Λn it follows 2 that α = α ◦ π is a linear combination of {α } . Thus it follows that Definition 10.3 (Reproducing Kernel). Let G = GY : [0,T ] → R be the n n t t∈Λn ∗ 2 reproducing kernel associate to Y define by every α ∈ W is in the L (µ) – closure of span {αt}0≤t≤T which suffices to prove item 3. G (s, t) := E [YsYt] . 52 10 Gaussian Process as Gaussian Measures

Proposition 10.4. The reproducing kernel satisfies; 1. for each t ∈ [0,T ] there exists G (t, ·) ∈ H such that h (t) = (G (t, ·) , h)H for all h ∈ H. Moreover G (t, s) = (G (t, ·) ,G (s, ·)) showing that G is a 1. (Continuity.) G : [0,T ]2 → is continuous and G (t, ·) ∈ H for all 0 ≤ R µ symmetric function of (s, t) . t ≤ T. 2. The map [0,T ] 3 t → G (t, ·) ∈ H is continuous. 2. (Reproducing property.) (G (t, ·) , h) = h (t) for all h ∈ H and t ∈ [0,T ] . Hµ µ p 3. (s, t) → G (s, t) is continuous. 3. (Pointwise bounds.) If h ∈ Hµ and 0 ≤ t ≤ T, then |h (t)| ≤ G (t, t) · 4. {G (t, ·) : 0 ≤ t ≤ T } is total in H. khk . In particular, Hµ 5. H is necessarily a separable Hilbert space. ∞ r 6. If {hn}n=1 is any orthonormal basis for H and 0 ≤ s, t ≤ T, then khkW ≤ max G (t, t) · khkH . 0≤t≤T µ ∞ X h (s) h (t) = G (s, t) Moreover these bounds are sharp. n n n=1 4. (Totality.) {G (t, ·) : 0 ≤ t ≤ T } is a of Hµ, i.e. span {G (t, ·) : 0 ≤ t ≤ T } is dense in Hµ. where the sum is absolutely convergent. 7. Each h ∈ H satisfies the continuity estimate, Proof. The continuity of G follows from the comments before Definition 10.3 and moreover G (t, ·) = hY ∈ Hµ. If h = h ˜ ∈ Hµ we will have, p t Z |h (t) − h (s)| ≤ khkH · G (t, t) + G (s, s) − 2G (s, t). h i (G (t, ·) , h) = (hY , h ˜) = YtZ˜ = h ˜ (t) Hµ t Z Hµ E Z Example 10.5 (The Classical Cameron – Martin Space). Let W = {x ∈ C([0,T ] → R): x(0) = 0} and let H denote the set of functions h ∈ W which proves the reproducing property. By the Cauchy–Schwarz inequality and which are absolutely continuous and satisfy (h, h) = R T |h0(s)|2ds < ∞. The the reproducing property, 0 space H is called the Cameron-Martin space and is a Hilbert space when 2 |h (t)|2 = (G (t, ·) , h) equipped with the inner product Hµ 2 2 Z T ≤ kG (t, ·)k khk 0 0 Hµ Hµ (h, k) = h (s)k (s)ds for all h, k ∈ H. = (G (t, ·) ,G (t, ·)) khk2 = G (t, t) khk2 . 0 Hµ Hµ Hµ By the fundamental theorem of calculus we have for h ∈ H that If we choose a t0 ∈ [0,T ] such that G (t0, t0) = max0≤t≤T G (t, t) and let h = G (t , ·) , then khk = pG (t , t ) and Z t Z T 0 Hµ 0 0 0 0 h (t) = h (σ) dσ = 1σ≤th (σ) dσ = (G (t, ·) , h)H   0 0 p h (t0) = G (t0, t0) = max G (t, t) · khk 0≤t≤T Hµ provided we define which show that the given bounds are sharp. Z s G (t, s) = 1σ≤tdσ = min (s, t) . (10.1) The fact that {G (t, ·) : 0 ≤ t ≤ T } is a total in Hµ follows from the fact 0 that {Yt : 0 ≤ t ≤ T } is total in K.˜ Alternatively, if h ∈ Hµ is perpendicular to {G (t, ·) : 0 ≤ t ≤ T } then 0 = (h, G (t, ·)) = h (t) for all t which shows that The function in Eq. (10.1) is the reproducing kernel for H. Consequently we Hµ h must be zero. may conclude that ∞ Much of what we have just proved for G holds more generally as you are X min (s, t) = h (s)h (t) asked to show in the next exercise. n n n=1 Exercise 10.1 (Reproducing Kernel Hilbert Spaces). Let H be a sub- ∞ for any orthonormal basis {hn}n=1 of H and we have the “Sobolev” inequality space of W := C ([0,T ] , R) (can replace [0,T ] by a more general topological for h ∈ H; space if you wish) which is equipped with a Hilbertian norm, k·k , such that H √ p khkW ≤ C khkH for all h ∈ H. Then; |h(t) − h (s)| = khkH s + t − 2s ∧ t = khkH |t − s|

Page: 52 job: Cornell macro: svmonob.cls date/time: 16-Jul-2010/16:09 10.2 The Example of Brownian Motion 53 for all 0 ≤ s, t ≤ T. Of course we could also prove this inequality directly; The goal of the next few exercise is to convince you that it is reasonable to Z t heuristically view Wiener measure µ := Law (B) as |h(t) − h (s)| = h0 (τ) dτ ! s 1 1 Z T s s dµ (x) = exp − x0 (s)2 ds Dx. Z t Z t √ 0 2 2 Z 2 0 ≤ |h (τ)| dτ · 1 dτ ≤ khkH t − s s s where Dx is the non-existent Lebesgue measure on H = Hµ and Z is an ill- for all 0 ≤ s ≤ t ≤ T. defined a normalizing constant.
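The heuristic above is also consistent with the fact (Lemma 9.4) that µ(H) = 0: along a Brownian path the discrete Dirichlet energy Σ_i (ΔB_i)²/Δt_i, which for a path x ∈ H would converge to ∫₀^T |x'(s)|² ds, instead grows like the number of partition points. A small simulation illustrating this (the parameters are arbitrary and the code is not part of the text):

    import numpy as np

    rng = np.random.default_rng(3)
    T = 1.0

    for n in (10, 100, 1000, 10000):
        dt = T / n
        dB = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments over a uniform partition
        energy = np.sum(dB ** 2) / dt          # discrete analogue of int_0^T |x'(s)|^2 ds
        print(n, energy)                        # grows like n, so B is not in H almost surely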

Exercise 10.2 (Projection Lemma). Let Λ = {0 = t < t < ··· < t = T } . 10.2 The Example of Brownian Motion 0 1 n Following Lemma 10.1, let πΛ : W → W be the projection map defined by π x = y where y = x on Λ and y00 (t) = 0 for t∈ / Λ. (So π (x) is a piecewise Theorem 10.6 (Brownian Motion). Let W = {x ∈ C([0,T ] → ): x(0) = 0} , Λ Λ R linear approximation to x.) Let {Bt}0≤t≤T be a Brownian motion, and let µ := Law (B·) as a measure on (W, BW ) . Then Hµ is the classical Cameron-Martin space described in Example H := π (W ) = {h ∈ H = H : h00 (t) = 0 if t∈ / Λ} . 10.5. Λ Λ µ ⊥ Proof. The reproducing kernel for Hµ is Show πΛ|H is an orthogonal projection onto HΛ and that HΛ = {k ∈ H : k|Λ = 0} . G (s, t) = E [BsBt] = s ∧ t. which is also that reproducing kernel of H described in Example 10.5. As G (t, ·) Exercise 10.3 (Approximation theorem). For each n ∈ N let Λn :=  k T : 0 ≤ k ≤ n and π := π : W → H as above. Show for any bounded is a total subset of both Hµ and H and the inner products of G (t, ·) and G (s, ·) n n Λn continuous function f : W → that is G (s, t) when computed in either Hilbert space it follows that H = Hµ as R Hilbert spaces. Z Z 1 − 1 khk2 f (x) dµ (x) = lim f (h) e 2 H dλn (h) (10.2) Remark 10.7. If we did not know about H ahead of time how might of we n→∞ Z W n HΛn determined Hµ explicitly? Here is one method. By the general theory we know Pn n/2 that {G (t, ·) : 0 ≤ t ≤ T } is a total subset of Hµ. If h = i=1 λiG (ti, ·) then where λn is Lebesgue measure on HΛn and Zn = (2π) . n n Hint: from Lemma 10.1 and the DCT, one easily shows that 2 X X khk = λiλj (G (ti, ·) ,G (tj, ·)) = λiλjG (ti, tj) . Hµ Hµ Z Z i,j=1 i,j=1 f (x) dµ (x) = lim f (πn (x)) dµ (x) . (10.3) But W n→∞ W Z T G (s, t) = s ∧ t = 1σ≤s · 1σ≤tdσ Now compute the law of πn under µ by computing its Fourier transform. 0 and so it follows that Exercise 10.4. Here is a slightly different and perhaps more intuitive outline n Z T indicating that H in Example 10.5 is the Cameron-Martin space for Brownian 2 X khk = λ λ 1 · 1 dσ Hµ i j σ≤ti σ≤tj motion. Let Λ = {0 = t0 < t1 < ··· < tn = T } be a partition of [0,T ] , ∆i = i,j=1 0 √1  ti − ti−1, and B be a Brownian motion. Further let Zi := Bti − Bti−1 . ∆i n Z T X The exercise is to understand the following outline. = λiλj1σ≤ti · 1σ≤tj dσ 0 n i,j=1 1. Observe that {Zi}i=1 are i.i.d. standard normal random variables. 2 1 2. Let Y := πΛ (B) . Using the fact that Y˙ (t) = √ Zi for ti−1 < t < ti we Z T n Z T ∆i X 0 2 see that Y is linear transformation of the {Zi} . Therefore Y is Gaussian = λj1σ≤tj dσ = |h (σ)| dσ.

and by the change of variables formula,

$$\mathbb{E}[f(Y)] = \frac{1}{Z_\Lambda}\int_{H_\Lambda} f(y)\,\exp\!\Big(-\frac{1}{2}\|y\|_{H_\Lambda}^2\Big)\,dy$$

provided ‖·‖²_{H_Λ} is chosen so that ‖Y‖²_{H_Λ} = Σ_{i=1}^n Z_i². As

$$Z_i^2 = \int_{t_{i-1}}^{t_i} \dot{Y}(t)^2\,dt$$

we learn that

$$\|Y\|_{H_\Lambda}^2 = \sum_{i=1}^n \int_{t_{i-1}}^{t_i} \dot{Y}(t)^2\,dt = \int_0^T \dot{Y}(t)^2\,dt = \|Y\|_H^2.$$

3. Combining these results with the observation in Eq. (10.3) shows again that Eq. (10.2) holds.

References

1. Lars Andersson and Bruce K. Driver, Finite-dimensional approximations to Wiener measure and path integral formulas on manifolds, J. Funct. Anal. 165 (1999), no. 2, 430–498. MR 1698956
2. John C. Baez, Irving E. Segal, and Zheng-Fang Zhou, Introduction to algebraic and constructive quantum field theory, Princeton University Press, Princeton, NJ, 1992. MR 93m:81002
3. V. Bargmann, On a Hilbert space of analytic functions and an associated integral transform, Comm. Pure Appl. Math. 14 (1961), 187–214. MR 0157250 (28 #486)
4. Vladimir I. Bogachev, Gaussian measures, Mathematical Surveys and Monographs, vol. 62, American Mathematical Society, Providence, RI, 1998. MR 2000a:60004
5. Bruce K. Driver, On the Kakutani-Itô-Segal-Gross and Segal-Bargmann-Hall isomorphisms, J. Funct. Anal. 133 (1995), no. 1, 69–128.
6. ———, On the Kakutani-Itô-Segal-Gross and Segal-Bargmann-Hall isomorphisms, J. Funct. Anal. 133 (1995), no. 1, 69–128.
7. Bruce K. Driver, Probability tools with examples, expanded lecture notes of a graduate probability course, June 2010.
8. Bruce K. Driver and Maria Gordina, Square integrable holomorphic functions on infinite-dimensional Heisenberg type groups, Probab. Theory Relat. Fields, Online First (2009), 48 pages.
9. Bruce K. Driver and Leonard Gross, Hilbert spaces of holomorphic functions on complex Lie groups, New Trends in Stochastic Analysis (New Jersey) (K. D. Elworthy, S. Kusuoka, and I. Shigekawa, eds.), Proceedings of the 1994 Taniguchi Symposium, World Scientific, 1997, pp. 76–106.
10. Bruce K. Driver and Brian C. Hall, Yang-Mills theory and the Segal-Bargmann transform, Comm. Math. Phys. 201 (1999), no. 2, 249–290. MR 1682238
11. William Feller, An introduction to probability theory and its applications. Vol. II, second edition, John Wiley & Sons Inc., New York, 1971. MR 0270403 (42 #5292)
12. James Glimm and Arthur Jaffe, Collected papers. Vol. 2, Birkhäuser Boston Inc., Boston, MA, 1985, Constructive quantum field theory. Selected papers, reprint of articles published 1968–1980. MR 91m:81003
13. ———, Quantum physics, second ed., Springer-Verlag, New York, 1987, A functional integral point of view. MR 89k:81001
14. Leonard Gross, Lattice gauge theory; heuristics and convergence, Stochastic processes—mathematics and physics (Bielefeld, 1984), Springer, Berlin, 1986, pp. 130–140. MR 87g:81064
15. Brian C. Hall, The Segal-Bargmann “coherent state” transform for compact Lie groups, J. Funct. Anal. 122 (1994), no. 1, 103–151. MR 1274586 (95e:22020)
16. ———, The Segal-Bargmann “coherent state” transform for compact Lie groups, J. Funct. Anal. 122 (1994), no. 1, 103–151. MR 95e:22020
17. ———, The inverse Segal-Bargmann transform for compact Lie groups, J. Funct. Anal. 143 (1997), no. 1, 98–116. MR 98e:22004
18. ———, A new form of the Segal-Bargmann transform for Lie groups of compact type, Canad. J. Math. 51 (1999), no. 4, 816–834. MR 1701343 (2000g:22013)
19. ———, A new form of the Segal-Bargmann transform for Lie groups of compact type, Canad. J. Math. 51 (1999), no. 4, 816–834. MR 1701343
20. Michel Hervé, Analyticity in infinite-dimensional spaces, de Gruyter Studies in Mathematics, vol. 10, Walter de Gruyter & Co., Berlin, 1989. MR 986066 (90f:46074)
21. Einar Hille and Ralph S. Phillips, Functional analysis and semi-groups, American Mathematical Society, Providence, R.I., 1974, third printing of the revised edition of 1957, American Mathematical Society Colloquium Publications, Vol. XXXI. MR 0423094 (54 #11077)
22. Nobuyuki Ikeda and Shinzo Watanabe, Stochastic differential equations and diffusion processes, second ed., North-Holland Mathematical Library, vol. 24, North-Holland Publishing Co., Amsterdam, 1989. MR 1011252 (90m:60069)
23. Hui Hsiung Kuo, Gaussian measures in Banach spaces, Springer-Verlag, Berlin, 1975, Lecture Notes in Mathematics, Vol. 463. MR 57 #1628
24. Michael Reed and Barry Simon, Methods of modern mathematical physics. II. Fourier analysis, self-adjointness, Academic Press [Harcourt Brace Jovanovich Publishers], New York, 1975. MR 58:12429b
25. ———, Methods of modern mathematical physics. I, second ed., Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1980, Functional analysis. MR 85e:46002
26. Barry Simon, The classical moment problem as a self-adjoint finite difference operator, Adv. Math. 137 (1998), no. 1, 82–203. MR 1627806 (2001e:47020)
27. R. F. Streater and A. S. Wightman, PCT, spin and statistics, and all that, second ed., Addison-Wesley Publishing Company Advanced Book Program, Redwood City, CA, 1989. MR 91d:81055
28. Daniel W. Stroock, Probability theory, an analytic view, Cambridge University Press, Cambridge, 1993. MR 1267569 (95f:60003)