
arXiv:1504.03049v4 [math-ph] 15 Dec 2015

Lectures on Super Analysis
– Why necessary and what's that? Towards a new approach to a system of PDEs –

e-Version 1.5, December 16, 2015

By Atsushi INOUE

Notice: Commencement of a class

Syllabus: Super Analysis – a construction of non-commutative analysis

3 October 2008 – 30 January 2009, 10.40-12.10, H114B at TITECH, Tokyo, A. Inoue

Roughly speaking, RA (= real analysis) means the study of properties of (smooth) functions defined on real spaces, and CA (= complex analysis) stands for the study of properties of (holomorphic) functions defined on spaces with complex structure.

On the other hand, we may extend the differential calculus to functions whose domain of definition lies in a Banach space; see, for example, S. Lang "Differentiable Manifolds" or J.A. Dieudonné "Treatise on Analysis". But it is in general impossible to extend the differential calculus to functions defined on an infinite-dimensional Fréchet space, because the implicit function theorem doesn't hold on a generally given Fréchet space.

Then, if the ground ring (like R or C) is replaced by a non-commutative one, what type of analysis may we develop, under the condition that the newly developed analysis should be applicable to systems of PDEs or to RMT (= Random Matrix Theory)?

In these lectures, we prepare as a "ground ring" a Fréchet–Grassmann algebra having countably many Grassmann generators, and we define the so-called superspace over such an algebra. On such a superspace, we take a space of super-smooth functions as the main object of study.

This procedure is necessary not only to associate a Hamilton flow to a given 2d×2d system of PDEs, which supports resolving Feynman's murmur, but also to make rigorous Efetov's result in RMT.
(1) Feynman's path-integral representation of the solution of the Schrödinger equation
(2) Dirac and Weyl equations, Feynman's murmur
(3) Why is such a new algebra necessary? Differential operator representations of 2d×2d matrices
(4) Fréchet–Grassmann algebra R and superspace R^{m|n}
(5) Elementary linear algebra on superspace, supertrace and superdeterminant, etc.
(6) Differential calculus on superspace; super-smooth functions and the inverse and implicit function theorems, etc.
(7) Integral calculus on superspace; integration by parts, change of variables under the integral sign, etc.
(8) Fourier transformations on R^{m|n} and their application
(9) Path-integral representation of a fundamental solution of the Weyl equation

Home Page: http://www.math.titech.ac.jp/~inoue/SLDE2-08.html (closed after my retirement from TITech on 31st March 2009)

To the audience: Please keep not only intellectual curiosity but also have the patience to follow at least 3 lectures.

For what and why, this lecture note is written

I delivered 14 lectures, each 90 minutes, for graduate students at Tokyo Institute of Technology, starting in Autumn 2008.

Since there is a special tendency, based not only on Japanese culture but rather on an Asian aesthetic feeling, students hesitate to ask questions during class. The reasons for this tendency seem to be modesty, timidity and the feeling of being a novice; or, more frankly speaking, they are afraid of asking a stupid question in front of friends, which might exhibit their ignorance, stemming from a sense of guilt that they have not studied sufficiently, or of disturbing others' appreciation of the lecture by interrupting it with questions. In general, any question about a lecture is very constructive: it not only makes clear those points which are not easy for the audience to understand, but also corrects misunderstandings of the speaker himself.

Though "instantaneous response to your uncomfortable feeling" is embodied by very young children saying "Why, Mommy?", it seems rather difficult to do so in student society. Even so, please ask questions without hesitation at any time. Not only does the primitive stay near the radical, but your slight doubt also slows down the speed of the speaker's explanation, which gives other students some time to consider and to follow up.

To make it easy to pose such questions, I prepared a pre-lecture note one week before each lecture, and a corrected version of it after the lecture, with answers to questions where possible and necessary; these were posted on my home page.

This lecture note is translated and revised from them. In particular, I have made big revisions in Chapter 8 and Chapter 9.

At the time of delivering my lectures, I had not yet sufficiently clarified the characterization of super-smooth functions, nor the definition of the integral on superspace which naturally admits the change of variables under the integral sign. Therefore, in these notes I change these presentations significantly from the original lectures. I owe much to my colleague Kazuo Masuda, whose responses to my algebra-related questions helped me very much, not only for my understanding but also in correcting, with counter-examples, some defects in others' publications. Concerning the characterization of super-smooth functions, we need to prepare the Cauchy–Riemann equations for them, which are deeply connected with the countably infinite Grassmann generators.

In his naive definition of the integral, Berezin's formula of change of variables under the integral sign holds only for integrands having compact support. This point is ameliorated when we modify the method of V.S. Vladimirov and I.V. Volovich [139] or A. Rogers [115], which relates to a recognition of the body part of "superspace" different from Berezin's.

Departing from the syllabus, I give some applications of super analysis to Random Matrix Theory (= RMT) and to SUSYQM (= SUper SYmmetric Quantum Mechanics) with Witten's index.

I gather in the last chapter, named "Miscellaneous", some facts which are not explained fully during these lectures.
(1) I give a precise proof of Berezin's formula of change of variables under the integral sign. I confess that Rothstein's paper is not understandable to me, even after a letter of questions to him without response, and an explanation in Rogers' book is also outside my scope.
(2) Function spaces on superspace and the Fourier transformation for super-smooth functions are briefly introduced.
(3) As a typical simplest example of an application of superanalysis, I give another proof of Qi's result concerning an example of weakly hyperbolic equations, and
(4) I give also an example of a system version of Egorov's theorem, which begins with Bernardi's question.
(5) I mentioned in the lecture the problem posed by I.M. Gelfand in his ICM congress address in 1954, concerning functional derivative equations related to QED or turbulence, which is explained there more precisely, and finally,
(6) in his famous paper, E. Witten introduced a supersymmetric Lagrangian, derived from his deformed operator $d_\phi^* d_\phi + d_\phi d_\phi^*$. Here, we derive "the classical symbol of this operator" as the supersymmetric extension of the Riemannian metric $g_{ij}(q)\,dq^i dq^j$, which is merely my poor interpretation of physicists' calculation.
(7) As an example where we need the countably infinite number of Grassmann generators, we consider the Weyl equation with an external electro-magnetic field. Besides the question whether it is physically meaningful, we solve the Hamilton equation corresponding to this equation by degree of Grassmann generators.

Though references were delivered each time in the lectures, I have gathered them in the last part.

Finally, this note may be unique since I confess several times "their arguments are outside my comprehension", which is nothing to be proud of, but I do so anyway. Not only do I feel ashamed of my immature intuition in differential geometry and algebra, but I also hope these confessions encourage those young mathematicians who try to do something new!

Contents

Notice: Commencement of a class
For what and why, this lecture note is written

Chapter 1. A motivation of this lecture note
1.1. Necessity of the non-commutative analysis and its benefit
1.2. The first step towards Dirac and Weyl equations

Chapter 2. Super number and Superspace
2.1. Super number
2.2. Superspace
2.3. Rogers' construction of countably infinite Grassmann generators

Chapter 3. Linear algebra on the superspace
3.1. Matrix algebras on the superspace
3.2. Supertrace, superdeterminant
3.3. An example of diagonalization

Chapter 4. Elementary differential calculus on superspace
4.1. Gâteaux or Fréchet differentiability on Banach spaces
4.2. Gâteaux or Fréchet differentiable functions on Fréchet spaces
4.3. Functions on superspace
4.4. Super differentiable functions on R^{m|n}
4.5. Inverse and implicit function theorems

Chapter 5. Elementary integral calculus on superspace
5.1. Integration w.r.t. odd variables – Berezin integral
5.2. Berezin integral w.r.t. even and odd variables
5.3. Contour integral w.r.t. an even variable
5.4. Modification of Rogers, Vladimirov and Volovich's approach
5.5. Supersymmetric transformation in superspace – as an example of change of variables

Chapter 6. Efetov's method in Random Matrix Theory and beyond
6.1. Results – outline
6.2. The derivation of (6.1.7) with λ − iε (ε > 0 fixed) and its consequences
6.3. The proof of semi-circle law and beyond that
6.4. Edge mobility
6.5. Relation between RMT and Painlevé transcendents

Chapter 7. Fundamental solution of Free Weyl equation à la Feynman
7.1. A strategy of constructing parametrices
7.2. Sketchy proofs of the procedure mentioned above
7.3. Another construction of the solution for H-J equation
7.4. Quantization
7.5. Fundamental solution

Chapter 8. Supersymmetric Quantum Mechanics and Its Applications
8.1. What is SUSYQM
8.2. Wick rotation and PIM
8.3. addition
8.4. A simple example of supersymmetric extension

Chapter 9. Miscellaneous
9.1. Proof of Berezin's Theorem 5.2.1
9.2. Function spaces and Fourier transformations
9.3. Qi's example of weakly hyperbolic equation
9.4. An example of a system version of Egorov's theorem – Bernardi's question
9.5. Functional Derivative Equations
9.6. Supersymmetric extension of the Riemannian metric g_{jk}(q) dq^j dq^k on R^d
9.7. Hamilton flow for Weyl equation with external electro-magnetic field

Bibliography

CHAPTER 1

A motivation of this lecture note

1.1. Necessity of the non-commutative analysis and its benefit

1.1.1. Another basic field? Is it truly necessary to introduce another ground ring in analysis besides R or C? Why?

In the theory of linear PDEs (= Partial Differential Equations), the main problem is to reduce the non-commutativity inherited from the so-called Heisenberg uncertainty principle
\[
(1.1.1)\quad \Big[\frac{\hbar}{i}\frac{\partial}{\partial q_j},\, q_k\Big] = \frac{\hbar}{i}\delta_{jk}, \quad\text{i.e.}\quad \frac{\hbar}{i}\frac{\partial}{\partial q_j}(q_k u(q)) - q_k\frac{\hbar}{i}\frac{\partial}{\partial q_j}u(q) = \frac{\hbar}{i}\delta_{jk}u(q),
\]
to a setting where commutative algebraic calculation is available. This is done using Fourier transformations: the non-commutativity caused by Heisenberg's uncertainty principle is reduced to a commutative one, with error terms, on the phase space by the Fourier transformation; the transformed problem is analyzed there, and by the inverse Fourier transformation it is transformed back to the original setting. This procedure is carried out modulo error terms with suitable estimates. These ideas and various devices are unified as the theory of ΨDO (= Pseudo-Differential Operators) and FIO (= Fourier Integral Operators).
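To make this reduction concrete in the simplest situation, the Fourier transformation turns the two generators in (1.1.1) into multiplication and differentiation on the momentum side. In the short computation below, the ℏ-dependent normalization of the transform is a standard choice supplied here for illustration, not fixed by the text:

```latex
% With the (standard, here assumed) normalization
%   (\mathcal{F}u)(p) = (2\pi\hbar)^{-m/2} \int_{\mathbb{R}^m} dq\, e^{-i\hbar^{-1} q\cdot p}\, u(q),
% integration by parts gives
\mathcal{F}\Big(\frac{\hbar}{i}\frac{\partial u}{\partial q_j}\Big)(p) = p_j\,(\mathcal{F}u)(p),
\qquad
\mathcal{F}(q_k u)(p) = -\frac{\hbar}{i}\frac{\partial}{\partial p_k}(\mathcal{F}u)(p),
% so a differential operator a(q, \tfrac{\hbar}{i}\partial_q) acts, modulo the error
% terms mentioned above, as multiplication by the symbol a(q,p) on phase space.
```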

Whether this strategy is extendable also to a system of PDEs, is our main concern.

Since there exists another non-commutativity stemming from the matrix coefficients of a system of PDEs, it seems difficult to treat it similarly to the scalar case. But if we can diagonalize that system nicely, then we may apply the standard method to each of its components. Even if it is hard to diagonalize straightforwardly, we may impose certain conditions on the characteristic roots associated to that system in order to assure that we may essentially reduce the system to scalar pseudo-differential operators. But if this procedure fails, is there any detour? Especially, if we need a "Hamilton flow" for the given system, how do we associate such classical objects while keeping the matrix structure as it is?

On the other hand, if the phenomenon is describable only using a system of PDEs, it seems natural to abandon the idea of reducing it to the scalar case. Of course, in treating that system of PDEs, we need a new idea to overcome the non-commutativity of matrices.

This difficulty was clearly claimed by Feynman, where he asks what the corresponding classical mechanics and action integral for the Dirac equation are. Moreover, he proposes to use quaternions to resolve this difficulty.

Here, we propose a new idea to overcome the non-commutativity of matrices. This idea is essentially simple when we encounter 2d×2d matrices: since those matrices are decomposed with elements in Clifford algebras, and those algebras have a representation by differential operators on


Grassmann algebras, we extend the ground ring to one having Grassmann character. Developing analysis on this ground ring, we may apply the standard process which is used for the scalar PDE. This idea is based on the principle of F.A. Berezin and M.S. Marinov [11]: "treat bosons and fermions on an equal footing". Therefore, my answer to Feynman's proposal should be,

"Mr. Feynman, if you use Fréchet–Grassmann algebras with countably infinite Grassmann generators instead of quaternions, then it goes well!"

1.1.2. Feynman's path-integral representation of the solution of the Schrödinger equation. More than seventy years ago, as a graduate student, R. Feynman [44] had a primitive question: why may the Schrödinger equation be considered as the governing equation of quantum mechanics? In other words, though Bohr's correspondence principle, derived after many experiments and thoughts, should be essential in quantum mechanics, it seems difficult to derive it directly from the Schrödinger equation itself.

Mathematically, this question is interpreted as follows: Let $u(t,q):\mathbb{R}\times\mathbb{R}^m\to\mathbb{C}$ satisfy the initial value problem for the Schrödinger equation
\[
(1.1.2)\quad i\hbar\frac{\partial}{\partial t}u(t,q) = -\frac{\hbar^2}{2}\Delta u(t,q) + V(q)u(t,q), \qquad u(0,q) = u(q).
\]
How does the solution u(t,q) depend on ℏ? Especially, can we deduce Bohr's correspondence principle from this?

On the other hand, about forty years ago when I was a student, the main research subjects in developing the general theory of linear PDEs were "existence, uniqueness and regularity" of solutions of a given equation¹. The essential ingredients of these subjects are almost exhaustively studied and collected in L. Hörmander's books [64]², and one of the recent problems is to pick out special properties of the governing equation, or to represent the solution as explicitly as possible by using known objects (for example, R. Beals [9]). From this point of view, to make clear the dependence of the solution of the Schrödinger equation on Planck's constant ℏ, and to explain mathematically the appearance of Bohr's correspondence principle, is a good starting problem.

Therefore, we begin with retracing the heuristic procedure taken in Feynman's doctoral thesis (see also S.A. Albeverio and R.J. Hoegh-Krohn [3]), where he introduced his path-integral representation.

For the right-hand side of (1.1.2), we define the Hamiltonian operator on $C_0^\infty(\mathbb{R}^m)$ as
\[
\hat{H} = -\frac{\hbar^2}{2}\Delta + V(\cdot) = \hat{H}_0 + \hat{V}, \qquad \hat{H}_0 = -\frac{\hbar^2}{2}\Delta = -\frac{\hbar^2}{2}\sum_{j=1}^{m}\frac{\partial^2}{\partial q_j^2}.
\]

¹Before the advent of the functional analytic approach to PDEs, rather explicit solutions were pursued; at that time, it was too hard to obtain a solution for a generally given PDE.
²These books are not only too voluminous to read through, but it is also difficult to find problems for a doctoral thesis in them. Therefore, I recommend using them as a dictionary, and rather looking at his doctoral thesis [62] itself. Moreover, as he is a specialist in applying the Hahn–Banach extension theorem, why not reconsider his procedure by using a "constructive extension theorem"?

If the above Ĥ is essentially self-adjoint in $L^2(\mathbb{R}^m)$, then applying Stone's theorem, the solution of (1.1.2) is written as
\[
u(t,q) = (e^{-i\hbar^{-1}t\hat{H}}u)(q).
\]
Or, generalizing a little: when and how is the exponential function of a given operator A,
\[
e^{tA} = \sum_{k=0}^{\infty}\frac{(tA)^k}{k!} \qquad (t\in\mathbb{R}_+,\ \text{or}\ t\in i\mathbb{R}),
\]
well-defined? Guided by this problem, the Hille–Yosida theory of semigroups was established.

[Report problem 1-1]: Check what Stone's theorem is. If the Hilbert space is finite-dimensional, what is the corresponding theorem in elementary linear algebra? It is also preferable to check what the Hille–Yosida theory is.

On the other hand, the Lie–Trotter–Kato product formula says that if $\hat{H} = \hat{H}_0 + V$, then $e^{-i\hbar^{-1}t\hat{H}}$ is given by
\[
e^{-i\hbar^{-1}t\hat{H}} = \operatorname*{s-lim}_{k\to\infty}\Big(e^{-i\hbar^{-1}\frac{t}{k}V}\, e^{-i\hbar^{-1}\frac{t}{k}\hat{H}_0}\Big)^k \qquad\text{even if } [\hat{H}_0, V]\neq 0.
\]
Remark 1.1.1. (i) In the above, if $[\hat{H}_0, V]=0$, then $(\hat{H}_0+V)^k = \sum_{j=0}^{k}\binom{k}{j}\hat{H}_0^{\,j}V^{k-j}$ and we have $e^{s(\hat{H}_0+V)} = e^{s\hat{H}_0}e^{sV}$, i.e. it isn't necessary to apply the above product formula.
(ii) There is no difference between strong and weak convergence in finite-dimensional vector spaces. Check the difference between the convergence of operators in the "strong" and "uniform" senses in an infinite-dimensional Banach space.
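A finite-dimensional sanity check of this product formula is easy to run. In the sketch below, the 2×2 matrices A and B are arbitrary non-commuting generators chosen for illustration (standing in for $-i\hbar^{-1}\frac{t}{k}\hat{H}_0$ and $-i\hbar^{-1}\frac{t}{k}V$); the error of the Trotter product against the exact exponential decreases roughly like 1/k:

```python
# Finite-dimensional illustration of the Lie-Trotter product formula:
# (e^{A/k} e^{B/k})^k -> e^{A+B} as k -> infinity, even when [A, B] != 0.
# Pure-Python 2x2 matrices; A and B are hypothetical sample generators.

def mat_mul(X, Y):
    return [[sum(X[i][r] * Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(X, terms=40):
    # Taylor series exp(X) = sum_n X^n / n!  (fine for the small norms used here)
    E = [[1.0, 0.0], [0.0, 1.0]]   # accumulates the partial sums
    P = [[1.0, 0.0], [0.0, 1.0]]   # current term X^n / n!
    for n in range(1, terms):
        P = mat_mul(P, X)
        P = [[P[i][j] / n for j in range(2)] for i in range(2)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

def mat_norm(X):
    return max(abs(X[i][j]) for i in range(2) for j in range(2))

A = [[0.0, 1.0], [1.0, 0.0]]    # off-diagonal part ("kinetic" analogue)
B = [[1.0, 0.0], [0.0, -1.0]]   # diagonal part ("potential" analogue); [A,B] != 0
exact = mat_exp([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])

def trotter_error(k):
    step = mat_mul(mat_exp([[a / k for a in row] for row in A]),
                   mat_exp([[b / k for b in row] for row in B]))
    prod = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        prod = mat_mul(prod, step)
    return mat_norm([[prod[i][j] - exact[i][j] for j in range(2)]
                     for i in range(2)])

errors = [trotter_error(k) for k in (4, 16, 64)]
print(errors)  # decreasing, roughly like 1/k
```

The same experiment with $[A,B]=0$ (e.g. both diagonal) gives errors at the level of round-off, matching part (i) of the Remark.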

If the initial data u belongs to $\mathcal{S}(\mathbb{R}^m)$ (= the space of Schwartz' rapidly decreasing functions), where
\[
\mathcal{S}(\mathbb{R}^m) = \{\, u\in C^\infty(\mathbb{R}^m:\mathbb{C}) \mid p_{k,\mathcal{S}}(u) < \infty\ \ \forall k\in\mathbb{N} \,\}
\]
with
\[
p_{k,\mathcal{S}}(u) = \sup_{q\in\mathbb{R}^m,\ \ell+|\beta|\le k}\langle q\rangle^{\ell}\,|\partial_q^{\beta}u(q)|, \qquad \langle q\rangle = (1+|q|^2)^{1/2},
\]
then, since we know
\[
(e^{-i\hbar^{-1}t\hat{H}_0}u)(\bar{q}) = (2\pi i\hbar t)^{-m/2}\int_{\mathbb{R}^m} dq\, e^{i\hbar^{-1}(\bar{q}-q)^2/(2t)}\,u(q),
\]
we have
\[
(e^{-i\hbar^{-1}t\hat{H}}u)(\bar{q}) \sim (e^{-i\hbar^{-1}tV}(e^{-i\hbar^{-1}t\hat{H}_0}u))(\bar{q}) \sim (2\pi i\hbar t)^{-m/2}\, e^{-i\hbar^{-1}tV(\bar{q})}\int_{\mathbb{R}^m} dq\, e^{i\hbar^{-1}(\bar{q}-q)^2/(2t)}\,u(q).
\]
Therefore, we get

\[
\begin{aligned}
(e^{-i\hbar^{-1}s\hat{H}}(e^{-i\hbar^{-1}t\hat{H}}u))(\bar{q})
&\sim (2\pi i\hbar s)^{-m/2}\, e^{-i\hbar^{-1}sV(\bar{q})}\int_{\mathbb{R}^m} dq^{(1)}\, e^{i\hbar^{-1}(\bar{q}-q^{(1)})^2/(2s)}\,(e^{-i\hbar^{-1}t\hat{H}}u)(q^{(1)})\\
&\sim (2\pi i\hbar)^{-m}(ts)^{-m/2}\, e^{-i\hbar^{-1}sV(\bar{q})}\int_{\mathbb{R}^m} dq^{(1)}\, e^{i\hbar^{-1}(\bar{q}-q^{(1)})^2/(2s)}
\Big( e^{-i\hbar^{-1}tV(q^{(1)})}\int_{\mathbb{R}^m} dq\, e^{i\hbar^{-1}(q^{(1)}-q)^2/(2t)}\,u(q) \Big)\\
&= (2\pi i\hbar)^{-m}(ts)^{-m/2}\int_{\mathbb{R}^m} dq\,\Big( \int_{\mathbb{R}^m} dq^{(1)}\, e^{-i\hbar^{-1}(sV(\bar{q})+tV(q^{(1)}))}\, e^{i\hbar^{-1}(\bar{q}-q^{(1)})^2/(2s)+i\hbar^{-1}(q^{(1)}-q)^2/(2t)} \Big)\,u(q).
\end{aligned}
\]

Putting t = s in the above, we have
\[
-i\hbar^{-1}t\big(V(\bar{q})+V(q^{(1)})\big) + i\hbar^{-1}\big[(\bar{q}-q^{(1)})^2 + (q^{(1)}-q)^2\big]/(2t)
= i\hbar^{-1}t\left[ \frac12\Big(\frac{\bar{q}-q^{(1)}}{t}\Big)^2 - V(\bar{q}) + \frac12\Big(\frac{q^{(1)}-q}{t}\Big)^2 - V(q^{(1)}) \right].
\]
Repeating this procedure k times and denoting $q^{(k)}=\bar{q}$, $q^{(0)}=q$, we define

\[
S_t(q^{(k)},\dots,q^{(0)}) = \sum_{j=1}^{k}\frac{t}{k}\left[ \frac12\Big(\frac{q^{(j)}-q^{(j-1)}}{t/k}\Big)^2 - V(q^{(j)}) \right]
\]
and we get
\[
\Big(e^{-i\hbar^{-1}\frac{t}{k}V}\, e^{-i\hbar^{-1}\frac{t}{k}\hat{H}_0}\Big)^k u(\bar{q}) \sim \int dq\, F_k(t,\bar{q},q^{(k-1)},\cdots,q^{(1)},q)\,u(q).
\]
Here, we put

\[
F_k(t,\bar{q},q^{(k-1)},\cdots,q^{(1)},q) = (2\pi i\hbar(t/k))^{-km/2}\int\!\!\cdots\!\!\int dq^{(1)}\cdots dq^{(k-1)}\, e^{i\hbar^{-1}S_t(\bar{q},q^{(k-1)},\cdots,q^{(1)},q)}.
\]
Making $k\to\infty$ formally, we have
\[
(1.1.3)\quad F(t,\bar{q},q) = \operatorname*{s-lim}_{k\to\infty}\,(2\pi i\hbar(t/k))^{-km/2}\int\!\!\cdots\!\!\int dq^{(1)}\cdots dq^{(k-1)}\, e^{i\hbar^{-1}S_t(\bar{q},q^{(k-1)},\cdots,q^{(1)},q)}
\]
and
\[
(e^{-i\hbar^{-1}t\hat{H}}u)(\bar{q}) = \int dq\, F(t,\bar{q},q)\,u(q).
\]
[Report Problem 1-2]: Show that the function space $\mathcal{S}(\mathbb{R}^m)$ forms a Fréchet space.

Feynman's interpretation: The set of "paths" is denoted by
\[
C_{t,\bar{q},q} = \{\gamma(\cdot)\in AC([0,t]:\mathbb{R}^m) \mid \gamma(0)=q,\ \gamma(t)=\bar{q}\},
\]
\[
C_{t,\mathrm{loop}} = \{\phi(\cdot)\in AC([0,t]:\mathbb{R}^m) \mid \phi(0)=\phi(t)\},
\]
where AC stands for absolute continuity. In this case, for any $\gamma\in C_{t,\bar{q},q}$, we have
\[
C_{t,\bar{q},q} = \gamma + C_{t,\mathrm{loop}}.
\]
For example, take as γ the straight line combining q and q̄:
\[
\gamma_{sl} = \gamma_{sl}(s) = \Big(1-\frac{s}{t}\Big)q + \frac{s}{t}\bar{q}.
\]
By connecting two paths and adjusting the time scale, we may define the sum operation in $C_{t,\mathrm{loop}}$, which makes it a linear space.

We get a Lagrange function $L(\gamma,\dot{\gamma})$ from a Hamilton function H(q,p) by the Legendre transform, under a certain convexity:
\[
L(\gamma,\dot{\gamma}) = \frac12\dot{\gamma}^2 - V(\gamma) \in C^\infty(T\mathbb{R}^m).
\]
For any path $\gamma\in C_{t,\bar{q},q}$, regarding $S_t(q^{(k)},\dots,q^{(0)})$ as a Riemann sum of an action functional $S_t(\gamma)$, we get
\[
S_t(\gamma) = \int_0^t d\tau\, L(\gamma(\tau),\dot{\gamma}(\tau)) = \lim_{k\to\infty} S_t(q^{(k)},\cdots,q^{(0)}).
\]
Making $k\to\infty$, we "construct" a limit of the measures $dq^{(1)}\cdots dq^{(k-1)}$:
\[
\mathcal{D}_F\gamma = \prod_{0<\tau<t} d\gamma(\tau),
\]

\[
F(t,\bar{q},q) = \int_{C_{t,\bar{q},q}} \mathcal{D}_F\gamma\; e^{i\hbar^{-1}\int_0^t d\tau\, L(\gamma(\tau),\dot{\gamma}(\tau))}.
\]

Then, if we could apply the stationary phase method to this representation when ℏ → 0, we would get the main term, which is obtained from the classical path $\gamma_c$, i.e.
\[
\delta\int_0^t d\tau\, L(\gamma_c(\tau),\dot{\gamma}_c(\tau)) = \frac{d}{d\epsilon}\Big|_{\epsilon=0}\int_0^t d\tau\, L((\gamma_c+\epsilon\phi)(\tau),(\dot{\gamma}_c+\epsilon\dot{\phi})(\tau)) = 0 \quad\text{for } \forall\phi\in C_{t,\mathrm{loop}}.
\]

In this sense, Bohr's correspondence principle is derived! (Probably, Feynman yelled with delight, "I did it!"?)
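As a numerical sanity check of the Riemann-sum picture above, one can evaluate $S_t(q^{(k)},\dots,q^{(0)})$ along a sampled classical path. In the sketch below, the 1-dimensional harmonic oscillator $V(q)=q^2/2$, the endpoints $q=0$, $\bar{q}=1$, $t=1$, and the closed-form classical action are choices made here for illustration, not taken from the text; the broken-line action converges to $\int_0^t L\,d\tau$ at rate O(1/k):

```python
import math

# Broken-line (Riemann-sum) action S_t(q^{(k)},...,q^{(0)}) for the 1-d
# harmonic oscillator L = (1/2) qdot^2 - (1/2) q^2, sampled along the
# classical path gamma(s) = sin(s)/sin(t) joining q = 0 to qbar = 1 in time t = 1.
t, q0, q1 = 1.0, 0.0, 1.0

def gamma(s):
    # classical path: gamma'' = -gamma, gamma(0) = q0 = 0, gamma(t) = q1 = 1
    return q1 * math.sin(s) / math.sin(t)

def discrete_action(k):
    h = t / k
    total = 0.0
    for j in range(1, k + 1):
        dq = gamma(j * h) - gamma((j - 1) * h)
        total += h * (0.5 * (dq / h) ** 2 - 0.5 * gamma(j * h) ** 2)
    return total

# Closed-form classical action for the harmonic oscillator:
exact = ((q0 ** 2 + q1 ** 2) * math.cos(t) - 2.0 * q0 * q1) / (2.0 * math.sin(t))

errs = [abs(discrete_action(k) - exact) for k in (200, 2000)]
print(exact, errs)  # errors shrink roughly like 1/k
```

The O(1/k) rate comes from evaluating V at the right endpoint of each time slice; a midpoint evaluation would improve it, but the endpoint rule mirrors the time-slicing used in the text.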

The obstruction to this beautiful expression is the claim that "there does not exist a non-trivial Lebesgue-like measure on any infinite-dimensional barreled locally convex topological vector space"³.
[Report Problem 1-3 (Campbell–Hausdorff formula and its application)]:
(1) Search "Campbell Hausdorff" in Google and check what it is.
(2) Apply that formula to $e^{tX}$ where
\[
X = \begin{pmatrix}
0 & \mu & 1 & 0\\
-\mu & 0 & 0 & 1\\
\Omega^2-\mu^2 & 0 & 0 & \mu\\
0 & \Omega^2-\mu^2 & -\mu & 0
\end{pmatrix}
\]
and get the concrete expression. Don't use the diagonalization procedure, but apply the Campbell–Hausdorff formula to a suitable decomposition of X. This matrix is derived from the Hamiltonian mechanics for the Lagrangian function L below, which is called the Bateman model:
\[
(1.1.4)\quad L(q,\dot{q}) = \frac12(\dot{q}_1^2+\dot{q}_2^2) + \mu(q_1\dot{q}_2 - q_2\dot{q}_1) + \frac{\Omega^2}{2}(q_1^2+q_2^2) \in C^\infty(T\mathbb{R}^2:\mathbb{R}).
\]
(3) Search also for the "Lie–Trotter–Kato formula".

[Report Problem 1-4]: What is the meaning of an AC function, and what properties does it have?

1.1.3. Non-existence of the Feynman measure. To "feel" the reason why there does not exist a Lebesgue-like measure (called the Feynman measure), we give a simple theorem due to H.H. Kuo [95].

Since that theorem is formulated in a Hilbert space and the path space $C_{t,\mathrm{loop}}$ is not a Hilbert space, those who are not satisfied with this explanation should consult the paper by O.G. Smolyanov and S.V. Fomin [127].

For the sake of those who have forgotten the terminology, we recall the following:

Definition 1.1.1 (Complete σ-algebra). For a given space X, a subfamily $\mathcal{B}$ of the power set $\mathcal{P}(X)$ of all subsets of X satisfying
\[
\emptyset\in\mathcal{B}, \qquad
A\in\mathcal{B} \Rightarrow A^c = X\setminus A\in\mathcal{B}, \qquad
A_n\in\mathcal{B} \Rightarrow \bigcup_{n=1}^{\infty}A_n\in\mathcal{B}
\]
is called a complete σ-algebra.

Definition 1.1.2 (Measure). A set function µ defined on a complete σ-algebra $\mathcal{B}$ of a space X is called a measure if it satisfies
\[
0\le\mu(A)\le\infty, \quad \mu(\emptyset)=0, \qquad
A_n\in\mathcal{B},\ A_j\cap A_k=\emptyset\ (j\neq k) \Rightarrow \mu\Big(\sum_{n=1}^{\infty}A_n\Big) = \sum_{n=1}^{\infty}\mu(A_n).
\]
³Though in constructing Lebesgue's integration theory we are taught to prepare measure theory first, is it truly necessary to do so? For example, the Berezin integral below works without a measure.

Definition 1.1.3 (Borel algebra). A family $\mathcal{B}$ of subsets of a topological space X is called a Borel algebra, denoted $\mathcal{B}=\mathcal{B}(X)$, if it satisfies
\[
A\in\mathcal{B} \Rightarrow A^c = X\setminus A\in\mathcal{B}, \qquad
A_n\in\mathcal{B} \Rightarrow \bigcup_{n=1}^{\infty}A_n\in\mathcal{B}, \qquad
\mathcal{O}(X)\subset\mathcal{B},
\]
and $\mathcal{B}$ is the minimum family in $\mathcal{P}(X)$ for the ordering by set inclusion.

Definition 1.1.4. A Borel measure⁴ satisfying the conditions below is called Lebesgue-like:
(1) For any bounded Borel set, its measure is not only finite, but also positive if the set is not empty.
(2) The measure is translation invariant⁵.

Theorem 1.1.1. There exists no non-trivial Lebesgue-like Borel measure on an infinite-dimensional separable Hilbert space.

Proof. Since H is separable, there exists a countable orthonormal base $\{e_1, e_2,\cdots\}$⁶. Assume that there exists a non-trivial Lebesgue-like Borel measure µ on $\mathcal{B}(H)$. Define open sets
\[
B_n = \{u\in H \mid \|u-e_n\| < \tfrac12\} \quad\text{and}\quad B = \{u\in H \mid \|u\| < 2\};
\]
then they satisfy
\[
B_n\cap B_m = \emptyset\ (n\neq m) \quad\text{and}\quad \bigcup_{n=1}^{\infty}B_n \subset B.
\]
Since the measure is Lebesgue-like, we have
\[
0 < \mu(B_1) = \mu(B_2) = \cdots < \infty, \qquad \infty = \sum_{n=1}^{\infty}\mu(B_n) \le \mu(B) < \infty. \quad\text{Contradiction!}\ \square
\]
[Report Problem 1-5]: What occurs if the base has continuous cardinality? By the way, check whether there exist non-separable Hilbert spaces. Check also the problem in a general Banach space.
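The geometry behind this proof is easy to verify in a finite-dimensional truncation: distinct orthonormal basis vectors are at distance $\sqrt{2}$, so the balls $B_n$ of radius 1/2 are pairwise disjoint, while each lies inside the ball of radius 2. A small sketch (the dimension N = 50 is an arbitrary truncation chosen for illustration):

```python
import math, itertools

# Truncate the separable Hilbert space to R^N and realize the balls from the
# proof: centers e_n (orthonormal basis), radius 1/2, all inside ||u|| < 2.
N = 50
basis = [[1.0 if i == n else 0.0 for i in range(N)] for n in range(N)]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# ||e_n - e_m|| = sqrt(2) for n != m: the centers are farther apart than
# 2 * (1/2), hence the balls B_n are pairwise disjoint.
pair_dists = {dist(basis[n], basis[m])
              for n, m in itertools.combinations(range(N), 2)}
print(all(abs(d - math.sqrt(2.0)) < 1e-12 for d in pair_dists))  # prints True

# Each B_n lies in B = {||u|| < 2}: any u in B_n has ||u|| <= ||e_n|| + 1/2 = 3/2.
print(all(dist(b, [0.0] * N) + 0.5 < 2.0 for b in basis))        # prints True
```

In dimension N there are only N such balls, so no contradiction arises; the proof needs the infinitely many directions available only in infinite dimension.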

Remark 1.1.2. Recently, I learned of a very radical idea from Hung Cheng, a Professor of Applied Mathematics in the theoretical physics group of MIT; he is a physicist holding a job in a mathematics department. He claimed in [25]:

The path integration approach is not only heuristic and non-rigorous; worse, it often leads to erroneous results when applied to non-Abelian gauge field.

Remark 1.1.3 (Note added: 2014.11). Though the full Feynman measure doesn't exist⁷, the objects represented formally using path integration should be carefully researched. How to make rigorous the partial differentiation in the "path-integral category" is now under construction by Fujiwara [51] and N. Kumano-go [93]. Their trials are done in the (1+0)-dimensional case; how to generalize them to the (1+d)-dimensional case contains many interesting problems, e.g. whether time-slicing should be replaced by something like a finite elements method corresponding to triangulation, etc.

⁴A measure defined on a Borel algebra.
⁵Assume that translation is defined on that topological space X.
⁶The Hilbert–Schmidt orthogonalization procedure holds for a countable number of basis vectors.
⁷Recall also that there doesn't exist a "functor" called full quantization; see R. Abraham and J.E. Marsden [2].

My concern is how one can prove that this object, understood not as an integral with respect to a measure but as something new, is a solution of functional derivative equations. See the last chapter!

1.1.4. Resume of known procedures. Assuming a certain convexity in order to apply the Legendre transform, we have
\[
\text{Lagrange Mechanics} \xleftrightarrow{\ \text{Legendre transform}\ } \text{Hamilton Mechanics}, \qquad L(\gamma,\dot{\gamma}) \longleftrightarrow H(q,p).
\]
Classical Mechanics:
\[
\text{Hamilton equation}\quad
\begin{cases}
\dot{q} = H_p(q,p),\\
\dot{p} = -H_q(q,p),
\end{cases}
\quad\text{with}\quad
\begin{pmatrix}q(0)\\ p(0)\end{pmatrix} = \begin{pmatrix}\underline{q}\\ \underline{p}\end{pmatrix},
\]
i.e.
\[
\frac{d}{dt}\begin{pmatrix}q\\ p\end{pmatrix} = J\begin{pmatrix}H_q\\ H_p\end{pmatrix} \quad\text{with}\quad J = \begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix}.
\]
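For the standard kinetic-plus-potential Lagrangian used earlier, the Legendre transform can be written out explicitly; this worked special case is supplied here for illustration:

```latex
% L(q,\dot q) = \tfrac12\dot q^2 - V(q); \quad p := \partial L/\partial\dot q = \dot q
% (strict convexity of L in \dot q makes \dot q \mapsto p invertible), hence
\begin{aligned}
H(q,p) &= \big(p\,\dot q - L(q,\dot q)\big)\Big|_{\dot q = p}
        = p^2 - \tfrac12 p^2 + V(q) = \tfrac12 p^2 + V(q),\\
\dot q &= H_p = p, \qquad \dot p = -H_q = -V'(q),
\end{aligned}
% so the Hamilton equations reproduce Newton's equation \ddot q = -V'(q).
```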

\[
\text{Liouville equation}\quad \dot{\phi} = \{\phi, H\} = \sum_{j=1}^{m}\Big(\frac{\partial\phi}{\partial q_j}\frac{\partial H}{\partial p_j} - \frac{\partial\phi}{\partial p_j}\frac{\partial H}{\partial q_j}\Big) \quad\text{with}\quad \phi(0,q,p) = \phi(q,p).
\]
Quantum Mechanics:
\[
\begin{array}{ccc}
\text{Liouville equation} & \xrightarrow{\ \text{quantization}\ } & \text{Heisenberg picture}\\[2pt]
\big\downarrow & & \big\downarrow\\[2pt]
\text{Hamilton equation} & \xrightarrow{\ \text{quantization}\ } & \text{Schrödinger picture}
\end{array}
\]
\[
\text{``}L(\gamma,\dot{\gamma})\ \text{or}\ H(q,p)\ \to\ \hat{H} = \hat{H}(q, -i\hbar\partial_q)\text{''}
\]
(S) A description of the movement of the state vector u(t) w.r.t. time t:
\[
\text{Schrödinger picture}\quad i\hbar\frac{\partial u(t)}{\partial t} = \hat{H}u(t) \quad\text{with } u(0) = u, \quad\text{i.e.}\quad u(t) = e^{-i\hbar^{-1}t\hat{H}}u.
\]

(H) A description of the change of the kinetic operator F̂(t) w.r.t. time t:
\[
\text{Heisenberg picture}\quad i\hbar\frac{d}{dt}\hat{F}(t) = [\hat{F}(t), \hat{H}] \quad\text{with } \hat{F}(0) = \hat{F}.
\]
(F) Path Integral method, clarifying Bohr's correspondence principle:
\[
\text{Feynman picture}\quad u(t,\bar{q}) = \int dq\, E(t,0,\bar{q},q)\,u(q)
\]
with
\[
E(t,0,\bar{q},q) = \int_{C_{t,\bar{q},q}} \mathcal{D}_F\gamma\, \exp\Big(i\hbar^{-1}\int_0^t ds\, L(\gamma(s),\dot{\gamma}(s))\Big).
\]
Here,
\[
C_{t,\bar{q},q} = \{\gamma\in C([0,t]:\mathbb{R}^d) \mid \gamma(0)=q,\ \gamma(t)=\bar{q}\}
\]
and
\[
E(t,0,\bar{q},q) \sim D(t,0,\bar{q},q)^{1/2}\, e^{i\hbar^{-1}S(t,0,\bar{q},q)} \sim \delta_q(\bar{q}).
\]

Problem 1.1.1. Give a meaning to the symbolic representation
\[
\int \mathcal{D}_F\gamma\, \exp\Big\{ i\hbar^{-1}\int_0^t L(\gamma(\tau),\dot{\gamma}(\tau))\,d\tau \Big\}
\]
for a wider class of Lagrangians L.
(0) Concerning this question, D. Fujiwara [49] gives a rigorous meaning, without the notorious measure, when the potential V satisfies $|\partial_x^{\alpha}V(x)|\le C_\alpha\ (|\alpha|\ge 2)$.
(i) For the Coulomb potential V(q) = 1/|q|, i.e. the hydrogen atom, because of the singularity, we have not yet established⁸ a result analogous to Fujiwara's.

(a) I propose to calculate this by replacing 1/|q| with $1/(|q|^2+\epsilon)^{1/2}$ for any ε > 0 and finally letting ε → 0, or
(b) use the fact that the Schrödinger equation with 3-dimensional Coulomb potential is obtained from the 4-dimensional harmonic oscillator (see, for example, N.E. Hurt [65]).

(ii) At least in dimension 1, the essential self-adjointness of $-\Delta + |q|^4$ is proved by many methods (see M. Reed and B. Simon, vol. I [111]). But we might not be able to apply the procedure used by Fujiwara to construct a parametrix using classical quantities (but see S. Albeverio and S. Mazzucchi [5]).
(iii) How do we proceed when there exist many paths connecting points q and q′, like the dynamics on the circle or sphere (see L. Schulman [121])?
(iv) When $|\partial_q^{\alpha}V(q)|\le C_\alpha\ (|\alpha|\ge 2)$, the parametrix constructed above converges in the uniform operator topology. On the other hand, the Lie–Trotter–Kato product formula assures only strong convergence. How can one explain the reason for this difference? In the case of using the polygonal line approximation of the classical path for the harmonic oscillator, we get the strong but non-uniform convergence of parametrices. One possibility may be to use non-standard analysis to check why there exists this difference of convergence.

[Report Problem 1-6]: What is the meaning of essential self-adjointness? Check [111]!

Problem 1.1.2. Fujiwara adopted the Lagrangian formulation in his procedure, stressing "without Fourier transform". Does there exist a Hamiltonian object corresponding to this parametrix (see, for example, A. Intissar [83] and A. Inoue [73])?
\[
\iint \mathcal{D}_F q\, \mathcal{D}_F p\, \exp\Big\{ i\hbar^{-1}\int_0^t d\tau\, [\dot{q}(\tau)p(\tau) - H(q(\tau),p(\tau))] \Big\}?
\]
(Added August 2015): Concerning this problem, I found interesting and important results by N. Kumano-go and D. Fujiwara [94]. They try to define a "path integral" and the corresponding calculation without "measure", and succeed at least partially.

1.1.5. Feynman's murmur. On p. 355 of their book [45], Feynman wrote as follows (underlined by the author):

· · · path integrals suffer grievously from a serious defect. They do not permit a discussion of spin operators or other such operators in a simple and lucid way.

⁸Rather recently, I found a paper by C. Grosche [56] where he claims this problem is solved by the path integral method. But from his explanation, seemingly, we don't clearly obtain the correspondence principle from their representation (2015.1.20).

They find their greatest use in systems for which coordinates and their conjugate momenta are adequate. Nevertheless, spin is a simple and vital part of real quantum-mechanical systems. It is a serious limitation that the half-integral spin of the electron does not find a simple and ready representation. It can be handled if the amplitudes and quantities are considered as quaternions instead of ordinary complex numbers, but the lack of commutativity of such numbers is a serious complication.

Main Problem: How do we treat this murmur as a mathematical problem?

Though for a given Schrödinger equation we may associate a corresponding classical mechanics, how do we define the classical mechanics corresponding to the Dirac or Weyl equations? In other words, since Schrödinger equations are obtained from a Lagrangian or Hamiltonian function by quantization, can we define a Hamiltonian function from which we get the Dirac equation after quantization?

One of my objectives in this lecture note is to answer this main problem affirmatively by preparing new tools and giving sketchy explanations. There exist at least two problems here:
(1) How to define the classical mechanics of the Dirac or Weyl equations, or more generally of 2d×2d systems of LPDEs?
(2) Since the Dirac or Weyl equations have only first order derivatives in the space variables, it seems impossible, even if there exist Hamilton equations, to assign initial and final positions in configuration space as is done for the Schrödinger equation. How do we get rid of this?

Finally, my answer is affirmative: it is possible, not only by using the superspace formulation but also by re-interpreting the method of characteristics through the Hamilton flow and the Fourier transformation.

======Mini Column 1: Stationary phase method ======

Consider the integral with a parameter
\[
I(\omega) = \int dq\, u(q)\,e^{i\omega\phi(q)}.
\]
We study the asymptotic behavior of I(ω) when ω → ∞. Remembering the Riemann–Lebesgue lemma, it seems natural to imagine that the following fundamental fact holds.

Lemma 1.1.1. Let $\phi\in C^\infty(\mathbb{R}^d:\mathbb{R})$ and $u\in C_0^\infty(\mathbb{R}^d:\mathbb{R})$. Then,
\[
\phi'\neq 0\ \text{on}\ \operatorname{supp}u \implies I(\omega) = O(\omega^{-k})\ \text{for every } k,\ \text{when } \omega\to\infty.
\]

This is a fundamental fact for the stationary phase method. Therefore, the further study concerns the behavior when φ′ vanishes somewhere on supp u. A typical answer for this is given by

Theorem 1.1.2 (Theorem 7.7.5, p. 220 of Hörmander I of [64]). Let K be a compact set of $\mathbb{R}^d$, X an open neighborhood of K and k a positive integer. Suppose $u\in C_0^{2k}(K)$, $\phi\in C^{3k+1}(X)$, $\Im\phi\ge 0$ on X, and let there exist a point $q_0\in K$ such that $\Im\phi(q_0)=0$, $\phi'(q_0)=0$, $\det\phi''(q_0)\neq 0$ and $\phi'\neq 0$ on $K\setminus\{q_0\}$. Then, we have

\[
\Big| \int_{\mathbb{R}^d} dq\, u(q)e^{i\omega\phi(q)} - e^{i\omega\phi(q_0)}\det\big(\omega\phi''(q_0)/(2\pi i)\big)^{-1/2}\sum_{j<k}\omega^{-j}L_j u \Big| \le C\,\omega^{-k}\sum_{|\alpha|\le 2k}\sup|D^{\alpha}u|.
\]

Here, C is bounded when φ stays in a bounded set in $C^{3k+1}(X)$ and $|q-q_0|/|\phi'(q)|$ has a uniform bound. With
\[
g_{q_0}(q) = \phi(q) - \phi(q_0) - \frac{\langle \phi''(q_0)(q-q_0),\, q-q_0\rangle}{2},
\]
which vanishes of third order at $q_0$, we have
\[
L_j u = \sum_{\substack{\nu-\mu=j\\ 2\nu\ge 3\mu}} i^{-j}2^{-\nu}\,\frac{\langle \phi''(q_0)^{-1}D, D\rangle^{\nu}(g_{q_0}^{\mu}u)(q_0)}{\mu!\,\nu!}.
\]
This is a differential operator of order 2j acting on u at $q_0$. The coefficients are rational homogeneous functions of degree −j in $\phi''(q_0),\cdots,\phi^{(2j+2)}(q_0)$ with denominator $(\det\phi''(q_0))^{3j}$. In every term the total number of derivatives of u and of φ″ is at most 2j.
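For a purely quadratic phase in one dimension, the theorem's leading term can be tested numerically. In the sketch below, the bump function u and the phase $\phi(q)=q^2/2$ are choices made for illustration (then $q_0=0$, $\phi''(q_0)=1$ and $g_{q_0}\equiv 0$); the leading approximation $e^{i\omega\phi(0)}\det(\omega/(2\pi i))^{-1/2}u(0) = \sqrt{2\pi/\omega}\,e^{i\pi/4}u(0)$ is compared with direct quadrature, and the discrepancy is of order $\omega^{-1}$ as the estimate with k = 1 predicts:

```python
import cmath, math

# Stationary phase check in d = 1 with phi(q) = q^2/2 (q0 = 0, phi''(0) = 1)
# and a smooth compactly supported bump u. Leading term of Theorem 1.1.2:
#   I(omega) ~ sqrt(2*pi/omega) * exp(i*pi/4) * u(0).

def u(q):
    # smooth bump supported in (-1, 1); all derivatives vanish at the edges
    return math.exp(-1.0 / (1.0 - q * q)) if abs(q) < 1.0 else 0.0

def I(omega, n=40000):
    # trapezoidal quadrature of int u(q) e^{i omega q^2/2} dq over [-1, 1]
    h = 2.0 / n
    s = 0.0 + 0.0j
    for j in range(n + 1):
        q = -1.0 + j * h
        w = 0.5 if j in (0, n) else 1.0
        s += w * u(q) * cmath.exp(1j * omega * q * q / 2.0)
    return s * h

omega = 200.0
leading = math.sqrt(2.0 * math.pi / omega) * cmath.exp(1j * math.pi / 4.0) * u(0.0)
rel_err = abs(I(omega) - leading) / abs(leading)
print(rel_err)  # small, of order 1/omega
```

Doubling ω roughly halves the relative error, in agreement with the first correction term $\omega^{-1}L_1 u$.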

Remark 1.1.4. In the mathematics community, a statement like "this is the main term", given without precise estimates of the error terms, is regarded as merely someone's conjecture. But in papers of mathematical physics, seemingly, there are not so many which estimate the "error terms". For example, the famous paper of E. Witten [143] doesn't estimate the "so-called small terms" with precise calculation. Or, more frankly speaking, since there doesn't exist a Feynman measure, a representation using such a measure seems a castle in the air, though it shows us the goal, or a dream, as it were. Getting the main terms without error estimates, you may proceed very algebraically and geometrically, and you may obtain something like a solution, but that doesn't mean the conclusion is true! Even if you have experiments based on that calculation, and even if you claim the data is within measurement error, how can you assure that the theory is correct even in the mathematical sense?

======End of Mini Column 1 ======

1.1.6. Fujiwara's procedure. Since there doesn't exist the so-called Feynman measure which would guarantee the beautiful path-integral expression, how do we represent the solution of the Schrödinger equation?

As the operator
$$\hat H=\hat H(q,-i\hbar\partial_q)=-\frac{\hbar^2}{2}\Delta+V(q)$$
is essentially self-adjoint on $L^2(\mathbb{R}^m)$ under certain conditions on $V$, there exists a solution $e^{-i\hbar^{-1}t\hat H}u$ (by Stone's theorem) of the initial value problem
$$i\hbar\frac{\partial u(t,q)}{\partial t}=\hat Hu(t,q)\quad\text{with}\quad u(0,q)=u(q).$$
Moreover, by L. Schwartz's kernel theorem, we have a kernel $E(t,q,q')\in\mathcal{D}'(\mathbb{R}\times\mathbb{R}^m\times\mathbb{R}^m)$ such that
$$\langle e^{-i\hbar^{-1}t\hat H}u,\varphi\rangle=\langle E(t,q,q')u(q'),\varphi(q)\rangle=\langle E(t,q,q'),u(q')\varphi(q)\rangle=\langle E(t,\cdot,\cdot),u\otimes\varphi\rangle.$$
On the other hand, for the heat case $e^{-t\hat H}v$, does the distributional kernel $H(t,q,q')$ have a representation by the "classical quantities"?

Method of Fujiwara: About 30 years ago, there existed no paper on the construction of a fundamental solution for the initial value problem of the Schrödinger equation. Fujiwara adopted the argument of Feynman, modifying it mathematically.

(1) For the given Lagrangian $L(\gamma,\dot\gamma)=\frac12|\dot\gamma|^2-V(\gamma)\in C^\infty(TM)$ $(M=\mathbb{R}^m)$, by the Legendre transform, we have the Hamilton function
$$H(q,p)=\sup_{\dot q}[\dot qp-L(q,\dot q)]\in C^\infty(T^*M).$$
(2) For the Hamilton function $H(q,p)=\frac12|p|^2+V(q)$, we construct a solution $S(t,q,\bar q)$ of the Hamilton–Jacobi equation
$$S_t(t,q,\bar q)+H(q,S_q(t,q,\bar q))=0\quad\text{with}\quad\lim_{t\to 0}tS(t,q,\bar q)=\frac12|q-\bar q|^2.$$

(3) For the action function $S(t,q,\bar q)$ obtained above, the amplitude function⁹ defined by
$$D(t,q,\bar q)=\det\Big(\frac{\partial^2S(t,q,\bar q)}{\partial q\partial\bar q}\Big)\qquad\text{(van Vleck determinant)}$$
satisfies the continuity equation
$$D_t(t,q,\bar q)+\partial_q\big(D(t,q,\bar q)H_p(q,S_q(t,q,\bar q))\big)=0\quad\text{with}\quad\lim_{t\to 0}t^mD(t,q,\bar q)=1.$$

(4) Then we define the integral transformation
$$\text{(1.1.5)}\qquad F(t)u(q)=(2\pi i\hbar)^{-m/2}\int_{\mathbb{R}^m}d\bar q\,D(t,q,\bar q)^{1/2}e^{i\hbar^{-1}S(t,q,\bar q)}u(\bar q).$$

Theorem 1.1.3 (Theorem 2.2 of Fujiwara [49]). Assume $\sup_{q\in\mathbb{R}^m}|D^\alpha V(q)|\le C_\alpha$ $(|\alpha|\ge 2)$. Fix a sufficiently small $T>0$. Then:
(i) For $|t|\le T$, $F(t)$ is bounded on $L^2$:
$$\|F(t)u\|\le C\|u\|\qquad\text{by Cotlar's lemma}.$$
(ii) For any $u\in L^2(\mathbb{R}^m:\mathbb{C})$ and $t,s,t+s\in[-T,T]$,
$$\lim_{t\to 0}\|F(t)u-u\|=0,$$
$$i\hbar\frac{\partial}{\partial t}(F(t)u)(q)\Big|_{t=0}=\hat H(q,-i\hbar\partial_q)u(q),$$
$$\|F(t+s)-F(t)F(s)\|\le C(t^2+s^2).$$
(iii) Moreover, there exists a limit $\lim_{k\to\infty}(F(t/k))^k=E(t)$ in $\mathcal{B}(\mathcal{H})$, i.e. in the operator norm of $L^2(\mathbb{R}^m:\mathbb{C})$, which satisfies the initial value problem below:
$$i\hbar\frac{\partial}{\partial t}(E(t)u)(q)=\hat H(q,-i\hbar\partial_q)(E(t)u)(q),\qquad (E(0)u)(q)=u(q).$$

Remark 1.1.5. The operator $F(t)$ is said to be a parametrix, and $E(t)$, or its kernel, is called the fundamental solution.
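For $V=0$ the parametrix (1.1.5) is already exact, which makes a quick sanity check possible. In the sketch below (my own illustration with $\hbar=1$, mass $1$, $m=1$, all assumptions not fixed by the text), $S(t,q,\bar q)=(q-\bar q)^2/(2t)$ and $|D|^{1/2}=t^{-1/2}$, so $F(t)$ is the free propagator; we compare it with the exactly known evolved Gaussian.

```python
import numpy as np

# Sketch (assumptions: hbar = 1, mass 1, V = 0, one space dimension).
# Then S(t,q,qbar) = (q-qbar)^2/(2t) and |D|^(1/2) = t^(-1/2), so (1.1.5)
# is the exact free propagator.  The exact evolved Gaussian is
#   u(t,q) = pi^(-1/4) (1+it)^(-1/2) exp(-q^2/(2(1+it))).
t, q = 0.5, 0.3
qb = np.linspace(-20.0, 20.0, 400_001)            # integration variable qbar
dq = qb[1] - qb[0]
u0 = np.pi**-0.25 * np.exp(-qb**2 / 2)            # initial data u(qbar)
Ftu = (2j * np.pi * t)**-0.5 * (np.exp(1j * (q - qb)**2 / (2 * t)) * u0).sum() * dq
exact = np.pi**-0.25 * (1 + 1j * t)**-0.5 * np.exp(-q**2 / (2 * (1 + 1j * t)))
print(abs(Ftu - exact))
```

The principal branch of `(2j*np.pi*t)**-0.5` supplies the standard phase factor $e^{-i\pi/4}(2\pi t)^{-1/2}$ of the free propagator.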

⁹Regarding how to recognize Feynman's idea of "putting equal weight on each path", I feel some difference between Fujiwara's idea and mine.

Outline of the proof: In (2), for the construction of a solution of the Hamilton-Jacobi equation, he uses Jacobi's method.

(a) For the given H(q,p) and the initial data (q,p), there exists a unique Hamilton flow (q(s),p(s)) = (q(s,q,p),p(s,q,p)).

(b) For a given sufficiently small time $t$ and for any given terminal position $\bar q$, applying the implicit function theorem to $\bar q=q(t,q,p)$, we get the unique $p$, denoted by $p=\xi(t,q,\bar q)$.

(c) Using this function, we put

$$S(t,q,\bar q)=S_0(t,q,p)\big|_{p=\xi(t,q,\bar q)}.$$
That is, there exists a unique path $\gamma_c$ in $C_{t,q,\bar q}$ such that
$$\inf_{\gamma\in C_{t,q,\bar q}}S_t(\gamma)=S_t(\gamma_c)=S(t,q,\bar q)\quad\text{with}\quad S_t(\gamma)=\int_0^t d\tau\,L(\gamma(\tau),\dot\gamma(\tau)).$$
Moreover, this function $S(t,q,\bar q)$ is a solution of the Hamilton–Jacobi equation.

Remark 1.1.6. By this construction, we have estimates for $\partial_t^\ell\partial_{\bar q}^\alpha\partial_q^\beta S(t,q,\bar q)$ with respect to $(t,q,\bar q)$.

(3) is proved from (2) algebraically (see Inoue and Maeda [79], or [81] even on superspace).

(4) Since we have estimates of $S(t,q,\bar q)$ and $D(t,q,\bar q)$ w.r.t. $(q,\bar q)$, we may prove the $L^2$-boundedness of the operator (1.1.5) by applying Cotlar's lemma. Since we take $D(t,q,\bar q)^{1/2}$ as the amplitude, the operator (1.1.5) is considered as acting on the half-density bundle (or the intrinsic Hilbert space) "$L^2(\mathbb{R}^m:\mathbb{C})$". I regard this fact as corresponding to the Copenhagen interpretation.

(5) Though the above theorem is sufficient concerning the convergence of the parametrix (1.1.5), this convergence is not sufficient for Feynman's expression. Concerning this, and the construction of the fundamental solution itself, there exists another paper by Fujiwara [50], which isn't discussed in this lecture because I haven't appreciated it fully¹⁰.

Problem 1.1.3. In the above theorem, the kinetic energy is restricted to the flat Riemannian metric $\frac12|p|^2$ on $\mathbb{R}^m$. Whether this procedure works for a general Riemannian metric has been calculated by physicists (see, for example, B. DeWitt [34]); he suggests that the desired operator is the Laplace–Beltrami operator, but with the additional term $R/12$, where $R$ is the scalar curvature of $g_{ij}(q)dq^idq^j$. In general, to prove the $L^2$-boundedness of an FIO with suitable phase and amplitude of order 0, Fujiwara applied Cotlar's lemma, which is formulated in flat space. Technically, we need a new device to extend almost orthogonality to the case where the space is curved. Therefore, it is an open problem to associate a quantum mechanics to a given Riemannian metric $g_{ij}(q)$ on $\mathbb{R}^m$ following Fujiwara's procedure. On the other hand, the above procedure of Fujiwara was also used by Inoue and Maeda [79] to explain mathematically the origin of the term $(1/12)R$, $R=$ the scalar curvature of the configuration manifold, in the heat category, which appears when one wants to "quantize with purely imaginary time" the Lagrangian on a curved manifold.

¹⁰But a part of this problem is considered as the product formula for FIOs on superspace and is treated slightly in Chapter 7.

Problem 1.1.4. Feynman or Fujiwara used Lagrangian formulation. How do we connect the above procedure directly to the Hamiltonian without using Lagrangian?

1.2. The first step towards Dirac and Weyl equations

1.2.1. The origin of the Dirac and Weyl equations. Why and how did P. Dirac introduce the now so-called Dirac equation? We modify the description of Nishijima [107].

Assume that the energy $E$ and momentum $p$ of a free particle with mass $m$ satisfy the Einstein relation¹¹
$$E^2=c^2|p|^2+c^4m^2.$$
Following the canonical quantization procedure of substituting
$$p_j\longrightarrow\frac{\hbar}{i}\frac{\partial}{\partial q_j},\qquad E\longrightarrow i\hbar\frac{\partial}{\partial t},$$
which we did to get the Schrödinger equation, we have the Klein–Gordon equation
$$\hbar^2\frac{\partial^2u}{\partial t^2}-c^2\hbar^2\Delta u+c^4m^2u=0.$$
Unfortunately, the solution $u$ of this equation does not permit the Copenhagen interpretation, that is, the quantity $\rho=|u|^2$ is not interpreted as a probability density. In order to get rid of this inconvenience, it is claimed in the physics literature that it is necessary to have only a first order derivative w.r.t. time.

If this claim is accepted, the simplest prescription is to put
$$E=\pm c\sqrt{|p|^2+c^2m^2}.$$
But the right-hand side above defines a $\Psi$DO, which doesn't have the local property¹². This presents a certain conflict if we insist that a physical law (which is assured by experiments in a laboratory system) should satisfy the local property. Therefore, it is not so nice to accept such a $\Psi$DO, with the symbol above, as the quantization of the Einstein relation.

[Report problem 2-1]: A $\Psi$DO has the pseudo-local property. Report on this subject.

In order to have an equation which stems from the Einstein relation and admits the probabilistic interpretation, we need
$$\Big(i\hbar\frac{\partial}{\partial t}-\hat H\Big)\psi=0$$
which satisfies
$$\Big(\hbar^2\frac{\partial^2}{\partial t^2}+\hat H^2\Big)\psi=0.$$
Demanding that this equation coincide with the Klein–Gordon equation, we need that "the symbol $H$ corresponding to the operator $\hat H$" should satisfy
$$H^2=c^2|p|^2+c^4m^2.$$

¹¹Remember, for $p=0$, this gives the theoretical foundation of the possibility of the atomic bomb!
¹²If $P(q,p)=\sum_{|\alpha|\le N}a_\alpha(q)p^\alpha$, then, roughly speaking, it is quantized as a PDO $P(q,-i\hbar\partial_q)$. Then, it satisfies $\operatorname{supp}P(q,-i\hbar\partial_q)u\subset\operatorname{supp}u$ for any $u\in C_0^\infty(\mathbb{R}^m)$. This is the local property of PDEs.

Supposing that the state vector $\psi$ which satisfies the desired equation has multicomponents, we may have the option
$$H=\sum_{j=1}^3c\,\alpha_jp_j+mc^2\beta.$$
Here, the letters $\{\alpha_j,\beta\}$ above satisfy
$$\text{(1.2.1)}\qquad \alpha_j\alpha_k+\alpha_k\alpha_j=2\delta_{jk}I,\qquad \alpha_j\beta+\beta\alpha_j=0,\qquad \beta^2=I.$$

Dirac gave an example of $4\times 4$ matrices satisfying the relation (1.2.1), now called the Dirac matrices:
$$\alpha_j=\begin{pmatrix}0&\sigma_j\\ \sigma_j&0\end{pmatrix}=\sigma_1\otimes\sigma_j,\qquad \beta=\begin{pmatrix}I_2&0\\ 0&-I_2\end{pmatrix}=\sigma_3\otimes I_2.$$
Here, the Pauli matrices $\{\sigma_j\}_{j=1}^3$ are given by
$$\text{(1.2.2)}\qquad \sigma_1=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad \sigma_2=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\quad \sigma_3=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},$$
satisfying
$$\text{(1.2.3)}\qquad \sigma_j\sigma_k+\sigma_k\sigma_j=2\delta_{jk}I,\quad \sigma_1\sigma_2=i\sigma_3,\quad \sigma_2\sigma_3=i\sigma_1,\quad \sigma_3\sigma_1=i\sigma_2.$$
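These relations are finite-dimensional and can be checked mechanically. A small sketch verifying (1.2.1) and (1.2.3) with numpy, building $\alpha_j=\sigma_1\otimes\sigma_j$ and $\beta=\sigma_3\otimes I_2$ as Kronecker products:

```python
import numpy as np

# Pauli matrices (1.2.2) and Dirac matrices built from them.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]
I2, I4 = np.eye(2), np.eye(4)

# (1.2.3): sigma_j sigma_k + sigma_k sigma_j = 2 delta_jk I_2, sigma_1 sigma_2 = i sigma_3
for j in range(3):
    for k in range(3):
        assert np.allclose(sigma[j] @ sigma[k] + sigma[k] @ sigma[j], 2 * (j == k) * I2)
assert np.allclose(s1 @ s2, 1j * s3)

alpha = [np.kron(s1, s) for s in sigma]   # alpha_j = sigma_1 (x) sigma_j
beta = np.kron(s3, I2)                    # beta    = sigma_3 (x) I_2

# (1.2.1): the 4x4 Clifford relations
for j in range(3):
    assert np.allclose(alpha[j] @ beta + beta @ alpha[j], 0 * I4)
    for k in range(3):
        assert np.allclose(alpha[j] @ alpha[k] + alpha[k] @ alpha[j], 2 * (j == k) * I4)
assert np.allclose(beta @ beta, I4)
print("Clifford relations (1.2.1) and (1.2.3) verified")
```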

[Report problem 2-2]: There are many representations satisfying (1.2.1), named the Majorana or chiral representation, etc. Seek as many such representations as possible and check the relationships between them. While checking these, study also the Lorentz invariance. If such relations are explained in a unified manner, using the differential-operator representation point of view, it would be good enough for a master thesis, wouldn't it?

Problem 1.2.1. For a given external electro-magnetic field, the IVP (= initial value problem) for the Dirac equation is given as follows: Find $\psi(t,q):\mathbb{R}\times\mathbb{R}^3\to\mathbb{C}^4$, for the given initial data $\psi(q)\in C_0^\infty(\mathbb{R}^3:\mathbb{C}^4)$, satisfying
$$\text{(1.2.4)}\qquad i\hbar\frac{\partial}{\partial t}\psi(t,q)=H(t)\psi(t,q),\qquad \psi(0,q)=\psi(q).$$
Here,
$$\text{(1.2.5)}\qquad H(t)=\sum_{k=1}^3c\,\alpha_k\Big(\frac{\hbar}{i}\frac{\partial}{\partial q_k}-\frac{e}{c}A_k(t,q)\Big)+mc^2\beta+eA_0(t,q).$$
Though it is well-known that this IVP has a solution, we want to know a "good¹³" parametrix or fundamental solution, as Feynman desired. More explicitly, show the mathematical proof for the phenomenon called Zitterbewegung (see Inoue [72] for the free case).

Seemingly H. Weyl had been in the audience at Dirac's talk; he proposed the $2\times 2$-matrix representation (1.2.2) instead of the $4\times 4$ one when the mass $m=0$. From this, he derived the initial value problem of the free Weyl equation: for a "vector" $\psi(t,q)$,
$$\text{(1.2.6)}\qquad i\hbar\frac{\partial}{\partial t}\psi(t,q)=H\psi(t,q),\quad H=-ic\hbar\sum_{j=1}^3\sigma_j\frac{\partial}{\partial q_j},\quad \psi(0,q)=\psi(q),\qquad\text{with}\quad \psi(t,q)=\begin{pmatrix}\psi_1(t,q)\\ \psi_2(t,q)\end{pmatrix}.$$

In spite of the beauty of this equation, it was not accepted in the physics community for a while, because "parity" is not preserved by it. Its meaning was reconsidered after Lee and Yang's theory and Wu's experiment on the weak interaction, which showed that parity is not necessarily preserved for certain spinning particles.

Since the neutrino had been considered to be a particle with mass 0, the Weyl equation was believed to be the governing equation of the neutrino, until the recent Kamiokande experiment which suggests that at least certain neutrinos have non-zero mass.

[Report Problem 2-3]: Search for "Weyl equation" on the internet to check the usage of this equation in condensed matter physics, etc. Report on the things you find interesting.

Ordinary procedure: As a hint for getting a result for Problem 1.2.1, we give a simple example. Though the equation (1.2.6) is a system, it has constant coefficients, so applying the Fourier transform, we may obtain the solution by rather algebraic operations. In fact, defining the Fourier transform as

$$\hat u(p)=(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^m}dq\,e^{-i\hbar^{-1}qp}u(q),\qquad u(q)=(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^m}dp\,e^{i\hbar^{-1}qp}\hat u(p),$$
and applying this to $q\in\mathbb{R}^3$ in (1.2.6), we get
$$\text{(1.2.7)}\qquad i\hbar\frac{\partial}{\partial t}\hat\psi(t,p)=\hat H\hat\psi(t,p).$$
Here,
$$\hat H=c\sum_{j=1}^3\sigma_jp_j=c\begin{pmatrix}p_3&p_1-ip_2\\ p_1+ip_2&-p_3\end{pmatrix}\quad\text{and}\quad \hat H^2=c^2|p|^2I_2.$$
From this, we have

Proposition 1.2.1. For any $t\in\mathbb{R}$ and $\psi\in L^2(\mathbb{R}^3:\mathbb{C}^2)$, we have
$$\text{(1.2.8)}\qquad e^{-i\hbar^{-1}tH}\psi(q)=(2\pi\hbar)^{-3/2}\int_{\mathbb{R}^3}dp\,e^{i\hbar^{-1}qp}e^{-i\hbar^{-1}t\hat H}\hat\psi(p).$$
If $\psi\in\mathcal{S}(\mathbb{R}^3:\mathbb{C}^2)$, then
$$\text{(1.2.9)}\qquad E(t,q)=(2\pi\hbar)^{-3}\int_{\mathbb{R}^3}dp\,e^{i\hbar^{-1}qp}\Big(\cos(c\hbar^{-1}t|p|)I_2-i\frac{\sin(c\hbar^{-1}t|p|)}{c|p|}\hat H\Big)\in\mathcal{S}'(\mathbb{R}^3:\mathbb{C}^2)$$
and

$$\text{(1.2.10)}\qquad e^{-i\hbar^{-1}tH}\psi(q)=E*\psi(t,q)=\int_{\mathbb{R}^3}dq'\,E(t,q-q')\psi(q').$$
In spite of this, we may give another representation with an action integral $\mathcal{S}$ and an amplitude $\mathcal{D}^{1/2}$, which is proved in [71] and will be explained in a later chapter.
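The cos/sin integrand in (1.2.9) comes from the spectral calculus of the $2\times 2$ Hermitian matrix $\hat H=c\,\sigma\cdot p$, whose square is $c^2|p|^2I_2$. A small sketch (units with $\hbar=c=1$ are my assumption) checks the closed form of $e^{-it\hat H}$ against an eigendecomposition:

```python
import numpy as np

# At fixed momentum p, Hhat = sigma . p is Hermitian with Hhat^2 = |p|^2 I_2,
# so exp(-i t Hhat) = cos(t|p|) I_2 - i sin(t|p|) Hhat/|p| -- the integrand
# of (1.2.9) with hbar = c = 1.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
p = rng.normal(size=3)                    # a random momentum
t = 0.7
H = p[0] * s1 + p[1] * s2 + p[2] * s3
pn = np.linalg.norm(p)
assert np.allclose(H @ H, pn**2 * np.eye(2))

closed = np.cos(t * pn) * np.eye(2) - 1j * np.sin(t * pn) * H / pn

w, V = np.linalg.eigh(H)                  # H Hermitian: H = V diag(w) V^*
expm = V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T
assert np.allclose(closed, expm)
print("exp(-itH) matches the cos/sin formula of (1.2.9)")
```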

Remark 1.2.1. Pauli said one day that "there exists no classical counterpart corresponding to the quantum spinning particle", or so I had seen somewhere, but I can't remember where exactly. Therefore, perhaps such a saying didn't exist? Please have a look at the splendid book of S. Tomonaga [132], written in Japanese. In any case, it seems difficult to imagine the classical mechanics corresponding to the equation (1.2.6) from the formula (1.2.9). This is one reason why I denote Feynman's murmur as Feynman's problem.

Claim 1.2.1. In spite of the above, I claim that I may construct the classical mechanics corresponding to (1.2.6), which yields a path-integral-like representation¹⁴ of its solution in Theorem 7.0.3!

1.2.2. The method of characteristics and the Hamiltonian path-integral-like representation. The Schrödinger equation has second order partial derivatives in the space variables, which guarantees that initial and final positions can be assigned to the corresponding classical flow on configuration space; but there are only first order partial derivatives w.r.t. the space variables in the Dirac or Weyl equations. This is the very reason why we need the Hamiltonian path-integral representation: we need to use phase space instead of configuration space.

Therefore, we want to give a simple example exhibiting “Hamiltonian path-integral-like rep- resentation”, which is a necessary device to resolve Feynman’s problem.

We may solve the following equation readily:
$$\text{(1.2.11)}\qquad i\hbar\frac{\partial}{\partial t}u(t,q)=a\frac{\hbar}{i}\frac{\partial}{\partial q}u(t,q)+bqu(t,q),\qquad u(0,q)=u(q).$$
From the right-hand side above, we get a Hamiltonian function
$$H(q,p)=e^{-i\hbar^{-1}qp}\Big(a\frac{\hbar}{i}\frac{\partial}{\partial q}+bq\Big)e^{i\hbar^{-1}qp}\Big|_{\hbar=0}=ap+bq,$$
then the corresponding classical orbit is obtained easily from the Hamilton equation
$$\text{(1.2.12)}\qquad \begin{cases}\dot q(t)=H_p=a,\\ \dot p(t)=-H_q=-b\end{cases}\quad\text{with}\quad \begin{pmatrix}q(0)\\ p(0)\end{pmatrix}=\begin{pmatrix}q\\ p\end{pmatrix},$$
such as
$$\text{(1.2.13)}\qquad q(s)=q+as,\qquad p(s)=p-bs.$$
Using these, by applying the method of characteristics, we get
$$U(t,q)=u(q)e^{-i\hbar^{-1}(bqt+2^{-1}abt^2)}.$$

Using the inverse function $q=y(t,\bar q)=\bar q-at$ of $\bar q=q(t,q)$, the solution of (1.2.11) is given as
$$u(t,\bar q)=U(t,q)\big|_{q=y(t,\bar q)}=u(\bar q-at)e^{-i\hbar^{-1}(b\bar qt-2^{-1}abt^2)}.$$
Remark 1.2.2. In the above procedure, the information from $p(t)$ is not used.
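One can check this solution formula directly. The following sketch (with $\hbar=1$ and a Gaussian initial datum, both my choices for illustration) verifies by central finite differences that the displayed $u(t,\bar q)$ satisfies (1.2.11):

```python
import numpy as np

# Check by central finite differences that (hbar = 1)
#   u(t,q) = u0(q - a t) * exp(-i (b q t - a b t^2 / 2)),  u0(q) = exp(-q^2),
# solves i u_t = -i a u_q + b q u, i.e. equation (1.2.11).
a, b = 1.3, 0.7

def u(t, q):
    return np.exp(-(q - a * t)**2) * np.exp(-1j * (b * q * t - a * b * t**2 / 2))

t0, q0, h = 0.4, 0.25, 1e-5
u_t = (u(t0 + h, q0) - u(t0 - h, q0)) / (2 * h)
u_q = (u(t0, q0 + h) - u(t0, q0 - h)) / (2 * h)
residual = 1j * u_t - (-1j * a * u_q + b * q0 * u(t0, q0))
print(abs(residual))        # only the O(h^2) discretisation error remains
```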

[Report problem 2-4]: Study the method of characteristics for first order PDEs. The core of the method of characteristics is that, from the information obtained from an ODE (such as a (non-linear) Hamilton equation), we get a solution of a PDE (such as a (linear) Liouville equation). What is the linear Liouville equation corresponding to a given non-linear field equation? For example, the Hopf equation, represented by functional derivatives, is the Liouville equation corresponding to the Navier–Stokes equation.

¹⁴Though I said on one hand that "there doesn't exist a path-integral" (more accurately, in path space there doesn't exist a Lebesgue-like Borel measure), here I mention "path-integral-like". Therefore, it seems better to find a more suitable name for the path-integral-like representation.

Another point of view, via the Hamiltonian path-integral-like method: Put

$$S_0(t,q,p)=\int_0^t ds\,[\dot q(s)p(s)-H(q(s),p(s))]=-bqt-2^{-1}abt^2,$$
$$S(t,\bar q,p)=\big[qp+S_0(t,q,p)\big]\Big|_{q=y(t,\bar q)}=\bar qp-apt-b\bar qt+2^{-1}abt^2.$$

Then the classical action $S(t,\bar q,p)$ satisfies the Hamilton–Jacobi equation
$$\frac{\partial}{\partial t}S+H(\bar q,\partial_{\bar q}S)=0,\qquad S(0,\bar q,p)=\bar qp.$$
On the other hand, the van Vleck determinant (though scalar in this case) is calculated as
$$D(t,\bar q,p)=\frac{\partial^2S(t,\bar q,p)}{\partial\bar q\partial p}=1.$$
This quantity satisfies the continuity equation
$$\frac{\partial}{\partial t}D+\partial_{\bar q}(DH_p)=0\quad\text{where}\quad H_p=\frac{\partial H}{\partial p}\Big(\bar q,\frac{\partial S}{\partial\bar q}\Big),\qquad D(0,\bar q,p)=1.$$

As an interpretation of Feynman’s idea, we regard that the transition from classical to quantum is to study the following quantity or the one represented by this (be careful, the term “quantization” is not so well-defined mathematically as functor, so ad-hoc):

−1 u(t, q) = (2π~)−1/2 dp D1/2(t, q,p)ei~ S(t,q,p)uˆ(p). ZR That is, in our case at hand, we should study the quantity defined by

−1 u(t, q) = (2π~)−1/2 dp ei~ S(t,q,p)uˆ(p) ZR −1 −1 −1 2 = (2π~)−1 dpdq ei~ (S(t,q,p)−q p)u(q) = u(q at)ei~ (−bqt+2 abt ) . 2 − ZZR Therefore, we may say that this second construction gives the explicit connection between the solution (1.2.11) and the classical mechanics given by (1.2.12). We feel the above expression “good” because there appears two classical quantities S and D and also explicit dependence on ~.
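A minimal numerical sketch of the last chain of equalities (assuming $\hbar=1$ and taking the Gaussian $u(q)=e^{-q^2/2}$, whose transform is $\hat u(p)=e^{-p^2/2}$, both my choices):

```python
import numpy as np

# Evaluate u(t,qbar) = (2 pi)^(-1/2) Int dp e^{i S(t,qbar,p)} uhat(p), with
# S = qbar p - a p t - b qbar t + a b t^2/2, and compare with the
# characteristics solution u0(qbar - a t) e^{i(-b qbar t + a b t^2/2)}.
a, b, t, qb = 1.3, 0.7, 0.4, 0.25
p = np.linspace(-30.0, 30.0, 200_001)
dp = p[1] - p[0]
S = qb * p - a * p * t - b * qb * t + a * b * t**2 / 2
lhs = (2 * np.pi)**-0.5 * (np.exp(1j * S) * np.exp(-p**2 / 2)).sum() * dp
rhs = np.exp(-(qb - a * t)**2 / 2) * np.exp(1j * (-b * qb * t + a * b * t**2 / 2))
print(abs(lhs - rhs))
```

Since $S$ is linear in $p$, the $p$-integral is just a Fourier inversion at the shifted point $\bar q-at$, which is exactly how the classical flow enters.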

Claim 1.2.2. Applying superanalysis, we may extend the second argument above to systems of PDOs, e.g. quantum mechanical equations with spin such as the Dirac, Weyl or Pauli equations (and, if possible, any other $2^d\times 2^d$ system of PDOs), after interpreting these equations as ones on superspace.

1.2.3. Decomposition of $2\times 2$ matrices by the Clifford algebra.
How does a matrix act on vectors?: The following matrices form a special class in the $2\times 2$ matrices:
$$A=\begin{pmatrix}a&-b\\ b&a\end{pmatrix},\qquad a,b\in\mathbb{R}.$$
This set of matrices $\{A\}$ not only preserves its form under the four rules of arithmetic, but is also mutually commutative. Moreover, we identify this matrix $A$ with the complex number $\alpha=a+ib$.

If we identify a vector $z=\begin{pmatrix}x\\ y\end{pmatrix}$ with the complex number $z=x+iy$, then the multiplication of $z$ by $\alpha$ is considered as
$$\alpha z\sim\begin{pmatrix}a&-b\\ b&a\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}ax-by\\ bx+ay\end{pmatrix}\sim(ax-by)+i(bx+ay)=(a+ib)(x+iy),$$
and also
$$\begin{pmatrix}ax-by\\ bx+ay\end{pmatrix}=\begin{pmatrix}x&-y\\ y&x\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix}.$$
Then, may we find another interpretation of making a $2\times 2$ matrix act on a column vector? Since the above-mentioned interpretation gives many standpoints, is it possible to generalize this idea?

Guided by the theorem of C. Chevalley¹⁵ below, we decompose a $2\times 2$-matrix.

Theorem 1.2.1 (C. Chevalley). Any Clifford algebra has a representation on a Grassmann algebra.

(I) For any $2\times 2$ matrix, we have
$$\begin{pmatrix}a&c\\ d&b\end{pmatrix}=\frac{a+b}{2}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}+\frac{a-b}{2}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}+\frac{c+d}{2}\begin{pmatrix}0&1\\ 1&0\end{pmatrix}+\frac{c-d}{2}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}$$
$$=\frac{a+b}{2}I_2+\frac{a-b}{2}\sigma_3+\frac{c+d}{2}\sigma_1+i\frac{c-d}{2}\sigma_2.$$
Here, $\{\sigma_j\}$ satisfies not only (1.2.2) but also the relation (1.2.3). This decomposition means that the set of all $2\times 2$ matrices is spanned by the Pauli matrices $\{\sigma_k\}$, which carry a Clifford structure.

(II-1) Now, preparing a letter $\theta$ satisfying $\theta^2=0$, we identify the Pauli matrices with differential operators acting on the Grassmann algebra $\Lambda=\{u(\theta)=u_0+u_1\theta\mid u_0,u_1\in\mathbb{C}\}$, i.e. for
$$u_0+u_1\theta\sim\begin{pmatrix}u_0\\ u_1\end{pmatrix},$$
we define the action as
$$\theta u(\theta)=u_0\theta\sim\begin{pmatrix}0\\ u_0\end{pmatrix}=\begin{pmatrix}0&0\\ 1&0\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix},\qquad \frac{\partial}{\partial\theta}u(\theta)=u_1\sim\begin{pmatrix}u_1\\ 0\end{pmatrix}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix}.$$
Then, we have
$$\Big(\theta+\frac{\partial}{\partial\theta}\Big)u(\theta)=u_0\theta+u_1\sim\begin{pmatrix}u_1\\ u_0\end{pmatrix}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix},$$
$$\Big(\theta-\frac{\partial}{\partial\theta}\Big)u(\theta)=u_0\theta-u_1\sim\begin{pmatrix}-u_1\\ u_0\end{pmatrix}=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix},$$
$$\Big(1-2\theta\frac{\partial}{\partial\theta}\Big)u(\theta)=u_0-u_1\theta\sim\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix}.$$
This means that the Pauli matrices are represented as differential operators acting on $\Lambda$. But such a representation is not unique!
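The identification in (II-1) is a statement about $2\times 2$ matrices and can be checked directly. A sketch, storing $u(\theta)=u_0+u_1\theta$ as the coefficient vector $(u_0,u_1)$:

```python
import numpy as np

# On Lambda = {u0 + u1*theta} ~ (u0, u1), multiplication by theta and the
# derivative d/dtheta become the 2x2 matrices below; the combinations in
# (II-1) then reproduce the Pauli matrices (theta - d/dtheta gives -i sigma_2).
TH = np.array([[0, 0], [1, 0]])      # u(theta) -> theta*u(theta) = u0*theta
D = np.array([[0, 1], [0, 0]])       # u(theta) -> du/dtheta = u1

s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])

assert np.allclose(TH + D, s1)                   # theta + d/dtheta ~ sigma_1
assert np.allclose(TH - D, -1j * s2)             # theta - d/dtheta ~ -i sigma_2
assert np.allclose(np.eye(2) - 2 * TH @ D, s3)   # 1 - 2 theta d/dtheta ~ sigma_3
assert np.allclose(TH @ TH, 0 * TH)              # theta^2 = 0
print("differential-operator representation of the Pauli matrices verified")
```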

(II-2) Here is another representation: preparing two letters $\theta_1,\theta_2$ satisfying $\theta_i\theta_j+\theta_j\theta_i=0$ for $i,j=1,2$, we put
$$\Lambda_{\mathrm{ev}}=\{u=u_0+u_1\theta_1\theta_2\mid u_0,u_1\in\mathbb{C}\},\qquad \Lambda_{\mathrm{od}}=\{v=v_1\theta_1+v_2\theta_2\mid v_1,v_2\in\mathbb{C}\},$$
and define differential operators acting on $\Lambda_{\mathrm{ev}}$ as
$$\sigma_1(\theta,\partial_\theta)=\theta_1\theta_2-\frac{\partial^2}{\partial\theta_1\partial\theta_2}:\ u(\theta)\mapsto u_0\theta_1\theta_2+u_1\sim\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix},$$
$$-i\sigma_2(\theta,\partial_\theta)=\theta_1\theta_2+\frac{\partial^2}{\partial\theta_1\partial\theta_2}:\ u(\theta)\mapsto u_0\theta_1\theta_2-u_1\sim\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix},$$
$$\sigma_3(\theta,\partial_\theta)=1-\theta_1\frac{\partial}{\partial\theta_1}-\theta_2\frac{\partial}{\partial\theta_2}:\ u(\theta)\mapsto u_0-u_1\theta_1\theta_2\sim\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}u_0\\ u_1\end{pmatrix}.$$

Remark 1.2.3. The differential operators $\sigma_j(\theta,\partial_\theta)$ defined above annihilate $\Lambda_{\mathrm{od}}$. Moreover, the symbols corresponding to them are "even". This evenness of the Hamiltonian function is crucial for deriving the Hamilton flow corresponding to the Weyl or Dirac equations.

¹⁵Though I don't know how to prove this theorem itself, I'm satisfied by constructing the differential operator representation of $2\times 2$-matrices using the Pauli matrices. Oh, is such a jerry-built attitude permitted for a mathematician?! Or am I far from being a solid mathematician?

CHAPTER 2

Super number and Superspace

To explain the symbols $\theta,\theta_1,\theta_2$ which appeared in the previous lectures, we prepare a set of countably infinite Grassmann generators. After introducing these, we may consider the classical mechanics corresponding to PDEs with spin, which is then rather easily solved.

2.1. Super number

2.1.1. The Grassmann generators. Preparing symbols $\{\sigma_j\}_{j=1}^\infty$ which satisfy the Grassmann relation
$$\text{(2.1.1)}\qquad \sigma_j\sigma_k+\sigma_k\sigma_j=0,\qquad j,k=1,2,\cdots,$$
we put formally
$$\text{(2.1.2)}\qquad C=\Big\{X=\sum_{\mathbf{I}\in\mathcal{I}}X_{\mathbf{I}}\sigma^{\mathbf{I}}\ \Big|\ X_{\mathbf{I}}\in\mathbb{C}\Big\}$$
and
$$C^{(0)}=C^{[0]}=\mathbb{C},\qquad C^{(j)}=\Big\{X=\sum_{|\mathbf{I}|\le j}X_{\mathbf{I}}\sigma^{\mathbf{I}}\Big\},\qquad C^{[j]}=\Big\{X=\sum_{|\mathbf{I}|=j}X_{\mathbf{I}}\sigma^{\mathbf{I}}\Big\}\cong C^{(j)}/C^{(j-1)},$$
where the index set is defined by
$$\mathcal{I}=\Big\{\mathbf{I}=(i_k)\in\{0,1\}^{\mathbb{N}}\ \Big|\ |\mathbf{I}|=\sum_ki_k<\infty\Big\},$$
$$\sigma^{\mathbf{I}}=\sigma_1^{i_1}\sigma_2^{i_2}\cdots\ \text{ for }\ \mathbf{I}=(i_1,i_2,\cdots),\qquad \sigma^{\tilde 0}=1,\quad \tilde 0=(0,0,\cdots)\in\mathcal{I}.$$

Remark 2.1.1. How do we construct symbols $\{\sigma_j\}_{j=1}^\infty$ satisfying the Grassmann relation? What is the meaning of the summation appearing above? These will soon be explained.

In today's lecture, we prove the following proposition, which guarantees that C (or R, defined later) plays a role alternative to that of $\mathbb{C}$ (or $\mathbb{R}$) in analysis.

Proposition 2.1.1 (A. Inoue and Y. Maeda [80]). C forms an $\infty$-dimensional Fréchet–Grassmann algebra over $\mathbb{C}$, that is, an associative, distributive and non-commutative ring with degree, which is endowed with the Fréchet topology.

Remark 2.1.2. There exist some papers using C, for example S. Matsumoto and K. Kakazu [100], Y. Choquet-Bruhat [20], P. Bryant [21]. But, seemingly, they didn't try to construct "elementary and real analysis" on this "ground ring" C (or R).


2.1.2. Sequence spaces and their topologies. Following G. Köthe [90], we introduce the sequence spaces $\omega$ and $\phi$ (effm = except for finitely many):
$$\text{(2.1.3)}\qquad \phi=\{x=(x_k)=(x_1,x_2,\cdots,x_k,\cdots)\mid x_k\in\mathbb{C}\text{ and }x_k=0\text{ effm }k\},\qquad \omega=\{u=(u_k)=(u_1,u_2,\cdots,u_k,\cdots)\mid u_k\in\mathbb{C}\}.$$
For any sequence space $\mathcal{X}$ containing $\phi$, we define the space $\mathcal{X}^\times$ by
$$\mathcal{X}^\times=\Big\{u=(u_k)\ \Big|\ \sum_k|u_k||x_k|<\infty\ \text{ for any }x=(x_k)\in\mathcal{X}\Big\};$$
then we get $\phi^\times=\omega$ and $\omega^\times=\phi$. We introduce the (normal) topology in $\mathcal{X}$ and $\mathcal{X}^\times$ by defining the seminorms
$$\text{(2.1.4)}\qquad p_u(x)=\sum_k|u_k||x_k|=p_x(u)\qquad\text{for }x\in\mathcal{X}\text{ and }u\in\mathcal{X}^\times.$$
Especially, $x^{(n)}$ converges to $x$ in $\phi$, that is, $p_u(x^{(n)}-x)\to 0$ as $n\to\infty$ for each $u\in\omega$, if and only if for any $\epsilon>0$ there exist $L$ and $n_0$ such that
$$\text{(2.1.5)}\qquad \text{(i) }x_k^{(n)}=x_k=0\text{ for }k>L\text{ when }n\ge n_0,\quad\text{and}\quad\text{(ii) }|x_k^{(n)}-x_k|<\epsilon\text{ for }k\le L\text{ when }n\ge n_0.$$
Analogously, $u^{(n)}$ converges to $u$ in $\omega$, that is, $p_x(u^{(n)}-u)\to 0$ as $n\to\infty$ for each $x\in\phi$, if and only if for any $\epsilon>0$ and each $k$ there exists $n_0=n_0(\epsilon,k)$ such that
$$\text{(2.1.6)}\qquad |u_k^{(n)}-u_k|<\epsilon\quad\text{when }n\ge n_0.$$
Clearly, $\omega$ forms a Fréchet space, because the above topology in $\omega$ is equivalent to the one defined by the countable seminorms $\{p_k(u)\}_{k\in\mathbb{N}}$, where $p_k(u)=|u_k|$ for $u=(u_1,u_2,\cdots)=\sum_{j=1}^\infty u_je_j\in\omega$ with $e_j=(0,\cdots,0,\overset{j}{1},0,\cdots)\in\omega$.

Now, we define the isomorphism (dyadic decomposition) from $\mathcal{I}$ onto $\mathbb{N}$ by
$$\text{(2.1.7)}\qquad r:\mathcal{I}\ni\mathbf{I}=(i_k)\to r(\mathbf{I})=1+\frac12\sum_{k=1}^\infty 2^ki_k\in\mathbb{N},\qquad i_k=0\text{ or }1.$$
Using $r(\mathbf{I})$ in (2.1.7), we define a map
$$T:\sigma^{\mathbf{I}}\to e_{r(\mathbf{I})}\qquad\text{for }\mathbf{I}=(i_k)\in\mathcal{I}.$$
Extending this map linearly, we put
$$\text{(2.1.8)}\qquad T(X)=\sum_{\mathbf{I}}X_{\mathbf{I}}e_{r(\mathbf{I})}\in\omega\qquad\text{for }X=\sum_{|\mathbf{I}|\le j}X_{\mathbf{I}}\sigma^{\mathbf{I}}\in C^{(j)}.$$
More explicitly, the first few coefficients read
$$(X_{(0,0,0,\cdots)},X_{(1,0,0,\cdots)},X_{(0,1,0,\cdots)},X_{(1,1,0,\cdots)},X_{(0,0,1,\cdots)},X_{(1,0,1,\cdots)},X_{(0,1,1,\cdots)},\cdots).$$
Then, since $T(C^{[j]})$ and $T(C^{[k]})$ are disjoint in $\omega$ if $j\neq k$, we have
$$\text{(2.1.9)}\qquad \sum_{j=0}^\infty T(C^{[j]})=\omega.$$
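A sketch of the labelling (2.1.7), checking that $r$ enumerates the multi-indices exactly in the order of the coefficients displayed after (2.1.8):

```python
# r(I) = 1 + (1/2) * sum_k 2^k i_k, sending the multi-index I = (i_1, i_2, ...)
# to a position in the sequence space omega (dyadic decomposition (2.1.7)).
def r(I):
    return 1 + sum(2**k * i for k, i in enumerate(I, start=1)) // 2

order = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1),
         (1, 0, 1), (0, 1, 1)]
assert [r(I) for I in order] == [1, 2, 3, 4, 5, 6, 7]
print("r enumerates the multi-indices in the order listed in the text")
```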

Therefore, it is reasonable to write as in (2.1.2), and more precisely,
$$\text{(2.1.10)}\qquad C=\bigoplus_{j=0}^\infty C^{[j]},\quad\text{that is,}\quad X=\sum_{j=0}^\infty X^{[j]}\quad\text{with}\quad X^{[j]}=\sum_{|\mathbf{I}|=j}X_{\mathbf{I}}\sigma^{\mathbf{I}}.$$
Here, $X^{[j]}$ is called the $j$-th degree component of $X\in C$. By definition, we get
$$\text{(2.1.11)}\qquad C^{(j)}\subset C^{(k)}\ \text{ for }j\le k,\qquad C^{(j)}=\sum_{k=0}^jC^{[k]},$$
$$\text{(2.1.12)}\qquad C^{[j]}\cdot C^{[k]}\subset C^{[j+k]}\quad\text{and}\quad C^{(j)}\cdot C^{(k)}\subset C^{(j+k)}.$$

Remark 2.1.3. The second relation, with $C^{(*)}$, in (2.1.12) also holds for Clifford algebras, but the first one, with $C^{[\cdot]}$, is specific to the Grassmann algebras satisfying (2.1.1). Here, the Clifford relation for $\{e_j\}$ is defined by
$$\text{(2.1.13)}\qquad e_ie_j+e_je_i=2\delta_{ij}I\qquad\text{for any }i,j=1,2,\cdots.$$
Typical examples, though with finitely many rather than countably many elements, are the $2\times 2$ Pauli matrices $\{e_j\}=\{\sigma_j\}_{j=1,2,3}$ and the $4\times 4$ Dirac matrices $\{e_j\}_{j=0,1,2,3}=\{\beta,\alpha_1,\alpha_2,\alpha_3\}$.

2.1.3. Topology. We introduce the weakest topology in C which makes the map $T$ continuous from C to $\omega$; that is, $X=\sum_{\mathbf{I}\in\mathcal{I}}X_{\mathbf{I}}\sigma^{\mathbf{I}}\to 0$ in C if and only if $\mathrm{proj}_{\mathbf{I}}(X)\to 0$ for each $\mathbf{I}\in\mathcal{I}$, with $\mathrm{proj}_{\mathbf{I}}(X)=X_{\mathbf{I}}$. It is equivalent to the metric $\mathrm{dist}(X,Y)=\mathrm{dist}(X-Y)$ defined by
$$\text{(2.1.14)}\qquad \mathrm{dist}(X)=\sum_{\mathbf{I}\in\mathcal{I}}\frac{1}{2^{r(\mathbf{I})}}\frac{|\mathrm{proj}_{\mathbf{I}}(X)|}{1+|\mathrm{proj}_{\mathbf{I}}(X)|}\qquad\text{for any }X\in C.$$
For example, $X^{(\ell)}=f(\ell)\sigma_1\cdots\sigma_\ell\to 0$ in C even if $f(\ell)\to\infty$, because $\mathrm{dist}(X^{(\ell)})\le 2^{-2^\ell+1}$.

Comparison 2.1.1. The sequence space $\omega$ may be regarded as a ring of formal power series in an indeterminate $X$ (see, for example, p.25 or p.91 of F. Treves [135]). That is,
$$\mathbb{C}[[X]]=\Big\{u=u(X)=\sum_{n=0}^\infty u_nX^n\ \Big|\ u_n\in\mathbb{C}\Big\}\cong\{u=(u_0,u_1,\cdots,u_n,\cdots)\mid u_n\in\mathbb{C}\}.$$
Introducing the "standard" algebraic operations and taking as a fundamental neighbourhood system
$$V_{m,n}=\Big\{u=u(X)=\sum_{p=0}^\infty u_pX^p\ \Big|\ u_p\in\mathbb{C},\ |u_p|\le\frac1m\text{ for any }p\le n\Big\},$$
we may define a Fréchet topology on it.
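A sketch of the example following (2.1.14): for $\mathbf{I}=(1,\cdots,1,0,\cdots)$ with $\ell$ ones, $r(\mathbf{I})=2^\ell$, and since $|X|/(1+|X|)<1$ the weight $2^{-r(\mathbf{I})}$ crushes any coefficient, however large:

```python
# The single term of (2.1.14) for the monomial f(l) * sigma_1 ... sigma_l:
# r((1,...,1,0,...)) = 1 + (2 + 4 + ... + 2^l)/2 = 2^l.
def r_of_ones(l):
    return 1 + sum(2**k for k in range(1, l + 1)) // 2

def dist_monomial(coeff, l):
    return 2.0**-r_of_ones(l) * abs(coeff) / (1 + abs(coeff))

# Even with coefficients growing like 10^l, the distance to 0 collapses.
for l in range(1, 8):
    assert dist_monomial(10.0**l, l) < 2.0**-(2**l)
print([dist_monomial(10.0**l, l) for l in range(1, 6)])
```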

2.1.4. Algebraic operations – addition and product. For any $X,Y\in C$, we define
$$\text{(2.1.15)}\qquad X+Y=\sum_{j=0}^\infty(X+Y)^{[j]}\quad\text{with}\quad (X+Y)^{[j]}=X^{[j]}+Y^{[j]}\ \text{ for }j\ge 0$$
and
$$\text{(2.1.16)}\qquad XY=\sum_{j=0}^\infty(XY)^{[j]}\quad\text{where}\quad (XY)^{[j]}=\sum_{k=0}^jX^{[j-k]}Y^{[k]}=\sum_{|\mathbf{I}|=j}(XY)_{\mathbf{I}}\sigma^{\mathbf{I}}.$$
Here, $(XY)_{\mathbf{I}}=\sum_{\mathbf{I}=\mathbf{J}+\mathbf{K}}(-1)^{\tau(\mathbf{I};\mathbf{J},\mathbf{K})}X_{\mathbf{J}}Y_{\mathbf{K}}\in\mathbb{C}$ is well-defined, because for any set $\mathbf{I}\in\mathcal{I}$ there exist only finitely many decompositions by sets $\mathbf{J},\mathbf{K}$ satisfying $\mathbf{I}=\mathbf{J}\dot+\mathbf{K}$ (i.e. $\mathbf{I}=\mathbf{J}\cup\mathbf{K}$, $\mathbf{J}\cap\mathbf{K}=\emptyset$). Here, the indices $\tau(\mathbf{I};\mathbf{J},\mathbf{K})$, or more generally $\tau(\mathbf{I};\mathbf{J}_1,\cdots,\mathbf{J}_k)$, are defined by
$$\text{(2.1.17)}\qquad (-1)^{\tau(\mathbf{I};\mathbf{J}_1,\cdots,\mathbf{J}_k)}\sigma^{\mathbf{J}_1}\cdots\sigma^{\mathbf{J}_k}=\sigma^{\mathbf{I}}\quad\text{with}\quad \mathbf{I}=\mathbf{J}_1\dot+\mathbf{J}_2\dot+\cdots\dot+\mathbf{J}_k.$$
But for notational simplicity, we will use $(-1)^{\tau(*)}$ without specifying the decomposition if no confusion occurs.

Exercise 2.1.1. Prove that for sets $\mathbf{J},\mathbf{K}$ satisfying $\mathbf{I}=\mathbf{J}+\mathbf{K}$,
$$(-1)^{|\mathbf{J}||\mathbf{K}|}(-1)^{\tau(\mathbf{I};\mathbf{J},\mathbf{K})}=(-1)^{\tau(\mathbf{I};\mathbf{K},\mathbf{J})}.$$
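The identity of Exercise 2.1.1 can also be verified mechanically: representing a monomial $\sigma^{\mathbf{I}}$ by its tuple of indices, $\tau(\mathbf{I};\mathbf{J},\mathbf{K})$ is the parity of the permutation sorting the concatenation $(\mathbf{J},\mathbf{K})$ into $\mathbf{I}$, i.e. an inversion count. A sketch:

```python
from itertools import combinations

# (-1)^tau as the parity of the permutation sorting the concatenated
# index sequence (each adjacent swap of anticommuting generators costs -1).
def sign_tau(J, K):
    seq = list(J) + list(K)
    assert len(set(seq)) == len(seq)            # disjoint decomposition
    inv = sum(1 for a, b in combinations(seq, 2) if a > b)
    return (-1)**inv

I = (1, 2, 3, 5, 8)
for m in range(len(I) + 1):
    for J in combinations(I, m):
        K = tuple(sorted(set(I) - set(J)))
        # Exercise 2.1.1: (-1)^{|J||K|} (-1)^{tau(I;J,K)} = (-1)^{tau(I;K,J)}
        assert (-1)**(len(J) * len(K)) * sign_tau(J, K) == sign_tau(K, J)
print("Exercise 2.1.1 verified for all decompositions of", I)
```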

Moreover, we get

Lemma 2.1.1. The product defined by (2.1.16) is continuous from $C\times C\to C$.

Proof. It is simple, by noting that there exist $2^{|\mathbf{I}|}$ elements $\mathbf{J}\in\mathcal{I}$ satisfying $\mathbf{J}\subset\mathbf{I}$ and that
$$|(XY)_{\mathbf{I}}|\le\sum_{\mathbf{I}=\mathbf{J}+\mathbf{K}}|X_{\mathbf{J}}||Y_{\mathbf{K}}|\le 2^{r(\mathbf{I})}\big(\max_{\mathbf{J}\subset\mathbf{I}}|X_{\mathbf{J}}|\big)\big(\max_{\mathbf{K}\subset\mathbf{I}}|Y_{\mathbf{K}}|\big)\qquad\text{for any }X,Y\in C.\qquad\square$$

Proof of Proposition 2.1.1. Clearly, we get
$$X(YZ)=(XY)Z\ \text{ (associativity)},\qquad X(Y+Z)=XY+XZ\ \text{ (distributivity)}.$$
The other properties have been proved above. $\square$

Remark 2.1.4. We may consider that an element $X\in C$ stands for the 'state' in which the position labeled by $\sigma^{\mathbf{I}}$ is occupied by $X_{\mathbf{I}}\in\mathbb{C}$. In other words, considering $\{\sigma_i\}$ as countably many indeterminate letters, it seems reasonable to regard C as a set of certain formal power series¹ with a simple topology. Therefore, it is permitted to reorder the terms freely under the 'summation sign'. That is, the summation $\sum_{\mathbf{I}\in\mathcal{I}}X_{\mathbf{I}}e_{r(\mathbf{I})}$ is 'unconditionally (though not absolutely) convergent'², and so is $\sum_{\mathbf{I}\in\mathcal{I}}X_{\mathbf{I}}\sigma^{\mathbf{I}}$. We use such a big space with a rather weak topology because this algebra is considered as the ambient space for reordering the places. We feel such a big ambient space will be preferable and tractable for our future use.

Remark 2.1.5. (1) As $\{C^{(j)}\}$ forms a filter by (2.1.11) and (2.1.12), it gives a 0-neighbourhood base of the linear topology of C, which is equivalent to the one defined above by (2.1.6). (See G. Köthe [90] for the linear topology of vector spaces.)
(2) We may introduce a stronger topology in C, called the topology by degree; that is, $X^{(n)}\xrightarrow{s}X$ in C means that
(i) there exists $\ell\ge 0$ such that $X^{(n)}_{\mathbf{I}}=X_{\mathbf{I}}=0$ for any $n$ and $\mathbf{I}$ when $|\mathbf{I}|>\ell$, and
(ii) $|X^{(n)}_{\mathbf{I}}-X_{\mathbf{I}}|\to 0$ as $n\to\infty$ when $|\mathbf{I}|\le\ell$.

¹with the special property that the same letter appears only once in each monomial
²diverting the terminology of the basis problem in Banach spaces

2.1.5. The supernumber. The set C defined by (2.1.2) is called the (complex) supernumber algebra over $\mathbb{C}$, and any element $X$ of C is called a (complex) supernumber.

Parity: We introduce the parity in C by setting
$$\text{(2.1.18)}\qquad p(X)=\begin{cases}0&\text{if }X=\sum_{\mathbf{I}\in\mathcal{I},|\mathbf{I}|=\mathrm{ev}}X_{\mathbf{I}}\sigma^{\mathbf{I}},\\[2pt] 1&\text{if }X=\sum_{\mathbf{I}\in\mathcal{I},|\mathbf{I}|=\mathrm{od}}X_{\mathbf{I}}\sigma^{\mathbf{I}}.\end{cases}$$
$X\in C$ is called homogeneous if it satisfies $p(X)=0$ or $=1$. We put also
$$\text{(2.1.19)}\qquad C_{\mathrm{ev}}=\bigoplus_{j=0}^\infty C^{[2j]}=\{X\in C\mid p(X)=0\},\qquad C_{\mathrm{od}}=\bigoplus_{j=0}^\infty C^{[2j+1]}=\{X\in C\mid p(X)=1\},\qquad C=C_{\mathrm{ev}}\oplus C_{\mathrm{od}}\cong C_{\mathrm{ev}}\times C_{\mathrm{od}}.$$
Moreover, any supernumber splits into its even and odd parts, called the (complex) even number and (complex) odd number, respectively:

$$\text{(2.1.20)}\qquad X=X_{\mathrm{ev}}+X_{\mathrm{od}}=\sum_{|\mathbf{I}|=\mathrm{ev}}X_{\mathbf{I}}\sigma^{\mathbf{I}}+\sum_{|\mathbf{J}|=\mathrm{od}}X_{\mathbf{J}}\sigma^{\mathbf{J}}=\sum_{j=\mathrm{ev}}X^{[j]}+\sum_{j=\mathrm{od}}X^{[j]}.$$
Using (2.1.20), we decompose
$$\text{(2.1.21)}\qquad X=X_{\mathrm{B}}+X_{\mathrm{S}}\quad\text{where}\quad X_{\mathrm{S}}=\sum_{1\le j<\infty}X^{[j]}\quad\text{and}\quad X_{\mathrm{B}}=X_{\tilde 0}=X^{[0]},$$
and the number $X_{\mathrm{B}}$ is called the body (part) of $X$ and the remainder $X_{\mathrm{S}}$ the soul (part) of $X$, respectively. We define the map $\pi_{\mathrm{B}}$ from C to $\mathbb{C}$ by $\pi_{\mathrm{B}}(X)=X_{\mathrm{B}}$, called the body projection (or the augmentation map).

Remark 2.1.6 (Important³). C does not form a field, because $X^2=0$ for any $X\in C_{\mathrm{od}}$. But it is easily proved that
(i) if $X$ satisfies $XY=0$ for any $Y\in C_{\mathrm{od}}$, then $X=0$, and
(ii) the decomposition of $X$ with respect to degree in (2.1.10) is unique.
These properties hold only when the number of Grassmann generators is infinite. For example, if the number of Grassmann generators is finite, say $n$, then the number $\sigma_1\sigma_2\cdots\sigma_n$, which is not zero, is annihilated by multiplication with any odd number generated by $\{\sigma_j\}_{j=1}^n$.

Lemma 2.1.2 (the invertible elements). Let $X\in C$ with $X_{\mathrm{B}}\neq 0$. Then there exists a unique element $Y\in C$ such that $XY=1=YX$.

Proof. In fact, decomposing $X=X_{\mathrm{B}}+X_{\mathrm{S}}$ and $Y=Y_{\mathrm{B}}+Y_{\mathrm{S}}$, we should have
$$X_{\mathrm{B}}Y_{\mathrm{B}}=1,\qquad X_{\mathrm{B}}Y_{\mathrm{S}}+X_{\mathrm{S}}Y_{\mathrm{B}}+X_{\mathrm{S}}Y_{\mathrm{S}}=0.$$
Therefore, putting $X_{\mathrm{S}}=\sum_{|\mathbf{I}|>0}X_{\mathbf{I}}\sigma^{\mathbf{I}}$ and $Y_{\mathrm{S}}=\sum_{|\mathbf{J}|>0}Y_{\mathbf{J}}\sigma^{\mathbf{J}}$, and noting that $\sigma^{\mathbf{I}}\sigma^{\mathbf{J}}=(-1)^{\tau(\mathbf{K};\mathbf{I},\mathbf{J})}\sigma^{\mathbf{K}}$ for $\mathbf{K}=\mathbf{I}+\mathbf{J}$, we have
$$Y_{\mathrm{B}}=X_{\mathrm{B}}^{-1},\qquad Y_{\mathbf{K}}=-X_{\mathrm{B}}^{-1}\sum_{\mathbf{K}=\mathbf{I}+\mathbf{J}}(-1)^{\tau(\mathbf{K};\mathbf{I},\mathbf{J})}X_{\mathbf{I}}Y_{\mathbf{J}}.$$
For example,
$$\text{for }|\mathbf{K}|=1:\quad Y_{\mathbf{K}}=-X_{\mathrm{B}}^{-1}X_{\mathbf{K}}Y_{\mathrm{B}},\ \cdots,\qquad \text{for }|\mathbf{K}|=\ell:\quad Y_{\mathbf{K}}=-X_{\mathrm{B}}^{-1}\sum_{\mathbf{K}=\mathbf{I}+\mathbf{J},\,|\mathbf{J}|<\ell}(-1)^{\tau(\mathbf{K};\mathbf{I},\mathbf{J})}X_{\mathbf{I}}Y_{\mathbf{J}}.$$
If $X_{\mathrm{B}}=0$, there exists no $Y$ satisfying $XY=1$ or $YX=1$. $\square$

³This important fact was not mentioned at lecture time; what a bonehead!

Now, we define our (real) supernumber algebra by

$$\text{(2.1.22)}\qquad R=\pi_{\mathrm{B}}^{-1}(\mathbb{R})\cap C=\Big\{X=\sum_{\mathbf{I}\in\mathcal{I}}X_{\mathbf{I}}\sigma^{\mathbf{I}}\ \Big|\ X_{\mathrm{B}}\in\mathbb{R}\text{ and }X_{\mathbf{I}}\in\mathbb{C}\text{ for }|\mathbf{I}|\neq 0\Big\}.$$
Decomposing as before, we have
$$\text{(2.1.23)}\qquad R=R_{\mathrm{ev}}\oplus R_{\mathrm{od}},\qquad R=\bigoplus_{j=0}^\infty R^{[j]}.$$
Analogously to C, we put
$$\text{(2.1.24)}\qquad R=\{X\in C\mid \pi_{\mathrm{B}}X\in\mathbb{R}\},\quad R^{[j]}=R\cap C^{[j]},\quad R_{\mathrm{ev}}=R\cap C_{\mathrm{ev}},\quad R_{\mathrm{od}}=R\cap C_{\mathrm{od}}=C_{\mathrm{od}},\quad R\cong R_{\mathrm{ev}}\oplus R_{\mathrm{od}}\cong R_{\mathrm{ev}}\times R_{\mathrm{od}}.$$
Here, we introduced the body (projection) map $\pi_{\mathrm{B}}$ by $\pi_{\mathrm{B}}X=\mathrm{proj}_{\tilde 0}(X)=X_{\tilde 0}=X_{\mathrm{B}}$. $R^{(j)}$ and other terminologies are introduced analogously.
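Lemma 2.1.2 can be illustrated in a finite model (the text insists on countably many generators for other reasons; finitely many suffice for the inversion itself). A sketch, storing an element as a dict from index sets to coefficients and inverting $X=X_{\mathrm{B}}(1+W)$ by the terminating Neumann series, which reproduces the degree-by-degree recursion of the proof:

```python
from itertools import combinations

# Finite Grassmann model with N generators; an element is {frozenset(I): coeff}.
# The product sign (-1)^tau is the inversion count of the concatenated indices.
N = 4

def mul(X, Y):
    Z = {}
    for I, x in X.items():
        for J, y in Y.items():
            if I & J:                       # repeated generator: sigma_i^2 = 0
                continue
            seq = sorted(I) + sorted(J)
            inv = sum(1 for a, b in combinations(seq, 2) if a > b)
            K = I | J
            Z[K] = Z.get(K, 0) + (-1)**inv * x * y
    return {K: c for K, c in Z.items() if abs(c) > 1e-12}

def inverse(X):
    # X = X_B (1 + W) with W = X_S / X_B nilpotent, so
    # X^{-1} = X_B^{-1} (1 - W + W^2 - ...) terminates after N steps.
    XB = X[frozenset()]
    Z = {I: -c / XB for I, c in X.items() if I}     # Z = -W
    Y, P = {frozenset(): 1.0}, {frozenset(): 1.0}
    for _ in range(N):
        P = mul(P, Z)
        for I, c in P.items():
            Y[I] = Y.get(I, 0) + c
    return {I: c / XB for I, c in Y.items()}

X = {frozenset(): 2.0, frozenset({1}): 1.0,
     frozenset({2, 3}): -0.5, frozenset({1, 4}): 3.0}
res = mul(X, inverse(X))
assert set(res) == {frozenset()} and abs(res[frozenset()] - 1.0) < 1e-12
print("X * X^{-1} = 1 in the finite Grassmann model")
```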

2.1.6. Conjugation. We define the operation of "complex" conjugation, denoted by $\bar X$, as follows: denoting the complex conjugate of $X_{\mathbf{I}}\in\mathbb{C}$ by $\bar X_{\mathbf{I}}$ and defining $\bar\sigma^{\mathbf{I}}=\sigma_{i_n}\cdots\sigma_{i_1}$ for $\mathbf{I}=(i_1,\cdots,i_n)$, we put
$$\text{(2.1.25)}\qquad \bar X=\sum_{\mathbf{I}\in\mathcal{I}}\bar X_{\mathbf{I}}\bar\sigma^{\mathbf{I}}=\sum_{\mathbf{I}\in\mathcal{I}}(-1)^{\frac{|\mathbf{I}|(|\mathbf{I}|-1)}{2}}\bar X_{\mathbf{I}}\sigma^{\mathbf{I}}.$$
Then,

Lemma 2.1.3. For $X,Y\in C$ and $\lambda\in\mathbb{C}$, we have
$$\text{(2.1.26)}\qquad \overline{(\bar X)}=X,\qquad \overline{XY}=\bar Y\bar X,\qquad \overline{\lambda X}=\bar\lambda\bar X.$$

Proof. To prove the second equality, we remark that $\overline{\sigma^{\mathbf{I}}\sigma^{\mathbf{J}}}=\bar\sigma^{\mathbf{J}}\bar\sigma^{\mathbf{I}}$. In fact, for $\mathbf{K}=\mathbf{I}+\mathbf{J}$,
$$\sigma^{\mathbf{I}}\sigma^{\mathbf{J}}=(-1)^{\tau(\mathbf{K};\mathbf{I},\mathbf{J})}\sigma^{\mathbf{K}},\qquad \overline{\sigma^{\mathbf{I}}\sigma^{\mathbf{J}}}=(-1)^{\tau(\mathbf{K};\mathbf{I},\mathbf{J})}(-1)^{\frac{|\mathbf{K}|(|\mathbf{K}|-1)}{2}}\sigma^{\mathbf{K}},$$
$$\bar\sigma^{\mathbf{J}}\bar\sigma^{\mathbf{I}}=(-1)^{\frac{|\mathbf{J}|(|\mathbf{J}|-1)}{2}}(-1)^{\frac{|\mathbf{I}|(|\mathbf{I}|-1)}{2}}\sigma^{\mathbf{J}}\sigma^{\mathbf{I}}=(-1)^{\frac{|\mathbf{J}|(|\mathbf{J}|-1)}{2}}(-1)^{\frac{|\mathbf{I}|(|\mathbf{I}|-1)}{2}}(-1)^{\tau(\mathbf{K};\mathbf{J},\mathbf{I})}\sigma^{\mathbf{K}},$$
and, by Exercise 2.1.1, $(-1)^{|\mathbf{I}||\mathbf{J}|}(-1)^{\tau(\mathbf{K};\mathbf{I},\mathbf{J})}=(-1)^{\tau(\mathbf{K};\mathbf{J},\mathbf{I})}$. Therefore, we get the desired result. $\square$

Moreover, we have, if $K = I + J$,

$(-1)^{\tau(K;I,J)}(-1)^{|K|(|K|-1)/2} = (-1)^{|I|(|I|-1)/2}(-1)^{|J|(|J|-1)/2}(-1)^{\tau(K;J,I)}.$

Remark 2.1.7. We may introduce "reals" as those $X \in C$ with $\bar X = X$, or, from a purely aesthetic point of view, the set of "reals" may be defined by

$R = \{\, X = \sum_{I\in\mathcal{I}} X_I\sigma^I \mid X_I \in \mathbb{R} \,\},$

but we don't use this "real" in the sequel. Because the analysis is really done on the body part, while the soul part is used not only for reordering places but also as an "imaginary" part, we imagine that the set

$R_F = \{\, x = \sum_{I\in\mathcal{I}} x_I\sigma^I \mid x_B \in \mathbb{R} \text{ and } x_I \in F \,\}$

would be more natural as our "supernumber algebra". Here, $F$ should be an associative algebra on which we may define seminorms analogously as before. This point of view will be discussed if the necessity occurs.

Remark 2.1.8. There is another possible way of defining the conjugation: by the Hahn-Banach extension theorem, we may define $\bar\sigma_j$ as a linear mapping from $C$ to $\mathbb{C}$ such that $\langle\bar\sigma_j,\sigma_k\rangle = \delta_{jk}$, and by this we may introduce the pairing $\langle\cdot,\cdot\rangle$ between $C$ and $\bar C$, which is the Grassmann algebra generated by $\{\bar\sigma_j\}$ and whose Fréchet topology is compatible with the duality above. In this case, putting $\bar\sigma^I = \bar\sigma_{i_n}\cdots\bar\sigma_{i_1}$ for $I = (i_1,\cdots,i_n)$ and

$X^* = \sum_{I\in\mathcal{I}} \bar X_I\,\bar\sigma^I = \sum_{I\in\mathcal{I}} (-1)^{|I|(|I|-1)/2}\,\bar X_I\,\bar\sigma_{i_1}\cdots\bar\sigma_{i_n},$

we have (2.1.26) as well.

2.2. Superspace

Definition 2.2.1. The super Euclidean space or (real) superspace $R^{m|n}$ of dimension $m|n$ is defined by

(2.2.1) $R^{m|n} = R_{ev}^m \times R_{od}^n \ni X = ({}^t x, {}^t\theta),$

where $x = {}^t(x_1,\cdots,x_m)$ and $\theta = {}^t(\theta_1,\cdots,\theta_n)$ with $x_j \in R_{ev}$, $\theta_s \in R_{od}$.

Notation: In the following, we abbreviate the symbol 'transposed' in ${}^t(x_1,\cdots,x_m)$ and write $x = (x_1,\cdots,x_m)$, etc., unless confusion occurs.

The topology of $R^{m|n}$ is induced from the metric defined by $\mathrm{dist}_{m|n}(X,Y) = \mathrm{dist}_{m|n}(X - Y)$ for $X, Y \in R^{m|n}$, where we put

(2.2.2) $\mathrm{dist}_{m|n}(X) = \sum_{j=1}^m \sum_{I\in\mathcal{I}} \dfrac{1}{2^{r(I)}}\dfrac{|\mathrm{proj}_I(x_j)|}{1+|\mathrm{proj}_I(x_j)|} + \sum_{s=1}^n \sum_{I\in\mathcal{I}} \dfrac{1}{2^{r(I)}}\dfrac{|\mathrm{proj}_I(\theta_s)|}{1+|\mathrm{proj}_I(\theta_s)|}.$

Clearly, $\mathrm{dist}_{1|1}(X) = \mathrm{dist}(X)$ for $X \in R^{1|1} \cong R \subset C$. Analogously, the complex superspace of dimension $m|n$ is defined by

(2.2.3) $C^{m|n} = C_{ev}^m \times C_{od}^n.$

We generalize the body map $\pi_B$ from $R^{m|n}$ or $R^{m|0}$ to $\mathbb{R}^m$ by $\pi_B X = \pi_B x = (\pi_B x_1,\cdots,\pi_B x_m) \in \mathbb{R}^m$ for $X = (x,\theta) \in R^{m|n}$. The (complex) superspace $C^{m|n}$ is treated analogously.

Dual superspace. We denote the superspace $R^{m|n}$ by $R_X^{m|n}$, whose point is presented by $X = (x,\theta) = (x_1,\cdots,x_m,\theta_1,\cdots,\theta_n)$. We prepare another superspace $R_\Xi^{m|n}$, whose point is denoted by $\Xi = (\xi,\pi) = (\xi_1,\cdots,\xi_m,\pi_1,\cdots,\pi_n)$, such that they are "dual" to each other by

(2.2.4) $\langle X|\Xi\rangle_{m|n} = \sum_{j=1}^m \langle x_j|\xi_j\rangle + \sum_{k=1}^n \langle\theta_k|\pi_k\rangle \in R_{ev}.$

Or, for any $\hbar \in \mathbb{R}^\times$ and $\bar k \in \mathbb{C}^\times$, we may put

(2.2.5) $\langle X|\Xi\rangle_{\hbar|\bar k} = \hbar^{-1}\sum_{j=1}^m \langle x_j|\xi_j\rangle + \bar k^{-1}\sum_{k=1}^n \langle\theta_k|\pi_k\rangle \in R_{ev}.$

We abbreviate the above $\langle\cdot|\cdot\rangle_{m|n}$ or $\langle\cdot|\cdot\rangle_{\hbar|\bar k}$ by $\langle\cdot|\cdot\rangle$ unless confusion occurs.

2.3. Rogers' construction of countably many Grassmann generators

We borrow her construction from A. Rogers [113]. Denote by $\mathcal{M}_L$ the set of integer sequences given by

$\mathcal{M}_L = \{\mu \mid \mu = (\mu_1,\mu_2,\cdots,\mu_k),\ 1 \le \mu_1 < \mu_2 < \cdots < \mu_k \le L\} \quad\text{and}\quad \mathcal{M}_\infty = \cup_{L=1}^\infty \mathcal{M}_L.$

We regard $\emptyset \in \mathcal{M}_L$, and for any $j \in \mathbb{N}$, we put $(j) \in \mathcal{M}_\infty$. To each $r \in \mathbb{N}$ we may associate a member $\mu \in \mathcal{M}_\infty$ by using

(2.3.1) $r = \dfrac{1}{2}\,(2^{\mu_1} + 2^{\mu_2} + \cdots + 2^{\mu_k}).$

Conversely, for each $\mu \in \mathcal{M}_\infty$, we define $e_\mu$ as $e_\mu = (0,\cdots,0,1,0,\cdots)$ with the $1$ in the $r$-th place, where $r$ and $\mu$ are related by (2.3.1). Then, $w = \sum_\mu w_\mu e_\mu$. Now, we introduce the multiplication by

(2.3.2) $e_\emptyset e_\mu = e_\mu e_\emptyset = e_\mu$ for $\mu \in \mathcal{M}_\infty$, $\quad e_{(i)}e_{(j)} = -e_{(j)}e_{(i)}$ for $i, j \in \mathbb{N}$, $\quad e_\mu = e_{(\mu_1)}e_{(\mu_2)}\cdots e_{(\mu_k)}$ where $\mu = (\mu_1,\mu_2,\cdots,\mu_k).$

That is, we identify

$\omega \ni w = (w_1, w_2, w_3, w_4,\cdots) = \sum_j w_j e_{(j)} \longleftrightarrow (w_{(1)}, w_{(2)}, w_{(1,2)}, w_{(3)},\cdots) = \sum_\mu w_\mu e_\mu$

where

$e_{(j)} \leftrightarrow \sigma_j, \qquad e_{(1)}e_{(2)} = e_{(1,2)} \leftrightarrow \sigma_1\sigma_2 = \sigma^{I}$ with $I = (1,1,0,\cdots),$

$e_\mu = e_{(\mu_1)}e_{(\mu_2)}\cdots e_{(\mu_k)} \leftrightarrow \sigma_{\mu_1}\sigma_{\mu_2}\cdots\sigma_{\mu_k} = \sigma^{I_\mu}$ with $I_\mu = (0,\cdots,0,1,0,\cdots,0,1,0,\cdots)$, the $1$'s in the $\mu_1$-th, $\cdots$, $\mu_k$-th places.

Defining $\sigma_j = e_{(j)}$, we have a countably infinite Grassmann algebra by (2.3.2).

Instead of the sequence space $\omega$, Rogers uses $\ell^1$ to construct the real Banach-Grassmann algebra, the set of absolutely convergent series $X = \sum_{I\in\mathcal{I}} X_I\sigma^I$ with $X_I \in \mathbb{R}$ and

$\|X\| = \sum_{I\in\mathcal{I}} |X_I| < \infty,$

such that $\|XY\| \le \|X\|\,\|Y\|$.

Proposition 2.3.1 (Rogers). $\ell^1$ with the above multiplication forms a Banach-Grassmann algebra with countably infinite generators.
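The coding (2.3.1) of multi-indices by sequence positions is just binary expansion: the digits of $2r$ record which generators occur. A small sketch (the function names are mine):

```python
def index_to_multi(r):
    """Recover mu = (mu_1 < ... < mu_k) from r = (2^{mu_1}+...+2^{mu_k})/2:
    the binary digits of 2r mark the members of mu."""
    mu, m, p = [], 2 * r, 0
    while m:
        if m & 1:
            mu.append(p)
        m >>= 1
        p += 1
    return tuple(mu)

def multi_to_index(mu):
    """Inverse map, formula (2.3.1) itself."""
    return sum(2 ** j for j in mu) // 2
```

For instance, the third coordinate corresponds to $\mu = (1,2)$, i.e. to $e_{(1)}e_{(2)} = e_{(1,2)}$, matching the identification displayed above.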

Remark 2.3.1. There are many papers treating supermanifolds based on a ground ring with Banach-Grassmann structure (for example, V.S. Vladimirov and I.V. Volovich [138], etc.). This phenomenon is rather reasonable, because the inverse and implicit function theorems hold in Banach spaces just as in the Euclidean case, but not, in general, in Fréchet spaces. But the condition $\sum_{I\in\mathcal{I}} |X_I| < \infty$ is too difficult to check in our concrete problems. This is similar to proving the Cauchy-Kovalevskaya theorem by the method of undetermined coefficients, where convergence is checked by the majorant test.

Remark 2.3.2. Concerning inverse or implicit function theorems in certain Fréchet spaces, see R. Hamilton [57], or J.T. Schwartz [124] for Nash's implicit function theorem.

CHAPTER 3

Linear algebra on the superspace

In this chapter, we quote results from F.A. Berezin [10], B.S. deWitt [34] and D.A. Leites [97] with modifications if necessary.

Remark 3.0.3. Almost all papers with the prefix "super" treat the case of finitely many odd variables together with finitely many Grassmann generators; or rather, they don't distinguish odd variables from Grassmann generators. In any case, after slight modification if necessary, the algebraic operations not affected by the topology are borrowed from these papers.

3.1. Matrix algebras on the superspace

3.1.1. Super matrices.

Definition 3.1.1. A rectangular array $M$, whose cells are indexed by pairs consisting of a row number and a column number, is called a supermatrix, denoted by $M \in \mathrm{Mat}\,((m|n)\times(r|s) : C)$, if it satisfies the following:

(1) An $(m+n)\times(r+s)$ matrix $M$ is decomposed blockwise as

$M = \begin{pmatrix} A & C \\ D & B \end{pmatrix},$

where $A$, $B$, $C$ and $D$ are $m\times r$, $n\times s$, $m\times s$ and $n\times r$ matrices with elements in $C$, respectively.
(2) One of the following conditions is satisfied:
• either $p(M) = 0$, that is, $p(A_{jk}) = 0 = p(B_{uv})$ and $p(C_{jv}) = 1 = p(D_{uk})$, or
• $p(M) = 1$, that is, $p(A_{jk}) = 1 = p(B_{uv})$ and $p(C_{jv}) = 0 = p(D_{uk})$.
We call $M$ even, denoted by $M \in \mathrm{Mat}_{ev}((m|n)\times(r|s) : C)$ (resp. odd, denoted by $M \in \mathrm{Mat}_{od}((m|n)\times(r|s) : C)$), if $p(M) = 0$ (resp. $p(M) = 1$). Therefore, we have

$\mathrm{Mat}\,((m|n)\times(r|s) : C) = \mathrm{Mat}_{ev}((m|n)\times(r|s) : C) \oplus \mathrm{Mat}_{od}((m|n)\times(r|s) : C).$

Moreover, we may decompose $M$ as $M = M_B + M_S$, where

$M_B = \begin{pmatrix} A_B & 0 \\ 0 & B_B \end{pmatrix}$ when $p(M) = 0$, $\qquad M_B = \begin{pmatrix} 0 & C_B \\ D_B & 0 \end{pmatrix}$ when $p(M) = 1.$

The sum of two matrices in $\mathrm{Mat}_{ev}((m|n)\times(r|s) : C)$ or in $\mathrm{Mat}_{od}((m|n)\times(r|s) : C)$ is defined as usual, but the sum of an even and an odd supermatrix is not defined unless at least one of them is the zero matrix.


It is clear that if $M$ is an $(m+n)\times(r+s)$ matrix and $N$ an $(r+s)\times(p+q)$ matrix, then we may define the product $MN$ and its parity $p(MN)$ as

$(MN)_{ij} = \sum_k M_{ik}N_{kj}, \qquad p(MN) = p(M) + p(N) \bmod 2.$

Moreover, we define $\mathrm{Mat}\,[m|n : C]$ as the algebra of $(m+n)\times(m+n)$ supermatrices.

3.1.2. Matrices as Linear Transformations. By the definition of the action of a matrix on a vector, we have

$\mathrm{Mat}_{ev}((m|n)\times(r|s) : C) \ni M = \begin{pmatrix} A & C \\ D & B \end{pmatrix} : R^{r|s} \to R^{m|n},$

$\mathrm{Mat}_{od}((m|n)\times(r|s) : C) \ni M = \begin{pmatrix} A & C \\ D & B \end{pmatrix} : R^{r|s} \to R_{od}^m \times R_{ev}^n,$

$\mathrm{Mat}_{od}((n|m)\times(m|n) : C) \ni \Lambda_{n,m} = \begin{pmatrix} 0 & I_n \\ I_m & 0 \end{pmatrix} : R_{od}^m \times R_{ev}^n \to R_{ev}^n \times R_{od}^m = R^{n|m}.$

For elements $X = (x,\theta) = (x_1,\cdots,x_m,\theta_1,\cdots,\theta_n)$ and $\Xi = (\xi,\pi) = (\xi_1,\cdots,\xi_m,\pi_1,\cdots,\pi_n)$ in $R^{m|n}$, we define $\bar X$ and $\langle X|\Xi\rangle_{m|n}$ as in (2.1.25) and (2.2.4), respectively. If we introduce the duality between $R^{m|n}$ as in (2.2.4), we may define the transposed matrix by

$\langle MX|\Xi\rangle_{m|n} = \langle X|{}^tM\,\Xi\rangle_{r|s}$ for any $M \in \mathrm{Mat}_{ev}((m|n)\times(r|s) : C),$

for $X = (x,\theta) \in R^{r|s}$ and $\Xi = (\xi,\omega) \in R^{m|n}$. More precisely, we have

${}^tM = {}^t\begin{pmatrix} A & C \\ D & B \end{pmatrix} = \begin{pmatrix} {}^tA & {}^tD \\ -{}^tC & {}^tB \end{pmatrix}$ and ${}^{tttt}M = M.$

Analogously, defining the duality between $C_Z^{m|n}$ and $C_\Upsilon^{m|n}$ for $Z = (z,\theta) \in C^{r|s}$, $\Upsilon = (w,\rho) \in C^{m|n}$ by

$\langle Z|\Upsilon\rangle_{m|n} = \sum_{j=1}^m z_j w_j + \sum_{k=1}^n \theta_k\rho_k,$

we denote the conjugate (or adjoint) matrix of $A$ by $A^* = {}^t\bar A$, etc. Then, we may introduce $M^*$, the conjugate (or adjoint) of the matrix $M$, by

$\langle MZ|\Upsilon\rangle_{m|n} = \langle Z|M^*\Upsilon\rangle_{r|s}.$

Therefore, we have

$M^* = \begin{pmatrix} A & C \\ D & B \end{pmatrix}^* = \begin{pmatrix} A^* & D^* \\ C^* & -B^* \end{pmatrix}$ and $M^{**} = M.$

Lemma 3.1.1. For $M \in \mathrm{Mat}\,((m|n)\times(r|s) : C)$ and $N \in \mathrm{Mat}\,((r|s)\times(p|q) : C)$, we have

$(MN)^t = N^t M^t, \qquad (MN)^* = N^* M^*, \qquad (M^t)^t = \Lambda M\Lambda, \quad\text{where } \Lambda = \begin{pmatrix} I_m & 0 \\ 0 & -I_n \end{pmatrix}.$

If $M \in \mathrm{Mat}\,[m|n : C]$ is even, denoted by $M \in \mathrm{Mat}_{ev}[m|n : C]$, then $M$ acts on $R^{m|n}$ linearly. Denoting this action by $T_M$, we call it a super linear transformation on $R^{m|n}$, and $M$ is called the representative matrix of $T_M$.

Proposition 3.1.1. Let $M \in \mathrm{Mat}_{ev}[m|n : C]$ and assume $\det M_B \neq 0$. Then, for given $Y \in R^{m|n}$,

(3.1.1) $T_M X = Y$

has a unique solution $X \in R^{m|n}$, which is denoted by $X = M^{-1}Y$.

Proof. Since $M_B$ has the inverse matrix $M_B^{-1}$, (3.1.1) is reduced to

$X + N_S X = Y', \qquad Y' = M_B^{-1}Y,$

where $N_S = M_B^{-1}M_S$. Remark that $N_S X^{[j]} \in \sum_{k \ge j+1} C^{[k]}$ for $j \ge 0$. Decomposing by degree, we get

$X^{[j]} = Y'^{[j]} - (N_S X^{(j-1)})^{[j]}$ for $j = 1, 2, \ldots.$

As $X^{(0)} = X^{[0]} = Y'^{[0]}$, we get $X^{[j]}$ from $X^{(j-1)}$ for $j \ge 1$ by induction. $\square$

Exercise 3.1.1. How about $M \in \mathrm{Mat}_{od}((m|n)\times(n|m) : C)$?

Definition 3.1.2. $M \in \mathrm{Mat}_{ev}[m|n : C]$ is called invertible or non-singular if $M_B$ is invertible, i.e. $\det A_B\cdot\det B_B \neq 0$; this is denoted by $M \in \mathrm{GL}_{ev}[m|n : C]$.
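The induction in the proof of Proposition 3.1.1 is a finite Neumann series, since $N_S$ raises the degree. As a numeric stand-in (an assumption: the Grassmann-valued soul is modeled here by a nilpotent real matrix, and the function name is mine), the same recursion reads:

```python
import numpy as np

def solve_body_soul(MB, MS, Y, depth):
    """Solve (MB + MS) X = Y by the recursion of Proposition 3.1.1:
    X = sum_k (-MB^{-1} MS)^k MB^{-1} Y, a finite sum when MS is
    nilpotent (the 'soul' stand-in)."""
    MBinv = np.linalg.inv(MB)
    NS = MBinv @ MS
    X = term = MBinv @ Y
    for _ in range(depth):
        term = -NS @ term
        X = X + term
    return X

MB = np.diag([2.0, 3.0, 4.0])               # invertible 'body'
MS = np.array([[0.0, 1.0, 2.0],
               [0.0, 0.0, 1.5],
               [0.0, 0.0, 0.0]])            # nilpotent 'soul' stand-in
Y = np.array([1.0, 2.0, 3.0])
X = solve_body_soul(MB, MS, Y, depth=3)
```

Because $N_S = M_B^{-1}M_S$ is nilpotent here, the loop terminates after finitely many steps with the exact solution, mirroring $X^{[j]} = Y'^{[j]} - (N_S X^{(j-1)})^{[j]}$.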

3.2. Supertrace, superdeterminant

3.2.1. Supertrace.

Lemma 3.2.1. Let $V$, $W$ be two rectangular matrices with odd elements, of sizes $m\times n$ and $n\times m$, respectively. We have
(1) $\mathrm{tr}\,(VW)^k = -\mathrm{tr}\,(WV)^k$ for any $k = 1, 2, \cdots$.
(2) $\det(I_m + VW) = \det(I_n + WV)^{-1}$.

Proof. Let $V = (v_{ij})$, $W = (w_{jk})$ with $v_{ij}, w_{jk} \in C_{od}$. Since each factor is odd, moving the first factor cyclically past the remaining $2k-1$ odd factors produces the sign $(-1)^{2k-1} = -1$:

$\mathrm{tr}\,(VW)^k = \sum v_{ij_1}w_{j_1j_2}v_{j_2j_3}\cdots w_{j_{2k-1}i} = -\sum w_{j_1j_2}v_{j_2j_3}\cdots w_{j_{2k-1}i}\,v_{ij_1} = -\mathrm{tr}\,(WV)^k.$

Using this, we have $\mathrm{tr}\,((WV)^{\ell-1}WV) = -\mathrm{tr}\,(V(WV)^{\ell-1}W)$, which yields

$\log\det(I_n + WV) = \mathrm{tr}\,\log(I_n + WV) = \sum_\ell \dfrac{(-1)^{\ell+1}}{\ell}\,\mathrm{tr}\,((WV)^{\ell-1}WV) = -\sum_\ell \dfrac{(-1)^{\ell+1}}{\ell}\,\mathrm{tr}\,((VW)^{\ell}) = -\log\det(I_m + VW). \quad\square$

Comparison 3.2.1. If $A = (a_{ij}) \in \mathrm{Mat}\,(m\times n : C_{ev})$ and $B = (b_{jk}) \in \mathrm{Mat}\,(n\times m : C_{ev})$, then we have
(1) $\mathrm{tr}\,(AB)^k = \mathrm{tr}\,(BA)^k$,

(2) $\det(I_m + AB) = \det(I_n + BA).$

Definition 3.2.1. Let $M = \begin{pmatrix} A & C \\ D & B \end{pmatrix} \in \mathrm{Mat}\,[m|n : C]$. We define the supertrace of $M$ by

$\mathrm{str}\,M = \mathrm{tr}\,A - (-1)^{p(M)}\,\mathrm{tr}\,B.$
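Comparison 3.2.1 is an ordinary-matrix statement and can be spot-checked numerically (the sizes and the random seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))   # commuting ("even") entries
B = rng.standard_normal((n, m))

# (1) tr (AB)^k = tr (BA)^k for k = 1, 2, 3
lhs_tr = [np.trace(np.linalg.matrix_power(A @ B, k)) for k in (1, 2, 3)]
rhs_tr = [np.trace(np.linalg.matrix_power(B @ A, k)) for k in (1, 2, 3)]

# (2) det(I_m + AB) = det(I_n + BA), even though AB and BA differ in size
lhs_det = np.linalg.det(np.eye(m) + A @ B)
rhs_det = np.linalg.det(np.eye(n) + B @ A)
```

Note the contrast with Lemma 3.2.1: for odd entries the trace identity picks up a minus sign and the determinant identity an inverse.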

Using Lemma 3.2.1, we get readily

Proposition 3.2.1. (a) Let $M, N \in \mathrm{Mat}\,[m|n : C]$ be such that $p(M) + p(N) \equiv 0 \bmod 2$. Then we have $\mathrm{str}\,(M+N) = \mathrm{str}\,M + \mathrm{str}\,N$.
(b) Let $M$ be a matrix of size $(m+n)\times(r+s)$ and $N$ a matrix of size $(r+s)\times(m+n)$. Then

$\mathrm{str}\,(MN) = (-1)^{p(M)p(N)}\,\mathrm{str}\,(NM).$

3.2.2. Superdeterminant. For an even supermatrix, we put

Definition 3.2.2. Let $M$ be a supermatrix. When $\det B_B \neq 0$, we put

$\mathrm{sdet}\,M = \det(A - CB^{-1}D)\cdot(\det B)^{-1}$

and call it the superdeterminant or Berezinian of $M$.

Corollary 3.2.1. When $\det B_B \neq 0$ and $\mathrm{sdet}\,M \neq 0$, then $\det A_B \neq 0$.

Exercise 3.2.1. Prove the above corollary.

For the reader's sake, we recall the definition in the commutative setting.

Definition 3.2.3. Let $B = (B_{jk})$ be an $(\ell\times\ell)$-matrix with elements in $C_{ev}$, denoted by $B \in \mathrm{Mat}\,[\ell : C_{ev}]$. As $C_{ev}$ is a commutative ring, we may define $\det B$ as usual:

$\det B = \sum_{\rho\in\wp_\ell} \mathrm{sgn}\,(\rho)\, B_{1\rho(1)}\cdots B_{\ell\rho(\ell)}.$

Then, analogously to the ordinary case, we have

(3.2.1) $\det(AB) = \det A\cdot\det B, \qquad \det(\exp A) = \exp(\mathrm{tr}\,A)$ for $A, B \in \mathrm{Mat}\,[\ell : C_{ev}].$

Moreover, for the block-matrix case, we have

Comparison 3.2.2. Let

$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad M = \begin{pmatrix} I_m & 0 \\ -A_{22}^{-1}A_{21} & I_n \end{pmatrix},$

be block matrices of even elements. Then, we have

(3.2.2) $\det A = \det(AM) = \det\begin{pmatrix} A_{11} - A_{12}A_{22}^{-1}A_{21} & A_{12} \\ 0 & A_{22} \end{pmatrix} = \det(A_{11} - A_{12}A_{22}^{-1}A_{21})\cdot\det A_{22}.$

Remark 3.2.1. It seems meaningful to cite here the result of F.J. Dyson [40]:

Theorem 3.2.1 (Dyson). Let $R$ be a ring with a unit element and without divisors of zero. Assume that on the matrix ring $A$ with $n > 1$, a mapping $D$ exists satisfying the following axioms:
Axiom 1. For any $a \in A$, $D(a) = 0$ if and only if there is a non-zero $w \in W$ with $aw = 0$. Here, $W$ is the set of single-column matrices with elements in $R$.
Axiom 2. $D(a)D(b) = D(ab)$.
Axiom 3. Let the elements of $a$ be $a_{ij}$, $i, j = 1,\cdots,n$, and similarly for $b$ and $c$. If for some row-index $k$ we have

$a_{ij} = b_{ij} = c_{ij}$ for $i \neq k$, $\qquad a_{kj} + b_{kj} = c_{kj}$ for $i = k$,

then D(a)+ D(b)= D(c). Then, R is commutative.

This theorem states that if the elements of a matrix are taken from a non-commutative algebra, then it is impossible to define a determinant having the above three properties. But he claims that a certain 'determinant' can be defined for some class of matrices with elements in the 'quaternions', requiring only one or two of the properties above (by the way, Moore's point of view is reconsidered significantly in that paper). In fact, we may define the "superdeterminant" for "supermatrices" as above, which satisfies the properties below.

Now, we continue to study the properties of the superdeterminant defined in the previous lecture.

The following decomposition of an even supermatrix $M$ will be useful:

(3.2.3) $\begin{pmatrix} A & C \\ D & B \end{pmatrix} = \begin{pmatrix} I_m & CB^{-1} \\ 0 & I_n \end{pmatrix}\begin{pmatrix} A - CB^{-1}D & 0 \\ 0 & B \end{pmatrix}\begin{pmatrix} I_m & 0 \\ B^{-1}D & I_n \end{pmatrix}$ if $\det B_B \neq 0,$

$\begin{pmatrix} A & C \\ D & B \end{pmatrix} = \begin{pmatrix} I_m & 0 \\ DA^{-1} & I_n \end{pmatrix}\begin{pmatrix} A & 0 \\ 0 & B - DA^{-1}C \end{pmatrix}\begin{pmatrix} I_m & A^{-1}C \\ 0 & I_n \end{pmatrix}$ if $\det A_B \neq 0.$

Moreover, we have

$\begin{pmatrix} \tilde A & \tilde C \\ \tilde D & \tilde B \end{pmatrix} = \begin{pmatrix} A & C \\ D & B \end{pmatrix}^{-1} = \begin{pmatrix} (A - CB^{-1}D)^{-1} & -A^{-1}C(B - DA^{-1}C)^{-1} \\ -B^{-1}D(A - CB^{-1}D)^{-1} & (B - DA^{-1}C)^{-1} \end{pmatrix}$

$= \begin{pmatrix} (I_m - A^{-1}CB^{-1}D)^{-1}A^{-1} & -(I_m - A^{-1}CB^{-1}D)^{-1}A^{-1}CB^{-1} \\ -(I_n - B^{-1}DA^{-1}C)^{-1}B^{-1}DA^{-1} & (I_n - B^{-1}DA^{-1}C)^{-1}B^{-1} \end{pmatrix}$

$= \begin{pmatrix} A^{-1}(I_m - CB^{-1}DA^{-1})^{-1} & -A^{-1}CB^{-1}(I_n - DA^{-1}CB^{-1})^{-1} \\ -B^{-1}DA^{-1}(I_m - CB^{-1}DA^{-1})^{-1} & B^{-1}(I_n - DA^{-1}CB^{-1})^{-1} \end{pmatrix}$

(3.2.4) $\mathrm{sdet}\begin{pmatrix} A & C \\ D & B \end{pmatrix} = (\det A)(\det B)^{-1}\det(I_m - A^{-1}CB^{-1}D)$
$= (\det A)(\det B)^{-1}\det(I_m - CB^{-1}DA^{-1}) = (\det\tilde A)^{-1}(\det B)^{-1}$
$= (\det A)(\det B)^{-1}\bigl[\det(I_n - B^{-1}DA^{-1}C)\bigr]^{-1}$
$= (\det A)(\det B)^{-1}\bigl[\det(I_n - DA^{-1}CB^{-1})\bigr]^{-1} = (\det A)(\det\tilde B).$

As we have

(3.2.5) $\begin{pmatrix} I_m & -CB^{-1} \\ -DA^{-1} & I_n \end{pmatrix}\begin{pmatrix} A & C \\ D & B \end{pmatrix} = \begin{pmatrix} A - CB^{-1}D & 0 \\ 0 & B - DA^{-1}C \end{pmatrix},$

we guarantee the invertibility of the matrices appearing above.
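The manipulations above rest on the commutative Schur-complement identity (3.2.2), which survives verbatim for the body parts. A numeric spot check (the sizes and diagonal shifts are arbitrary choices of mine, made only to keep $A_{22}$ invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2
A11 = rng.standard_normal((m, m))
A12 = rng.standard_normal((m, n))
A21 = rng.standard_normal((n, m))
A22 = rng.standard_normal((n, n)) + 3 * np.eye(n)   # keep A22 invertible

M = np.block([[A11, A12], [A21, A22]])
# det M = det(A11 - A12 A22^{-1} A21) * det A22, formula (3.2.2)
schur = A11 - A12 @ np.linalg.inv(A22) @ A21
det_via_schur = np.linalg.det(schur) * np.linalg.det(A22)
```

In the super case, the same Schur complement appears in the numerator of $\mathrm{sdet}$, but the determinant of the odd-odd block moves to the denominator.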

Lemma 3.2.2. (1) Let $L \in \mathrm{Mat}\,[\ell : C_{ev}]$ be such that the product of any two entries of it is zero. Then

$(I_\ell + L)^{-1} = I_\ell - L, \qquad \det(I_\ell + L) = 1 + \mathrm{tr}\,L.$

(2) Let $M \in \mathrm{Mat}_{ev}[m|n : C]$ be such that the product of any two entries of it is zero. Then

$\mathrm{sdet}\,(I_{m+n} + M) = 1 + \mathrm{str}\,M.$

Proof. (1) Remarking

$(I_\ell + L)^{-1} = I_\ell - L + L^2 - L^3 + \cdots \quad\text{and}\quad \det(e^L) = e^{\mathrm{tr}\,L},$

we get the result readily.

(2) For $M = \begin{pmatrix} A & C \\ D & B \end{pmatrix}$, satisfying $C(I_n + B)^{-1}D = 0$ and $\mathrm{tr}\,A\cdot\mathrm{tr}\,B = 0$, both guaranteed by the product of any two entries of $M$ being zero,

$\mathrm{sdet}\,(I_{m+n} + M) = \det(I_m + A - C(I_n + B)^{-1}D)\cdot\det(I_n + B)^{-1}$
$= \det(I_m + A)\cdot\det(I_n - B) = 1 + \mathrm{tr}\,A - \mathrm{tr}\,B = 1 + \mathrm{str}\,M. \quad\square$

Corollary 3.2.2. When $\det B_B \neq 0$ and $\mathrm{sdet}\,M \neq 0$, then $\det A_B \neq 0$.

Exercise 3.2.2. Prove the above corollary.

Theorem 3.2.2. Let $M, N \in \mathrm{Mat}\,[m|n : C]$.
(1) If $M$ is invertible, then we have $\mathrm{sdet}\,M \neq 0$. Moreover, if $A$ is nonsingular, then

(3.2.6) $(\mathrm{sdet}\,M)^{-1} = (\det A)^{-1}\cdot\det(B - DA^{-1}C).$

(2) Multiplicativity of sdet:

(3.2.7) $\mathrm{sdet}\,(MN) = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N.$

(3) str and sdet are matrix invariants. That is, if $N$ is invertible, then

(3.2.8) $\mathrm{str}\,M = (-1)^{p(M)+p(N)}\,\mathrm{str}\,(NMN^{-1}), \qquad \mathrm{sdet}\,M = \mathrm{sdet}\,(NMN^{-1}).$

Proof (due to Leites [97]). (1) By

(3.2.9) $\begin{pmatrix} A & C \\ D & B \end{pmatrix} = \begin{pmatrix} I_m & 0 \\ DA^{-1} & I_n \end{pmatrix}\begin{pmatrix} A & 0 \\ 0 & B - DA^{-1}C \end{pmatrix}\begin{pmatrix} I_m & A^{-1}C \\ 0 & I_n \end{pmatrix}$ if $\det A_B \neq 0,$

we have readily by definition $\mathrm{sdet}\,M = \det A\,(\det(B - DA^{-1}C))^{-1}$, which yields (3.2.6).

(2) [Step 1]: Let $\mathcal{G}_+$, $\mathcal{G}_0$ and $\mathcal{G}_-$ be subgroups of $\mathrm{GL}\,[m|n : C]$, given by

$\mathcal{G}_+ = \left\{\begin{pmatrix} I_m & C \\ 0 & I_n \end{pmatrix}\right\}, \qquad \mathcal{G}_0 = \left\{\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}\right\}, \qquad \mathcal{G}_- = \left\{\begin{pmatrix} I_m & 0 \\ D & I_n \end{pmatrix}\right\}.$

Then we have $M = M_+M_0M_-$ with $M_+ \in \mathcal{G}_+$, $M_0 \in \mathcal{G}_0$ and $M_- \in \mathcal{G}_-$, i.e., for any $M \in \mathrm{GL}\,[m|n : C]$,

(3.2.10) $M = \begin{pmatrix} A & C \\ D & B \end{pmatrix} = \begin{pmatrix} I_m & CB^{-1} \\ 0 & I_n \end{pmatrix}\begin{pmatrix} A - CB^{-1}D & 0 \\ 0 & B \end{pmatrix}\begin{pmatrix} I_m & 0 \\ B^{-1}D & I_n \end{pmatrix}$ if $\det B_B \neq 0.$

Remarking that

$\begin{pmatrix} I_m & C \\ 0 & I_n \end{pmatrix}\times\begin{pmatrix} I_m & C' \\ 0 & I_n \end{pmatrix} = \begin{pmatrix} I_m & C + C' \\ 0 & I_n \end{pmatrix},$

we introduce the notion of elementary matrices, having the form

$\begin{pmatrix} I_m & E \\ 0 & I_n \end{pmatrix}$

where $E$ has only one non-zero entry.

[Step 2]: We claim $\mathrm{sdet}\,(MN) = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N$ whenever $M \in \mathcal{G}_+$ or $M \in \mathcal{G}_0$, and similarly whenever $N \in \mathcal{G}_0$ or $N \in \mathcal{G}_-$. For example, when

$M = \begin{pmatrix} I_m & C' \\ 0 & I_n \end{pmatrix} \in \mathcal{G}_+, \qquad N = \begin{pmatrix} A & C \\ D & B \end{pmatrix},$

we have

$\mathrm{sdet}\,(MN) = \mathrm{sdet}\left[\begin{pmatrix} I_m & C' \\ 0 & I_n \end{pmatrix}\begin{pmatrix} A & C \\ D & B \end{pmatrix}\right] = \mathrm{sdet}\begin{pmatrix} A + C'D & C + C'B \\ D & B \end{pmatrix}$
$= \det(A + C'D - (C + C'B)B^{-1}D)\cdot(\det B)^{-1} = \det(A - CB^{-1}D)\cdot(\det B)^{-1}$
$= \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N.$

Exercise 3.2.3. Check the other cases analogously.

[Step 3]: We claim that $\mathrm{sdet}\,(MN) = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N$ for any elementary matrix

$N = \begin{pmatrix} I_m & E \\ 0 & I_n \end{pmatrix} \in \mathcal{G}_+.$

Since we have

$\mathrm{sdet}\,(MN) = \mathrm{sdet}\,(M_+(M_0M_-N)) = \mathrm{sdet}\,M_+\cdot\mathrm{sdet}\,(M_0(M_-N)) = \mathrm{sdet}\,M_0\cdot\mathrm{sdet}\,(M_-N),$
$\mathrm{sdet}\,M\cdot\mathrm{sdet}\,N = \mathrm{sdet}\,M_0\cdot\mathrm{sdet}\,M_-\cdot\mathrm{sdet}\,N,$

by Steps 1 and 2, we need to prove

$\mathrm{sdet}\,(M_-N) = \mathrm{sdet}\,M_-\cdot\mathrm{sdet}\,N = 1$

when $N$ is an elementary matrix. By definition,

$\mathrm{sdet}\left[\begin{pmatrix} I_m & 0 \\ D & I_n \end{pmatrix}\begin{pmatrix} I_m & E \\ 0 & I_n \end{pmatrix}\right] = \mathrm{sdet}\begin{pmatrix} I_m & E \\ D & I_n + DE \end{pmatrix} = \det(1 - E(1 + DE)^{-1}D)\cdot\det(1 + DE)^{-1}.$

As $E$ has only one non-zero entry, the product of any two of the matrices $E$, $DE$, $E(1 + DE)^{-1}D$ is zero. Applying the Lemma, we get, by $(1 + DE)^{-1} = 1 - DE$ and $E\cdot DE = 0$,

$\mathrm{sdet}\,(M_-N) = \det(1 - ED)\cdot(\det(1 + DE))^{-1} = (1 - \mathrm{tr}\,ED)(1 + \mathrm{tr}\,DE)^{-1}.$

As $\mathrm{tr}\,DE = -\mathrm{tr}\,ED$, we have

$\mathrm{sdet}\,(M_-N) = 1 = \mathrm{sdet}\,M_-\cdot\mathrm{sdet}\,N.$

[Step 4]: Put

$\mathcal{G} = \bigl\{\, N \in \mathrm{GL}\,[m|n : C] \mid \mathrm{sdet}\,(MN) = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N \text{ for any } M \in \mathrm{GL}\,[m|n : C] \,\bigr\}.$

For $N_1, N_2 \in \mathcal{G}$, we have

(3.2.11) $\mathrm{sdet}\,(MN_1N_2) = \mathrm{sdet}\,((MN_1)N_2) = \mathrm{sdet}\,(MN_1)\cdot\mathrm{sdet}\,N_2 = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N_1\cdot\mathrm{sdet}\,N_2 = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,(N_1N_2),$

which implies that $\mathcal{G}$ forms a group. By Steps 2 and 3, $\mathcal{G}$ contains $\mathcal{G}_-$ and $\mathcal{G}_0$ and all elementary matrices $N \in \mathcal{G}_+$. Since, by Step 1, $\mathrm{GL}\,[m|n : C]$ is generated by these matrices, we have $\mathcal{G} = \mathrm{GL}\,[m|n : C]$, that is,

$\mathrm{sdet}\,(MN) = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N.$

(3) Let $N$, $M$ be given. Then, using (3.2.11), we get

$\mathrm{str}\,NMN^{-1} = (-1)^{p(N)p(MN^{-1})}\,\mathrm{str}\,MN^{-1}N = (-1)^{p(N)+p(M)}\,\mathrm{str}\,M,$

since $p(MN^{-1}) = p(M) + p(N^{-1}) \bmod 2$ and $0 = p(NN^{-1}) = p(N) + p(N^{-1}) \bmod 2$, so that $p(N)p(MN^{-1}) = p(N) + p(M) \bmod 2$.

Using (3.2.11), we have $\mathrm{sdet}\,(MN) = \mathrm{sdet}\,(NM)$, which implies $\mathrm{sdet}\,(NMN^{-1}) = \mathrm{sdet}\,(N^{-1}NM) = \mathrm{sdet}\,M$. $\square$

Theorem 3.2.3 (Liouville's theorem: Theorem 3.5 of [10]). Let $M(t) \in \mathrm{Mat}\,[m|n : C]$ depend on a real parameter $t$. Let $X(t) \in \mathrm{Mat}\,[m|n : C]$ satisfy

(3.2.12) $\dfrac{d}{dt}X(t) = M(t)X(t), \qquad X(0) = I_{m+n}.$

Then $X(t) \in \mathrm{GL}\,[m|n : C]$, and

(3.2.13) $\mathrm{sdet}\,X(t) = \exp\Bigl\{\int_0^t ds\,\mathrm{str}\,M(s)\Bigr\}.$

Proof (with slight modification of Berezin's proof in [10]). Let $\tilde X(t)$ be a solution of

$\dfrac{d}{dt}\tilde X(t) = -\tilde X(t)M(t), \qquad \tilde X(0) = I_{m+n}.$

Then, since $\dfrac{d}{dt}(\tilde X(t)X(t)) = 0$ with $\tilde X(0)X(0) = I_{m+n}$, we have $\tilde X(t)X(t) = I_{m+n}$, which implies $X(t) \in \mathrm{GL}\,[m|n : C]$.

Let

$M(t) = \begin{pmatrix} A(t) & C(t) \\ D(t) & B(t) \end{pmatrix}, \qquad X(t) = \begin{pmatrix} X_{11}(t) & X_{12}(t) \\ X_{21}(t) & X_{22}(t) \end{pmatrix}.$

Then we put $Y(t) = X_{11}(t) - X_{12}(t)X_{22}^{-1}(t)X_{21}(t)$ and $Z(t) = X_{22}^{-1}(t)$. Differentiating $X_{22}^{-1}X_{22} = I_n$ w.r.t. $t$ and substituting $\dot X_{22} = DX_{12} + BX_{22}$, which is obtained from (3.2.12), we have

$\dfrac{d}{dt}Z = -Z(DX_{12}X_{22}^{-1} + B).$

Calculating analogously, we get

$\dfrac{d}{dt}Y = (A - X_{12}X_{22}^{-1}D)Y.$

As all elements appearing in the above equations are even, we may apply the classical Liouville theorem to obtain

$\dfrac{d}{dt}\det Y = \mathrm{tr}\,(A - X_{12}X_{22}^{-1}D)\,\det Y, \qquad \dfrac{d}{dt}\det Z = -\mathrm{tr}\,(DX_{12}X_{22}^{-1} + B)\,\det Z.$

Putting $V = X_{12}X_{22}^{-1}$ and $W = D$ in Lemma 3.2.1, we get $\mathrm{tr}\,(A - X_{12}X_{22}^{-1}D) = \mathrm{tr}\,(A + DX_{12}X_{22}^{-1})$; therefore, recalling the definition of the superdeterminant, we have

$\dfrac{d}{dt}\mathrm{sdet}\,X = \dfrac{d}{dt}(\det Y\cdot\det Z) = (\mathrm{tr}\,A - \mathrm{tr}\,B)\cdot\det Y\cdot\det Z = \mathrm{str}\,M\cdot\mathrm{sdet}\,X \quad\text{with } \mathrm{sdet}\,X(0) = 1.$

This yields the desired result after integrating w.r.t. $t$. $\square$

Corollary 3.2.3. For $M, N \in \mathrm{Mat}_{ev}[m|n : C]$ we have

(3.2.14) $\mathrm{sdet}\,(MN) = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N, \qquad \exp(\mathrm{str}\,M) = \mathrm{sdet}\,(\exp M).$

Proof. (1) Put $X(t) = (1-t)I_{m+n} + tM$ and $Y(t) = (1-t)I_{m+n} + tN$. As $X(t)$ and $Y(t)$ are differentiable in $t$ and invertible except for at most finitely many $t$, we may define

$A(t) = \dfrac{dX(t)}{dt}X(t)^{-1}, \qquad B(t) = \dfrac{dY(t)}{dt}Y(t)^{-1}.$

Then

$\dfrac{d}{dt}(X(t)Y(t)) = (A(t) + B_1(t))X(t)Y(t), \quad\text{where } B_1(t) = X(t)B(t)X(t)^{-1}.$

Applying the above theorem, we have

$\mathrm{sdet}\,(MN) = \mathrm{sdet}\,(X(1)Y(1)) = \exp\Bigl\{\int_0^1 ds\,\mathrm{str}\,(A(s) + B_1(s))\Bigr\} = \exp\Bigl\{\int_0^1 ds\,(\mathrm{str}\,A(s) + \mathrm{str}\,B(s))\Bigr\}$
$= \mathrm{sdet}\,X(1)\cdot\mathrm{sdet}\,Y(1) = \mathrm{sdet}\,M\cdot\mathrm{sdet}\,N.$

(2) Putting $M(t) = M$, $X(t) = e^{tM}$ and $t = 1$ in the theorem above, we get the desired result. $\square$
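On the diagonal blocks, the proof of Theorem 3.2.3 reduces to the classical Liouville formula for $\det X(t)$; that classical statement is easy to confirm numerically. A sketch (the RK4 integrator and the example coefficient matrix are my own choices, not part of the text):

```python
import numpy as np

def cauchy_operator(A, t0, t1, steps=4000):
    """RK4 integration of X' = A(t) X, X(t0) = I, i.e. the Cauchy
    operator of the linear system x' = A(t) x."""
    n = A(t0).shape[0]
    X, h, t = np.eye(n), (t1 - t0) / steps, t0
    for _ in range(steps):
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return X

A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.3 * t]])
X1 = cauchy_operator(A, 0.0, 1.0)
# classical Liouville: det X(1) = exp(int_0^1 tr A(s) ds) = exp(0.15)
```

The super version replaces $\det$ by $\mathrm{sdet}$ and $\mathrm{tr}$ by $\mathrm{str}$, as in (3.2.13).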

Comparison 3.2.3 (cited from "Encyclopaedia of Mathematics", ed. M. Hazewinkel). Liouville-Ostrogradski formula (or Liouville formula): a relation that connects the Wronskian of a system of solutions and the coefficients of a system of ordinary linear differential equations.

Let $x_1(t),\cdots,x_n(t)$ be an arbitrary system of solutions of a homogeneous system of linear first-order equations

(3.2.15) x′ = A(t)x, x Rn ∈ with an operator A(t) that is continuous on an interval I, and let

$W(x_1(t),\cdots,x_n(t)) = W(t)$

be the Wronskian of this system of solutions. The Liouville-Ostrogradski formula has the form

(3.2.16) $\dfrac{d}{dt}W(t) = W(t)\cdot\mathrm{tr}\,A(t), \qquad t \in I,$

or, equivalently,

(3.2.17) $W(x_1(t),\cdots,x_n(t)) = W(x_1(t_0),\cdots,x_n(t_0))\cdot\exp\Bigl\{\int_{t_0}^t ds\,\mathrm{tr}\,A(s)\Bigr\}, \qquad t, t_0 \in I.$

Here, $\mathrm{tr}\,A(t)$ is the trace of the operator $A(t)$. The Liouville-Ostrogradski formula can be written by means of the Cauchy operator $X(t,t_0)$ of the system (3.2.15) as follows:

(3.2.18) $\det X(t,t_0) = \exp\Bigl\{\int_{t_0}^t ds\,\mathrm{tr}\,A(s)\Bigr\}, \qquad t, t_0 \in I.$

The geometrical meaning of (3.2.18) (or (3.2.17)) is that as a result of the transformation $X(t,t_0) : \mathbb{R}^n \to \mathbb{R}^n$ the oriented volume of any body is multiplied by the factor $\exp\bigl\{\int_{t_0}^t ds\,\mathrm{tr}\,A(s)\bigr\}$.

3.3. An example of diagonalization

Definition 3.3.1. A supermatrix $M = \begin{pmatrix} A & C \\ D & B \end{pmatrix} \in \mathrm{Mat}\,[m|n : C]$ is called generic if all eigenvalues of $M_B$, regarded as an element of $\mathrm{Mat}\,[m+n : \mathbb{C}]$, are distinct from each other.

Theorem 3.3.1 (Berezin). Let $M \in \mathrm{Mat}\,[m|n : C]$ be generic. Then there exists a matrix $X \in \mathrm{GL}\,[m|n : C]$ such that $E = XMX^{-1}$ is diagonal.

Proof. Decomposing the equality EX = XM with respect to the degree, we have

(3.3.1) $(EX)^{[k]} = \sum_{j=0}^k E^{[j]}X^{[k-j]} = \sum_{j=0}^k X^{[j]}M^{[k-j]} = (XM)^{[k]}.$

From this, we want to construct $X^{[k]}$ and $E^{[k]}$: for $k = 0$, we have

(3.3.2) $E^{[0]}X^{[0]} = X^{[0]}M^{[0]}.$

By the assumption, there exist $X_{11}^{[0]}$, $E_{11}^{[0]} = $ the diagonal matrix with entries $(\lambda_1^{[0]},\cdots,\lambda_m^{[0]})$, and $X_{22}^{[0]}$, $E_{22}^{[0]} = $ the diagonal matrix with entries $(\lambda_{m+1}^{[0]},\cdots,\lambda_{m+n}^{[0]})$, such that

$X_{11}^{[0]}A_B = E_{11}^{[0]}X_{11}^{[0]} \quad\text{and}\quad X_{22}^{[0]}B_B = E_{22}^{[0]}X_{22}^{[0]}.$

Defining

$X^{[0]} = \begin{pmatrix} X_{11}^{[0]} & 0 \\ 0 & X_{22}^{[0]} \end{pmatrix}, \qquad E^{[0]} = \begin{pmatrix} E_{11}^{[0]} & 0 \\ 0 & E_{22}^{[0]} \end{pmatrix},$

we have the desired one satisfying (3.3.2).

Assume that there exist $X^{[j]}$ and $E^{[j]}$ for $0 \le j \le k-1$ satisfying (3.3.1). Multiplying (3.3.1) for $k$ by $(X^{[0]})^{-1}$ from the right, we have

(3.3.3) $E^{[0]}X^{[k]}(X^{[0]})^{-1} - X^{[k]}(X^{[0]})^{-1}E^{[0]} + E^{[k]} = K^{[k]}$

where

$K^{[k]} = \Bigl(\sum_{j=0}^{k-1} X^{[j]}M^{[k-j]}\Bigr)(X^{[0]})^{-1} - \Bigl(\sum_{j=1}^{k-1} E^{[j]}X^{[k-j]}\Bigr)(X^{[0]})^{-1}.$

By the inductive assumption, the matrix $K^{[k]}$ is known and belongs to $\mathrm{Mat}\,[m|n : C]$. From (3.3.3), we have

(3.3.4) $(\lambda_i^{[0]} - \lambda_j^{[0]})(X^{[k]}(X^{[0]})^{-1})_{ij} + \lambda_i^{[k]}\delta_{ij} = (K^{[k]})_{ij}.$

This equation is uniquely solvable since $\lambda_i^{[0]} \neq \lambda_j^{[0]}$ for $i \neq j$, and

$\lambda_i^{[k]} = (K^{[k]})_{ii}, \qquad (X^{[k]}(X^{[0]})^{-1})_{ij} = \dfrac{(K^{[k]})_{ij}}{\lambda_i^{[0]} - \lambda_j^{[0]}} \quad\text{for } i \neq j.$

Therefore, we may define $X^{[j]}$ and $E^{[j]}$ for any $j \ge 0$. Since $X^{[0]}$ is invertible, $X \in \mathrm{GL}\,[m|n : C]$. This implies that $X$ and $E$ are defined as desired. $\square$

Problem 3.3.1. Find a condition for a supermatrix $M$ to be diagonalizable. Is the "generic" condition in Theorem 3.3.1 necessary?

3.3.1. A simple example. Let

$Q = \begin{pmatrix} x_1 & \theta_1 \\ \theta_2 & ix_2 \end{pmatrix}$ with $x_1, x_2 \in R_{ev}$, $\theta_1, \theta_2 \in R_{od},$

which maps $R^{1|1}$ to $R^{1|1}$, or $R_{od} \times iR_{ev}$ to $R_{od} \times iR_{ev}$. This supermatrix appears in Efetov's calculation in Random Matrix Theory (see, for example, K.B. Efetov [41], A. Inoue and Y. Nomura [82]).

3.3.2. Invertibility of Q. Find $Y$ for a given $V$ such that

$QY = V$ with $Y = \begin{pmatrix} y_1 \\ \omega_2 \end{pmatrix}$, $V = \begin{pmatrix} v_1 \\ \rho_2 \end{pmatrix} \in R^{1|1},$

i.e.

$x_1y_1 + \theta_1\omega_2 = v_1, \qquad \theta_2y_1 + ix_2\omega_2 = \rho_2.$

If $(x_1x_2)_B \neq 0$, we have readily

$y_1 = \dfrac{ix_2v_1 - \theta_1\rho_2}{D_-}, \qquad \omega_2 = \dfrac{x_1\rho_2 - \theta_2v_1}{D_+}, \qquad\text{with } D_\pm = ix_1x_2 \pm \theta_1\theta_2.$

Analogously, for

$\tilde Y = \begin{pmatrix} \omega_1 \\ iy_2 \end{pmatrix} \in R_{od} \times iR_{ev}, \qquad \tilde V = \begin{pmatrix} \rho_1 \\ v_2 \end{pmatrix} \in R_{od} \times R_{ev},$

satisfying $Q\tilde Y = \tilde V$, we have

$\omega_1 = \dfrac{ix_2\rho_1 - \theta_1v_2}{D_-}, \qquad iy_2 = \dfrac{x_1v_2 - \theta_2\rho_1}{D_+}.$

To relate the above quantities with $\mathrm{sdet}\,Q$, we proceed as follows: let

$Y = \begin{pmatrix} y_1 & \omega_1 \\ \omega_2 & iy_2 \end{pmatrix}$ with $QY = YQ = I_2.$

Then, from $QY = I_2$, we have

$x_1y_1 + \theta_1\omega_2 = 1, \qquad x_1\omega_1 + iy_2\theta_1 = 0, \qquad \theta_2y_1 + ix_2\omega_2 = 0, \qquad \theta_2\omega_1 - x_2y_2 = 1.$

Therefore, we have

$Y = \begin{pmatrix} \dfrac{ix_2}{D_-} & -\dfrac{\theta_1}{D_-} \\ -\dfrac{\theta_2}{D_+} & \dfrac{x_1}{D_+} \end{pmatrix} = (\mathrm{sdet}\,Q)^{-1}\begin{pmatrix} \dfrac{1}{ix_2} & \dfrac{\theta_1}{x_2^2} \\ \dfrac{\theta_2}{x_2^2} & -\dfrac{x_1x_2 + 2i\theta_1\theta_2}{x_2^3} \end{pmatrix},$

which yields $YQ = I_2$ also. Here, we used

$\mathrm{sdet}\,Q = \det(x_1 - \theta_1(ix_2)^{-1}\theta_2)\cdot(\det(ix_2))^{-1} = \dfrac{ix_1x_2 - \theta_1\theta_2}{(ix_2)^2}, \qquad (\mathrm{sdet}\,Q)^{-1} = \dfrac{ix_1x_2 + \theta_1\theta_2}{x_1^2}.$

Therefore,

$\begin{pmatrix} y_1 \\ \omega_2 \end{pmatrix} = \dfrac{D_+}{x_1^2}\begin{pmatrix} \dfrac{1}{ix_2} & -\dfrac{\theta_1}{(ix_2)^2} \\ -\dfrac{\theta_2}{(ix_2)^2} & \dfrac{ix_1x_2 - 2\theta_1\theta_2}{(ix_2)^3} \end{pmatrix}\begin{pmatrix} v_1 \\ \rho_2 \end{pmatrix} = \dfrac{D_+}{x_1^2}\begin{pmatrix} \dfrac{ix_2v_1 - \theta_1\rho_2}{(ix_2)^2} \\ \dfrac{-ix_2\theta_2v_1 + (ix_1x_2 - 2\theta_1\theta_2)\rho_2}{(ix_2)^3} \end{pmatrix},$

$\begin{pmatrix} \omega_1 \\ iy_2 \end{pmatrix} = \dfrac{D_+}{x_1^2}\begin{pmatrix} \dfrac{1}{ix_2} & -\dfrac{\theta_1}{(ix_2)^2} \\ -\dfrac{\theta_2}{(ix_2)^2} & \dfrac{ix_1x_2 - 2\theta_1\theta_2}{(ix_2)^3} \end{pmatrix}\begin{pmatrix} \rho_1 \\ v_2 \end{pmatrix} = \dfrac{D_+}{x_1^2}\begin{pmatrix} \dfrac{ix_2\rho_1 - \theta_1v_2}{(ix_2)^2} \\ \dfrac{-ix_2\theta_2\rho_1 + (ix_1x_2 - 2\theta_1\theta_2)v_2}{(ix_2)^3} \end{pmatrix}.$

3.3.3. Eigenvalues of Q. Let

$QU = \lambda U$ with $U = \begin{pmatrix} u \\ \omega \end{pmatrix}$, $u \in R_{ev}$, $\omega \in R_{od}$, $\lambda \in R_{ev}.$

Then,

$(x_1 - \lambda)u + \theta_1\omega = 0, \qquad \theta_2u + (ix_2 - \lambda)\omega = 0.$

Putting

$D_+(\lambda) = (x_1 - \lambda)(ix_2 - \lambda) + \theta_1\theta_2, \qquad D_-(\lambda) = (x_1 - \lambda)(ix_2 - \lambda) - \theta_1\theta_2,$

we have

$D_-(\lambda)u = 0, \qquad D_+(\lambda)\omega = 0.$

To guarantee the existence of $u$ with $u_B \neq 0$ satisfying the above, we take $\lambda$ satisfying

$D_-(\lambda) = \lambda^2 - (x_1 + ix_2)\lambda + ix_1x_2 - \theta_1\theta_2 = 0.$

This yields

$\lambda = x_1 + \dfrac{\theta_1\theta_2}{x_1 - ix_2}$ $\Bigl($or $\lambda = ix_2 - \dfrac{\theta_1\theta_2}{x_1 - ix_2}$, but then $\pi_B(\lambda) \notin \mathbb{R}\Bigr)$

and

$U = \begin{pmatrix} 1 \\ \dfrac{\theta_2}{x_1 - ix_2} \end{pmatrix}, \qquad QU = \Bigl(x_1 + \dfrac{\theta_1\theta_2}{x_1 - ix_2}\Bigr)U.$

Analogously, we seek $\tilde\lambda \in iR_{ev}$, $\tilde U \in R_{od} \times R_{ev}$ satisfying $Q\tilde U = \tilde\lambda\tilde U$, which is given by

$\tilde U = \begin{pmatrix} -\dfrac{\theta_1}{x_1 - ix_2} \\ 1 \end{pmatrix}, \qquad Q\tilde U = \Bigl(ix_2 + \dfrac{\theta_1\theta_2}{x_1 - ix_2}\Bigr)\tilde U.$

Therefore,

$Q\begin{pmatrix} 1 & -\dfrac{\theta_1}{x_1 - ix_2} \\ \dfrac{\theta_2}{x_1 - ix_2} & 1 \end{pmatrix} = \begin{pmatrix} 1 & -\dfrac{\theta_1}{x_1 - ix_2} \\ \dfrac{\theta_2}{x_1 - ix_2} & 1 \end{pmatrix}\begin{pmatrix} x_1 + \dfrac{\theta_1\theta_2}{x_1 - ix_2} & 0 \\ 0 & ix_2 + \dfrac{\theta_1\theta_2}{x_1 - ix_2} \end{pmatrix}.$

3.3.4. Diagonalization of Q. We may diagonalize the matrix $Q$ by using the change of variables

(3.3.5) $\varphi^{-1}(x,\theta) = (y,\omega):\quad y_1 = x_1 + \dfrac{\theta_1\theta_2}{x_1 - ix_2},\quad y_2 = x_2 - \dfrac{i\theta_1\theta_2}{x_1 - ix_2},\quad \omega_1 = \dfrac{\theta_1}{x_1 - ix_2},\quad \omega_2 = -\dfrac{\theta_2}{x_1 - ix_2},$

or

(3.3.6) $\varphi(y,\omega) = (x,\theta):\quad x_1 = y_1 + \omega_1\omega_2(y_1 - iy_2),\quad x_2 = y_2 - i\omega_1\omega_2(y_1 - iy_2),\quad \theta_1 = \omega_1(y_1 - iy_2),\quad \theta_2 = -\omega_2(y_1 - iy_2),$

such that

(3.3.7) $GQG^{-1} = \begin{pmatrix} y_1 & 0 \\ 0 & iy_2 \end{pmatrix}, \qquad GQ^2G^{-1} = \begin{pmatrix} y_1^2 & 0 \\ 0 & -y_2^2 \end{pmatrix},$

where

$G = \begin{pmatrix} 1 + 2^{-1}\omega_1\omega_2 & \omega_1 \\ \omega_2 & 1 - 2^{-1}\omega_1\omega_2 \end{pmatrix}, \qquad G^{-1} = \begin{pmatrix} 1 + 2^{-1}\omega_1\omega_2 & -\omega_1 \\ -\omega_2 & 1 - 2^{-1}\omega_1\omega_2 \end{pmatrix}.$

It is clear that $\mathrm{str}\,Q = x_1 - ix_2 = y_1 - iy_2 = \mathrm{str}\,GQG^{-1}$, and

$\mathrm{str}\,Q^2 = x_1^2 + x_2^2 + 2\theta_1\theta_2 = y_1^2 + y_2^2 = \mathrm{str}\,(GQG^{-1})^2.$

======= Mini Column 2: Tensor and exterior algebras, interior product =======

As a characteristic feature of mathematical thought, it sometimes happens that generalizing a situation makes it easier to understand. Though I have a tendency to feel bothered and sleepy when following lengthy algebraic procedures, I try to collect some terminology from T. Yokonuma [145] (in Japanese).

Tensor algebras: Let $V$ be a $d$-dimensional vector space with inner product $(\cdot,\cdot)$. Let $V^*$ be the set of linear forms on $V$; then $V$ and $V^*$ are dual to each other by $\langle\cdot,\cdot\rangle$.

Theorem 3.3.2. Let $k$ be a field of characteristic 0, and let $V_1, V_2, \cdots, V_n$ be finite-dimensional linear spaces over $k$. Then, we have a unique pair $(U_0, \iota_n)$ satisfying the following properties $(\otimes)_1$, $(\otimes)_2$, where $U_0$ is a linear space over $k$ and $\iota_n \in \mathcal{L}(V_1, V_2, \cdots, V_n : U_0)$ is an $n$-linear map:
$(\otimes)_1$ $U_0$ is generated by the image of $\iota_n$, $\iota_n(V_1 \times V_2 \times \cdots \times V_n) \subset U_0$.
$(\otimes)_2$ For any $\Phi \in \mathcal{L}(V_1, V_2, \cdots, V_n : U)$, there exists a linear map $F : U_0 \to U$ such that $\Phi = F \circ \iota_n$.

Definition 3.3.2. $(U_0, \iota_n)$ defined in the above theorem is called the tensor product of $V_1, V_2, \cdots, V_n$ and denoted by

$U_0 = V_1 \otimes V_2 \otimes \cdots \otimes V_n, \qquad \iota_n(v_1, v_2, \cdots, v_n) = v_1 \otimes v_2 \otimes \cdots \otimes v_n \quad (v_i \in V_i).$

Definition 3.3.3. For matrices $A = (\alpha_{ij})$, $B = (\beta_{ij})$, we define the matrix

$\begin{pmatrix} \alpha_{11}B & \alpha_{12}B & \cdots & \alpha_{1n}B \\ \alpha_{21}B & & \cdots & \alpha_{2n}B \\ \vdots & & & \vdots \\ \alpha_{m1}B & \alpha_{m2}B & \cdots & \alpha_{mn}B \end{pmatrix},$

which is called the tensor (or Kronecker) product of $A$ and $B$ and denoted by $A \otimes B$. If $A$ is an $(m,n)$-matrix and $B$ is an $(m',n')$-matrix, then $A \otimes B$ is an $(mm',nn')$-matrix.

Let $V$ be a linear space over $k$ and $V^*$ the dual of $V$. Then

$T_q^p(V) = \underbrace{V \otimes\cdots\otimes V}_{p\text{ times}} \otimes \underbrace{V^* \otimes\cdots\otimes V^*}_{q\text{ times}}$

is called the $(p,q)$-tensor space. We put $T_0^0(V) = k$, $T_0^p(V) = T^p(V)$, $T_q^0(V) = T_q(V)$, and $T_0^0(V) = T^0(V) = T_0(V)$.
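Definition 3.3.3 is available in NumPy as `np.kron`; a quick check of the block structure and of the mixed-product rule $(A\otimes B)(C\otimes D) = (AC)\otimes(BD)$ (the example matrices are arbitrary choices of mine):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
K = np.kron(A, B)            # the block matrix (alpha_ij * B)

C = np.array([[1, 1], [0, 2]])
D = np.array([[2, 0], [1, 1]])
```

The $(i,j)$ block of $K$ is $\alpha_{ij}B$, so for $2\times 2$ factors $K$ is $4\times 4$, in line with the $(mm',nn')$ size statement above.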

Remark 3.3.1. Here, we use the identifications $V \otimes V^* \equiv V^* \otimes V$, $V \otimes V^* \otimes V \otimes V \otimes V^* \equiv V^* \otimes V^* \otimes V \otimes V \otimes V$, etc.

$(p,0)$- or $(0,q)$-tensors are called $p$-th order contravariant or $q$-th order covariant tensors, respectively. An element in $T_0^1(V) = V$ is called a contravariant vector, one in $T_1^0(V) = V^*$ is called a covariant vector, and one in $T_0^0(V) = k$ is a scalar.

From $\mathrm{Hom}(V,V) \equiv V^* \otimes V \equiv T_1^1(V)$, a linear transformation on $V$ is regarded as a $(1,1)$-tensor. Since $\mathcal{L}(V,V : k) \equiv V^* \otimes V^* = T_2^0(V)$, a bilinear form on $V$ is a 2nd order covariant tensor.

Proposition 3.3.1 (contraction). Take integers $p > 0$, $q > 0$ and consider a tensor space $T_q^p(V)$. For any integers $r, s$ satisfying $1 \le r \le p$ and $1 \le s \le q$, there exists a unique linear map $c_s^r : T_q^p(V) \to T_{q-1}^{p-1}(V)$ satisfying the following: for any $v_i \in V$, $\varphi_j \in V^*$,

$c_s^r(v_1 \otimes\cdots\otimes v_p \otimes \varphi_1 \otimes\cdots\otimes \varphi_q) = \varphi_s(v_r)\, v_1 \otimes\cdots\otimes \check v_r \otimes\cdots\otimes v_p \otimes \varphi_1 \otimes\cdots\otimes \check\varphi_s \otimes\cdots\otimes \varphi_q.$

Remark. In the above, $\check v_r$ or $\check\varphi_s$ stands for deleting that component, respectively. This map $c_s^r$ is called the contraction w.r.t. the $r$-th contravariant index and the $s$-th covariant index.

The permutation group on $p$ letters $\{1, 2, \cdots, p\}$ is denoted by $S_p$, and each $\sigma \in S_p$ has the signature $\mathrm{sgn}\,\sigma = \pm 1$.

Proposition 3.3.2. (1) For each $\sigma \in S_p$, there exists a unique linear transformation $P_\sigma$ of $T^p(V)$ satisfying

$P_\sigma(v_1 \otimes\cdots\otimes v_p) = v_{\sigma^{-1}(1)} \otimes\cdots\otimes v_{\sigma^{-1}(p)} \quad (v_i \in V).$

(2) For $\sigma, \tau, 1 \in S_p$, we have

$P_\sigma P_\tau = P_{\sigma\tau}, \qquad P_1 = I.$

Definition 3.3.4. An element $t \in T^p(V)$ is called a symmetric tensor when it satisfies $P_\sigma(t) = t$ for any $\sigma \in S_p$. The set of all such elements is denoted by $S^p(V)$. In case $P_\sigma(t) = (\mathrm{sgn}\,\sigma)t$ for any $\sigma \in S_p$, it is called an alternating (anti-symmetric) tensor, and the set of these is denoted by $A^p(V)$.

For $t \in S^p(V)$, $t' \in S^q(V)$, we define their product as $t\cdot t' = \mathcal{S}_{p+q}(t \otimes t')$.
For $t \in A^p(V)$, $t' \in A^q(V)$, we define their exterior product as $t \wedge t' = \mathcal{A}_{p+q}(t \otimes t')$.

Definition 3.3.5. We put

$\mathcal{S}_p = \dfrac{1}{p!}\sum_{\sigma\in S_p} P_\sigma, \qquad \mathcal{A}_p = \dfrac{1}{p!}\sum_{\sigma\in S_p} (\mathrm{sgn}\,\sigma)\,P_\sigma.$

Unless confusion occurs, we simply denote them as $\mathcal{S}_p = \mathcal{S}$, $\mathcal{A}_p = \mathcal{A}$.

Definition 3.3.6. The infinite direct sum

$T(V) = \bigoplus_{p=0}^{\infty} T^p(V)$

is called the tensor algebra if we introduce addition and product as follows:

$t = \sum_{j=0}^{\infty} t_j,\ t' = \sum_{j=0}^{\infty} t'_j \in T(V),\ t_j, t'_j \in T^j(V),\ \alpha \in k \Longrightarrow$
$t + t' = \sum_{j=0}^{\infty}(t_j + t'_j), \qquad \alpha t = \sum_{j=0}^{\infty}\alpha t_j, \qquad t \otimes t' = \sum_{p=0}^{\infty}\sum_{r+s=p} t_r \otimes t'_s.$

Analogously, we put

Definition 3.3.7. Introducing the product $\cdot$ in

$S(V) = \bigoplus_{p=0}^{\infty} S^p(V)$

as

$t \in S^p(V),\ t' \in S^q(V) \Longrightarrow t\cdot t' = \mathcal{S}_{p+q}(t \otimes t'),$

we have the symmetric algebra $S(V)$ on $V$.

Exterior algebra:

Definition 3.3.8. Remarking that $A^p(V) = \{0\}$ for $p > n$, we have

$A(V) = \bigoplus_{p=0}^{\infty} A^p(V) = \bigoplus_{p=0}^{n} A^p(V).$

We define the exterior product $\wedge$ as

$t \wedge t' = \sum_{p,q} t_p \wedge t'_q = \sum_{k=0}^{n} \mathcal{A}_k\Bigl(\sum_{p+q=k} t_p \otimes t'_q\Bigr).$

$A(V)$ is called the exterior algebra on $V$.

Lemma 3.3.1. (1) $v_i \in V \Longrightarrow \mathcal{A}(v_1 \otimes\cdots\otimes v_p) = v_1 \wedge\cdots\wedge v_p$,
(2) $\sigma \in S_p \Longrightarrow v_{\sigma^{-1}(1)} \wedge\cdots\wedge v_{\sigma^{-1}(p)} = \mathrm{sgn}\,\sigma\,(v_1 \wedge\cdots\wedge v_p)$.
(3) $t \in A^p(V),\ t' \in A^q(V) \Longrightarrow t' \wedge t = (-1)^{pq}\, t \wedge t'$.

The $p$-th order covariant tensor space $T_p(V) = T^p(V^*)$, with the inner (or scalar) product

$\varphi_1 \otimes\cdots\otimes \varphi_p \in T_p(V),\ v_1 \otimes\cdots\otimes v_p \in T^p(V) \longmapsto \langle v_1 \otimes\cdots\otimes v_p,\ \varphi_1 \otimes\cdots\otimes \varphi_p\rangle = \varphi_1(v_1)\cdots\varphi_p(v_p) \quad (v_i \in V,\ \varphi_j \in V^*),$

is regarded as the dual of $T^p(V)$.

A bilinear form $\langle\cdot|\cdot\rangle_p$ on $A^p(V) \times A^p(V^*)$ is defined as

$\langle z|\xi\rangle_p = p!\,\langle z,\xi\rangle \quad (z \in A^p(V),\ \xi \in A^p(V^*)).$

Then,

Proposition 3.3.3. (1) For $z = v_1 \wedge\cdots\wedge v_p$ ($v_i \in V$) and $\xi = \varphi_1 \wedge\cdots\wedge \varphi_p$ ($\varphi_i \in V^*$), we have

$\langle z|\xi\rangle_p = \det(\varphi_i(v_j)).$

(2) $A^p(V)$ and $A^p(V^*)$ are dual to each other by the scalar product $\langle\cdot|\cdot\rangle_p$.
(3) For $z = \sum_{p=0}^{n} z_p \in A(V)$ ($z_p \in A^p(V)$) and $\xi = \sum_{p=0}^{n} \xi_p \in A(V^*)$ ($\xi_p \in A^p(V^*)$), we define the scalar product

$\langle z|\xi\rangle = \sum_{p=0}^{n} \langle z_p|\xi_p\rangle_p;$

then $A(V)$ and $A(V^*)$ form dual spaces of each other.
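Proposition 3.3.3 (1) can be verified by brute force from the tensor pairing: antisymmetrizing both factors and multiplying by $p!$ reproduces $\det(\varphi_i(v_j))$. A sketch (all naming is mine; the vectors and forms are random samples):

```python
import math
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Signature of a permutation (given as a tuple) via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def wedge_pairing(vs, phis):
    """<v_1 ^ ... ^ v_p | phi_1 ^ ... ^ phi_p>_p = p! <A(v ten), A(phi ten)>,
    computed from the raw tensor pairing <.,.>; it should equal det(phi_i(v_j))."""
    p = len(vs)
    M = np.array([[phi @ v for v in vs] for phi in phis])   # M[i, j] = phi_i(v_j)
    total = sum(perm_sign(s) * perm_sign(t)
                * math.prod(M[t[i], s[i]] for i in range(p))
                for s in permutations(range(p))
                for t in permutations(range(p)))
    return total / math.factorial(p)

rng = np.random.default_rng(2)
vs = [rng.standard_normal(4) for _ in range(3)]      # v_i in V, dim V = 4
phis = [rng.standard_normal(4) for _ in range(3)]    # phi_i in V*
gram = np.array([[phi @ v for v in vs] for phi in phis])
```

The double sum over $\sigma, \tau$ collapses to $p!\det M$, which the prefactor $1/p!$ cancels; this is exactly why the $p!$ appears in the definition of $\langle\cdot|\cdot\rangle_p$.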

Definition 3.3.9. For $\xi\in A(V^*)$, we define a linear transformation $\delta(\xi)$ on $A(V^*)$ by
\[
\delta(\xi)\zeta=\xi\wedge\zeta\qquad(\zeta\in A(V^*)),
\]
called the (left) exterior multiplication. The transposed map of $\delta(\xi)$ is denoted by $\partial(\xi)$ and called the interior product (or multiplication) by $\xi$:
\[
\langle\partial(\xi)z|\zeta\rangle=\langle z|\delta(\xi)\zeta\rangle=\langle z|\xi\wedge\zeta\rangle.
\]

Remark: The operations defined above are also denoted by $\delta(\xi)\cdot=\xi\wedge\cdot$ and $\partial(\xi)\cdot=\xi\,\lrcorner\,\cdot$.
====== End of Mini Column 2 ======

CHAPTER 4

Elementary differential calculus on superspace

On the real Euclidean space $\mathbb{R}^m$, to begin with, we consider real-valued continuous and smooth functions. On the other hand, if we work on the complex space $\mathbb{C}^m$ with complex-valued functions, it seems natural to develop the theory of complex analytic functions. Given these, what is a natural candidate for a function on the superspace $\mathbb{R}^{m|n}$? This chapter and the next one are rewritten rather significantly from the original lectures.

4.1. Gˆateaux or Fr´echet differentiability on Banach spaces

For future use, we prepare the following lemma:

Lemma 4.1.1 (see, Lemma 2.1.14 of Berger [12]). Let X1, X2,Y be Banach spaces. The Banach spaces L(X1, X2 : Y ) and L(X1 : L(X2 : Y )) are identical up to a linear isometry.

Definition 4.1.1. Let $(X,\|\cdot\|_X)$ and $(Y,\|\cdot\|_Y)$ be two Banach spaces.
(i) A function $\Phi:X\to Y$ is called Gâteaux (or G-)differentiable at $x\in X$ in the direction $h\in X$ if there exists an element $\Phi'_G(x;h)\in Y$ such that
\[
\|\Phi(x+th)-\Phi(x)-t\Phi'_G(x;h)\|_Y\to0\ \text{when }t\to0,\quad\text{i.e.}\quad
\frac{d}{dt}\Phi(x+th)\Big|_{t=0}=\Phi'_G(x;h).
\]
$\Phi'_G(x;h)$ is also denoted by $\Phi'_G(x)(h)$, $d_G\Phi(x;h)$ or $(d_G\Phi(x))(h)$. The second order Gâteaux derivative $d_G^{(2)}\Phi(x;h)$ at $x\in X$ in the direction $h=(h_1,h_2)\in X^2$ is defined by
\[
d_G^{(2)}\Phi(x;h)=d_G''\Phi(x;h)=d_G(d_G\Phi(x;h_1);h_2)
=\frac{d}{dt}d_G\Phi(x+th_2;h_1)\Big|_{t=0}
=\frac{\partial^2}{\partial t_1\partial t_2}\Phi(x+t_1h_1+t_2h_2)\Big|_{t_1=t_2=0}.
\]

Analogously, we may define the $N$-th Gâteaux derivative $d_G^{(N)}\Phi(x;h)$ (or $\Phi_G^{(N)}(x;h)$) with $h=(h_1,\cdots,h_N)\in X^N$. If $d_G^N\Phi(x;h_1,\cdots,h_N)$ exists, then it is symmetric w.r.t. $(h_1,\cdots,h_N)$.
(ii) $\Phi:X\to Y$ is called Fréchet (or F-)differentiable at $x\in X$ if there exist a bounded linear operator $\Phi'_F(x):X\to Y$ and an element $\tau(x,h)\in Y$ such that
\[
\Phi(x+h)-\Phi(x)-\Phi'_F(x)h=\tau(x,h)\quad\text{with}\quad\|\tau(x,h)\|_Y=o(\|h\|_X).
\]
It is clear that if $\Phi'_F(x)$ (or $d_F\Phi(x)$) exists, then $\Phi'_G(x)$ exists also and $\Phi'_G(x)=\Phi'_F(x)$. The second order Fréchet derivative $\Phi''_F(x;h)$ at $x\in X$ is defined if $\Phi'_F:X\to L(X:Y)$ is differentiable at $x\in X$ in the Fréchet sense. In this case, $\Phi''_F\in L(X:L(X:Y))\cong L_2(X:Y)=L(X,X:Y)$. We write $\Phi\in C^2(U:Y)$ if (a) $\Phi$ is twice Fréchet differentiable, and (b) $\Phi''_F(x):U\to L(X,X:Y)$ is continuous. We define analogously the $N$-th Fréchet derivative $\Phi_F^{(N)}$ and the class $C^N(U:Y)$ of $N$-times Fréchet differentiable functions. That is, $\Phi\in C^N(X:Y)$ means that for each $x\in U\subset X$, $\Phi$ is

$N$-times Fréchet-differentiable and $\Phi_F^{(N)}(x)$ is a continuous map from $U$ to $L(\underbrace{X\times\cdots\times X}_{N}:Y)=L_N(X:Y)$ w.r.t. $x$.

Theorem 4.1.1 (see Theorem 2.1.13 of Berger [12]). If $\Phi:X\to Y$ is Fréchet-differentiable at $x$, it is Gâteaux-differentiable at $x$. Conversely, if the Gâteaux derivative of $\Phi$ at $x$, $d_G\Phi(x,h)$, is linear in $h$ and is continuous in $x$ as a map from $X\to L(X:Y)$, then $\Phi$ is Fréchet-differentiable at $x$. In either case, we have $\Phi'_G(x)y=\Phi'_F(x)y$.
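A standard example separating the two notions (our illustration, not from the text) is $f(x,y)=x^3/(x^2+y^2)$ with $f(0,0)=0$: every Gâteaux directional derivative at the origin exists, but it fails to be linear in the direction $h$, so by Theorem 4.1.1 no Fréchet derivative can exist there. A numerical sketch:

```python
def f(x, y):
    return x**3 / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

def gateaux(h1, h2, t=1e-6):
    # directional (Gateaux) derivative of f at the origin along h = (h1, h2)
    return f(t * h1, t * h2) / t

d10, d01, d11 = gateaux(1.0, 0.0), gateaux(0.0, 1.0), gateaux(1.0, 1.0)
# d10 = 1, d01 = 0, but d11 = 1/2 != d10 + d01: not linear in h
print(d10, d01, d11)
```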

Theorem 4.1.2 (see Theorem 2.1.27 of Berger [12]). If $\Phi:X\to Y$ is $N$-times Fréchet-differentiable in a neighbourhood $U$ of $x$ and $\Phi_F^{(N)}(x)(h^{(1)},\cdots,h^{(N)})$ denotes the $N$-th Fréchet derivative, then $\Phi$ is $N$-times Gâteaux-differentiable and
\[
d_G^{(N)}\Phi(x;h^{(1)},\cdots,h^{(N)})=\Phi_F^{(N)}(x)(h^{(1)},\cdots,h^{(N)}).
\]
Conversely, if the $N$-th Gâteaux derivative $d_G^{(N)}\Phi(x;h^{(1)},\cdots,h^{(N)})$ of $\Phi$ exists in a neighbourhood $U$ of $x$, $d_G^{(N)}\Phi(x;h^{(1)},\cdots,h^{(N)})\in L_N(X:Y)$, and, as a function of $x$, $d_G^{(N)}\Phi(x;h^{(1)},\cdots,h^{(N)})$ is continuous from $U$ to $L_N(X:Y)$, then $\Phi$ is $N$-times Fréchet-differentiable and the two derivatives are equal at $x$.

Problem 4.1.1. How does one extend these notions of differentiability to functions on $\mathbb{R}^{m|n}$?

4.2. Gˆateaux or Fr´echet differentiable functions on Fr´echet spaces

In this section, I borrow the presentation of R. Hamilton's paper [57], which I had overlooked when the lectures were prepared.

4.2.1. Gˆateaux-differentiability.

Definition 4.2.1 (Gâteaux derivative, differential and differentiability). (i) Let $X$, $Y$ be Fréchet spaces with countable seminorms $\{p_m\}$, $\{q_n\}$, respectively. Let $U$ be an open subset of $X$. For a function $f:U\to Y$, we say that $f$ is 1-time Gâteaux (or G-)differentiable at $x\in U$ in the direction $y\in X$ if the following limit exists in $Y$:
\[
\lim_{t\to0}\frac{f(x+ty)-f(x)}{t}=\frac{d}{dt}f(x+ty)\Big|_{t=0}
=d_Gf(x;y)=d_Gf(x)\{y\}=d_Gf(x)y=f'_G(x)y,
\]
i.e., for given $x\in U$ and $y\in X$ there exists an element $d_Gf(x;y)\in Y$ such that for any $n\in\mathbb{N}$ we have
\[
q_n(f(x+ty)-f(x)-t\,d_Gf(x;y))=o(t).
\]
We call $d_Gf(x;y)$ the G-differential of $f$ at $x$ in the direction $y$, denoted as above, and $d_Gf(x)$ or $f'_G(x)$ is called the G-derivative. Moreover, $f$ is said to be G-differentiable in $U$, denoted by $f\in C_G^{1-}(U:Y)$, if $f$ has the G-differential $d_Gf(x;y)$ for every $x\in U$ and any direction $y\in X$. A map $f:U\to Y$ is said to be 1-time continuously G-differentiable on $U$, denoted by $f\in C_G^1(U:Y)$, if $f$ has a G-derivative in $U$ and if $d_Gf:U\times X\ni(x,y)\to d_Gf(x;y)\in Y$ is jointly continuous.

(ii) If $X$, $Y$ are Banach spaces with norms $\|\cdot\|_X$, $\|\cdot\|_Y$, respectively, then $f$ has the G-differential $d_Gf(x;y)\in Y$ at $x\in U$ in the direction $y\in X$ if and only if
\[
\|f(x+ty)-f(x)-t\,d_Gf(x;y)\|_Y=o(|t|)\quad\text{as }t\to0.
\]
Moreover, $f\in C_G^1(U:Y)$ if and only if $f$ is G-differentiable and $d_Gf$ is continuous from $U\ni x$ to $d_Gf(x)\in L(X:Y)$.

Proposition 4.2.1 (see pp.76-77 of [57]). Let $X$, $Y$ be Fréchet spaces and let $U$ be an open subset of $X$. If $f\in C_G^1(U:Y)$, then $d_Gf(x;y)$ is linear in $y$.

Remark 4.2.1 (see p.70 of [57]). It should be remarked that even if $X$, $Y$, $Z$ are Banach spaces and $U\subset X$, there is a difference between
"$L:U\times Y\to Z$ is continuous" and "$L:U\to L(Y:Z)$ is continuous".

Definition 4.2.2 (Higher order derivatives, see p.80 of [57]). Let $X$, $Y$ be Fréchet spaces. (i) If the following limit exists, we put

\[
d_G^2f(x)\{y,z\}=d_G^2f(x;y,z)=\lim_{t\to0}\frac{d_Gf(x+tz;y)-d_Gf(x;y)}{t}.
\]
Moreover, $f$ is said to be $C_G^2(U:Y)$ if $d_Gf$ is $C_G^1(U\times X:Y)$, which happens if and only if $d_G^2f$ exists and is continuous, that is, $d_G^2f$ is jointly continuous from $U\times X\times X\to Y$.
(ii) Analogously, we define
\[
d_G^nf:U\times\underbrace{X\times\cdots\times X}_{n}\ni(x,y_1,\cdots,y_n)\to d_G^nf(x)\{y_1,\cdots,y_n\}=d_G^nf(x;y_1,\cdots,y_n)\in Y,
\]
\[
\frac{\partial}{\partial t_1}\cdots\frac{\partial}{\partial t_N}\Phi\Big(x+\sum_{j=1}^{N}t_jh_j\Big)\Big|_{t_1=\cdots=t_N=0}
=\frac{d}{dt_N}d_G^{N-1}\Phi(x+t_Nh_N;h_1,\cdots,h_{N-1})\Big|_{t_N=0}
=d_G(d_G^{N-1}\Phi(x;h_1,\cdots,h_{N-1});h_N)
=d_G^N\Phi(x;h_1,\cdots,h_N)=\Phi_G^{(N)}(x;h_1,\cdots,h_N).
\]
$f$ is said to be $C_G^n(U:Y)$ if and only if $d_G^nf$ exists and is continuous. We put $C_G^\infty(U:Y)=\bigcap_{n=0}^{\infty}C_G^n(U:Y)$.

Definition 4.2.3 (Many variables case). (i) Let $X_1$, $X_2$, $Y$ be Fréchet spaces. For $x=(x_1,x_2)\in X_1\times X_2$ and $z=(z_1,z_2)\in X_1\times X_2$, we put
\[
\partial_{x_1}f(x)\{z_1\}=f_{x_1}(x;z_1)=f_{x_1}(x)z_1=\lim_{t\to0}\frac{f(x_1+tz_1,x_2)-f(x_1,x_2)}{t},
\]
\[
\partial_{x_2}f(x)\{z_2\}=f_{x_2}(x;z_2)=f_{x_2}(x)z_2=\lim_{t\to0}\frac{f(x_1,x_2+tz_2)-f(x_1,x_2)}{t}.
\]
They are called partial derivatives. We define the total G-derivative as

\[
d_Gf(x)\{z\}=f_x(x;z)=\lim_{t\to0}\frac{f(x_1+tz_1,x_2+tz_2)-f(x_1,x_2)}{t}.
\]
For $f:X\to Y$ with $X=\prod_{i=1}^{n}X_i$, we define $\partial_{x_j}f(x)$ and $d_Gf(x)$ for $x=(x_1,\cdots,x_n)$ analogously.
(ii) If $X_1$, $X_2$, $Y$ are Banach spaces, we may define the above notions analogously.

Proposition 4.2.2. Let $\{X_i\}_{i=1}^{n}$, $Y$ be Fréchet spaces and let $U$ be an open set in $X=\prod_{i=1}^{n}X_i$.

(a) $f\in C_G^1(U:Y)$, i.e. $d_Gf(x)\{y\}$ exists and is continuous, if and only if the $\partial_{x_j}f(x)\{\cdot\}$ exist and are continuous, and we have, for $x=(x_i)_{i=1}^{n}$, $y=(y_i)_{i=1}^{n}\in X$,
\[
(4.2.1)\qquad d_Gf(x;y)=d_Gf(x)\{y\}=\sum_{i=1}^{n}f_{x_i}(x;y_i)=\sum_{i=1}^{n}f_{x_i}(x)\{y_i\}.
\]
(b) [Taylor's formula] Moreover, if $f\in C_G^p(U:Y)$, we have
\[
(4.2.2)\qquad f(x+y)=\sum_{k=0}^{p-1}\frac{1}{k!}d_G^kf(x)\{\underbrace{y,\cdots,y}_{k}\}+R_pf(x,y)
\quad\text{with}\quad\lim_{t\to0}t^{-(p-1)}R_pf(x,ty)=0\ \text{for }y\in X,
\]
where
\[
R_pf(x,y)=\int_0^1\frac{(1-s)^{p-1}}{(p-1)!}\frac{d^p}{ds^p}f(x+sy)\,ds.
\]
Proof. (4.2.1) is proved in Theorem 3.4.3 of [57] for $N=2$. (4.2.2) is given, for example, on p.101 of Keller [87], et al. □
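In the scalar case $X=Y=\mathbb{R}$, (4.2.2) is the classical Taylor formula with integral remainder, and it can be verified numerically; the function and helper names below are ours, and the integral is approximated by a simple midpoint rule.

```python
import math

def taylor_remainder_check(f, d, x, y, p, n=20000):
    """Check f(x+y) = sum_{k<p} f^(k)(x) y^k/k! + R_p with
    R_p = int_0^1 (1-s)^{p-1}/(p-1)! * y^p * f^(p)(x+sy) ds  (1-D case).
    d is the list [f, f', f'', ...] of derivatives."""
    partial = sum(d[k](x) * y**k / math.factorial(k) for k in range(p))
    h = 1.0 / n   # midpoint rule for the remainder integral
    R = sum((1 - (i + 0.5) * h)**(p - 1) / math.factorial(p - 1)
            * y**p * d[p](x + (i + 0.5) * h * y) for i in range(n)) * h
    return f(x + y), partial + R

# derivatives of sin: sin, cos, -sin, -cos
d = [math.sin, math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)]
lhs, rhs = taylor_remainder_check(math.sin, d, x=0.3, y=0.7, p=3)
print(abs(lhs - rhs) < 1e-8)   # True up to quadrature error
```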

4.2.2. Fr´echet-differentiability.

Definition 4.2.4 (see Definition 1.8 of L. Schwartz [124]). (i) Let $X$, $Y$ be Fréchet spaces, and let $U$ be an open subset of $X$. A function $\varphi:U\to Y$ is said to be horizontal (or tangential) at $0$ if and only if for each neighbourhood $V$ of $0$ in $Y$ there exist a neighbourhood $U'$ of $0$ in $X$ and a function $o(t):(-1,1)\to\mathbb{R}$ such that
\[
(4.2.3)\qquad \varphi(tU')\subset o(t)V\quad\text{with}\quad\lim_{t\to0}\frac{o(t)}{t}=0,
\]
i.e. for any seminorm $q_n$ on $Y$ and $\epsilon>0$, there exist a seminorm $p_m$ on $X$ and $\delta>0$ such that
\[
(4.2.4)\qquad q_n(\varphi(tx))\le\epsilon t\quad\text{for }p_m(x)<1,\ |t|\le\delta.
\]
From (4.2.4), putting $V=\{z\in Y\mid q_n(z)<1\}$, $U'=\{x\in X\mid p_m(x)<1\}$, we may recover (4.2.3).
(ii) For given Banach spaces $(X,\|\cdot\|_X)$ and $(Y,\|\cdot\|_Y)$, "horizontal" means
\[
\|\varphi(x)\|_Y\le\|x\|_X\,\psi(x)\ \text{with}\ \psi:X\to\mathbb{R},\ \lim_{x\to0}\psi(x)=0,\quad\text{i.e.}\ \|\varphi(x)\|_Y=o(\|x\|_X)\ \text{as}\ \|x\|_X\to0.
\]

Definition 4.2.5 (Fréchet differentiability). (i) (Definition 1.9 of [124]) Let $X$, $Y$ be Fréchet spaces with $U$ an open subset of $X$. We say that $f$ has a Fréchet (or F-)derivative (or $f$ is F-differentiable) at $x\in U$ if there exists a continuous linear map $A=A_x:X\to Y$ such that $\varphi(x;y)$ is horizontal w.r.t. $y$ at $0$, where $\varphi(x;y)$ is defined by

\[
\varphi(x;y)=f(x+y)-f(x)-A_xy.
\]
We call $A=A_x$ the F-derivative of $f$ at $x$, and we denote $A_xy$ by $d_Ff(x;y)$. Moreover, we write $f\in C_F^1(U:Y)$ if $f$ is F-differentiable and $d_Ff:U\times X\ni(x,y)\to d_Ff(x;y)\in Y$ is jointly continuous.
(ii) For Banach spaces, $f$ is F-differentiable at $x$ if there exists a continuous linear map $A=A_x:X\to Y$ satisfying
\[
\|f(x+y)-f(x)-Ay\|_Y=o(\|y\|_X)\quad\text{as }\|y\|_X\to0.
\]
Moreover, $f\in C_F^1(U:Y)$ if $f$ is F-differentiable and $X\ni x\to A_x\in L(X:Y)$ is continuous.

Remark 4.2.2. If $f$ is F-differentiable, then it is also G-differentiable. Moreover,
\[
f'_G(x;y)=d_Gf(x;y)=d_Ff(x;y)=f'_F(x;y).
\]

Definition 4.2.6 (Higher order derivatives). (i) Let $X$, $Y$ be Fréchet spaces with $U$ an open subset of $X$. An F-differentiable function $f:U\to Y$ is twice F-differentiable at $x\in U$ if $d_Ff:U\times X\ni(x,y)\to d_Ff(x;y)\in Y$ is F-differentiable at $x\in X$. That is, the function
\[
\psi(x;y,z)=d_Ff(x+z;y)-d_Ff(x;y)-d_F^2f(x)\{y,z\}
\]
is horizontal w.r.t. $z$ at $0$.
(ii) (p.72 of [12]) Let $X$, $Y$ be Banach spaces. An F-differentiable function $f:U\to Y$ is twice F-differentiable at $x\in U$ if $f'_F:X\to L(X:Y)$ is F-differentiable at $x\in X$ and $f''_F(x)$, the derivative of $f'_F(x)$, belongs to $L(X:L(X:Y))=L(X\times X:Y)$. $f\in C^2(U:Y)$ if (a) $f$ is twice F-differentiable for each $x\in U$ and (b) $f''_F(x):U\to L(X\times X:Y)$ is continuous.
(iii) Analogously, $N$-times F-differentiability is defined.

Definition 4.2.7 (Many variables case). (i) Let $U=\prod_{i=1}^{N}U_i$ with each $U_i$ an open subset of a Fréchet space $X_i$. For $x=(x_1,\cdots,x_N)\in U$ with $x_i\in U_i$ and $h_i\in X_i$ such that $x_i+h_i\in U_i$, suppose there exists $F_i(x;h_i)\in Y$ such that
\[
\varphi_i(x;h_i)=f(x_1,\cdots,x_{i-1},x_i+h_i,x_{i+1},\cdots,x_N)-f(x_1,\cdots,x_{i-1},x_i,x_{i+1},\cdots,x_N)-F_i(x;h_i)
\]
is horizontal w.r.t. $h_i$. We denote $F_i(x;h_i)$ by $\partial_{x_i}f(x)h_i$, the partial derivative of $f$ w.r.t. $x_i$.

(ii) (p.69 of [12]) In case the $X_i$ are Banach spaces, the partial derivative of $f$ w.r.t. $x_i$, $\partial_{x_i}f(x)$, is defined by
\[
f(x_1,\cdots,x_{i-1},x_i+h_i,x_{i+1},\cdots,x_N)-f(x_1,\cdots,x_{i-1},x_i,x_{i+1},\cdots,x_N)=\partial_{x_i}f(x)h_i+o(\|h_i\|).
\]
More generally, for each $x$ there may exist a continuous linear map $d_Ff:X\ni h\to d_Ff(x;h)\in Y$ such that
\[
\|f(x+h)-f(x)-d_Ff(x;h)\|_Y=o(\|h\|_X)\quad\text{for }h\in X.
\]
We denote also $d_Ff(x;h)=f'_F(x)\{h\}$ with $f'_F(x)\in L(X:Y)$. Moreover, there exist operators $\partial_{x_i}f(x)\in L(X_i:Y)$ such that
\[
(4.2.5)\qquad f'_F(x;h)=f'_F(x)\{h\}=\sum_{i=1}^{N}\partial_{x_i}f(x)\{h_i\}=\sum_{i=1}^{N}\partial_{x_i}f(x;h_i)\quad\text{with }h=(h_1,\cdots,h_N).
\]

4.3. Functions on superspace

4.3.1. Grassmann continuation. Let $\phi(q)$ be a $C$-valued function on an open set $\Omega\subset\mathbb{R}^m$, that is,
\[
\phi(q)=\sum_{I\in\mathcal{I}}\phi_I(q)\,\sigma^I\quad\text{with}\quad\phi_I:\Omega\ni q\to\phi_I(q)\in\mathbb{C}.
\]
By the definition of the topology of $C$, we have
\[
\lim_{q\to q_0}\phi(q)=\sum_{I\in\mathcal{I}}\Big(\lim_{q\to q_0}\phi_I(q)\Big)\sigma^I.
\]
The differentiation and integration of such $\phi(q)$ are defined by

\[
\frac{\partial}{\partial q_j}\phi(q)=\sum_{I\in\mathcal{I}}\frac{\partial}{\partial q_j}\phi_I(q)\,\sigma^I
\quad\text{and}\quad
\int_\Omega dq\,\phi(q)=\sum_{I\in\mathcal{I}}\Big(\int_\Omega dq\,\phi_I(q)\Big)\sigma^I.
\]

We say $\phi\in C^\infty(\Omega:C)$ if $\phi_I\in C^\infty(\Omega:\mathbb{C})$ for each $I\in\mathcal{I}$.

Remark 4.3.1. If we use a Banach-Grassmann algebra instead of a Fréchet-Grassmann algebra, we need to check whether $\sum_{I\in\mathcal{I}}|\phi_I(q)|<\infty$, etc., which seems cumbersome or rather impossible to check when applying it to concrete problems.

Lemma 4.3.1. Let $\phi(t)$ and $\Phi(t)$ be continuous $C$-valued functions on an interval $[a,b]\subset\mathbb{R}$. Then,
(1) $\int_a^b dt\,\phi(t)$ exists,
(2) if $\Phi'(t)=\phi(t)$ on $[a,b]$, then $\int_a^b dt\,\phi(t)=\Phi(b)-\Phi(a)$,
(3) if $\lambda\in C$ is a constant, then
\[
\int_a^b dt\,(\phi(t)\cdot\lambda)=\Big(\int_a^b dt\,\phi(t)\Big)\cdot\lambda
\quad\text{and}\quad
\int_a^b dt\,(\lambda\cdot\phi(t))=\lambda\cdot\int_a^b dt\,\phi(t).
\]
Moreover, we may generalize the above lemma to a $C$-valued function $\phi(q)$ on an open set $\Omega\subset\mathbb{R}^m$.

Definition 4.3.1. A set $U_{\mathrm{ev}}\subset\mathbb{R}^{m|0}=\mathbb{R}_{\mathrm{ev}}^m$ is called an even superdomain if $U=\pi_B(U_{\mathrm{ev}})\subset\mathbb{R}^m$ is open and connected and $\pi_B^{-1}(\pi_B(U_{\mathrm{ev}}))=U_{\mathrm{ev}}$. When $U\subset\mathbb{R}^{m|n}$ is represented by $U=U_{\mathrm{ev}}\times\mathbb{R}_{\mathrm{od}}^n$ with an even superdomain $U_{\mathrm{ev}}\subset\mathbb{R}^{m|0}$, $U$ is called a superdomain in $\mathbb{R}^{m|n}$.

Proposition 4.3.1. Let $U_{\mathrm{ev}}\subset\mathbb{R}^{m|0}$ be an even superdomain. Assume that $f$ is a smooth function from $\mathbb{R}^m\supset U=\pi_B(U_{\mathrm{ev}})$ into $C$, denoted simply by $f\in C^\infty(U:C)$. That is, we have the expression

\[
(4.3.1)\qquad f(q)=\sum_{J\in\mathcal{I}}f_J(q)\,\sigma^J\quad\text{with}\quad f_J(q)\in C^\infty(U:\mathbb{C})\ \text{for each }J\in\mathcal{I}.
\]
Then we may define a mapping $\tilde f$ of $U_{\mathrm{ev}}$ into $C$, called the Grassmann continuation of $f$, by
\[
(4.3.2)\qquad \tilde f(x)=\sum_{|\alpha|\ge0}\frac{1}{\alpha!}\,\partial_q^\alpha f(x_B)\,x_S^\alpha
\quad\text{where}\quad
\partial_q^\alpha f(x_B)=\sum_{J\in\mathcal{I}}\partial_q^\alpha f_J(x_B)\,\sigma^J.
\]
Here, we put $x=(x_1,\cdots,x_m)$, $x=x_B+x_S$ with $x_B=(x_{1,B},\cdots,x_{m,B})=(q_1,\cdots,q_m)=q\in U$, $x_S=(x_{1,S},\cdots,x_{m,S})$ and $x_S^\alpha=x_{1,S}^{\alpha_1}\cdots x_{m,S}^{\alpha_m}$.

Proof. [Since the circulation of our paper A. Inoue and Y. Maeda [80] is so limited, I repeat here the proof, whose main point is to check whether the mapping (4.3.2) is well-defined or not. Therefore, using the degree argument, we need to define $\tilde f^{[k]}$, the $k$-th degree component of $\tilde f$.]

Denoting by $x_{1,S}^{[k_1]}$ the $k_1$-th degree component of $x_{1,S}$, we get
\[
(x_{1,S}^{\alpha_1})^{[k_1]}=\sum(x_{1,S}^{[r_1]})^{p_{1,1}}\cdots(x_{1,S}^{[r_\ell]})^{p_{1,\ell}}.
\]
Here, the summation is taken over all partitions of the integer $\alpha_1$ into $\alpha_1=p_{1,1}+\cdots+p_{1,\ell}$ satisfying $\sum_{i=1}^{\ell}r_ip_{1,i}=k_1$, $r_i\ge0$. Using these notations, we put
\[
(4.3.3)\qquad \tilde f^{[k]}(x)=\sum_{\substack{|\alpha|\le k,\ k_0+k_1+\cdots+k_m=k\\ k_1,\cdots,k_m\ \text{even}}}\frac{1}{\alpha!}\,(\partial_q^\alpha f)^{[k_0]}(x_B)\,(x_{1,S}^{\alpha_1})^{[k_1]}\cdots(x_{m,S}^{\alpha_m})^{[k_m]}
\]
where

\[
(\partial_q^\alpha f)^{[k_0]}(x_B)=\sum_{|J|=k_0}\partial_q^\alpha f_J(x_B)\,\sigma^J.
\]
Or, more precisely, we have
\[
\tilde f^{[0]}(x)=f^{[0]}(x_B),\qquad \tilde f^{[1]}(x)=f^{[1]}(x_B),
\]
\[
\tilde f^{[2]}(x)=f^{[2]}(x_B)+\sum_{j=1}^{m}(\partial_{q_j}f)^{[0]}(x_B)\,(x_{j,S})^{[2]},
\]
\[
\tilde f^{[3]}(x)=f^{[3]}(x_B)+\sum_{j=1}^{m}(\partial_{q_j}f)^{[1]}(x_B)\,(x_{j,S})^{[2]},
\]
\[
\tilde f^{[4]}(x)=f^{[4]}(x_B)+\sum_{j=1}^{m}(\partial_{q_j}f)^{[2]}(x_B)\,(x_{j,S})^{[2]}
+\sum_{j=1}^{m}\frac{1}{2}(\partial_{q_j}^2f)^{[0]}(x_B)\,(x_{j,S}^2)^{[4]}
+\sum_{j\ne k}(\partial_{q_j}^2\!{}_{q_k}f)^{[0]}(x_B)\,(x_{j,S})^{[2]}(x_{k,S})^{[2]},\ \text{etc.}
\]
Since $\tilde f^{[j]}(x)$ and $\tilde f^{[k]}(x)$ ($j\ne k$) lie in different degree components of $C$, we may take the sum $\sum_{j=0}^{\infty}\tilde f^{[j]}(x)\in C=\oplus_{k=0}^{\infty}C^{[k]}$, which is denoted by $\tilde f(x)$. Therefore, rearranging the above "summation", we get the rather "familiar" expression as in (4.3.2). □
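The Grassmann continuation (4.3.2) can be illustrated in a toy Grassmann algebra with finitely many generators (the text works with countably many; the truncation and all names below are our own). For $f(q)=q^2$, formula (4.3.2) gives $\tilde f(x)=x_B^2+2x_Bx_S$ since $x_S^2=0$ when $x_S=\sigma_1\sigma_2$, and this agrees with squaring $x$ in the algebra:

```python
def gmul(a, b):
    """Multiply Grassmann-algebra elements stored as {sorted index-tuple: coeff},
    using sigma_i sigma_j = -sigma_j sigma_i and sigma_i^2 = 0."""
    out = {}
    for I, ca in a.items():
        for J, cb in b.items():
            if set(I) & set(J):
                continue  # a repeated generator kills the term
            sign, lst = 1, list(I + J)
            for i in range(len(lst)):          # bubble sort, tracking parity
                for j in range(len(lst) - 1 - i):
                    if lst[j] > lst[j + 1]:
                        lst[j], lst[j + 1] = lst[j + 1], lst[j]
                        sign = -sign
            key = tuple(lst)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# x = x_B + x_S with body x_B = 2 and even soul x_S = sigma_1 sigma_2
x = {(): 2.0, (1, 2): 1.0}
direct = gmul(x, x)                       # x^2 computed in the algebra
taylor = {(): 4.0, (1, 2): 4.0}           # x_B^2 + 2 x_B x_S from (4.3.2)
print(direct == taylor)   # True
```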

Remark 4.3.2. Concerning the summation in (4.3.3), the summation w.r.t. $\alpha$ is clearly finite, but that in $(x_{1,S}^{\alpha_1})^{[k_1]}$ w.r.t. $J\in\mathcal{I}$ is infinite for $|J|=k_1$.

Corollary 4.3.1. If $f$ and $\tilde f$ are given as above, then

(i) f˜ is continuous and

(ii) $\tilde f(x)=0$ in $U_{\mathrm{ev}}$ implies $f(x_B)=0$ in $U$. Moreover, if we define the partial derivative of $\tilde f$ in the $j$-th direction by
\[
(4.3.4)\qquad \partial_{x_j}\tilde f(x)=\frac{d}{dt}\tilde f(x+te_{(j)})\Big|_{t=0}
\quad\text{where}\quad e_{(j)}=(0,\cdots,0,\overset{j}{1},0,\cdots,0)\in\mathbb{R}^{m|0},
\]
then we get

\[
(4.3.5)\qquad \partial_{x_j}\tilde f(x)=\widetilde{\partial_{q_j}f}(x)\quad\text{for }j=1,\cdots,m.
\]

Proof. Let $y_j=y_{j,B}+y_{j,S}\in R_{\mathrm{ev}}$. For $y_{(j)}=y_je_{(j)}=y_{j,B}e_{(j)}+y_{j,S}e_{(j)}=y_{(j),B}+y_{(j),S}\in\mathbb{R}^{m|0}$, as

\[
\frac{d}{dt}\tilde f(x+ty_{(j)})=\frac{d}{dt}\Big\{\sum_\alpha\frac{1}{\alpha!}\Big(\sum_J\partial_q^\alpha f_J(x_B+ty_{(j),B})\,\sigma^J\Big)(x_S+ty_{(j),S})^\alpha\Big\},
\]
we get easily

\[
\frac{d}{dt}\tilde f(x+ty_{(j)})\Big|_{t=0}
=y_{j,B}\sum_\alpha\frac{1}{\alpha!}\Big(\sum_J\partial_q^\alpha\partial_{q_j}f_J(x_B)\,\sigma^J\Big)x_S^\alpha
+y_{j,S}\sum_{\check\alpha}\frac{1}{\check\alpha!}\Big(\sum_J\partial_q^{\check\alpha}\partial_{q_j}f_J(x_B)\,\sigma^J\Big)x_S^{\check\alpha}
=y_j\sum_\alpha\frac{1}{\alpha!}\,\partial_q^\alpha\partial_{q_j}f(x_B)\,x_S^\alpha
=y_j\,\widetilde{\partial_{q_j}f}(x).
\]

Here $\check\alpha=(\alpha_1,\cdots,\alpha_{j-1},\alpha_j-1,\alpha_{j+1},\cdots,\alpha_m)$. Putting $y_j=y_{j,B}+y_{j,S}=1$ in the above, we have (4.3.5). □

Remark 4.3.3. (i) By the same argument as above, we get, for $y=(y_1,\cdots,y_m)\in\mathbb{R}^{m|0}$,
\[
(4.3.6)\qquad \frac{d}{dt}\tilde f(x+ty)\Big|_{t=0}
=\sum_{j=1}^{m}y_j\sum_\alpha\frac{1}{\alpha!}\,\partial_q^\alpha\partial_{q_j}f(x_B)\,x_S^\alpha
=\sum_{j=1}^{m}y_j\,\partial_{x_j}\tilde f(x).
\]

(ii) Unless there occurs confusion, we denote f˜ simply by f.

4.3.2. Supersmooth functions and their derivatives. How should we define the continuity and differentiability of functions from $\mathbb{R}^{m|n}$ to $C$?

Problem 4.3.1. Since $R$, $C$ and $\mathbb{R}^{m|n}$ are Fréchet spaces, we may define G- or F-differentiable functions as before. By the way, how should we take into account the ring structure of the Fréchet-Grassmann algebra in the definition of total differentiability?

In order to answer this problem, we introduce "the desired or tractable form of functions on $\mathbb{R}^{m|n}$" and call them "supersmooth" (called "superfields" by physicists). In the next section, we study their properties and characterize them.

Definition 4.3.2. (1) Let $U_{\mathrm{ev}}\subset\mathbb{R}^{m|0}$ be an even superdomain. A mapping $F$ from $U_{\mathrm{ev}}$ to $C$ is called supersmooth if there exists a smooth mapping $f$ from $U=\pi_B(U_{\mathrm{ev}})$ to $C$ such that $F=\tilde f$. We denote the set of supersmooth functions on $U_{\mathrm{ev}}$ by $\mathcal{C}_{SS}(U_{\mathrm{ev}}:C)$.
(2) Let $U$ be a superdomain in $\mathbb{R}^{m|n}$. A mapping $f$ from $U$ to $C$ is called supersmooth if it is decomposed as
\[
(4.3.7)\qquad f(x,\theta)=\sum_{|a|\le n}\theta^af_a(x).
\]
Here, $a=(a_1,\cdots,a_n)\in\{0,1\}^n$, $\theta^a=\theta_1^{a_1}\cdots\theta_n^{a_n}$ and $f_a(x)\in\mathcal{C}_{SS}(U_{\mathrm{ev}}:C)$. Without mentioning it, we always assume that $f_a(x)\in C_{\mathrm{ev}}$ (or $\in C_{\mathrm{od}}$) for all $a$, and call such functions even (or odd) supersmooth functions, denoted by $\mathcal{C}_{SS}(U:C)$. Moreover,
\[
\mathcal{C}_{SS/}=\{f(x,\theta)\in\mathcal{C}_{SS}(U:C)\mid f_a(x_B)\in\mathbb{C}\}.
\]
Therefore, if $f\in\mathcal{C}_{SS/}$, $f_a(x)$ may be put on either side of $\theta^a$.
(3) Let $f\in\mathcal{C}_{SS}(U:C)$. We put
\[
(4.3.8)\qquad
F_j(X)=\sum_{|a|\le n}\theta^a\,\partial_{x_j}f_a(x)\quad\text{for }j=1,2,\cdots,m,
\qquad
F_{s+m}(X)=\sum_{|a|\le n}(-1)^{l(a)}\,\theta_1^{a_1}\cdots\theta_s^{a_s-1}\cdots\theta_n^{a_n}f_a(x)\quad\text{for }s=1,2,\cdots,n,
\]
with $l(a)=\sum_{j=1}^{s-1}a_j$ and $\theta_s^{-1}=0$. In this case, $F_\kappa(X)$ is the partial derivative of $f$ at $X=(x,\theta)=(X_\mu)$ w.r.t. $X_\kappa$:
\[
(4.3.9)\qquad
F_j(X)=\frac{\partial}{\partial x_j}f(x,\theta)=\partial_{x_j}f(x,\theta)=f_{x_j}(x,\theta)\quad\text{for }j=1,2,\cdots,m,
\qquad
F_{m+s}(X)=\frac{\partial}{\partial\theta_s}f(x,\theta)=\partial_{\theta_s}f(x,\theta)=f_{\theta_s}(x,\theta)\quad\text{for }s=1,2,\cdots,n,
\]
\[
(4.3.10)\qquad F_\kappa(X)=\partial_{X_\kappa}f(X)=f_{X_\kappa}(X)\quad\text{for }\kappa=1,\cdots,m+n.
\]

Remark 4.3.4. (1) In this lecture, we use left odd derivatives. This naming stems from putting the variable with respect to which we differentiate on the far left. Some authors (see, for example, V.S. Vladimirov and I.V. Volovich [138]) give the name right derivative to this.

Put
\[
\mathcal{C}_{SS}^{(r)}(U:C)=\Big\{f(x,\theta)=\sum_{|a|\le n}f_a(x)\,\theta^a\ \Big|\ f_a(x)\in\mathcal{C}_{SS}(U_{\mathrm{ev}}:C)\Big\}.
\]
For $f\in\mathcal{C}_{SS}^{(r)}(U:C)$ with $j=1,2,\cdots,m$ and $s=1,2,\cdots,n$, we note here the right derivatives:
\[
F_j^{(r)}(X)=\sum_{|a|\le n}\partial_{x_j}f_a(x)\,\theta^a,
\qquad
F_{s+m}^{(r)}(X)=\sum_{|a|\le n}(-1)^{r(a)}f_a(x)\,\theta_1^{a_1}\cdots\theta_s^{a_s-1}\cdots\theta_n^{a_n}.
\]
We put here $r(a)=\sum_{j=s+1}^{n}a_j$. $F_\kappa^{(r)}(X)$ is called the (right) partial $\kappa$-derivative w.r.t. $X_\kappa$ at $X=(x,\theta)$, denoted by
\[
F_j^{(r)}(X)=f(x,\theta)\frac{\overleftarrow{\partial}}{\partial x_j}=\partial_{x_j}^{(r)}f(x,\theta),
\qquad
F_{m+s}^{(r)}(X)=f(x,\theta)\frac{\overleftarrow{\partial}}{\partial\theta_s}=f(x,\theta)\overleftarrow{\partial_{\theta_s}}.
\]

(2) Since we use countably infinite Grassmann generators, the decomposition (4.3.7) is unique. In fact, if $\sum_a\theta^af_a(x)\equiv0$ on $U$, then $f_a(x)\equiv0$ (see p.322 in Vladimirov and Volovich [138]).
(3) The higher derivatives are defined analogously. For a multi-index $\alpha=(\alpha_1,\cdots,\alpha_m)\in(\mathbb{N}\cup\{0\})^m$ and $a=(a_1,\cdots,a_n)\in\{0,1\}^n$, we put
\[
\partial_x^\alpha=\partial_{x_1}^{\alpha_1}\cdots\partial_{x_m}^{\alpha_m}
\quad\text{and}\quad
\partial_\theta^a=\partial_{\theta_1}^{a_1}\cdots\partial_{\theta_n}^{a_n}.
\]
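The sign bookkeeping $(-1)^{l(a)}$ with $l(a)=\sum_{j<s}a_j$ in the left odd derivative can be exercised on $\theta$-monomials; the toy representation below (a monomial is stored as its exponent tuple $a\in\{0,1\}^n$, names ours) verifies on $\theta_1\theta_2$ that the odd derivatives anticommute.

```python
def dtheta(s, mono):
    """Left derivative d/dtheta_s on a monomial theta^a, a in {0,1}^n,
    with s a 0-based index: result is (sign, new exponent tuple) or None,
    where sign = (-1)^(a_1 + ... + a_{s-1})."""
    if mono[s] == 0:
        return None           # the derivative annihilates the monomial
    sign = (-1) ** sum(mono[:s])
    return sign, mono[:s] + (0,) + mono[s + 1:]

m = (1, 1)                    # theta_1 theta_2
s1 = dtheta(1, m)             # d/dtheta_2 (theta_1 theta_2) = -theta_1
s2 = dtheta(0, s1[1])         # then d/dtheta_1; composite sign below
print(s1[0] * s2[0])          # -1
t1 = dtheta(0, m)             # d/dtheta_1 (theta_1 theta_2) = theta_2
t2 = dtheta(1, t1[1])
print(t1[0] * t2[0])          # +1, so the two derivatives anticommute
```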

Assume that for $X=(x,\theta)$, $Y=(y,\omega)\in\mathbb{R}^{m|n}$, we have $X+tY\in U$ for any $t\in[0,1]$. Repeating the argument used in the proof of Corollary 4.3.1, for $f\in\mathcal{C}_{SS}(U:C)$ the following holds:
\[
(4.3.11)\qquad \frac{d}{dt}f(X+tY)\Big|_{t=0}
=\sum_{j=1}^{m}y_j\frac{\partial}{\partial x_j}f(X)+\sum_{s=1}^{n}\omega_s\frac{\partial}{\partial\theta_s}f(X).
\]

Definition 4.3.3. A function $f$ from the superdomain $U\subset\mathbb{R}^{m|n}$ to $C$ is called G-differentiable at $X=(x,\theta)$ if
\[
f(x+y,\theta+\omega)-f(x,\theta)=\sum_i(y_iF_i+\omega_sF_{m+s})+\sum_i(y_iR_i+\omega_sR_{m+s}).
\]
Here,
\[
d(R_i,0)\to0,\quad d(R_{m+s},0)\to0,\quad\text{as}\quad d_{m|n}((y,\omega),0)\to0.
\]

4.3.2.1. Taylor's Theorem. For $f\in\mathcal{C}_{SS}(U:C)$, we have
\[
(4.3.12)\qquad \frac{d}{dt}f(X+tY)\Big|_{t=0}
=\sum_{j=1}^{m}y_j\frac{\partial}{\partial x_j}f(X)+\sum_{s=1}^{n}\omega_s\frac{\partial}{\partial\theta_s}f(X).
\]

From this, we define

Definition 4.3.4. For a supersmooth function $f$, we define its differential $df$ as
\[
df(X)=d_Xf(X)=\sum_{\kappa=1}^{m+n}dX_\kappa\,\frac{\partial f(X)}{\partial X_\kappa},
\]
or
\[
df(x,\theta)=\sum_{j=1}^{m}dx_j\,\frac{\partial f(x,\theta)}{\partial x_j}+\sum_{s=1}^{n}d\theta_s\,\frac{\partial f(x,\theta)}{\partial\theta_s}.
\]
From the aforementioned Definition 5.2.1, we have

Proposition 4.3.2. Let $U$ be a superdomain in $\mathbb{R}^{m|n}$. For any $f,g\in\mathcal{C}_{SS}(U:C)$, the product $fg$ belongs to $\mathcal{C}_{SS}(U:C)$, and the differentials $d_Xf(X)$ and $d_Xg(X)$ are continuous linear maps from $\mathbb{R}^{m|n}$ to $C^{m+n}$.

Moreover,

(1) For any homogeneous elements $\lambda,\mu\in C$,
\[
(4.3.13)\qquad d_X(\lambda f+\mu g)(X)=(-1)^{p(\lambda)p(X)}\lambda\,d_Xf(X)+(-1)^{p(\mu)p(X)}\mu\,d_Xg(X).
\]
(2) (Leibniz' formula)

\[
(4.3.14)\qquad \partial_{X_\kappa}[f(X)g(X)]=(\partial_{X_\kappa}f(X))\,g(X)+(-1)^{p(X_\kappa)p(f(X))}f(X)\,(\partial_{X_\kappa}g(X)).
\]

Proof. (4.3.13) is trivial. For $f,g\in\mathcal{C}_{SS}(U:C)$, we have
\[
(4.3.15)\qquad \frac{d}{dt}\big[f(X+tY)g(X+tY)\big]\Big|_{t=0}
=\Big(\sum_{j=1}^{m}y_j\frac{\partial}{\partial x_j}f(X)+\sum_{s=1}^{n}\omega_s\frac{\partial}{\partial\theta_s}f(X)\Big)g(X)
+f(X)\Big(\sum_{j=1}^{m}y_j\frac{\partial}{\partial x_j}g(X)+\sum_{s=1}^{n}\omega_s\frac{\partial}{\partial\theta_s}g(X)\Big).
\]
Therefore, we get the desired result. □

4.3.3. Characterization of supersmooth functions. In the previous lecture, we introduced abruptly the class $\mathcal{C}_{SS}(U:C)$ of functions on a superdomain $U\subset\mathbb{R}^{m|n}$. But is such an introduction reasonable, and is this class stable under the ordinary operations? How may we characterize it?

Though there is multiplication in $\mathbb{C}$, there is none in $\mathbb{R}^2$. How does the ring structure of the domain of definition affect the total differentiability of functions? How do we characterize such functions?
(a) We decompose a function $f(z)$ from $\mathbb{C}$ to $\mathbb{C}$ as
\[
\mathbb{C}\ni z=x+iy\ \longrightarrow\ f(z)=u(x,y)+iv(x,y),\quad u(x,y)=\Re f(z)\in\mathbb{R},\ v(x,y)=\Im f(z)\in\mathbb{R}.
\]
For $z=x+iy$ and $z_0=x_0+iy_0$, since $|z|=\sqrt{x^2+y^2}$, we have
\[
|f(z)-f(z_0)|=|u(x,y)+iv(x,y)-(u(x_0,y_0)+iv(x_0,y_0))|
=\sqrt{(u(x,y)-u(x_0,y_0))^2+(v(x,y)-v(x_0,y_0))^2}.
\]
Therefore, if $f(z)$ is continuous at $z=z_0$, then $u(x,y)$, $v(x,y)$ are continuous at $(x_0,y_0)$ as real-valued functions of two real variables.
(b) A function $f(z)$ from $\mathbb{C}$ to $\mathbb{C}$ is called totally differentiable at $z=z_0$ if there exists a number $\gamma\in\mathbb{C}\cong L(\mathbb{C}:\mathbb{C})$ such that
\[
(4.3.16)\qquad |f(z_0+w)-f(z_0)-\gamma w|=o(|w|)\quad(|w|\to0).
\]
This number $\gamma=\alpha+i\beta$ ($\alpha,\beta\in\mathbb{R}$) is denoted by $f'(z_0)$. We check a little more precisely: putting $w=h+ik$ ($h,k\in\mathbb{R}$), so that $\gamma w=(\alpha+i\beta)(h+ik)=(h\alpha-k\beta)+i(k\alpha+h\beta)$, we

have
\[
|f(z_0+w)-f(z_0)-\gamma w|
=\Big([u(x_0+h,y_0+k)-u(x_0,y_0)-(h\alpha-k\beta)]^2
+[v(x_0+h,y_0+k)-v(x_0,y_0)-(k\alpha+h\beta)]^2\Big)^{1/2}.
\]
Therefore, when $(h,k)\to0$ (i.e. $\sqrt{h^2+k^2}\to0$),
\[
(4.3.17)\qquad
|u(x_0+h,y_0+k)-u(x_0,y_0)-(h\alpha-k\beta)|=o(\sqrt{h^2+k^2}),\quad
|v(x_0+h,y_0+k)-v(x_0,y_0)-(k\alpha+h\beta)|=o(\sqrt{h^2+k^2}).
\]
From the first equation above, putting $h=0$ and letting $k\to0$, we get $\beta=-u_y(x_0,y_0)$, and putting $k=0$ and letting $h\to0$, $\alpha=u_x(x_0,y_0)$. From the second one, we have $\alpha=v_y(x_0,y_0)$ and $\beta=v_x(x_0,y_0)$. Therefore, we get the system of PDEs
\[
(4.3.18)\qquad u_x=v_y,\quad u_y=-v_x,
\]
called the Cauchy-Riemann equations. If real-valued functions $u$, $v$ of two real variables satisfy the Cauchy-Riemann equations, then they belong to $C^\infty$; moreover, $u(x,y)+iv(x,y)$ is represented by a convergent power series(1) in $z=x+iy$, which is written $f(z)$ and called analytic. Without confusion, we write
\[
\frac{\partial}{\partial z}f(z)=f'(z)=\alpha+i\beta=u_x-iu_y=v_y+iv_x.
\]
(c) Using the above notation, we consider a map $\Phi$ from $\mathbb{R}^2$ to $\mathbb{R}^2$:
\[
\Phi:\mathbb{R}^2\ni\begin{pmatrix}x\\y\end{pmatrix}\to\begin{pmatrix}u(x,y)\\v(x,y)\end{pmatrix}\in\mathbb{R}^2.
\]
$\Phi$ is said to be totally differentiable at $(x_0,y_0)$ if there exists $\Phi'_F(x_0,y_0)\in L(\mathbb{R}^2:\mathbb{R}^2)$ such that
\[
\Big\|\Phi(x_0+h,y_0+k)-\Phi(x_0,y_0)-\Phi'_F(x_0,y_0)\begin{pmatrix}h\\k\end{pmatrix}\Big\|=o\Big(\Big\|\begin{pmatrix}h\\k\end{pmatrix}\Big\|\Big).
\]
Representing $\Phi'_F(x,y)$ as
\[
(4.3.19)\qquad \Phi'_F(x,y)=\begin{pmatrix}u_x(x,y)&u_y(x,y)\\v_x(x,y)&v_y(x,y)\end{pmatrix},
\]
we have
\[
(4.3.20)\qquad
|u(x_0+h,y_0+k)-u(x_0,y_0)-(hu_x(x_0,y_0)+ku_y(x_0,y_0))|=o(\sqrt{h^2+k^2}),\quad
|v(x_0+h,y_0+k)-v(x_0,y_0)-(hv_x(x_0,y_0)+kv_y(x_0,y_0))|=o(\sqrt{h^2+k^2}).
\]
Identifying $\mathbb{R}^2$ with $\mathbb{C}$, we seek a condition under which $\Phi'_F(x_0,y_0)\in L(\mathbb{R}^2:\mathbb{R}^2)$ is regarded as a multiplication in $\mathbb{C}$.
When an element $a+ib\in\mathbb{C}$ acts as a multiplication operator, it acts as a linear operator in $\mathbb{R}^2$, that is, $a+ib$ is identified with the matrix $\begin{pmatrix}a&-b\\b&a\end{pmatrix}$. Since
\[
\Phi'_F(x,y)=\begin{pmatrix}u_x(x,y)&u_y(x,y)\\v_x(x,y)&v_y(x,y)\end{pmatrix},
\]
we have $u_x=v_y$, $u_y=-v_x$. In another presentation, $\Phi'_F(x_0,y_0)\in L(\mathbb{R}^2:\mathbb{R}^2)$ is not only $\mathbb{R}$- but also $\mathbb{C}$-linear, that is, for any $a,b\in\mathbb{R}$,
\[
\begin{pmatrix}a&-b\\b&a\end{pmatrix}\Phi'_F(x_0,y_0)=\Phi'_F(x_0,y_0)\begin{pmatrix}a&-b\\b&a\end{pmatrix}
\]
(1) In general, this is proved by applying Cauchy's integral representation.

hold. Here, $b\ne0$ is essential.
(d) Generalizing the above to a map $\phi$ from $\mathbb{C}^m$ to $\mathbb{C}$,
\[
\phi:\mathbb{C}^m\ni z={}^t(z_1,\cdots,z_m)\to\phi(z)\in\mathbb{C}.
\]
This is totally differentiable at $z$ if there exists $\phi'_F(z)\in L(\mathbb{C}^m:\mathbb{C})$ such that
\[
\|\phi(z+w)-\phi(z)-\phi'_F(z)w\|=o(\|w\|).
\]
By the same calculation, $\phi'_F(z)$ is
\[
(4.3.21)\qquad \phi'_F(z)=\Big(\frac{\partial\phi(z)}{\partial z_1},\cdots,\frac{\partial\phi(z)}{\partial z_m}\Big),
\]
and $\phi$ is analytic w.r.t. each variable. From Hartogs' theorem, $\phi$ is holomorphic, having a convergent power series expansion.
(e) A map $\Phi$ from $\mathbb{C}^m$ to $\mathbb{C}^n$ is totally differentiable at $z$ if there exists $\Phi'_F(z)\in L(\mathbb{C}^m:\mathbb{C}^n)$,

\[
\Phi'_F(z)=\begin{pmatrix}\dfrac{\partial\Phi_1(z)}{\partial z_1}&\cdots&\dfrac{\partial\Phi_1(z)}{\partial z_m}\\ \vdots&&\vdots\\ \dfrac{\partial\Phi_n(z)}{\partial z_1}&\cdots&\dfrac{\partial\Phi_n(z)}{\partial z_m}\end{pmatrix},
\]
such that
\[
\|\Phi(z+w)-\Phi(z)-\Phi'_F(z)w\|=o(\|w\|).
\]

Problem 4.3.2. Does there exist a Cauchy-Riemann equation corresponding to supersmooth functions?
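The classical Cauchy-Riemann system (4.3.18) derived in (a)-(b) can be checked numerically for a concrete holomorphic function, say $f(z)=z^2$ with $u=x^2-y^2$, $v=2xy$ (the helper names and the central-difference scheme below are our own):

```python
def u(x, y): return x * x - y * y   # Re(z^2)
def v(x, y): return 2 * x * y       # Im(z^2)

def pd(g, x, y, which, h=1e-6):
    # central-difference partial derivative of g at (x, y)
    if which == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 0.8, -1.3
cr1 = pd(u, x0, y0, 'x') - pd(v, x0, y0, 'y')   # u_x - v_y, should be 0
cr2 = pd(u, x0, y0, 'y') + pd(v, x0, y0, 'x')   # u_y + v_x, should be 0
print(abs(cr1) < 1e-8, abs(cr2) < 1e-8)
```

Central differences are exact for quadratic polynomials up to rounding, so both residuals vanish to machine precision here.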

4.4. Super differentiable functions on Rm|n

4.4.1. Superdifferentiability of functions on Rm|n.

Definition 4.4.1 (see A. Yagi [144]). Let $f$ be a $C$-valued function on a superdomain $U\subset\mathbb{R}^{m|n}$. Then $f$ is said to be super $C_G^1$-differentiable, denoted by $f\in\mathcal{G}_{SD}^1(U:C)$ or simply $f\in\mathcal{G}_{SD}^1$, if there exist $C$-valued continuous functions $F_A$ ($1\le A\le m+n$) on $U$ such that
\[
(4.4.1)\qquad \frac{d}{dt}f(X+tH)\Big|_{t=0}=f'_G(X,H)=\sum_{A=1}^{m+n}H_AF_A(X)
\]
or

\[
f(X+tH)-f(X)-tf'_G(X,H)\to0\ \text{in }C\quad\text{when }t\to0,
\]
for each $X\in U$ and $H\in\mathbb{R}^{m|n}$, where $f(X+tH)$ is considered as a $C$-valued function of $t\in\mathbb{R}$. We denote $F_A(X)$ by $f_{X_A}(X)$. Moreover, for $r\ge2$, $f$ is said to be in $\mathcal{G}_{SD}^r$ if the $F_A$ are $\mathcal{G}_{SD}^{r-1}$. $f$ is said to be $\mathcal{G}_{SD}^\infty$, or superdifferentiable, if $f$ is $\mathcal{G}_{SD}^r$ for all $r\ge1$.

Definition 4.4.2. Let $f$ be a $C$-valued function on a superdomain $U\subset\mathbb{R}^{m|n}$. A function $f$ is said to be super $C_F^1$-differentiable, denoted by $f\in\mathcal{F}_{SD}^1(U:C)$ or simply $f\in\mathcal{F}_{SD}^1$, if there exist $C$-valued continuous functions $F_A$ ($1\le A\le m+n$) on $U$ and functions $\rho_A:U\times\mathbb{R}^{m|n}\to C$ such that
\[
(4.4.2)\qquad
\text{(a)}\ f(X+H)-f(X)=\sum_{A=1}^{m+n}H_AF_A(X)+\sum_{A=1}^{m+n}H_A\rho_A(X;H),
\qquad
\text{(b)}\ \rho_A(X,H)\to0\ \text{in }C\ \text{when }H\to0\ \text{in }\mathbb{R}^{m|n},
\]

for each $X\in U$ and $X+H\in U$. $f$ is said to be super $C_F^2$-differentiable when $F_A\in\mathcal{F}_{SD}^1(\mathbb{R}^{m|n}:C)$ ($1\le A\le m+n$). Analogously, we may define super $C_F^r$-differentiability, and we say $f$ is superdifferentiable, denoted by $\mathcal{F}_{SD}^\infty$, if it is super $C_F^\infty$-differentiable.

Question 4.4.1. Does there exist a difference between $\mathcal{G}_{SD}^1$ and $\mathcal{F}_{SD}^1$, or between $\mathcal{G}_{SD}^\infty$ and $\mathcal{F}_{SD}^\infty$?

Remark 4.4.1. Let $V$ be an open set in $\mathbb{C}^{m|n}$. When $f:V\to C$ is in $\mathcal{F}_{SD}^\infty$, $f$ is also said to be superanalytic.

4.4.2. Remarks on Grassmann continuation. From Taylor's expansion formula (4.2.2), stated before for general Fréchet spaces, we get

Lemma 4.4.1. For $f(q)\in C^\infty(\mathbb{R}^m)$, its Grassmann continuation $\tilde f$ has the following Taylor expansion formula: for any $N$, there exists $\tilde\tau_N(x,y)\in C$ such that
\[
(4.4.3)\qquad \tilde f(x+y)=\sum_{|\alpha|=0}^{N}\frac{1}{\alpha!}\,\partial_x^\alpha\tilde f(x)\,y^\alpha+\tilde\tau_N(f;x,y).
\]
Here,
\[
\tilde\tau_N(f;x,y)=\sum_{|\alpha|=N+1}y^\alpha\int_0^1dt\,\frac{(1-t)^N}{N!}\,\partial_x^\alpha\tilde f(x+ty).
\]

Proof: Putting $q=x_B$ and $q'=y_B$ into Taylor's expansion formula (4.2.2), we have
\[
f(q+q')=\sum_{|\alpha|=0}^{N}\frac{1}{\alpha!}\,\partial_q^\alpha f(q)\,q'^\alpha
+\sum_{|\alpha|=N+1}q'^\alpha\int_0^1dt\,\frac{(1-t)^N}{N!}\,\partial_q^\alpha f(q+tq').
\]
Taking the Grassmann continuation of both sides, and remarking $\widetilde{\partial_q^\alpha f}(x)=\partial_x^\alpha\tilde f(x)$, $\widetilde{q'^\alpha}=y^\alpha$, we get the desired equality (4.4.3). □

Corollary 4.4.1. For $f(q)\in C^\infty(\mathbb{R}^m)$, $\tilde f$ is super F-differentiable.

Proof: We prove the case $m=1$. From the Lemma, we have
\[
\tilde f(x+y)-\tilde f(x)=y\tilde f'(x)+y\tau(x,y),\qquad
\tau(x,y)=y\int_0^1dt\,(1-t)\,\tilde f''(x+ty),
\]
and when $y\to0$ in $R_{\mathrm{ev}}$, then $\tau(x,y)\to0$ in $C$. □

Exercise 4.4.1. Prove more precisely the statement above: "when $y\to0$ in $R_{\mathrm{ev}}$, then $\tau(x,y)\to0$ in $C$".

Corollary 4.4.2. For $f(q)\in C^\infty(\mathbb{R}^m)$, its Grassmann continuation satisfies $\tilde f(x)\in\mathcal{F}_{SD}^1(\mathbb{R}^{m|0})$.

Proof: Putting $N=1$ in Taylor's expansion formula (4.4.3), we get
\[
\tilde f(x+y)=\tilde f(x)+\sum_{j=1}^{m}y_j\,\partial_{x_j}\tilde f(x)+\sum_{j=1}^{m}y_j\rho_j(x,y).
\]

Remarking
\[
(4.4.4)\qquad \partial_{x_j}\tilde f(x)=\widetilde{\partial_{q_j}f}(x)\in C,\qquad
\rho_j(x,y)=\sum_{k=1}^{m}y_kg_{j,k}(x,y),\quad
g_{j,k}(x,y)=\int_0^1(1-t)\,\partial_{x_jx_k}^2\tilde f(x+ty)\,dt,
\]
we need to prove

Claim 4.4.1. When $y\to0$ in $\mathbb{R}_{\mathrm{ev}}^m$, then $\rho_j(x,y)\to0$ for each $x\in\mathbb{R}_{\mathrm{ev}}^m$ and $j=1,\cdots,m$. That is, for any $\epsilon>0$, $j$ and $x\in\mathbb{R}_{\mathrm{ev}}^m$, there exists $\delta>0$ such that if $\mathrm{dist}_{m|0}(y)<\delta$, then $\mathrm{dist}_{1|0}(\rho_j(x,y))<\epsilon$.

Proof. Take any $I\in\mathcal{I}_{\mathrm{ev}}$ and decompose $I=J+K$. Remarking Corollary 4.4.1, we have
\[
\partial_{x_jx_k}^2\tilde f(x+ty)=\sum_{|\alpha|=0}^{\infty}\frac{\partial_{q_j}\partial_{q_k}\partial_q^\alpha f(x_B+ty_B)}{\alpha!}\,(x_S+ty_S)^\alpha.
\]
If $I=\tilde0=(0,0,0,\cdots)$, then
\[
|\mathrm{proj}_{\tilde0}\,\rho_j(x,y)|\le\sum_{k=1}^{m}|y_{k,B}|\,|\pi_Bg_{j,k}(x,y)|\to0\quad\text{for }y_B\to0.
\]
For fixed $I\ne\tilde0$, the family of index sets $\{K\mid K\subset I\}$ has finitely many elements, $\{x_B+ty_B\mid t\in[0,1],\ |y_B|\le1\}$ is compact, and if $\alpha$ is taken such that $2|\alpha|>|K|$, then $\mathrm{proj}_K(x_S+ty_S)^\alpha=0$. Therefore, there exists a constant $C_K=C_K(x_S,y_S)$ such that
\[
\Big|\mathrm{proj}_K\Big(\sum_{|\alpha|=0}^{\infty}\frac{\partial_{q_j}\partial_{q_k}\partial_q^\alpha f(x_B+ty_B)}{\alpha!}\,(x_S+ty_S)^\alpha\Big)\Big|
\le\sum_\alpha\Big|\frac{\partial_{q_j}\partial_{q_k}\partial_q^\alpha f(x_B+ty_B)}{\alpha!}\Big|\,\big|\mathrm{proj}_K(x_S+ty_S)^\alpha\big|\le C_K.
\]
In fact, since $\mathrm{proj}_K(x_S+ty_S)^\alpha=0$ for $2|\alpha|>|K|$,
\[
\big|\mathrm{proj}_K(g_{j,k}(x,y))\big|\le\int_0^1dt\,(1-t)\,\big|\mathrm{proj}_K(\partial_{x_jx_k}^2\tilde f(x+ty))\big|
\le\max_{2|\alpha|\le|I|}\Big|\frac{\partial_{q_j}\partial_{q_k}\partial_q^\alpha f(x_B+ty_B)}{\alpha!}\Big|\int_0^1dt\sum_\alpha\big|\mathrm{proj}_K(x_S+ty_S)^\alpha\big|,
\]
and
\[
C_K=C_K(x_S,y_S)=\int_0^1dt\sum_\alpha\big|\mathrm{proj}_K(x_S+ty_S)^\alpha\big|\to0\quad\text{when }y_S\to0\ \text{in }C;
\]
therefore
\[
\big|\mathrm{proj}_I(\rho_j(x,y))\big|\le\sum_{k=1}^{m}\sum_{I=J+K}\big|\mathrm{proj}_J(y_k)\big|\,\big|\mathrm{proj}_K(g_{j,k}(x,y))\big|\le\sum_{k=1}^{m}\sum_{I=J+K}\big|\mathrm{proj}_J(y_k)\big|\,C_K.
\]

This finite sum tends to $0$ as $y\to0$, which implies $\tilde f(x)\in\mathcal{F}_{SD}^1(\mathbb{R}^{m|0})$. //

Remark 4.4.2. By the way, concerning the Grassmann continuation $\tilde f$ of $f$, B.S. de Witt [35] claimed on p.7 as follows:

“The presence of a soul in the independent variable evidently has little practical effect on the variety of functions with which one may work in applications of the

theory. In this respect $R_{\mathrm{ev}}$ is a harmless generalization of its own subspace $\mathbb{R}$, the real line."

Though he didn't give more explanation of this intuitive claim in [35], we interpret his saying as

Proposition 4.4.1. Let $F\in C_G^\infty(\mathbb{R}^{m|0}:C)$. Putting $f(q)=F(q)$ for $q\in\mathbb{R}^m$, we have $\tilde f=F$.

We rephrase this as

Claim 4.4.2. Let a $C_G^\infty$ differentiable function $H=H(x)=\sum_{J\in\mathcal{I}}H_J(x)\sigma^J$ be given as a map from $\mathbb{R}^{m|0}$ to $C$ which is flat on $\mathbb{R}^m$, i.e. $\partial_q^\alpha H_J(q)=0$ for any $\alpha$, $J$ and $q\in\mathbb{R}^m$. Then $H$ equals $0$ on $\mathbb{R}^{m|0}$, i.e. $\mathrm{proj}_I(H(x))=0$ for any $I\in\mathcal{I}$.

Proof: We apply Taylor's expansion formula (4.2.2) once more. For any $N$ and $J$, remarking $\partial_q^\alpha H_J(q)=0$, we have
\[
H_J(x_B+x_S)=\tau_N(H_J;x_B,x_S)=\sum_{|\alpha|=N+1}x_S^\alpha\int_0^1dt\,\frac{(1-t)^N}{N!}\,\partial_x^\alpha H_J(x_B+tx_S).
\]
We need to show that $\mathrm{proj}_I(\tau_N(H_J;x_B,x_S))=0$ for any $I$. Since all terms consisting of $x_S^\alpha$ have at least $2|\alpha|$ as the degree of Grassmann generators, if $2|\alpha|>|I|\ge0$ then $\mathrm{proj}_I(x_S^\alpha)=0$. Taking $N$ sufficiently large that $\mathrm{proj}_I(x_S^\alpha)=0$ for all $\alpha$ with $|\alpha|=N+1$, we get $\mathrm{proj}_I(\tau_N(H_J;x_B,x_S))=0$ for any $J$. Therefore,
\[
\mathrm{proj}_I\Big(\sum_{J\in\mathcal{I}}\sigma^J\tau_N(H_J;x_B,x_S)\Big)=0.\ //
\]
The proof of Proposition 4.4.1 is given by applying the above Claim to $H(x)=F(x)-\tilde f(x)$. □

The following is claimed as (1.1.17) in [35] without proof and cited as Theorem 1 in [100].

Claim 4.4.3. Let $f$ be an analytic function from an open set $V\subset\mathbb{C}$ to $\mathbb{C}$. Then we have a unique Grassmann continuation $\tilde f:\mathbb{C}^{1|0}\to\mathbb{C}$ which is superanalytic:
\[
(4.4.5)\qquad \tilde f(z)=\sum_{n=0}^{\infty}\frac{1}{n!}f^{(n)}(z_B)\,z_S^n\quad\text{for }z=z_B+z_S\ \text{with }z_B\in V.
\]
That is, $\tilde f(z+w)=\tilde f(z)+wF(z)+w\rho(z,w)$ where, when $w\to0$ in $\mathbb{C}^{1|0}$, $\rho(z,w)\to0$ in $\mathbb{C}$.

Remark 4.4.3. The above Claim itself is proved, since the F-differentiability of $\tilde f\in\mathcal{F}_{SD}^1(\mathbb{C}^{1|0}:\mathbb{C})$ is shown by applying Corollary 4.4.2. Moreover, from Theorem 4.4.2 below, $\tilde f\in\mathcal{F}_{SD}^\infty(\mathbb{C}^{1|0}:\mathbb{C})$, i.e. $\tilde f$ is superanalytic.

But I want to point out that the argument in the proof of Proposition 4.3.1 of S. Matsumoto and K. Kakazu [100] seems not transparent. They claim the convergence of the right-hand side of (4.4.5) and, using this, proceed as follows:
$$\tilde f(z+w)=\sum_{n=0}^\infty\frac{1}{n!}f^{(n)}(z_B+w_B)(z_S+w_S)^n$$

62  4. ELEMENTARY DIFFERENTIAL CALCULUS ON SUPERSPACE

$$\begin{aligned}
&=\sum_{n=0}^\infty\frac1{n!}\Big(\sum_{\ell=0}^\infty\frac1{\ell!}f^{(\ell+n)}(z_B)w_B^\ell\Big)\Big(\sum_{k=0}^n\frac{n!}{k!(n-k)!}z_S^{n-k}w_S^k\Big)&&\text{(analyticity of $f$ on $\mathbb C$)}\\
&=\sum_{n=0}^\infty\Big(\sum_{\ell=0}^\infty\frac1{\ell!}f^{(\ell+j+k)}(z_B)w_B^\ell\Big)\Big(\sum_{k+j=n}\frac1{k!j!}z_S^jw_S^k\Big)&&\text{(renumbering)}\\
&=\sum_{n=0}^\infty\frac1{n!}\Big(\sum_{j=0}^\infty\frac1{j!}f^{(n+j)}(z_B)z_S^j\Big)\Big(\sum_{\ell+k=n}\frac{n!}{\ell!k!}w_B^\ell w_S^k\Big)&&\text{(rearranging)}\\
&=\sum_{n=0}^\infty\frac1{n!}\Big(\sum_{j=0}^\infty\frac1{j!}f^{(n+j)}(z_B)z_S^j\Big)(w_B+w_S)^n=\sum_{n=0}^\infty\frac1{n!}\tilde f^{(n)}(z)w^n.
\end{aligned}$$
From this expression, they conclude that $\tilde f$ is super analytic. Surely, from this expression, putting
$$F(z)=\tilde f^{(1)}(z),\qquad \rho(z,w)=\sum_{n=2}^\infty\frac1{n!}\tilde f^{(n)}(z)w^{n-1},$$
they have $\tilde f(z+w)-\tilde f(z)=F(z)w+w\rho(z,w)$. But to claim that $\tilde f$ is super analytic, we need to show that $F(z)$ is continuous w.r.t. $z$ and that $\rho(z,w)$ is horizontal w.r.t. $w$. This horizontality is not so clear from their last argument. To clarify this, I propose to use the analogous proof in Claim 4.4.2.

Remark 4.4.4. If $f$ is real analytic on $\mathbb R^m$, there exists a function $\delta(q)>0$ such that for $|q'|\le\delta(q)$, $f(q+q')$ has a Taylor expansion at $q$. From the above proof, $\tilde f(x+y)$ is Pringsheim regular w.r.t. $y$ with $|y_B|\le\delta(x_B)$. Readers not familiar with Pringsheim regularity may look it up on the internet.
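The finite-sum character of the Grassmann continuation can be made concrete in a toy model. The sketch below is a hedged illustration, not the text's construction: it truncates the countably many generators to four and represents algebra elements as coefficient dictionaries. It continues $f=\exp$ to a body plus an even nilpotent soul; since the soul is nilpotent, the Taylor series (4.4.5) terminates after finitely many terms.

```python
from math import exp, factorial

def gmul(a, b):
    """Product in a Grassmann algebra; elements are dicts
    mapping sorted tuples of generator indices to coefficients."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue  # a repeated generator squares to zero
            sign = (-1) ** sum(1 for i in I for j in J if i > j)
            K = tuple(sorted(I + J))
            out[K] = out.get(K, 0.0) + sign * x * y
    return {k: v for k, v in out.items() if v != 0}

def gpow(a, n):
    p = {(): 1.0}
    for _ in range(n):
        p = gmul(p, a)
    return p

def continuation(derivs, xB, xS, order):
    """Grassmann continuation f~(xB + xS) = sum_n f^(n)(xB) xS^n / n!.
    The sum terminates because the soul xS is nilpotent."""
    result = {}
    for n in range(order + 1):
        term = gmul({(): derivs[n](xB) / factorial(n)}, gpow(xS, n))
        for k, v in term.items():
            result[k] = result.get(k, 0.0) + v
    return result

# f = exp (all derivatives are exp); soul xS = s1 s2 + s3 s4, so
# xS^2 = 2 s1 s2 s3 s4 and xS^3 = 0.
xS = {(1, 2): 1.0, (3, 4): 1.0}
f_tilde = continuation([exp, exp, exp], 0.0, xS, order=2)
# f_tilde = 1 + s1 s2 + s3 s4 + s1 s2 s3 s4
```

The dictionary keys and the four-generator truncation are assumptions of this sketch; in the text the algebra has countably many generators and the projections $\mathrm{proj}_I$ play the role of coefficient extraction.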

4.4.3. Super smooth functions on a superdomain. For future use, we prepare some algebraic lemmas.

Lemma 4.4.2. Suppose that the elements $\{A_i\}_{i=1}^\infty\subset\mathcal R_{od}$ satisfy
$$(4.4.6)\qquad \sigma_jA_i+\sigma_iA_j=0\quad\text{for any }i,j\in\mathbb N.$$
Then there exists a unique element $F\in\mathcal R$ such that $A_i=\sigma_iF$ for $i=1,\cdots,\infty$.

Proof. We follow the argument in Lemma 4.4 of [144]. Since $A_i$ is represented by $A_i=\sum_{J\in\mathcal I}a^i_J\sigma^J$ with $a^i_J\in\mathbb C$ and $\sigma_iA_i=0$, we have $\sum_{\{J\,|\,j_i=0\}}a^i_J\sigma^J=0$. Therefore, each $A_i$ can be written uniquely as $A_i=\big(\sum_{\{J\,|\,j_i=0\}}b^i_J\sigma^J\big)\sigma_i$ for some $b^i_J\in\mathbb C$. From the condition (4.4.6), we have $b^i_J=b^j_J$ for $J$ with $j_i=j_j=0$. Letting $b_J=b^i_J$ for $J\in\{J\,|\,j_i=0\}$, we put
$$F=\sum_{J\in\mathcal I}b_J\sigma^J=\sum_{i=1}^\infty\Big(\sum_{\{J\,|\,j_i=0\}}b_J\sigma^J\Big),$$
which is well defined, and furthermore $A_i=\sigma_iF$ holds for each $i$. Since we may change the order of summation freely in $\mathcal R$, we have
$$F=\sum_{\{J\,|\,j_i=0\}}b_J\sigma^J+\sum_{\{J\,|\,j_i\neq0\}}b_J\sigma^J=\sum_{\{J\,|\,j_j=0\}}b_J\sigma^J+\sum_{\{J\,|\,j_j\neq0\}}b_J\sigma^J.\qquad\square$$
Repeating the above argument, we have

4.4. SUPER DIFFERENTIABLE FUNCTIONS ON $\mathbb R^{m|n}$  63

Corollary 4.4.3 (Lemma 4.4 of [144]). Let $\{A_J\in\mathcal R\,|\,|J|=\mathrm{od}\}$ satisfy
$$\sigma^KA_J+\sigma^JA_K=0\quad\text{for }J,K\in\mathcal I_{od}.$$
Then there exists a unique element $F\in\mathcal R$ such that $A_J=\sigma^JF$ for $J\in\mathcal I_{od}$.

Definition 4.4.3. We denote by $L_{\mathcal R_{ev}}(\mathcal R_{od}:\mathcal R)$ the set of maps $f:\mathcal R_{od}\to\mathcal R$ which are continuous and $\mathcal R_{ev}$-linear (i.e. $f(\lambda X)=\lambda f(X)$ for $\lambda\in\mathcal R_{ev}$, $X\in\mathcal R_{od}$).

Corollary 4.4.4 (The self-duality of $\mathcal R$). For $f\in L_{\mathcal R_{ev}}(\mathcal R_{od}:\mathcal R)$, there exists an element $u_f\in\mathcal R$ satisfying
$$f(X)=X\cdot u_f\quad\text{for }X\in\mathcal R_{od}.$$

Proof. Since $f:\mathcal R_{od}\to\mathcal R$ is $\mathcal R_{ev}$-linear, we have $f(XYZ)=XYf(Z)=-XZf(Y)$ for any $X,Y,Z\in\mathcal R_{od}$. Putting $X=\sigma_k$, $Y=\sigma_j$, $Z=\sigma_i$ and $f_i=f(\sigma_i)\in\mathcal R$ for $i=1,\cdots,\infty$, we have $\sigma_k(\sigma_jf_i+\sigma_if_j)=0$ for any $k$. Therefore $\sigma_jf_i+\sigma_if_j=0$, and by the Lemma above, there exists $u_f\in\mathcal R$ such that $f_i=\sigma_iu_f$ for $i=1,\cdots,\infty$.

For $I=(i_1,\cdots)\in\{0,1\}^{\mathbb N}$ with $|I|=$ odd, $i_k=1$ for some $k$. Rewriting $\sigma^I=(-1)^{i_1+\cdots+i_{k-1}}\sigma^{\check I_k}\sigma_k$ with $\check I_k=(i_1,\cdots,i_{k-1},0,i_{k+1},\cdots)$, by the $\mathcal R_{ev}$-linearity of $f$ we have $f(\sigma^I)=(-1)^{i_1+\cdots+i_{k-1}}\sigma^{\check I_k}f(\sigma_k)$. This map is well defined because of $\sigma_jf_i+\sigma_if_j=0$; that is, it doesn't depend on the decomposition of $I$.

$\because$) Put $\tilde I=(i_1,\cdots,i_{j-1},0,i_{j+1},\cdots,i_{k-1},0,i_{k+1},\cdots)$ for $I=(i_1,\cdots,i_{j-1},1,i_{j+1},\cdots,i_{k-1},1,i_{k+1},\cdots)$. Then, for $\ell=i_1+\cdots+i_{j-1}+i_{j+1}+\cdots+i_{k-1}\in\mathbb N$, remarking $|\tilde I|=$ odd, we have
$$\sigma^I=\begin{cases}(-1)^\ell\,\sigma_k\sigma_j\sigma^{\tilde I},\\ -(-1)^\ell\,\sigma_j\sigma_k\sigma^{\tilde I},\end{cases}\Longrightarrow\quad\begin{cases}f\big((-1)^\ell\sigma_k\sigma_j\sigma^{\tilde I}\big)=-(-1)^\ell\sigma_k\sigma^{\tilde I}f(\sigma_j),\\ f\big(-(-1)^\ell\sigma_j\sigma_k\sigma^{\tilde I}\big)=(-1)^\ell\sigma_j\sigma^{\tilde I}f(\sigma_k).\end{cases}$$
By $\sigma_jf_k+\sigma_kf_j=0$, we have
$$f\big((-1)^\ell\sigma_k\sigma_j\sigma^{\tilde I}\big)-f\big(-(-1)^\ell\sigma_j\sigma_k\sigma^{\tilde I}\big)=(-1)^\ell\sigma^{\tilde I}\,[\sigma_jf_k+\sigma_kf_j]=0.$$
We extend $f$ as $\tilde f(X)=\sum_{I\in\mathcal I_{od}}X_If(\sigma^I)$ for $X=\sum_{I\in\mathcal I_{od}}X_I\sigma^I\in\mathcal R_{od}$. Then, since $X_I\in\mathbb C$, $\tilde f(X)=\sum_{I\in\mathcal I_{od}}X_I\sigma^Iu_f=X\cdot u_f$. In fact, if $I$ with $|I|=$ odd has $i_k\neq0$, then, by $f_k=f(\sigma_k)=\sigma_ku_f\in\mathcal R$ and $\mathcal R_{ev}$-linearity,
$$\tilde f(\sigma^I)=f(\sigma^I)=(-1)^{i_1+\cdots+i_{k-1}}\sigma^{\check I_k}f(\sigma_k)=(-1)^{i_1+\cdots+i_{k-1}}\sigma^{\check I_k}\sigma_ku_f=\sigma^Iu_f.$$
Clearly $\tilde f(X)=f(X)$. $\square$

Remark 4.4.5. K. Masuda gives the following example, which exhibits that $B_L$ is not necessarily self-dual.

A counterexample²: Let $L=2$. Define a map $f$ by $f(X_1\sigma_1+X_2\sigma_2)=X_1\sigma_2$ for any $X_1,X_2\in\mathbb R$. Then, remarking that $(b_0+b_1\sigma_1\sigma_2)(X_1\sigma_1+X_2\sigma_2)=b_0(X_1\sigma_1+X_2\sigma_2)$, we readily have $f\in L_{B_{2,ev}}(B_{2,od}:B_2)$. If we assume that there exists $u_f\in B_2$ such that $f(X)=X\cdot u_f$, then $\sigma_1f(\sigma_1)=\sigma_1\cdot\sigma_1\cdot u_f=0$, but $\sigma_1f(\sigma_1)=\sigma_1\cdot\sigma_2\neq0$, a contradiction! Hence there exists no $u_f\in B_2$ such that $f(X)=X\cdot u_f$.

²Though I did not recognize it at first reading, analogous examples are considered in [16] or [85].
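Masuda's counterexample can be checked mechanically. The following sketch (assumptions of mine: the finite algebra $B_2$ is represented as coefficient dictionaries over the generators indexed 1 and 2) confirms that $\sigma_1 f(\sigma_1)=\sigma_1\sigma_2\neq 0$, while $\sigma_1\cdot\sigma_1\cdot u=0$ for every basis element $u$ of $B_2$, so no $u_f$ with $f(X)=X\cdot u_f$ can exist.

```python
def gmul(a, b):
    """Product in B_2; elements are dicts
    {sorted tuple of generator indices: coefficient}."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue  # a repeated generator squares to zero
            sign = (-1) ** sum(1 for i in I for j in J if i > j)
            K = tuple(sorted(I + J))
            out[K] = out.get(K, 0.0) + sign * x * y
    return {k: v for k, v in out.items() if v != 0}

s1 = {(1,): 1.0}

def f(X1, X2):
    """Masuda's map f(X1 s1 + X2 s2) = X1 s2 (X1, X2 real)."""
    return {(2,): X1}

# f(s1) = s2, so s1 * f(s1) = s1 s2 is nonzero ...
lhs = gmul(s1, f(1.0, 0.0))          # {(1, 2): 1.0}
# ... while s1 * (s1 * u) vanishes for every u in B_2:
basis = [{(): 1.0}, {(1,): 1.0}, {(2,): 1.0}, {(1, 2): 1.0}]
checks = [gmul(s1, gmul(s1, u)) for u in basis]   # all {}
```

By linearity, the vanishing on the four basis elements settles it for every $u\in B_2$.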

Repeating the argument in proving Corollary 4.3.1, we get
$$(4.4.7)\qquad f\in\mathcal C_{SS}(U:\mathbb C)\ \Longrightarrow\ \frac{d}{dt}f(X+tY)\Big|_{t=0}=\sum_{j=1}^my_j\frac{\partial}{\partial x_j}f(X)+\sum_{s=1}^n\omega_s\frac{\partial}{\partial\theta_s}f(X),$$
where $X=(x,\theta)$, $Y=(y,\omega)\in\mathbb R^{m|n}$ are such that $X+tY\in U$ for any $t\in[0,1]$. That is,

Corollary 4.4.5. $f\in\mathcal C_{SS}(U:\mathbb C)\Longrightarrow f\in G_{SD}^1(U:\mathbb C)$.

To relate the definition of $\mathcal C_{SS}$ with $G_{SD}^\infty$ or $F_{SD}^\infty$, we need the following notion.

Definition 4.4.4 (p.246 of [144]). Let $U$ be an open set in $\mathbb R^{m|n}$ and $f:U\to\mathbb R$ (or $\mathbb C$). $f$ is said to be admissible on $U$ if there exist some $L\ge0$ and an $\mathbb R$ (or $\mathbb C$)-valued function $\phi$ defined on $U_L=p_L(U)$ such that $f(X)=\phi\circ p_L(X)=\phi(p_L(X))$. For $r$ with $0\le r\le\infty$, $f$ is said to be admissible $C^r$ (or simply $f\in C_Y^r(U:\mathbb C)$) if $\phi\in C^r(U_L:\mathbb R)$ or $C^r(U_L:\mathbb C)$.

Let $f(X)=\sum_{I\in\mathcal I}\sigma^I\cdot f_I(X)$ with $f_I$ being $\mathbb R$ (or $\mathbb C$)-valued on $U$. If for each $I\in\mathcal I$, $f_I$ is admissible $C^r$ on $U$, then $f\in C_Y^r(U:\mathbb C)$ is called admissible on $U$; more precisely, there exist some $L_I\ge0$ and an $\mathbb R$ (or $\mathbb C$)-valued function $\phi_I$ defined on $U_L=p_L(U)$ such that $f_I(X)=\phi_I\circ p_L(X)=\phi_I(p_L(X))$. Moreover, we define its partial derivatives by

$$\frac{\partial f}{\partial X_{A,K}}=\sum_J\sigma^J\frac{\partial f_J}{\partial X_{A,K}},\qquad\begin{cases}|K|=\mathrm{ev}&\text{if }1\le A\le m,\\|K|=\mathrm{od}&\text{if }m+1\le A\le m+n.\end{cases}$$

Definition 4.4.5 (p.246 of [144]). An $\mathbb R$ (or $\mathbb C$)-valued function $f$ on $U$ is said to be projectable if for each $L\ge0$ there exists an $\mathcal R_L$ (or $\mathbb C$)-valued function $f_L$ defined on $U_L\subset\mathbb R_L^{m|n}$ such that $p_L\circ f=f_L\circ p_L$ on $U$.

Claim 4.4.4. A projectable function on $U$ is also admissible on $U$.

Proof. We use the map $\mathrm{proj}_I:\mathcal R\ni X=\sum_{I\in\mathcal I}X_I\sigma^I\to X_I\in\mathbb R$ (or $\mathbb C$) introduced in §2. Then, for each $I\in\mathcal I$, taking $L$ such that $I\in\mathcal I_L$, we have the commutative diagram
$$\begin{array}{ccccc}
U&\xrightarrow{\ f\ }&\mathcal R&\xrightarrow{\ \mathrm{proj}_I\ }&\mathbb C\\
p_L\downarrow&&p_L\downarrow&&\|\ \mathrm{Id}\\
U_L&\xrightarrow{\ f_L\ }&\mathcal R_L&\xrightarrow{\ \mathrm{proj}_I\circ f_L\ }&\mathbb C
\end{array}\qquad\square$$

Theorem 4.4.1 (Theorem 1 of [144]). Let $U$ be a convex open set in $\mathbb R^{m|n}$. If $f:U\to\mathcal R$ is in $G_{SD}^1$, then $f$ is projectable and $C_Y^1$ on $U$.

Proof. Since $\frac{d}{dt}f(X+tH)=\sum_{A=1}^{m+n}H_AF_A(X+tH)$, we have
$$f(X+H)-f(X)=\int_0^1\frac{d}{dt}f(X+tH)\,dt=\sum_{A=1}^{m+n}H_A\int_0^1F_A(X+tH)\,dt.$$
This means that if $p_L(H_A)=0$, then $p_L(f(X+H)-f(X))=0$. Therefore, if we define $f_L:U_L\to\mathcal R_L$ by $f_L(p_L(Z))=p_L(f(Z))$, then $f$ is projectable and so admissible. For $E_{A,K}=\sigma^Ke_A\in\mathbb R^{m|n}$ with $e_A=(0,\cdots,0,\overset{A}{1},0,\cdots,0)$, we have
$$\frac{\partial}{\partial X_{A,K}}f(X)=\frac{d}{dt}f(X+tE_{A,K})\Big|_{t=0}=\sigma^KF_A(X),\qquad |A|=|K|.$$
$f_L$ is $C^1$ on $U_L$; thus the function $f$ is admissible $C^1$ on $U$. $\square$

4.4.4. Cauchy-Riemann relation. To understand the meaning of supersmoothness, we consider the dependence on the “coordinates” more precisely.

Proposition 4.4.2 (Theorem 2 of [144]). Let $f(X)=\sum_{I\in\mathcal I}f_I(X)\sigma^I\in G_{SD}^\infty(U:\mathbb C)$, where $U$ is a superdomain in $\mathbb R^{m|n}$. Let $X=(X_A)$ be represented by $X_A=\sum_IX_{A,I}\sigma^I$, where $A=1,\cdots,m+n$, $X_{A,I}\in\mathbb C$ for $|I|\neq0$ and $X_{A,\tilde0}\in\mathbb R$. Then $f(X)$, considered as a function of the countably many variables $\{X_{A,I}\}$ with values in $\mathbb C$, satisfies the following (Cauchy-Riemann type) equations:
$$(4.4.8)\qquad\begin{cases}\dfrac{\partial}{\partial X_{A,I}}f(X)=\sigma^I\dfrac{\partial}{\partial X_{A,\tilde0}}f(X)&\text{for }1\le A\le m,\ |I|=\mathrm{ev},\\[2mm]\sigma^K\dfrac{\partial}{\partial X_{A,J}}f(X)+\sigma^J\dfrac{\partial}{\partial X_{A,K}}f(X)=0&\text{for }m+1\le A\le m+n,\ |J|=\mathrm{od}=|K|.\end{cases}$$
Here we define
$$(4.4.9)\qquad\frac{\partial}{\partial X_{A,I}}f(X)=\frac{d}{dt}f(X+tE_{A,I})\Big|_{t=0}\quad\text{with}\quad E_{A,I}=\sigma^Ie_A=(0,\cdots,0,\overset{A}{\sigma^I},0,\cdots,0)\in\mathbb R^{m|n}.$$
Conversely, let a function $f(X)=\sum_{I\in\mathcal I}f_I(X)\sigma^I$ be given such that $f_I(X+tY)\in C^\infty([0,1]:\mathbb C)$ for each fixed $X,Y\in U$ and $f(X)$ satisfies the above (4.4.8) with (4.4.9). Then $f\in G_{SD}^\infty(U:\mathbb C)$.

Proof. Replacing $Y$ with $E_{A,J}$, $1\le A\le m$ and $|J|=$ even, in (4.4.7), we readily get the first equation of (4.4.8); here we have used (4.3.5). Considering $E_{A,J}$ or $E_{A,K}$ for $m+1\le A\le m+n$ and $|J|=$ odd $=|K|$ in (4.4.7), and multiplying by $\sigma^K$ or $\sigma^J$ from the left, respectively, we readily have the second equality in (4.4.8).

To prove the converse statement, we have to construct functions $F_A$ $(1\le A\le m+n)$ which satisfy
$$(4.4.10)\qquad\frac{d}{dt}f(X+tH)\Big|_{t=0}=\sum_{A=1}^{m+n}H_AF_A(X)$$
for $X\in U$ and $H=(H_A)\in\mathbb R^{m|n}$.

For $1\le A\le m$, we put $F_A(X)=\frac{\partial}{\partial X_{A,\tilde0}}f(X)$ for $X\in U$. On the other hand, from the second equation of (4.4.8) and Lemma 4.4.2, we have an element $F_A(X)$ $(m+1\le A\le m+n)$ such that $\sigma^JF_A(X)=\frac{\partial}{\partial X_{A,J}}f(X)$.

Using these $\{F_A(X)\}$, we claim that (4.4.10) holds, following Yagi's argument. Since $f$ is admissible, for any $L\ge0$, $p_L\circ f$ is so also; therefore there exist some $N\ge0$ and an $\mathcal R_N$-valued $C^\infty$ function $f_N$ such that $p_L\circ f(X)=f_N\circ p_N(X)$ for $X\in U$. By the natural imbedding from $\mathcal R_L$ to $\mathcal R_N$, we may assume $N\ge L$. Then we can show that
$$\frac{\partial}{\partial X_{A,K}}f_N(p_N(X))=\begin{cases}p_L\Big(\dfrac{\partial}{\partial X_{A,K}}f(X)\Big)&\text{if }K\in\mathcal I_N,\\ 0&\text{otherwise}.\end{cases}$$
Therefore, for any $L\ge0$,
$$\begin{aligned}
p_L\Big(\frac{d}{dt}f(X+tH)\Big|_{t=0}\Big)&=\frac{d}{dt}p_L(f(X+tH))\Big|_{t=0}&&\big(\because\ p_L(\tfrac{d}{dt}g(t))=\tfrac{d}{dt}(p_L(g(t)))\big)\\
&=\frac{d}{dt}f_N(p_N(X+tH))\Big|_{t=0}&&\big(\because\ p_L(f(X))=f_N(p_N(X))\big)\\
&=\sum_A\sum_K(p_N(H))_{A,K}\cdot\frac{\partial}{\partial X_{A,K}}f_N(p_N(X))&&(\because\ \text{finite dimensional case})\\
&=\sum_A\sum_K(p_N(H))_{A,K}\cdot p_L\Big(\frac{\partial}{\partial X_{A,K}}f(X)\Big)\\
&=\sum_A\sum_K(p_N(H))_{A,K}\cdot p_L(\sigma^KF_A(X))&&(\because\ \text{by (4.4.8)})\\
&=\sum_A\sum_K(p_N(H))_{A,K}\cdot p_L(\sigma^K)\cdot p_L(F_A(X))\\
&=\sum_Ap_L\Big(\sum_K(p_N(H))_{A,K}\,p_L(\sigma^K)\Big)p_L(F_A(X))\\
&=\sum_Ap_L(H_A)\,p_L(F_A(X))=p_L\Big(\sum_AH_AF_A(X)\Big).
\end{aligned}$$
Thus we have (4.4.10). The continuity of $F_A(X)$ is clear. $\square$

Remark 4.4.6. For a function with finitely many independent variables, it is well known how to define its partial derivatives. But when that number is infinite, it is not so clear whether, for example, changing the order of differentiation affects the result. Therefore, we reduce the calculation to the case of a finite number $L$ of generators and then let $L$ tend to infinity.

Theorem 4.4.2 (Theorem 3 of [144]). Let $f$ be a $\mathbb C$-valued $C^\infty$ function on an open set $U\subset\mathbb R^{m|n}$. If $f$ is $G_{SD}^1(U:\mathbb C)$, then $f$ is $G_{SD}^\infty$ on $U$.

Proof. Since $f\in G_{SD}^1$, it satisfies the Cauchy-Riemann equation. As $f$ is $C^\infty$ on $U$, $g(X)=\frac{\partial}{\partial X_{A,\tilde0}}f(X)$ also satisfies the C-R equation for $1\le A\le m$. In fact, for $1\le B\le m$, $|J|=$ even,
$$\frac{\partial}{\partial X_{B,J}}g(X)=\frac{\partial}{\partial X_{B,J}}\frac{\partial}{\partial X_{A,\tilde0}}f(X)=\frac{\partial}{\partial X_{A,\tilde0}}\frac{\partial}{\partial X_{B,J}}f(X)=\frac{\partial}{\partial X_{A,\tilde0}}\sigma^J\frac{\partial}{\partial X_{B,\tilde0}}f(X)=\sigma^J\frac{\partial}{\partial X_{B,\tilde0}}\frac{\partial}{\partial X_{A,\tilde0}}f(X)=\sigma^J\frac{\partial}{\partial X_{B,\tilde0}}g(X).$$
And for $m+1\le B\le m+n$, $|J|=|K|=$ odd,
$$\sigma^K\frac{\partial}{\partial X_{B,J}}g(X)+\sigma^J\frac{\partial}{\partial X_{B,K}}g(X)=\sigma^K\frac{\partial}{\partial X_{B,J}}\frac{\partial}{\partial X_{A,\tilde0}}f(X)+\sigma^J\frac{\partial}{\partial X_{B,K}}\frac{\partial}{\partial X_{A,\tilde0}}f(X)=\frac{\partial}{\partial X_{A,\tilde0}}\Big(\sigma^K\frac{\partial}{\partial X_{B,J}}f(X)+\sigma^J\frac{\partial}{\partial X_{B,K}}f(X)\Big)=0.$$
Hence $\frac{\partial}{\partial X_A}f$ (for $1\le A\le m$) is $G_{SD}^1$ on $U$. Analogously, for $m+1\le A\le m+n$, $\frac{\partial}{\partial X_{A,J}}f=\sigma^J\cdot\frac{\partial}{\partial X_A}f$ is also $G_{SD}^1$ on $U$.

Lemma 4.4.3 (Lemma 5.1 of [144]). Let $f\in G_{SD}^\infty(\mathbb R^{0|n})$. Then
$$f(\theta)=f(\theta_1,\cdots,\theta_n)=\sum_{|a|\le n}\theta^af_a\quad\text{with }f_a\in\mathbb C.$$

Proof. For $n=1$ and $|J|=$ odd, we have
$$\frac{d}{dt}f(\theta+t\sigma^J)\Big|_{t=0}=\frac{\partial}{\partial\theta_J}f(\theta)=\sigma^J\cdot\frac{d}{d\theta}f(\theta)\quad\text{with }\theta=\sum_{I\in\mathcal I_{od}}\theta_I\sigma^I,\ \theta_I\in\mathbb C.$$
Hence
$$\frac{\partial}{\partial\theta_K}\frac{\partial}{\partial\theta_J}f(\theta)=\frac{\partial}{\partial s}\frac{\partial}{\partial t}f(\theta+t\sigma^J+s\sigma^K)\Big|_{t=s=0}=\sigma^K\cdot\sigma^J\cdot\frac{d}{d\theta}\frac{d}{d\theta}f(\theta).$$
Since $|J|$, $|K|$ are odd, we have $\sigma^K\sigma^J=-\sigma^J\sigma^K$, and therefore
$$\frac{\partial}{\partial\theta_K}\frac{\partial}{\partial\theta_J}f(\theta)=\sigma^K\sigma^J\frac{d}{d\theta}\frac{d}{d\theta}f(\theta)=-\sigma^J\sigma^K\frac{d}{d\theta}\frac{d}{d\theta}f(\theta)=-\frac{\partial}{\partial\theta_J}\frac{\partial}{\partial\theta_K}f(\theta).$$
Since $f$ is $C^\infty$ as a function of the infinitely many variables $\{\theta_J\}$ and its higher derivatives are symmetric, we therefore have
$$\frac{\partial}{\partial\theta_K}\frac{\partial}{\partial\theta_J}f(\theta)=0.$$
Representing $f(\theta)=\sum_K\sigma^Kf_K(\theta)$, each component $f_K(\theta)$ is a polynomial of degree 1 in the variables $\{\theta_J\,|\,J\in\mathcal I_{od}\}$. Then $\sigma^J\frac{d}{d\theta}f(\theta)=\frac{\partial}{\partial\theta_J}f(\theta)$ is constant for any $|J|=$ odd. Thus $\frac{d}{d\theta}f(\theta)$ is a constant, denoted by $a\in\mathbb C$. Then $\frac{d}{d\theta}(f(\theta)-\theta a)=0$, and therefore there exists $b\in\mathbb C$ such that $f(\theta)=\theta a+b$.

We proceed by induction w.r.t. $n$. Let $f$ be a $G_{SD}^\infty$ function on an open set $U\subset\mathbb R^{0|n}$. Fixing $\theta_1,\cdots,\theta_{n-1}$, $f(\theta_1,\cdots,\theta_{n-1},\theta_n)$ is a $G_{SD}^\infty$ function of the single variable $\theta_n$. Thus we have
$$f(\theta_1,\cdots,\theta_{n-1},\theta_n)=\theta_ng(\theta_1,\cdots,\theta_{n-1})+h(\theta_1,\cdots,\theta_{n-1})\quad\text{with}\quad g(\theta_1,\cdots,\theta_{n-1})=\frac{\partial}{\partial\theta_n}f(\theta).$$
Then $g$ is $G_{SD}^\infty$ w.r.t. $(\theta_1,\cdots,\theta_{n-1})$, and $h$ is also $G_{SD}^\infty$ w.r.t. $(\theta_1,\cdots,\theta_{n-1})$. $\square$

Remark 4.4.7. Though this Lemma with a sketch of the proof is announced in [35] and cited in [100] without proof, I feel some ambiguity in that proof. This point is ameliorated by [144] as above.

Lemma 4.4.4 (Lemma 5.2 of [144]). Let $f\in G_{SD}^\infty(\mathbb R^{m|0})$ on a convex open set $\mathcal U\subset\mathbb R^{m|0}$ which vanishes identically on $\mathcal U_B=\pi_B(\mathcal U)$. Then $f$ vanishes identically on $\mathcal U$.

Proof. It is essential to prove the case $m=1$. Take an arbitrary point $t\in\mathcal U_B$ and consider the behavior of $f$ on $\pi_B^{-1}(t)$. Let $X\in\pi_B^{-1}(t)$ and $X_L=p_L(X)$. Then $\{X_K\,|\,K\in\mathcal I_L,\ |K|=\mathrm{ev}\ge2\}$ is a coordinate system for $(\pi_B^{-1}(t))_L$ as the ordinary space $\mathbb C$. Let $f_L$ be the $L$-th projection of $f$. Then
$$\frac{\partial}{\partial X_K}f_L(X_L)=\sigma^K\cdot\frac{\partial}{\partial X_{\tilde0}}f_L(X_L)\quad\text{for }K\in\mathcal I_L\text{ and }|K|=\mathrm{ev}.$$
If $K_1,\cdots,K_h\in\mathcal I_L$, $|K_j|=$ even $>0$ and $2h>L$, then $\sigma^{K_1}\cdots\sigma^{K_h}=0$ and $\frac{\partial}{\partial x_{K_1}}\cdots\frac{\partial}{\partial x_{K_h}}f_L(X_L)=0$. This implies that $f_L$ is a polynomial on $(\pi_B^{-1}(t))_L$. Moreover, for any $h\ge0$,
$$\frac{\partial}{\partial x_{K_1}}\cdots\frac{\partial}{\partial x_{K_h}}f_L(t)=\sigma^{K_1}\cdots\sigma^{K_h}\Big(\frac{\partial}{\partial X_{\tilde0}}\Big)^hf_L(t).$$
Since $f$ vanishes on $\mathcal U_B$, we have
$$\Big(\frac{\partial}{\partial X_{\tilde0}}\Big)^hf_L(t)=0\quad\text{on }\mathcal U_B,$$
and hence
$$\frac{\partial}{\partial x_{K_1}}\cdots\frac{\partial}{\partial x_{K_h}}f_L(t)=0\quad\text{for any }h\ge0\text{ and }K_1,\cdots,K_h\in\mathcal I_L\text{ with }|K_j|=\text{even}>0.$$
Thus the polynomial $f_L|_{\pi_B^{-1}(t)}$ must vanish identically, and hence $f_L\equiv0$ on $\mathcal U_L$. This holds for any $L\ge0$. Thus $f\equiv0$ on $\mathcal U$. $\square$

4.4.5. Proof of Main Theorem 4.4.3.

Theorem 4.4.3. Let $U$ be a superdomain in $\mathbb R^{m|n}$ and let a function $f:U\to\mathbb C$ be given. The following conditions are equivalent:
(a) $f$ is super Fréchet (F-, in short) differentiable on $U$, i.e. $f\in F_{SD}^\infty(U:\mathbb C)$,
(b) $f$ is super Gâteaux (G-, in short) differentiable on $U$, i.e. $f\in G_{SD}^\infty(U:\mathbb C)$,
(c) $f$ is $\infty$-times G-differentiable and $f\in G_{SD}^1(U:\mathbb C)$,
(d) $f$ is $\infty$-times G-differentiable and its G-differential $df$ is $\mathcal R_{ev}$-linear,
(e) $f$ is $\infty$-times G-differentiable and its G-differential $df$ satisfies Cauchy-Riemann type equations,
(f) $f$ is supersmooth, i.e. it has the following representation, called the superfield expansion:
$$f(x,\theta)=\sum_{|a|\le n}\theta^a\tilde f_a(x)\quad\text{with }f_a(q)\in C^\infty(\pi_B(U))\text{ and }\tilde f_a(x)=\sum_{|\alpha|=0}^\infty\frac{1}{\alpha!}\frac{\partial^\alpha f_a}{\partial q^\alpha}(q)\Big|_{q=x_B}x_S^\alpha.$$

Remark 4.4.8. In the above, (f) stands for the “algebraic” nature and (a) claims the “analytic” nature of “superfields”. Yagi [144] essentially proves the equivalence (b) $\Longleftrightarrow$ (e) $\Longleftrightarrow$ (f).

It is clear from the outset that (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$ (d). From Proposition 4.4.2, (d) $\Rightarrow$ (a). Lastly, the equivalence of (d) and (e) is given by

Theorem 4.4.4 (Theorem 4 of [144]). Let $f$ be a $G_{SD}^\infty$ function on a convex open set $\mathcal U\subset\mathbb R^{m|n}$. Then there exist $\mathbb R$-valued $C^\infty$ functions $u_a$ on $\mathcal U_B$ such that
$$f(x,\theta)=\sum_{|a|\le n}\theta^a\tilde u_a(x).$$
Moreover, this expression is unique.

Proof. ($\Rightarrow$) For fixed $x$, by Lemma 4.4.3, $f(x,\theta)$ has the representation $f(x,\theta)=\sum_{|a|\le n}\theta^a\varphi_a(x)$ with $\varphi_a(x)\in\mathbb C$. Since $f\in G_{SD}^\infty$, it is clear that for each $a$, $\varphi_a(x)$ is $C^\infty$ on $\mathbb R^{m|0}$ and, moreover, $\varphi_a(x_B)$ is in $C^\infty(\mathbb R^m)$. Denoting its Grassmann continuation by $\tilde\varphi_a(x)$, we should have $\tilde\varphi_a(x)=f_a(x)$ by Lemma 4.4.4.
($\Leftarrow$) Since supersmoothness leads to the C-R relation, we get the superdifferentiability. $\square$

4.5. INVERSE AND IMPLICIT FUNCTION THEOREMS  69

4.5. Inverse and implicit function theorems

4.5.1. Composition of supersmooth functions. The following is a slight modification of the arguments in Inoue and Maeda [80].

Definition 4.5.1. Let $\mathcal U\subset\mathbb R^{m|n}$ and $\mathcal V\subset\mathbb R^{p|q}$ be superdomains and let $\varphi$ be a continuous mapping from $\mathcal U$ to $\mathcal V$, denoted by $\varphi(X)=(\varphi_1(X),\cdots,\varphi_p(X),\varphi_{p+1}(X),\cdots,\varphi_{p+q}(X))\in\mathbb R^{p|q}$. $\varphi$ is called a supersmooth mapping from $\mathcal U$ to $\mathcal V$ if each $\varphi_A(X)\in\mathcal C_{SS}(\mathcal U:\mathcal R)$ for $A=1,\cdots,p+q$ and $\varphi(\mathcal U)\subset\mathcal V$.

Proposition 4.5.1 (Composition of supersmooth mappings). Let $\mathcal U\subset\mathbb R^{m|n}$ and $\mathcal V\subset\mathbb R^{p|q}$ be superdomains and let $F:\mathcal U\to\mathbb R^{p|q}$ and $G:\mathcal V\to\mathbb R^{r|s}$ be supersmooth mappings such that $F(\mathcal U)\subset\mathcal V$. Then the composition $G\circ F:\mathcal U\to\mathbb R^{r|s}$ is a supersmooth mapping and

$$(4.5.1)\qquad d_XG(F(X))=[d_XF(X)][d_YG(Y)]\big|_{Y=F(X)}.$$
Or more precisely,
$$(4.5.2)\qquad d_XG(F(X))=\big(\partial_{X_A}(G\circ F)_B(X)\big)=\Big(\sum_{C=1}^{p+q}(\partial_{X_A}F_C(X))(\partial_{Y_C}G_B(Y))\big|_{Y=F(X)}\Big).$$

Proof. Put $G(Y)=(G_B(Y))_{B=1}^{r+s}$, $F(X)=(F_C)_{C=1}^{p+q}$, $X=(X_A)_{A=1}^{m+n}$ and $Y=(Y_C)_{C=1}^{p+q}$. By the smooth G-differentiability of the composition of mappings between Fréchet spaces, we have the smoothness of $G(F(X+tH))$ w.r.t. $t$. Moreover, we have (4.5.2).

By the characterization of supersmoothness, we need to show the $\mathcal R_{ev}$-linearity of $d(G\circ F)$, i.e. $d(G\circ F)(X)(\lambda H)=\lambda\,d(G\circ F)(X)(H)$ for $\lambda\in\mathcal R_{ev}$, which is obvious from
$$d(G\circ F)(X)(\lambda H)=\frac{d}{dt}G(F(X+t\lambda H))\Big|_{t=0}=\sum_{C=1}^{p+q}\Big(\sum_A\lambda H_A\partial_{X_A}F_C(X)\Big)(\partial_{Y_C}G_B(Y))\Big|_{Y=F(X)}=\lambda\frac{d}{dt}G(F(X+tH))\Big|_{t=0}=\lambda\,d(G\circ F)(X)(H).\qquad\square$$

Definition 4.5.2. Let $\mathcal U\subset\mathbb R^{m|n}$ and $\mathcal V\subset\mathbb R^{p|q}$ be superdomains and let $\varphi:\mathcal U\to\mathcal V$ be a supersmooth mapping represented by $\varphi(X)=(\varphi_1(X),\cdots,\varphi_{p+q}(X))$ with $\varphi_A(X)\in\mathcal C_{SS}(\mathcal U:\mathcal R)$.
(1) $\varphi$ is called a supersmooth diffeomorphism if
(i) $\varphi$ is a homeomorphism between $\mathcal U$ and $\mathcal V$, and
(ii) $\varphi$ and $\varphi^{-1}$ are supersmooth mappings.
(2) For any $f\in\mathcal C_{SS}(\mathcal V:\mathcal R)$, $(\varphi^*f)(X)=(f\circ\varphi)(X)=f(\varphi(X))$, called the pull-back of $f$, is well defined and belongs to $\mathcal C_{SS}(\mathcal U:\mathcal R)$.

Remark 4.5.1. It is easy to see that if $\varphi$ is a supersmooth diffeomorphism, then $\varphi_B=\pi_B\circ\varphi$ is an (ordinary) $C^\infty$ diffeomorphism from $\mathcal U_B$ to $\mathcal V_B$.

Remark 4.5.2. If we introduce the topologies in $\mathcal C_{SS}(\mathcal V:\mathbb C)$ and $\mathcal C_{SS}(\mathcal U:\mathbb C)$ properly, $\varphi^*$ gives a continuous linear mapping from $\mathcal C_{SS}(\mathcal V:\mathbb C)$ to $\mathcal C_{SS}(\mathcal U:\mathbb C)$. Moreover, if $\varphi:\mathcal U\to\mathcal V$ is a supersmooth diffeomorphism, then $\varphi^*$ defines an isomorphism from $\mathcal C_{SS}(\mathcal V:\mathcal R)$ to $\mathcal C_{SS}(\mathcal U:\mathcal R)$.

4.5.2. Inverse and implicit function theorems. We recall

Proposition 4.5.2 (Inverse function theorem on $\mathbb R^m$). Let $U$ be an open set in $\mathbb R^m$ and let $f:U\ni x\to y=f(x)\in\mathbb R^m$ be a $C^k$ ($k\ge1$) mapping such that $\det f'(x_0)\neq0$ for some $x_0\in U$. Then there exist a neighbourhood $W$ of $y_0=f(x_0)$ and a neighbourhood $U_0\subset U$ of $x_0$ such that $f$ maps $U_0$ injectively onto $W$. Therefore, $f|_{U_0}$ has an inverse $\phi=(f|_{U_0})^{-1}:W\to U_0$ in $C^k(W)$. Moreover, for any $y=f(x)\in W$ with $x\in U_0$, we have $\phi'(y)=f'(x)^{-1}$.
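For readers who want a numerical illustration of this classical statement, here is a minimal one-dimensional sketch (the choice $f(x)=x+e^x$ is my own, not from the text): Newton iteration produces the local inverse $\phi$, and a finite difference confirms $\phi'(y)=f'(x)^{-1}$.

```python
from math import exp

def f(x):
    return x + exp(x)          # f'(x) = 1 + e^x > 0 everywhere

def fp(x):
    return 1.0 + exp(x)

def phi(y, x0=0.0, tol=1e-13):
    """Local inverse of f near x0, computed by Newton iteration."""
    x = x0
    for _ in range(60):
        step = (f(x) - y) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x = 0.3
y = f(x)
h = 1e-6
dphi = (phi(y + h) - phi(y - h)) / (2 * h)   # numerical phi'(y)
# dphi agrees with 1/f'(x), as the proposition asserts
```

Since $f'$ never vanishes here, the inverse is in fact global (compare Proposition 4.5.4 below), but the computation only uses the local statement.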

Applying this, we have

Theorem 4.5.1 (Inverse function theorem on $\mathbb R^{m|n}$). Let $F=(f,g):\mathbb R^{m|n}\ni X\to Y=F(X)\in\mathbb R^{m|n}$ be a supersmooth mapping on some superdomain containing $\tilde X$. More precisely, we put
$$f(X)=(f_i(X))_{i=1}^m\in\mathcal R_{ev}^m,\qquad g(X)=(g_k(X))_{k=1}^n\in\mathcal R_{od}^n,$$
such that
$$(4.5.3)\qquad f_i(x,\theta)=\sum_{|a|=\mathrm{ev}\le n}\theta^af_{ia}(x),\quad g_k(x,\theta)=\sum_{|b|=\mathrm{od}\le n}\theta^bg_{kb}(x)\in\mathcal C_{SS}(\mathbb R^{m|n}),$$
with
$$f_{ia}(x_B),\ g_{kb}(x_B)\in\mathbb C\ \text{if }|a|,|b|\neq0,\qquad f_{ia}(x_B)\in\mathbb R\ \text{if }|a|=0.$$
We assume the supermatrix $[d_XF(X)]$ is invertible at $\tilde X$, i.e. $\pi_B(\mathrm{sdet}\,[d_XF(X)]\big|_{X=\tilde X})\neq0$. Then there exist a superdomain $\mathcal U$, a neighbourhood of $\tilde X$, and another superdomain $\mathcal V$, a neighbourhood of $\tilde Y=F(\tilde X)$, such that $F:\mathcal U\to\mathcal V$ has a unique supersmooth inverse $\Phi=F^{-1}=(\phi,\psi):\mathcal V\to\mathcal U$ satisfying
$$\phi(Y)=(\phi_{i'}(Y))_{i'=1}^m\in\mathcal R_{ev}^m,\qquad \psi(Y)=(\psi_{k'}(Y))_{k'=1}^n\in\mathcal R_{od}^n$$
and

$$(4.5.4)\qquad \Phi(F(X))=X\ \text{for }X\in\mathcal U\quad\text{and}\quad F(\Phi(Y))=Y\ \text{for }Y\in\mathcal V.$$
Moreover, we have
$$(4.5.5)\qquad d_Y\Phi(Y)=(d_XF(X))^{-1}\big|_{X=\Phi(Y)}\quad\text{in }\mathcal V.$$

Remark 4.5.3. A question is posed on the meaning of “the supermatrix has an inverse”. If $G(X):\mathcal U\subset\mathbb R^{m|n}\to\mathbb R^{m|n}$ is a super mapping represented by
$$G(X)=(g_1(x,\theta),\cdots,g_m(x,\theta),g_{m+1}(x,\theta),\cdots,g_{m+n}(x,\theta))\in\mathbb R^{m|n},$$
then from the definition of $dg_j$ we get
$$d_XG(X)=\begin{pmatrix}A&C\\D&B\end{pmatrix}.$$
Here,
$$A=\Big(\frac{\partial g_j}{\partial x_i}\Big)_{1\le i,j\le m},\quad C=\Big(\frac{\partial g_{m+k}}{\partial x_i}\Big)_{\substack{1\le i\le m\\1\le k\le n}},\quad D=\Big(\frac{\partial g_j}{\partial\theta_s}\Big)_{\substack{1\le s\le n\\1\le j\le m}},\quad B=\Big(\frac{\partial g_{m+k}}{\partial\theta_s}\Big)_{1\le s,k\le n},$$
where $i$ (resp. $s$) is the row index. Therefore $d_XG(X)$ gives an even supermatrix.

Proof of Theorem 4.5.1. (I) To make the point clear, we consider the case $m=1$, $n=2$, that is, $\mathcal U,\mathcal V\subset\mathbb R^{1|2}$. Let
$$F(X)=F(x,\theta)=(f(x,\theta),g_1(x,\theta),g_2(x,\theta)):\mathcal U\to\mathcal V$$
with
$$(4.5.6)\qquad\begin{cases}f(x,\theta)=f_{(0)}(x)+f_{(12)}(x)\theta_1\theta_2,\\ g_1(x,\theta)=g_{1(1)}(x)\theta_1+g_{1(2)}(x)\theta_2,\\ g_2(x,\theta)=g_{2(1)}(x)\theta_1+g_{2(2)}(x)\theta_2,\end{cases}\quad\text{and}\quad f_{(0)}(x_B)\in\mathbb R,\ f_{(12)}(x_B),\,g_{k(l)}(x_B)\in\mathbb C.$$
In this case, we have
$$d_XF(X)=\begin{pmatrix}f'_{(0)}(x)+f'_{(12)}(x)\theta_1\theta_2&g'_{1(1)}(x)\theta_1+g'_{1(2)}(x)\theta_2&g'_{2(1)}(x)\theta_1+g'_{2(2)}(x)\theta_2\\ f_{(12)}(x)\theta_2&g_{1(1)}(x)&g_{2(1)}(x)\\ -f_{(12)}(x)\theta_1&g_{1(2)}(x)&g_{2(2)}(x)\end{pmatrix}=\begin{pmatrix}A&C\\D&B\end{pmatrix}$$
with $\mathrm{sdet}\,(d_XF(X))=\det[A-CB^{-1}D](\det B)^{-1}$ and $\det B=\beta(x)=g_{1(1)}(x)g_{2(2)}(x)-g_{1(2)}(x)g_{2(1)}(x)$. Therefore,

$$(4.5.7)\qquad \pi_B(\mathrm{sdet}\,(d_XF(X)))=f'_{(0)}(x_B)\,\beta(x_B)^{-1}.$$
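The formula $\mathrm{sdet}=\det[A-CB^{-1}D](\det B)^{-1}$ can be exercised on the smallest nontrivial case. The sketch below is a hedged toy of my own: a $(1|1)$ supermatrix over a four-generator Grassmann algebra, with the inverse of an even entry computed by a finite Neumann series (its soul is nilpotent). It checks the characteristic multiplicativity $\mathrm{sdet}(MN)=\mathrm{sdet}(M)\,\mathrm{sdet}(N)$ of the superdeterminant.

```python
def gmul(a, b):
    """Grassmann product; elements are {sorted index tuple: coefficient}."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue
            sign = (-1) ** sum(1 for i in I for j in J if i > j)
            K = tuple(sorted(I + J))
            out[K] = out.get(K, 0.0) + sign * x * y
    return {k: v for k, v in out.items() if abs(v) > 1e-14}

def gadd(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if abs(v) > 1e-14}

def gscale(c, a):
    return {k: c * v for k, v in a.items()}

def ginv(a):
    """Inverse of an even element with nonzero body, by the finite
    Neumann series (the soul is nilpotent)."""
    a0 = a[()]
    n = {k: v / a0 for k, v in a.items() if k}
    inv, term = {(): 1.0}, {(): 1.0}
    for _ in range(4):
        term = gmul(term, gscale(-1.0, n))
        inv = gadd(inv, term)
    return gscale(1.0 / a0, inv)

def sdet(M):
    """Berezinian of a (1|1) supermatrix M = (a, c, d, b), i.e.
    [[a, c], [d, b]]: sdet = (a - c b^{-1} d) b^{-1}."""
    a, c, d, b = M
    schur = gadd(a, gscale(-1.0, gmul(gmul(c, ginv(b)), d)))
    return gmul(schur, ginv(b))

def matmul(M, N):
    a1, c1, d1, b1 = M
    a2, c2, d2, b2 = N
    return (gadd(gmul(a1, a2), gmul(c1, d2)),
            gadd(gmul(a1, c2), gmul(c1, b2)),
            gadd(gmul(d1, a2), gmul(b1, d2)),
            gadd(gmul(d1, c2), gmul(b1, b2)))

# Two even (1|1) supermatrices over a four-generator algebra:
M = ({(): 1.0, (1, 2): 1.0}, {(1,): 1.0}, {(2,): 1.0}, {(): 2.0})
N = ({(): 1.0}, {(3,): 1.0}, {(4,): 1.0}, {(): 1.0, (3, 4): 1.0})

lhs = sdet(matmul(M, N))
rhs = gmul(sdet(M), sdet(N))
# both equal 0.5 + 0.25 s1s2 - s3s4 - 0.5 s1s2s3s4
```

The diagonal entries are even and the off-diagonal entries odd, as required for an even supermatrix; the multiplicativity would fail for a matrix without this grading.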

We need to find Φ = (φ, ψ1, ψ2) such that

$$(4.5.8)\qquad\begin{cases}\phi(f(x_B,\theta),g_1(x_B,\theta),g_2(x_B,\theta))=x_B,\\ \psi_1(f(x_B,\theta),g_1(x_B,\theta),g_2(x_B,\theta))=\theta_1,\\ \psi_2(f(x_B,\theta),g_1(x_B,\theta),g_2(x_B,\theta))=\theta_2,\end{cases}\quad\text{with}\quad\begin{cases}\phi(y,\omega)=\phi_{(0)}(y)+\phi_{(12)}(y)\omega_1\omega_2,\\ \psi_1(y,\omega)=\psi_{1(1)}(y)\omega_1+\psi_{1(2)}(y)\omega_2,\\ \psi_2(y,\omega)=\psi_{2(1)}(y)\omega_1+\psi_{2(2)}(y)\omega_2.\end{cases}$$
To state this more precisely, we have
$$y_B=f_{(0)}(x_B),\quad y_S=f_{(12)}(x_B)\theta_1\theta_2,\qquad \phi_{(0)}(y_B+y_S)=\phi_{(0)}(y_B)+\phi'_{(0)}(y_B)y_S,\qquad \phi_{(12)}(y_B+y_S)=\phi_{(12)}(y_B)+\phi'_{(12)}(y_B)y_S,$$
and, from the first equation of (4.5.8),
$$(4.5.9)\qquad\begin{cases}\phi_{(0)}(y_B)=\phi_{(0)}(f_{(0)}(x_B))=x_B,\\ \phi'_{(0)}(y_B)y_S+\phi_{(12)}(y_B)\beta(x_B)\theta_1\theta_2=[\phi'_{(0)}(y_B)f_{(12)}(x_B)+\phi_{(12)}(y_B)\beta(x_B)]\theta_1\theta_2=0.\end{cases}$$
Since $f'_{(0)}(\tilde x_B)\neq0$, there exist a neighborhood $U_0\subset\mathbb R$ of $\tilde x_B$ and $V_0\subset\mathbb R$ of $f_{(0)}(\tilde x_B)$ where we find a function $\phi_{(0)}(y_B)$ satisfying the first equation of (4.5.9). Moreover, since $\beta(x_B)\neq0$, taking smaller neighborhoods if necessary, we define
$$\phi_{(12)}(y_B)=-\phi'_{(0)}(y_B)f_{(12)}(x_B)\beta(x_B)^{-1}\Big|_{x_B=\phi_{(0)}(y_B)}.$$

On the other hand, putting
$$\omega_1=g_{1(1)}(x_B)\theta_1+g_{1(2)}(x_B)\theta_2,\qquad \omega_2=g_{2(1)}(x_B)\theta_1+g_{2(2)}(x_B)\theta_2,\qquad \omega_1\omega_2=\beta(x_B)\theta_1\theta_2,$$
and remarking $y_S\omega_j=0$ for $j=1,2$, from the last two equations of (4.5.8) we should have
$$\psi_{1(1)}(y_B)(g_{1(1)}(x_B)\theta_1+g_{1(2)}(x_B)\theta_2)+\psi_{1(2)}(y_B)(g_{2(1)}(x_B)\theta_1+g_{2(2)}(x_B)\theta_2)=\theta_1,$$
$$\psi_{2(1)}(y_B)(g_{1(1)}(x_B)\theta_1+g_{1(2)}(x_B)\theta_2)+\psi_{2(2)}(y_B)(g_{2(1)}(x_B)\theta_1+g_{2(2)}(x_B)\theta_2)=\theta_2,$$
that is,
$$\begin{pmatrix}\psi_{1(1)}(y_B)&\psi_{1(2)}(y_B)\\ \psi_{2(1)}(y_B)&\psi_{2(2)}(y_B)\end{pmatrix}\begin{pmatrix}g_{1(1)}(x_B)&g_{1(2)}(x_B)\\ g_{2(1)}(x_B)&g_{2(2)}(x_B)\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}.$$
Therefore we have $\psi_*(y_B)$, which satisfy the desired property.
(II) Proceed analogously for general $m,n$ by putting
$$f_i(x,\theta)=\sum_{|a|=\mathrm{ev}\le n}f_{i,a}(x)\theta^a,\qquad g_k(x,\theta)=\sum_{|b|=\mathrm{od}\le n}g_{k,b}(x)\theta^b,$$
and
$$\phi_{i'}(y,\omega)=\sum_{|a'|=\mathrm{ev}\le n}\phi_{i',a'}(y)\omega^{a'},\qquad \psi_{k'}(y,\omega)=\sum_{|b'|=\mathrm{od}\le n}\psi_{k',b'}(y)\omega^{b'},$$
but with more patience. $\square$

Remark 4.5.4. The above theorem also holds for functions $f_i\in\mathcal C_{SS}(\mathbb R^{m|n}:\mathcal R_{ev})$ and $g_k\in\mathcal C_{SS}(\mathbb R^{m|n}:\mathcal R_{od})$.

Moreover, we have

Proposition 4.5.3 (Implicit function theorem). Let $\Phi(X,Y):\mathcal U\times\mathcal V\to\mathbb C^{p|q}$ be a supersmooth mapping and $(\tilde X,\tilde Y)\in\mathcal U\times\mathcal V$, where $\mathcal U$ and $\mathcal V$ are superdomains of $\mathbb R^{m|n}$ and $\mathbb R^{p|q}$, respectively. Suppose $\Phi(\tilde X,\tilde Y)=0$ and $\partial_Y\Phi=[\partial_{y_j}\Phi,\partial_{\omega_r}\Phi]$ is a continuous and invertible supermatrix at $(\tilde X_B,\tilde Y_B)\in\pi_B(\mathcal U)\times\pi_B(\mathcal V)$. Then there exist a superdomain $\mathcal W\subset\mathcal U$ satisfying $\tilde X_B\in\pi_B(\mathcal W)$ and a unique supersmooth mapping $Y=f(X)$ on $\mathcal W$ such that $\tilde Y=f(\tilde X)$ and $\Phi(X,f(X))=0$ in $\mathcal W$. Moreover, we have
$$(4.5.10)\qquad \partial_Xf(X)=-[\partial_X\Phi(X,Y)][\partial_Y\Phi(X,Y)]^{-1}\Big|_{Y=f(X)}.$$

Proof. (4.5.10) is easily obtained from
$$0=\partial_X\Phi(X,f(X))=\big(\partial_X\Phi(X,Y)+\partial_Xf(X)\,\partial_Y\Phi(X,Y)\big)\big|_{Y=f(X)}.$$
The existence proof is omitted here because the arguments used in proving Theorem 4.5.1 work well in this situation. $\square$

Problem 4.5.1. Use Ekeland's idea [42] to give another proof of the above theorems, if possible.

4.5.3. Global inverse function theorem. We have the following theorem of Hadamard type:

Proposition 4.5.4 (Global inverse function theorem on $\mathbb R^m$). Let $f:\mathbb R^m\ni x\to y=f(x)\in\mathbb R^m$ be a smooth mapping on $\mathbb R^m$. Assume the Jacobian matrix $[d_xf(x)]$ is invertible on $\mathbb R^m$ and $\|\det[d_xf(x)]\|\ge\delta>0$ for any $x$. Then $f$ gives a smooth diffeomorphism from $\mathbb R^m$ onto $\mathbb R^m$.

Proposition 4.5.5 (Global inverse function theorem on $\mathbb R^{m|n}$). Let $F=(f,g):\mathbb R^{m|n}\ni X\to Y=F(X)\in\mathbb R^{m|n}$ be a supersmooth mapping on $\mathbb R^{m|n}$. Assume the supermatrix
$$d_XF(X)=\begin{pmatrix}\dfrac{\partial f_i}{\partial x_j}&\dfrac{\partial g_k}{\partial x_j}\\[2mm]\dfrac{\partial f_i}{\partial\theta_l}&\dfrac{\partial g_k}{\partial\theta_l}\end{pmatrix}$$
is invertible at any $X\in\mathbb R^{m|n}$, and that there exists $\delta>0$ such that for any $x$
$$\Big\|\pi_B\Big(\mathrm{sdet}\,\frac{\partial f_i}{\partial x_j}\Big)\Big\|\ge\delta>0,\qquad \Big\{x\ \Big|\ \pi_B\det\Big(\frac{\partial g_k}{\partial\theta_l}\Big)=0\Big\}=\emptyset.$$
Then $F$ gives a supersmooth diffeomorphism from $\mathbb R^{m|n}$ onto $\mathbb R^{m|n}$.

Proof. It is obvious from the proof of Theorem 4.5.1 above. $\square$

CHAPTER 5

Elementary integral calculus on superspace

As is well known, to study a scalar PDE by applying functional analysis, we essentially use the following tools: Taylor expansion, integration by parts, the formula for the change of variables under the integral sign, and the Fourier transformation. Therefore, besides the elementary differential calculus, it is necessary to develop the elementary integral calculus on the superspace $\mathbb R^{m|n}$. But, as is explained soon below, we have the relations
$$(5.0.11)\qquad\begin{cases}dx_j\wedge dx_k=-dx_k\wedge dx_j&\text{for even variables }\{x_j\}_{j=1}^m,\\ d\theta_j\wedge d\theta_k=d\theta_k\wedge d\theta_j&\text{for odd variables }\{\theta_k\}_{k=1}^n,\end{cases}$$
which differ from the ordinary ones. Therefore, integration involving odd variables doesn't follow our conventional intuition.

5.1. Integration w.r.t. odd variables – Berezin integral

It seems natural to put formally

$$d\theta_j=\sum_{I\in\mathcal I,\,|I|=\mathrm{od}}d\theta_{j,I}\,\sigma^I\qquad\text{for}\qquad \theta_j=\sum_{I\in\mathcal I,\,|I|=\mathrm{od}}\theta_{j,I}\,\sigma^I.$$

Remark 5.1.1. Since the sum $\sum_I$ above indicates the position in the sequence space $\omega$ of Köthe, and its elements are given by $d\theta_{j,I}$ with $|I|$ finite, we may give a meaning to $d\theta_j$.

Then, rather formally, since $d\theta_{j,J}$, $d\theta_{k,K}$ and $\sigma^J$, $\sigma^K$ are anticommutative for $J$ and $K$ with $|J|=\mathrm{od}=|K|$, the minus signs cancel each other, and the second equality in (5.0.11) holds. Analogously, the first one in (5.0.11) holds. This makes us imagine that even if a notion of integration exists, it differs much from the standard one on $\mathbb R^m$.

Here we borrow the explanation of Vladimirov and Volovich. Since the supersmooth functions on $\mathbb R^{0|n}$ are characterized as polynomials with values in $\mathbb C$, we need to define integrability for them under the conditions of

(i) integrability of all polynomials,

(ii) linearity of an integral, and

(iii) invariance of the integral w.r.t. shifts.
Put $\mathcal P_n=\mathcal P_n(\mathbb C)=\{u(\theta)=\sum_{a\in\{0,1\}^n}\theta^au_a\ |\ u_a\in\mathbb C\}$. We say a mapping $J_n:\mathcal P_n\to\mathbb C$ is an integral if it satisfies
(1) $\mathbb C$-linearity (from the right): $J_n(u\alpha+v\beta)=J_n(u)\alpha+J_n(v)\beta$ for $\alpha,\beta\in\mathbb C$, $u,v\in\mathcal P_n$.
(2) translational invariance: $J_n(u(\cdot+\omega))=J_n(u)$ for all $\omega\in\mathbb R^{0|n}$ and $u\in\mathcal P_n$.

76  5. ELEMENTARY INTEGRAL CALCULUS ON SUPERSPACE

Theorem 5.1.1. For the existence of an integral $J_n$ satisfying conditions (1) and (2) above, it is necessary and sufficient that
$$(5.1.1)\qquad J_n(\phi_a)=0\quad\text{for }\phi_a(\theta)=\theta^a,\ |a|\le n-1.$$
Moreover, we have
$$J_n(u)=\frac{\partial}{\partial\theta_n}\cdots\frac{\partial}{\partial\theta_1}u(\theta)\Big|_{\theta=0}\cdot J_n(\phi_{\tilde1})\quad\text{where }\phi_{\tilde1}(\theta)=\theta^{\tilde1}=\theta_1\cdots\theta_n.$$

Proof. If there exists $J_n$ satisfying (1) and (2), then we have
$$J_n(v)=\sum_{|a|\le n}J_n(\phi_a)v_a\quad\text{for }v(\theta)=\sum_{|a|\le n}\theta^av_a=\sum_{|a|\le n}\phi_a(\theta)v_a.$$
As
$$(\theta+\omega)^a=\theta^a+\sum_{|a-b|\ge1,\ b\le a}(-1)^*\theta^b\omega^{a-b},$$
we get
$$J_n(v(\cdot+\omega))=\sum_{|a|\le n}J_n(\phi_a(\cdot+\omega))v_a=\sum_{|a|\le n}J_n(\phi_a)v_a+\sum_{|a|\le n}\sum_{|a-b|\ge1,\ b\le a}(-1)^*J_n(\phi_b)v_a\,\omega^{a-b},$$
and, by virtue of (2),
$$\sum_{|a|\le n}\sum_{|a-b|\ge1,\ b\le a}(-1)^*J_n(\phi_b)v_a\,\omega^{a-b}=0.$$
Since $v_a\in\mathbb C$ and $\omega\in\mathcal R_{od}^n$ are arbitrary, we have (5.1.1). The converse is obvious. $\square$

Definition 5.1.1. We put $J_n(\phi_{\tilde1})=1$, i.e.
$$(5.1.2)\qquad \int_{\mathbb R^{0|n}}d\theta_n\cdots d\theta_1\,\theta_1\cdots\theta_n=1.$$
Therefore we put, for any $v=\sum_{|a|\le n}\theta^av_a\in\mathcal P_n(\mathbb C)$,
$$(5.1.3)\qquad J_n(v)=\int_{\mathbb R^{0|n}}d\theta\,v(\theta)=\int_{\mathbb R^{0|n}}d\theta_n\cdots d\theta_1\,v(\theta_1,\cdots,\theta_n)=(\partial_{\theta_n}\cdots\partial_{\theta_1}v)(0)=v_{\tilde1}=\int_{\mathrm{Berezin}}d^n\theta\,v(\theta).$$
This is called the (Berezin) integral of $v$ on $\mathbb R^{0|n}$.
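Definition 5.1.1 can be played with directly. In this sketch (conventions of my own, not the text's: polynomials in $\theta_1,\dots,\theta_n$ are dictionaries mapping sorted index tuples to coefficients, and shift variables $\rho$ use generator indices above $n$), the Berezin integral simply reads off the top coefficient $v_{\tilde1}$, and the translational invariance (5.1.5) stated below comes out automatically.

```python
def gmul(a, b):
    """Grassmann product; elements are {sorted index tuple: coefficient}."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue
            sign = (-1) ** sum(1 for i in I for j in J if i > j)
            K = tuple(sorted(I + J))
            out[K] = out.get(K, 0.0) + sign * x * y
    return {k: v for k, v in out.items() if v != 0}

def subst(v, repl):
    """Substitute the Grassmann element repl[i] for theta_i in v."""
    out = {}
    for key, c in v.items():
        term = {(): c}
        for i in key:
            term = gmul(term, repl[i])
        for k, val in term.items():
            out[k] = out.get(k, 0.0) + val
    return {k: val for k, val in out.items() if val != 0}

def berezin(v, n):
    """Berezin integral over R^{0|n}: with the normalization (5.1.2),
    it extracts the coefficient of theta_1 ... theta_n."""
    return v.get(tuple(range(1, n + 1)), 0.0)

# v(t1, t2) = 3 + 2 t1 + 5 t1 t2 on R^{0|2}
v = {(): 3.0, (1,): 2.0, (1, 2): 5.0}
J2 = berezin(v, 2)                       # = 5.0, the top coefficient

# Shift theta_i -> theta_i + rho_i with odd rho_i (generators 3 and 4):
shift = {1: {(1,): 1.0, (3,): 1.0}, 2: {(2,): 1.0, (4,): 1.0}}
shifted = subst(v, shift)
top_after_shift = {k: c for k, c in shifted.items() if {1, 2} <= set(k)}
# {(1, 2): 5.0} -- the theta_1 theta_2 coefficient is unchanged
```

The shift cannot raise the degree in $\theta$, so the top coefficient (and hence the integral) is unchanged, which is exactly the mechanism of the proof of (2) below.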

Then, we have

Proposition 5.1.1. Given $v,w\in\mathcal P_n(\mathbb C)$, we have the following:
(1) ($\mathbb C$-linearity) For any homogeneous $\lambda,\mu\in\mathbb C$,
$$(5.1.4)\qquad \int_{\mathbb R^{0|n}}d\theta\,(\lambda v+\mu w)(\theta)=(-1)^{np(\lambda)}\lambda\int_{\mathbb R^{0|n}}d\theta\,v(\theta)+(-1)^{np(\mu)}\mu\int_{\mathbb R^{0|n}}d\theta\,w(\theta).$$
(2) (Translational invariance) For any $\rho\in\mathbb R^{0|n}$, we have
$$(5.1.5)\qquad \int_{\mathbb R^{0|n}}d\theta\,v(\theta+\rho)=\int_{\mathbb R^{0|n}}d\theta\,v(\theta).$$
(3) (Integration by parts) For $v\in\mathcal P_n(\mathbb C)$ such that $p(v)=1$ or $0$, we have
$$(5.1.6)\qquad \int_{\mathbb R^{0|n}}d\theta\,v(\theta)\,\partial_{\theta_s}w(\theta)=-(-1)^{p(v)}\int_{\mathbb R^{0|n}}d\theta\,(\partial_{\theta_s}v(\theta))\,w(\theta).$$

5.1. INTEGRATION W.R.T. ODD VARIABLES – BEREZIN INTEGRAL  77

(4) (Linear change of variables) Let $A=(A_{jk})$ with $A_{jk}\in\mathcal R_{ev}$ be an invertible matrix. Then
$$(5.1.7)\qquad \int_{\mathbb R^{0|n}}d\theta\,v(\theta)=(\det A)^{-1}\int_{\mathbb R^{0|n}}d\omega\,v(A\cdot\omega).$$
(5) (Iteration of integrals)
$$(5.1.8)\qquad \int_{\mathbb R^{0|n}}d\theta\,v(\theta)=\int_{\mathbb R^{0|n-k}}d\theta_n\cdots d\theta_{k+1}\Big(\int_{\mathbb R^{0|k}}d\theta_k\cdots d\theta_1\,v(\theta_1,\cdots,\theta_k,\theta_{k+1},\cdots,\theta_n)\Big).$$
(6) (Odd change of variables) Let $\theta=\theta(\omega)$ be an odd change of variables such that $\theta(0)=0$ and $\det\frac{\partial\theta(\omega)}{\partial\omega}\big|_{\omega=0}\neq0$. Then, for any $v\in\mathcal P_n(\mathbb C)$,
$$(5.1.9)\qquad \int_{\mathbb R^{0|n}}d\theta\,v(\theta)=\int_{\mathbb R^{0|n}}d\omega\,v(\theta(\omega))\,{\det}^{-1}\frac{\partial\theta(\omega)}{\partial\omega}.$$
(7) For $v\in\mathcal P_n(\mathbb C)$ and $\omega\in\mathbb R^{0|n}$,
$$(5.1.10)\qquad \int_{\mathbb R^{0|n}}d\theta\,(\omega_1-\theta_1)\cdots(\omega_n-\theta_n)\,v(\theta)=v(\omega).$$

Proof. We follow the arguments in pp.755–757 of V.S. Vladimirov and I.V. Volovich [139], with slight modifications if necessary. By definition, we have (1).
(2) Remarking that the top term of $v(\theta+\rho)$, containing $\theta_1\cdots\theta_n$, is the same as that of $v(\theta)$, and combining (5.1.1), we get the result.
(3) Using $\partial_{\theta_j}(vw)=\partial_{\theta_j}v\cdot w+(-1)^{p(v)}v\cdot\partial_{\theta_j}w$ together with (2), we get (3).
(4) As $(A\theta)_j=\sum_{k=1}^na_{jk}\theta_k$ with $a_{jk}\in\mathcal R_{ev}$,
$$J_n(v(A\theta))=\partial_{\theta_n}\cdots\partial_{\theta_1}v(A\theta)\Big|_{\theta=0}=\sum_{\sigma\in\wp_n}\partial_{\theta_{\sigma(n)}}\cdots\partial_{\theta_{\sigma(1)}}v(0)\,a_{\sigma(1)1}\cdots a_{\sigma(n)n}=\sum_{\sigma\in\wp_n}\partial_{\theta_n}\cdots\partial_{\theta_1}v(0)\,\mathrm{sgn}(\sigma)\,a_{\sigma(1)1}\cdots a_{\sigma(n)n}=J_n(v)\det A.$$
(5) Obvious.
(6) We prove it by induction w.r.t. $n$. When $n=1$, for $\theta_1=a\omega_1$ where $a\in\mathcal R_{ev}$ with $\pi_Ba\neq0$, it is clear that (5.1.9) holds. In fact,

$$J_1(v)=a^{-1}J_1(v(a\omega_1))=J_1\Big(\Big(\frac{\partial\theta_1}{\partial\omega_1}\Big)^{-1}v(a\omega_1)\Big).$$
Assuming that (5.1.9) holds for $J_{n-1}$, we prove it for $J_n$. Let $\theta_j=\theta_j(\omega_1,\cdots,\omega_n)$ for $j=1,\cdots,n$. Without loss of generality, we may assume that $\frac{\partial\theta_1(\omega)}{\partial\omega_1}\big|_{\omega=0}$ is invertible. By property (5), we have

$$J_n(v)=J_1(J_{n-1}(v)).$$
Putting $\tilde\omega=(\omega_2,\cdots,\omega_n)$, we solve the equation $\theta_1=\theta_1(\omega_1,\tilde\omega)$ w.r.t. $\omega_1$, obtaining
$$(5.1.11)\qquad \omega_1=\bar\omega_1(\theta_1,\tilde\omega)\quad\text{with}\quad \theta_1=\theta_1(\bar\omega_1(\theta_1,\tilde\omega),\tilde\omega),\quad \omega_1=\bar\omega_1(\theta_1(\omega_1,\tilde\omega),\tilde\omega).$$

We put this relation into $\theta_j=\theta_j(\omega)$ to get
$$\theta_j=\theta_j(\bar\omega_1(\theta_1,\tilde\omega),\tilde\omega)=\theta'_j(\theta_1,\tilde\omega)\quad\text{for }j=2,\cdots,n.$$

By (5) and the induction hypothesis, we have
$$J_{n-1}(v)=J_{n-1}\Big({\det}^{-1}\frac{\partial\theta'}{\partial\tilde\omega}\,v(\theta_1,\theta'(\theta_1,\tilde\omega))\Big).$$
Changing the order of integration w.r.t. $\theta_1$ and $\tilde\omega$, we get
$$J_n(v)=J_{n-1}\Big(J_1\Big({\det}^{-1}\frac{\partial\theta'}{\partial\tilde\omega}\,v(\theta_1,\theta'(\theta_1,\tilde\omega))\Big)\Big).$$
Using $\theta_1=\theta_1(\omega_1,\tilde\omega)$, we have
$$J_n(v)=J_{n-1}\Big(J_1\Big({\det}^{-1}\frac{\partial\theta'}{\partial\tilde\omega}\Big(\frac{\partial\theta_1}{\partial\omega_1}\Big)^{-1}v(\theta_1(\omega_1,\tilde\omega),\theta'(\theta_1(\omega_1,\tilde\omega),\tilde\omega))\Big)\Big)=J_n\Big(\Big(\frac{\partial\theta_1}{\partial\omega_1}\det\frac{\partial\theta'}{\partial\tilde\omega}\Big)^{-1}v(\theta(\omega))\Big)\Big|_{\theta_1=\theta_1(\omega)}.$$

On the other hand, if we have
$$\frac{\partial\theta_1}{\partial\omega_1}\det\frac{\partial\theta'}{\partial\tilde\omega}=\det\frac{\partial\theta}{\partial\omega},$$
then we prove (5.1.9). In fact, from (5.1.11) we get
$$0=\frac{\partial\bar\omega_1}{\partial\omega_k}\frac{\partial\theta_1}{\partial\omega_1}+\frac{\partial\theta_1}{\partial\omega_k}\quad\text{for }k=2,\cdots,n,$$
and
$$\frac{\partial\theta'_i}{\partial\omega_k}=\frac{\partial\theta_i}{\partial\omega_k}+\frac{\partial\theta_i}{\partial\omega_1}\frac{\partial\bar\omega_1}{\partial\omega_k}=\frac{\partial\theta_i}{\partial\omega_k}-\frac{\partial\theta_i}{\partial\omega_1}\frac{\partial\theta_1}{\partial\omega_k}\Big(\frac{\partial\theta_1}{\partial\omega_1}\Big)^{-1}.$$
Subtracting from the $k$-th row ($k\ge2$) of $\big(\frac{\partial\theta_i}{\partial\omega_k}\big)$ the first row multiplied by $\frac{\partial\theta_1}{\partial\omega_k}\big(\frac{\partial\theta_1}{\partial\omega_1}\big)^{-1}$, we obtain, using the above relation,
$$\det\frac{\partial\theta}{\partial\omega}=\det\begin{pmatrix}\dfrac{\partial\theta_1}{\partial\omega_1}&\dfrac{\partial\theta_2}{\partial\omega_1}&\cdots&\dfrac{\partial\theta_n}{\partial\omega_1}\\[1mm] 0&\dfrac{\partial\theta'_2}{\partial\omega_2}&\cdots&\dfrac{\partial\theta'_n}{\partial\omega_2}\\ \vdots&\vdots&&\vdots\\ 0&\dfrac{\partial\theta'_2}{\partial\omega_n}&\cdots&\dfrac{\partial\theta'_n}{\partial\omega_n}\end{pmatrix}=\frac{\partial\theta_1}{\partial\omega_1}\cdot\det\frac{\partial\theta'}{\partial\tilde\omega}\Big|_{\theta_1=\theta_1(\omega)}.$$

(7) is clear. 

Remark 5.1.2. The above Berezin integration is defined without a measure, but using the inner multiplication in the exterior algebra1:
\[ \frac{\partial}{\partial z_j}\,\lrcorner\; dz_k=\delta_{jk}\;\sim\;\int d\theta_j\,\theta_k=\delta_{jk}. \]
Remark 5.1.3. (i) We get the integration by parts formula without the fundamental theorem of elementary analysis.
(ii) Moreover, since in conventional integration we get \(\int dy\,f(y)=a\int dx\,f(ax)\) for \(y=ax\), the formula in (5.1.7) is very different from the usual one. An analogous difference appears in (5.1.9).
(iii) (5.1.10) allows us to put

\[ \delta(\theta-\omega)=(\theta_1-\omega_1)\cdots(\theta_n-\omega_n),\quad\text{though}\quad\delta(-\theta)=(-1)^n\delta(\theta). \]
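These algebraic rules can be checked mechanically. The following Python sketch (an illustration, not from the text) represents elements of a finite Grassmann algebra as dictionaries mapping sorted tuples of generator indices to coefficients, and realizes the Berezin integral \(\int d\theta_i\) as the left derivative \(\partial_{\theta_i}\). With this sign convention the one-variable delta identity reads \(\int d\theta\,(\theta-\omega)v(\theta)=v(\omega)\); it differs from (5.1.10) only by the convention for ordering the factors.

```python
def gmul(u, v):
    """Product in a Grassmann algebra.

    Elements are dicts {sorted tuple of generator indices: coefficient};
    e.g. {(): a, (1,): b} represents a + b*theta_1."""
    out = {}
    for a, ca in u.items():
        for b, cb in v.items():
            if set(a) & set(b):            # theta_i * theta_i = 0
                continue
            seq, sign = list(a) + list(b), 1
            for i in range(len(seq)):      # bubble sort; each swap flips the sign
                for j in range(len(seq) - 1):
                    if seq[j] > seq[j + 1]:
                        seq[j], seq[j + 1] = seq[j + 1], seq[j]
                        sign = -sign
            key = tuple(seq)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c}

def berezin(v, i):
    """Berezin integral over theta_i, realized as the left derivative."""
    out = {}
    for key, c in v.items():
        if i in key:
            pos = key.index(i)
            rest = key[:pos] + key[pos + 1:]
            out[rest] = out.get(rest, 0) + ((-1) ** pos) * c
    return out

# generators: index 1 -> theta (integration variable), 2 -> omega, 3 -> rho
v = {(): 3.0, (1,): 5.0}                    # v(theta) = 3 + 5*theta
delta = {(1,): 1, (2,): -1}                 # theta - omega
assert berezin(gmul(delta, v), 1) == {(): 3.0, (2,): 5.0}   # = v(omega)

# translation invariance: v(theta + rho) has the same Berezin integral
v_shift = {(): 3.0, (1,): 5.0, (3,): 5.0}   # 3 + 5*(theta + rho)
assert berezin(v_shift, 1) == berezin(v, 1)
```

The dictionary representation makes the nilpotency and anticommutation rules explicit, at the cost of exponential size in the number of generators.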

1See, Mini-column 2 in Chapter 3 5.1. INTEGRATION W.R.T. ODD VARIABLES – BEREZIN INTEGRAL 79

(iv) For future use, we give a Lie-group-theoretic proof of (5.1.9), due to Berezin [10]. Assume that the transformation \(T\) is included in a one-parameter family \(T_t\) of transformations \(\theta=\theta(\omega)\), that is, \(T_1\omega=\theta(\omega)\) and \(T_0\omega=\omega\) with \(T_{t+s}=T_tT_s\). Set

−1 ∂θ(t) (5.1.12) θk(t) = (Ttω)k and g(t)= dω det v(θ(t)). Rn ∂ω Z od Claim 5.1.1. g(t) is an analytic function w.r.t. t and g(t)= g(0).

In fact, using the multiplicativity of determinant, we have ∂θ(t + s) ∂θ(t) g(t + s)= dω det −1 det −1 v(θ(t + s)). Rn ∂θ(t) · ∂ω Z od    

Putting R(s)= ∂θ(t+s) R , ∆(s) = det −1R(s) R , we get ∂θ(t) ∈ ev ∈ ev   d ∂θ(t) d g′(t)= g(t + s) = dω det −1 ∆(s)v(θ(t + s)) . ds s=0 ∂ω ds s=0 Z    

To continue the calculation, we remark the following: (i) Since \(T_t\) is a one-parameter transformation group, the \(\theta_j(t)=\theta_j(t;\omega)\) satisfy an autonomous system of differential equations

\[ \theta'_k(t)=-\Phi_k(\theta_1(t),\cdots,\theta_n(t)). \]
(ii) As
\[ \Delta(s)={\det}R^{-1}(s)=\exp(-\operatorname{tr}\log R(s)), \]
we have
\[ \frac{d\Delta(s)}{ds}\Big|_{s=0}=-\operatorname{tr}\bigl(R'(s)R^{-1}(s)\bigr)\exp\bigl(-\operatorname{tr}\log R(s)\bigr)\Big|_{s=0} \]

\[ =-\operatorname{tr}R'(0)=-\sum_k\frac{d}{ds}\frac{\partial\theta_k(t+s)}{\partial\theta_k(t)}\Big|_{s=0}=\sum_k\frac{\partial\Phi_k(\theta(t))}{\partial\theta_k(t)}. \]

Therefore, we have, using Φ (θ(t)) R , k ∈ od d ∂(Φk(θ(t))v(θ(t))) ∆(s)v(θ(t + s)) s=0 = , ds ∂θk(t) k   X and ∂θ(t) ∂(Φ (θ(t))v(θ(t))) (5.1.13) g′(t)= dω det −1 k . Rn ∂ω ∂θk(t) Z od   Xk Applying the reasoning of the proof of (2) (translational invariance), we have

\[ g'(0)=\int_{R^n_{\mathrm{od}}}d\omega\,\sum_k\frac{\partial}{\partial\omega_k}\psi_k=0\quad\text{with}\quad\psi_k=\theta'_k(0)\,v(\omega). \]
Since \(g'(t)\) has the same form as \(g(t)\), that is, (5.1.13) is obtained from (5.1.12) by replacing \(v(\theta(t))\) with \(\sum_k\partial\bigl(\Phi_k(\theta(t))v(\theta(t))\bigr)/\partial\theta_k(t)\), we get \(g''(0)=0\). Repeating this procedure, we get \(g^{(n)}(0)=0\) for \(n\ge1\).
It follows from Lie group theory that an arbitrary transformation \(\theta=\theta(\omega)\) can be represented as a product of a finite number of transformations \(T=T_1\cdots T_r\), each of which is included in a one-parameter group.

5.2. Berezin integral w.r.t. even and odd variables

5.2.1. A naive definition and its problem. Because of Remark 4.4.2 in §4 of Chapter 4, we are inclined to "define"
\[ \int_a^b dq\,f(q)=\int_{\tilde a}^{\tilde b}dx\,\tilde f(x)\quad\text{where}\quad\tilde f(x)=\sum_{n=0}^\infty\frac1{n!}\,\partial_q^nf(q)\,x_S^n\ \text{ with }x=q+x_S. \]
Therefore,

Definition 5.2.1. For a set U Rm, we define π−1(U) = X Rm|0 π (X) U .A ⊂ B { ∈ | B ∈ } set U Rm|0 is called an “even superdomain” if U = π (U ) Rm is open, connected and ev ⊂ B ev ⊂ π−1(U)= U . U is denoted also by U . When U Rm|n is represented by U = U Rn with B ev ev,B ⊂ ev × od an even superdomain U Rm|0, U is called a “superdomain” in Rm|n. ev ⊂ Definition 5.2.2 (A naive definition of Berezin integral). @For a super domain U = U R0|n ev × and a supersmooth function u(x,θ)= θau (x) : U R, we “define” its integral as |a|≤n a → P B dxdθ u(x,θ)= dx dθ u(x,θ) = dq u1˜(q), − 0|n ZZU ZUev  ZR  ZπB(Uev) (5.2.1) n ∂ ∂ ˜ where dθ u(x,θ)= u(x,θ) = u1˜(x) and 1 = (1, , 1). R0|n ∂θn · · ·∂θ1 · · · Z θ1=···=θn=0 z }| { In the above, u˜(x) is the Grassmann continuation of u ˜(q). 1 1 Desiring that the standard formula of the change of variables under integral sign(=CVF) holds by replacing standard Jacobian with super Jacobian(= super determinant of Jacobian matrix) on Rm|n, we have

Theorem 5.2.1. Let U = U R0|n Rm|n and V = V R0|n Rm|n be given. Let ev × ⊂ X ev × ⊂ Y (5.2.2) ϕ : V Y = (y,ω) X = (x,θ) = (ϕ¯(y,ω), ϕ¯(y,ω)) U ∋ → 0 1 ∈ be a supersmooth diffeomorphism from V onto U, that is,

∂ϕ0¯(y,ω) ∂ϕ1¯(y,ω) (5.2.3) sdet J(ϕ)(y,ω) = 0 and ϕ(V)= U where J(ϕ)(y,ω)= ∂y ∂y . 6 ∂ϕ0¯(y,ω) ∂ϕ1¯(y,ω) ∂ω ∂ω ! Then, for any function u (U : C) with “compact support”, that is, u(x,θ) = θau (x) ∈ CSS |a|≤n a where u (x ) C∞(U : R) for all a 0, 1 n except a = 1˜, we have CVF a B ∈ 0 ev,B ∈ { } P (5.2.4) B dxdθ u(x,θ) = B dydω sdet J(ϕ)(y,ω)u(ϕ(y,ω)). − − −1 ZZU ZZϕ (U) Remark 5.2.1. Seemingly, this theorem implies that Berezin “measure” D0(x,θ) is trans- formed by ϕ as (5.2.5) (ϕ∗D (x,θ))(y,ω)= D (y,ω) sdet J(ϕ)(y,ω), 0 0 · where ∂ ∂ D0(x,θ)= dx1 dxm = dx1 dxm ∂θn ∂θ1 = dx∂θ, D0(y,ω)= dy∂ω. ∧···∧ ⊗ ∂θn · · ·∂θ1 · · · · · · · But this assertion is shown to be false in general by the following examples. Moreover, we remark also that the condition of “the compact supportness of integrands” above seems not only cumbersome from conventional point of view but also fatal in holomorphic category. 5.2.BEREZININTEGRALW.R.T.EVENANDODDVARIABLES 81

Remark 5.2.2. Though we give below some examples which show the immaturity of the above naive definition, we also give a precise proof of this theorem in §1 of Chapter 9, for future use.
Example 5.2.1. Let \(U=\pi_{\mathrm B}^{-1}(\Omega)\times R^2_{\mathrm{od}}\subset R^{1|2}\) with \(\Omega=(0,1)\), \(\pi_{\mathrm B}:R^{1|0}\to\mathbb R\), and let \(u\) be supersmooth on \(R^{1|2}\) with values in \(\mathbb R\), such that \(u(x,\theta)=u_{\tilde0}(x)+\theta_1\theta_2u_{\tilde1}(x)\). Then, we have

B dxdθ u(x,θ)= dx dθ u(x,θ)= dx u1˜(x)= dq u1˜(q). − U Ω π−1(Ω) Ω ZZ Z Z Z B Z But, if we use the coordinate change (5.2.6) ϕ : (y,ω) (x,θ) with x = y + ω ω φ(y), θ = ω : U U → 1 2 k k → whose Berezinian is ′ 1+ ω1ω2φ (y) 0 0 ′ Ber(ϕ)(y,ω) = sdet J(ϕ)(y,ω)=1+ ω1ω2φ (y) where J(ϕ)(y,ω)= ω2φ(y) 1 0 ,  ω φ(y) 0 1 − 1 and if we assume that the formula (5.2.4) holds, then since   ′ u(ϕ(y,ω)) = u0˜(y + ω1ω2φ(y)) + ω1ω2u1˜(y + ω1ω2φ(y)) = u0˜(y)+ ω1ω2(φ(y)u0˜(y)+ u1˜(y)), ′ ′ ′ and (1 + ω1ω2φ (y))u(ϕ(y,ω)) = u0˜(y)+ ω1ω2(φ(y)u0˜(y)+ φ (y)u0˜(y)+ u1˜(y)), we have ′ ′ B dydω (1 + ω1ω2φ (y))u(ϕ(y,ω)) = dy (φ(y)u0˜(y)) + dx u1˜(x). − ϕ−1(U) π−1(Ω) π−1(Ω) ZZ Z B Z B ′ Therefore, if π−1(Ω) dy (φ(y)u0˜(y)) = 0, then U D0(x,θ) u(x,θ) = ϕ−1(U) D0(y,ω) u(ϕ(y,ω)). B 6 6 This implies thatR if we apply (5.2.1) as definition,RR the change of variablesRR formula doesn’t hold when, for example, the integrand hasn’t compact support.
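The failure of the naive change-of-variables formula in Example 5.2.1 can be seen numerically: the difference between the two sides is exactly the boundary term \(\int_\Omega(\phi u_{\tilde0})'\,dy=\phi u_{\tilde0}\big|_0^1\). Below is a small Python check with illustrative choices \(u_{\tilde0}=\cos\), \(u_{\tilde1}=\sin\) and \(\phi(y)=y(2-y)\); these particular functions are my own, chosen so that \(\phi\) does not vanish on the boundary of \(\Omega\).

```python
import math

def trapz(f, a, b, n=4000):
    # composite trapezoid rule
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

# u(x, theta) = u0(x) + theta1*theta2*u1(x) on Omega = (0, 1)
u0 = lambda y: math.cos(y)
u1 = lambda y: math.sin(y)
phi = lambda y: y * (2.0 - y)        # does NOT vanish at the boundary

lhs = trapz(u1, 0.0, 1.0)            # naive Berezin integral of u

# after x = y + omega1*omega2*phi(y), the top (omega1*omega2) coefficient of
# sdet J(phi) * u(phi(y, omega)) is (phi*u0)'(y) + u1(y), as in Example 5.2.1
def dphiu0(y, h=1e-6):
    # central difference for (phi*u0)'
    return (phi(y + h) * u0(y + h) - phi(y - h) * u0(y - h)) / (2 * h)

rhs = trapz(lambda y: dphiu0(y) + u1(y), 0.0, 1.0)
boundary = phi(1.0) * u0(1.0) - phi(0.0) * u0(0.0)

# the two sides differ exactly by the boundary term phi*u0 | from 0 to 1
assert abs((rhs - lhs) - boundary) < 1e-5
```

If \(\phi u_{\tilde0}\) vanishes on \(\partial\Omega\) (for instance when the integrand has compact support), the boundary term is zero and the naive formula does hold, which is exactly the hypothesis of Theorem 5.2.1.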

Example 5.2.2 (Inconsistency related to Q-integration, where the matrix Q is mentioned in Chapter 3). Let a set of matrices \(Q\) be given by

\[ \mathcal Q=\Bigl\{\,Q=\begin{pmatrix}x_1&\theta_1\\ \theta_2&ix_2\end{pmatrix}\ \Big|\ x_1,x_2\in R_{\mathrm{ev}},\ \theta_1,\theta_2\in R_{\mathrm{od}}\,\Bigr\}\cong R^{2|2}, \]
and let us regard \(Q\) as a variable with "volume element" \(dQ=\frac{dx_1dx_2}{2\pi}\,d\theta_2d\theta_1\). Then, we have

2 2 2 − str Q dx1dx2 −(x +x +2θ1θ2) (5.2.7) dQ e = dθ2dθ1 e 1 2 = 1. 2|2 2π ZQ ZR We apply change of variables to a super matrix Q as θ θ iθ θ y = x + 1 2 , y = x 1 2 , 1 1 x ix 2 2 − x ix (5.2.8)  1 − 2 1 − 2 θ θ  ω = 1 , ω = 2 ,  1 x ix 2 −x ix 1 − 2 1 − 2 or   x1 = y1 + ω1ω2(y1 iy2), x2 = y2 iω1ω2(y1 iy2), (5.2.9) − − − ( θ1 = ω1(y1 iy2), θ2 = ω2(y1 iy2), − − − to make it diagonal. Then, y 0 y2 0 (5.2.10) GQG−1 = 1 , GQ2G−1 = 1 0 iy 0 y2  2  − 2 where 1 + 2−1ω ω ω 1 + 2−1ω ω ω G = 1 2 1 , G−1 = 1 2 1 . ω 1 2−1ω ω ω 1 2−−1ω ω  2 − 1 2  − 2 − 1 2 82 5.ELEMENTARYINTEGRALCALCULUSONSUPERSPACE

Clearly, \(x_1-ix_2=y_1-iy_2\) and \(\operatorname{str}Q^2=x_1^2+x_2^2+2\theta_1\theta_2=y_1^2+y_2^2\), and the Jacobian factor (called the Berezinian) gives
\[ dQ=\frac{dx_1dx_2}{2\pi}\,d\theta_2d\theta_1=\frac{dy_1dy_2}{2\pi}\,d\omega_2d\omega_1\,(y_1-iy_2)^{-2}. \]
This implies
\[ \int\frac{dy_1dy_2}{2\pi}\,d\omega_2d\omega_1\,(y_1-iy_2)^{-2}\,e^{-(y_1^2+y_2^2)}=0, \]
which contradicts (5.2.7).

5.2.2. Integration of Gaussian type and Pfaffian. In spite of the above immaturity of the naive definition, we may give examples which exhibit the relation among Gaussian-type integrals, determinants and Pfaffians.

Definition 5.2.3. For an \(n\times n\) antisymmetric matrix \(\tilde B=(\tilde B_{jk})\) with even elements, we define the Pfaffian \(\operatorname{Pf}(\tilde B)\) of \(\tilde B\) as
(5.2.11) \[ \operatorname{Pf}(\tilde B)=\frac1{(n/2)!}\sum_{\rho\in\wp_n}\operatorname{sgn}(\rho)\,\tilde B_{\rho(1)\rho(2)}\cdots\tilde B_{\rho(n-1)\rho(n)}. \]
Here, \(\wp_n\) is the permutation group of degree \(n\), and \(\operatorname{sgn}(\rho)\) is the signature of \(\rho\in\wp_n\).

Remark 5.2.3. Let \(n\) be even, and let \(A=(A_{ij})\) be an antisymmetric matrix. Then, we have
\[ \int d\theta\,\exp\Bigl(-\frac12\sum A_{ij}\theta_i\theta_j\Bigr)=\operatorname{Pf}(A). \]
Moreover, \(\operatorname{Pf}(A)^2=\det(A)\) holds.
Definition 5.2.4. An even supermatrix \(M=\begin{pmatrix}A&C\\ D&B\end{pmatrix}\) is called positive-definite if the following conditions are satisfied:

(gs.1): The body part \(A_{\mathrm B}\) of \(A\) is a regular, positive definite, symmetric matrix.
(gs.2): \(B\) is a regular antisymmetric matrix.
(gs.3): \(C\) and \(D\) satisfy \({}^tC+D=0\).

For the above supermatrix \(M\), we define the corresponding bilinear form as
\[ \langle X,MX\rangle={}^tXMX=\sum_{j,k=1}^m x_jA_{jk}x_k+\sum_{j=1}^m\sum_{s=1}^n x_jC_{j\,m+s}\theta_s+\sum_{t=1}^n\sum_{k=1}^m\theta_tD_{m+t\,k}x_k+\sum_{s,t=1}^n\theta_sB_{m+s\,m+t}\theta_t. \]
Lemma 5.2.1. Let \(M\) be an even, positive definite matrix. Then,

(5.2.12) \[ G(\lambda,M)=\int_{R^{m|n}}dX\,e^{-\lambda^{-1}\langle X,MX\rangle/2}\quad\text{for }\lambda>0 \]
\[ =\begin{cases}0&\text{if }n\text{ is odd},\\ (2\pi\lambda)^{m/2}(2\lambda)^{-n/2}(\det A)^{-1/2}\operatorname{Pf}(B-DA^{-1}C)&\text{if }n\text{ is even}.\end{cases} \]
Comparison 5.2.1. For a positive definite symmetric real matrix \(H\), we have
(5.2.13) \[ \int_{\mathbb R^m}e^{-\lambda x\cdot Hx/2}\,dx=\Bigl(\frac{2\pi}{\lambda}\Bigr)^{m/2}(\det H)^{-1/2}. \]
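The identities of Remark 5.2.3 can be verified by brute force for small matrices. The Python sketch below (an illustration, not from the text) uses the common all-permutations normalization \(\operatorname{Pf}(A)=\frac1{2^{m}m!}\sum_{\sigma}\operatorname{sgn}(\sigma)A_{\sigma(1)\sigma(2)}\cdots A_{\sigma(n-1)\sigma(n)}\) with \(m=n/2\); conventions such as (5.2.11) that restrict the sum absorb the factor \(2^{m}\), so the normalizations may differ by that constant.

```python
import math
import random
from itertools import permutations

def sgn(p):
    # signature via the inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(M):
    n = len(M)
    return sum(sgn(p) * math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def pf(A):
    # Pf(A) = (2^m m!)^{-1} sum over all permutations of sgn * product of pairs
    n = len(A)
    m = n // 2
    total = sum(sgn(p) * math.prod(A[p[2 * i]][p[2 * i + 1]] for i in range(m))
                for p in permutations(range(n)))
    return total / (2 ** m * math.factorial(m))

random.seed(0)
n = 4
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = random.uniform(-1.0, 1.0)
        A[j][i] = -A[i][j]

assert abs(pf(A) ** 2 - det(A)) < 1e-9   # Pf(A)^2 = det(A)
```

For \(n=4\) this reduces to the familiar \(\operatorname{Pf}(A)=A_{12}A_{34}-A_{13}A_{24}+A_{14}A_{23}\).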

Exercise 5.2.1. Prove the following by means of the Berezin integral:
(i) \(\det(A)=\operatorname{Pf}(A)^2\) for an antisymmetric matrix \(A\);
(ii) for any \(2n\times2n\) antisymmetric matrix \(A\) and any \(2n\times2n\) matrix \(B\), \(\operatorname{Pf}({}^tBAB)=\det(B)\operatorname{Pf}(A)\);
(iii) for any \(n\times n\) matrix \(B\),
\[ \operatorname{Pf}\begin{pmatrix}0&B\\ -{}^tB&0\end{pmatrix}=(-1)^{n(n-1)/2}\det(B). \]

5.3. Contour integral w.r.t. an even variable

To overcome the inconsistency in the above examples, we need to reconsider the meaning of "body part", that is, not to insist on Remark 4.4.2 of de Witt (§4 of Chapter 4).
We recall the idea of the contour integral noted in A. Rogers [118].

Contour integrals are a means of "pulling back" an integral in a space that is algebraically (as well as possibly geometrically) more complicated than \(\mathbb R^m\). A familiar example, of course, is complex contour integration; if \(\gamma:[0,1]\to\mathbb C\) is piecewise \(C^1\) and \(f:\mathbb C\to\mathbb C\), one has the one-dimensional contour integral
\[ \int_\gamma f(z)\,dz=\int_0^1 f(\gamma(t))\cdot\gamma'(t)\,dt=\int_0^1 dt\,\gamma'(t)\cdot f(\gamma(t)). \]
This involves the algebraic structure of \(\mathbb C\) because the right-hand side above includes multiplication of complex numbers.
We follow this idea to define the integral of a supersmooth function \(u(x)\) on an even superdomain \(U_{\mathrm{ev}}\subset R^{m|0}=R^m_{\mathrm{ev}}\) (see also Rogers [115, 116, 118] and Vladimirov and Volovich [139]).
Definition 5.3.1. Let \(u(x)\) be a supersmooth function defined on an even superdomain \(U_{\mathrm{ev}}\subset R^{1|0}\) such that \([a,b]\subset\pi_{\mathrm B}(U_{\mathrm{ev}})\). Let \(\lambda=\lambda_{\mathrm B}+\lambda_{\mathrm S}\), \(\mu=\mu_{\mathrm B}+\mu_{\mathrm S}\in U_{\mathrm{ev}}\) with \(\lambda_{\mathrm B}=a\), \(\mu_{\mathrm B}=b\), and let a continuous and piecewise \(C^1\)-curve \(\gamma:[a,b]\to U_{\mathrm{ev}}\) be given such that \(\gamma(a)=\lambda\), \(\gamma(b)=\mu\). We define
(5.3.1) \[ \int_\gamma dx\,u(x)=\int_a^b dt\,\dot\gamma(t)\cdot u(\gamma(t)) \]
and call it the integral of \(u\) along the curve \(\gamma\).
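For the ordinary complex case that motivates this definition, the endpoint dependence asserted below in Proposition 5.3.1 (a primitive makes the contour integral depend only on the endpoints) is easy to test numerically. A minimal Python sketch, where the discretization details (midpoint rule, difference quotient for \(\gamma'\)) are my own and not in the text:

```python
def contour_integral(u, gamma, a=0.0, b=1.0, n=4000):
    """Approximate the contour integral of u along gamma by the midpoint
    rule, with gamma'(t) replaced by a central difference quotient."""
    h = (b - a) / n
    total = 0j
    for k in range(n):
        t = a + (k + 0.5) * h
        dg = (gamma(t + h / 2) - gamma(t - h / 2)) / h
        total += u(gamma(t)) * dg * h
    return total

u = lambda z: z * z                            # primitive U(z) = z**3 / 3
straight = lambda t: (1 + 1j) * t              # straight path from 0 to 1+i
bent = lambda t: 2 * t if t <= 0.5 else 1 + 1j * (2 * t - 1)  # via z = 1

exact = (1 + 1j) ** 3 / 3                      # U(1+i) - U(0)
assert abs(contour_integral(u, straight) - exact) < 1e-6
assert abs(contour_integral(u, bent) - exact) < 1e-6
```

Both paths from \(0\) to \(1+i\) give \(U(1+i)-U(0)\), the discrete analogue of (5.3.2).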

Using the integration by parts for functions on R, we get the following fundamental result.

Proposition 5.3.1 (p.7 of de Witt [35]). Let \(u(t)\in C^\infty([a,b]:\mathbb C)\) and \(U(t)\in C^\infty([a,b]:\mathbb C)\) be given such that \(U'(t)=u(t)\) on \([a,b]\). We denote the Grassmann continuations of them by \(\tilde u(x)\) and \(\tilde U(x)\). Then, for any continuous and piecewise \(C^1\)-curve \(\gamma:[a,b]\to U_{\mathrm{ev}}\subset R^{1|0}\) such that \([a,b]\subset\pi_{\mathrm B}(U_{\mathrm{ev}})\) and \(\gamma(a)=\lambda\), \(\gamma(b)=\mu\) with \(\lambda_{\mathrm B}=a\), \(\mu_{\mathrm B}=b\), we have
(5.3.2) \[ \int_\gamma dx\,\tilde u(x)=\tilde U(\mu)-\tilde U(\lambda). \]
Proof. By definition, we get
\[ \int_a^b dt\,\dot\gamma(t)u(\gamma(t))=\int_a^b dt\,(\dot\gamma_{\mathrm B}(t)+\dot\gamma_{\mathrm S}(t))\sum_{\ell\ge0}\frac1{\ell!}u^{(\ell)}(\gamma_{\mathrm B}(t))\gamma_{\mathrm S}(t)^\ell \]

b b 1 (k) k = dt γ˙B(t)u(γB(t)) + dt γ˙B(t) u (γB(t))γS(t) a a k! Z Z Xk≥1 b 1 (ℓ) ℓ + dt u (γB(t))γ ˙ S(t)γS(t) a ℓ! Z Xℓ≥0 1 = U(b) U(a)+ U (ℓ+1)(b)µℓ+1 U (ℓ+1)(a)λℓ+1 − (ℓ + 1)! S − S Xℓ≥0 n o = U˜(µ) U˜(λ). − Here, we used the integration by parts formula for functions on R with value in Fr´echet space: b (ℓ) ℓ dt u (γB(t))γ ˙ S(t)γS(t) Za b d γ (t)ℓ+1 = dt u(ℓ)(γ (t)) S B dt ℓ + 1 Za b γ (t)ℓ+1 µ ℓ+1 λ ℓ+1 = dt γ˙ (t)u(ℓ+1)(γ (t)) S + U (ℓ+1)(b) S U (ℓ+1)(a) S .  − B B ℓ + 1 ℓ + 1 − ℓ + 1 Za Problem 5.3.1. How do we extend Proposition 5.3.1 to the case when u(t) C([a, b] : C)? ∈ Lemma 5.3.1 (Lemma 3.9 in [115] on B ). (a) (reparametrization of paths) Let γ : [a, b] R L → ev be a path in R and let c, d R. Also let φ : [c, d] [a, b] be C1 with φ(c) = a, φ(d) = b and ev ∈ → φ′(s) > 0 for all s [c, d]. Then ∈ dx u(x)= dx u(x). Zγ Zγ◦φ (b)(sum of paths) Let γ : [a, b] R and γ : [c, d] R be two paths with γ (b) = γ (c) Also 1 → ev 2 → ev 1 2 define γ + γ to be the path γ + γ : [a, b + d c] R defined by 1 2 1 2 − → ev γ1(t), a t b, γ1 + γ2(t)= ≤ ≤ γ (t b + c), b t b + d c. ( 2 − ≤ ≤ − Then if U is open in R , u : U R is in and γ ([a, b]) U , γ ([c, d]) U , ev ev ev → CSS 1 ⊂ ev 2 ⊂ ev dx u(x)= dx u(x)+ dx u(x). Zγ1+γ2 Zγ1 Zγ2 (c)(inverse of a path) Let γ : [a, b] R be a path in R . Define the curve γ : [a, b] R by → ev ev − → ev γ(t)= γ(a + b t) − − Then if U is open in R with γ([a, b]) U and u : U R is supersmooth, ev ev ⊂ ev ev → dx u(x)= dx u(x). − Z−γ Zγ Proof. Applying CVF on R for t = φ(s) and dt = φ′(s)ds, we have d d dx u(x)= ds (γ(φ(s)))′u(γ(φ(s))) = ds φ′(s)[γ′(φ(s))u(γ(φ(s)))] Zγ◦φ Zc Zc b = dt γ′(t)u(γ(t)) = dx u(x). Za Zγ Others are proved analogously.  5.3.CONTOURINTEGRALW.R.T.ANEVENVARIABLE 85

Corollary 5.3.1 (Corollary 3.7 in [115] on BL). Let u(x) be a supersmooth function defined on a even superdomain U R1|0 into C. ev ⊂ (a) Let γ , γ be continuous and piecewise C1-curves from [a, b] U such that λ = γ (a)= γ (a) 1 2 → ev 1 2 and µ = γ1(b)= γ2(b). If γ1 is homotopic to γ2, then

(5.3.3) dx u(x)= dx u(x). Zγ1 Zγ2 (b) If u : R R is on all R , one can write “unambiguously” ev → CSS ev µ dx u(x)= dx u(x). Zλ Zγ Here, γ : [a, b] R is any path in R with γ(a)= λ, γ(b)= µ. → ev ev Proposition 5.3.2. For a given change of variable x = ϕ(y), we define the pull-back of 1-form ∗ ∂ϕ(y) 1|0 −1 1|0 v = dx ρ(x) by (ϕ v) = dy ρ(ϕ(y)). Then, for paths γ : [a, b] Rx , ϕ γ : [a, b] Ry x y ∂y → ◦ → and u, we have

∗ ∗ ∗ vu = dx vx ρ(x)u(x)= dy (ϕ v)y ρ(ϕ(y))u(ϕ(y)) = ϕ v ϕ u. −1 −1 Zγ Zγ Zϕ ◦γ Zϕ ◦γ Proof. By definition, we have not only b ′ vxu(x)= dt γ (t)ρ(γ(t))u(γ(t)), Zγ Za but also ∗ ∗ ∂ϕ(y) (ϕ v)yϕ u(y)= dy ρ(ϕ(y))u(ϕ(y)) −1 −1 ∂y Zϕ ◦γ Zϕ ◦γ b ∂ϕ(y) = dt (ϕ−1(γ(t)))′ ρ(ϕ(y))u(ϕ(y)) ∂y −1 Za y=ϕ ◦γ(t)

b = dt γ˙ (t)ρ(γ(t))u(γ(t)). Za Here, we used y = ϕ−1(ϕ(y)), x = γ(t), y = ϕ−1(γ(t)) with 1 ∂ϕ(y) 1 =ϕ ˙(y)ϕ ˙ −1(ϕ(y)), ϕ˙(y)= = .  ϕ˙ −1(ϕ(y)) ∂y Example 5.3.1 (Translational invariance). Let I = (a, b) R. We identify q I as γ(q) = ⊂ ∈ x R . We put M = γ(I) = x R π (x) = q I R . Taking a non-zero nilpotent ∈ ev { ∈ ev | B ∈ } ⊂ ev element ν R , we put τ : R y x = ϕ(y)= τ (y)= y ν R , ∈ ev ν ev ∋ → ν − ∈ ev M = τ −1(M)= x + νR π (x)= q I , γ (q)= τ −1(γ(q)). 1 ν { ev | B ∈ } 1 ν Then, we have b b dx u(x)= dq γ′(q)u(γ(q)) = dq γ′ (q)u(γ(q)) = dy u(y ν). 1 − ZM Za Za ZM1 Remark 5.3.1. Above identification γ(q)= x R is obtained as the Grassmann continuation ∈ ev ˜ι of a function ι(q)= q C∞(I : R). In fact, ∈ ∂αι(q) ˜ι(x)= (x )xα = x + x = x. ∂qα B S B S α X 86 5.ELEMENTARYINTEGRALCALCULUSONSUPERSPACE

5.4. Modification of Rogers, Vladimirov and Volovich’s approach

Now, we modify the arguments of Vladimirov and Volovich [139] suitably to get a new definition.

Definition 5.4.1 (Parameter set, paths and integral). For any domain \(\Omega\) in \(\mathbb R^m\), we denote by \(\tilde\Omega=\Omega\times R^n_{\mathrm{od}}\) a parameter set.
(1) A smooth map \(\gamma\) from \(\tilde\Omega\) to \(R^{m|n}\) belongs to \(C^\infty(\tilde\Omega:R^{m|n})\) when

γ(q,ϑ) = (γ0¯(q,ϑ), γ1¯(q,ϑ)) = (γ0¯,j(q,ϑ), γ1¯,k(q,ϑ)) j=1,···,m , e e k=1,···,n and a a γ¯ (q,ϑ)= ϑ γ¯ (q) R , γ¯ (q,ϑ)= ϑ γ¯ (q) R . 0,j 0,j,a ∈ ev 1,k 1,k,a ∈ od |aX|≤n |aX|≤n Here, I J γ0¯,j,a(q)= γ0¯,j,a,I(q)σ , γ1¯,k,a(q)= γ1¯,k,a,J(q)σ I J | |=|aX|(mod2) | |=|a|X+1(mod2) and ∞ n ∞ m γ¯ I(q), γ¯ J(q) C (Ω : C ) and γ (q) C (Ω : R ) 0,a, 1,a, ∈ 0¯,0¯,0˜ ∈ with 0=¯ even or 0¯ = (0, , 0) 0, 1 n, 0˜ = (0, ) . · · · ∈ { } · · · ∈I Moreover, if ∂γ(q,ϑ) ∂γ0¯(q,ϑ) ∂γ1¯(q,ϑ) sdet J(γ)(q,ϑ) = 0 where J(γ)(q,ϑ)= = ∂q ∂q , 6 ∂(q,ϑ) ∂γ0¯(q,ϑ) ∂γ1¯(q,ϑ) ∂ϑ ∂ϑ ! then, this γ is called a path from Ω to Rm|n, whose image is said to be FSM(=foliated singular mani -fold) denoted by e m|n n M = M(γ, Ω) = γ(Ω) = (x,θ) R x = γ¯(q,ϑ),θ = γ¯(q,ϑ),q Ω,ϑ R . { ∈ 0 1 ∈ ∈ od} a (2) Let M be given ase above. For a supersmooth function u(x,θ)= |a|≤n θ ua(x) defined on M M , we define the integration of u(x,θ) on as follows: P (5.4.1) RVV dxdθ u(x,θ)= dϑ dq sdet J(γ)(q,ϑ)u(γ(q,ϑ)) . − M Rn Ω ZZ Z od  Z  Here, we assume that for each ϑ Rn , integrands in the bracket [ ] above are integrable on Ω. ∈ od · · · Remark 5.4.1. The reason for nomination “singular” is explained in A. Kharenikov [88]. It stems from the difference from the naive Definition 5.2.2 of Berezin i.e., their definition domain are not U = x Rm|0 π (x) Ω though defined via Ω in Rm. ev { ∈ | B ∈ }

We need to check the well-definedness of (5.4.1) in Definition 5.4.1. First of all, we remark that by the algebraic nature of integration w.r.t. odd variables, we may interchange the order of integration as ∂ ∂ dϑ dq sdet J(γ)(q,ϑ) u(γ(q,ϑ)) = dq sdet J(γ)(q,ϑ) u(γ(q,ϑ)) Rn Ω · ∂ϑn · · ·∂ϑ1 Ω · Z od  Z  Z ϑ=0

∂ ∂ = dq sdet J(γ)(q,ϑ) u(γ(q,ϑ)) ∂ϑ · · ·∂ϑ · ZΩ n 1 ϑ=0 

= dq dϑ sdet J(γ)(q,ϑ) u(γ(q,ϑ)) . Ω Rn · Z  Z od  5.4. MODIFICATION OF ROGERS, VLADIMIROV AND VOLOVICH’S APPROACH 87 ¯ In case when γ0¯(q,ϑ) doesn’t depend on ϑ, putting ϑ = γ1¯(q,ϑ) andq ¯ = γ0¯(q), we get ∂ ∂ dq sdet J(γ)(q,ϑ) u(γ(q,ϑ)) ∂ϑ · · ·∂ϑ · ZΩ n 1 ϑ=0  ∂γ0¯(q) −1 ∂γ1¯(q,ϑ) = dq det dϑ det u(γ0¯(q), γ1¯(q,ϑ)) Ω ∂q Rn ∂ϑ · Z   Z od    ∂γ0¯(q) ¯ ¯ = dq det dϑ u(γ0¯(q), ϑ) Ω ∂q Rn Z   Z od  = dq¯ dϑ¯ u(¯q, ϑ¯)= dx dθ u(x,θ) . γ (Ω) Rn Z Z Z 0¯  Z od  n That is, putting Ω=Ω˜ R and M = γ(Ω),˜ we have γ(q,ϑ) = (γ¯(q), γ¯(q,ϑ)) and × od 0 1 RVV dxdθ u(x,θ)= dθ dx u(x,θ) = dθ dx u(x,θ) n − M R γ¯(Ω) (5.4.2) ZZ Z od  Z 0  Z  Z  = dx dθ u(x,θ) = dx dθ u(x,θ) . γ (Ω) Rn Z 0¯  Z od  Z  Z  Moreover, we need the following definition:

Definition 5.4.2. Let two FSMs \(\mathcal M=\gamma(\tilde\Omega)\) and \(\mathcal M'=\gamma'(\tilde\Omega')\) be given. They are called superdiffeomorphic to each other if there exist diffeomorphisms \(\phi:\tilde\Omega'\to\tilde\Omega\) and \(\varphi:\mathcal M'\to\mathcal M\) such that the following diagram commutes, with \(\gamma'=\varphi^{-1}\circ\gamma\circ\phi\):
\[
\begin{array}{ccc}
\tilde\Omega&\xrightarrow{\ \gamma\ }&\mathcal M=\gamma(\tilde\Omega)\\
{\scriptstyle\phi}\uparrow&&\uparrow{\scriptstyle\varphi}\\
\tilde\Omega'&\xrightarrow{\ \gamma'\ }&\mathcal M'=\gamma'(\tilde\Omega').
\end{array}
\]
Using this notion, we have the desired result:

Proposition 5.4.1 (Reparametrization invariance). Let \(\Omega\) and \(\Omega'\) be domains in \(\mathbb R^m\) and put \(\tilde\Omega\) and \(\tilde\Omega'\) as above. We assume \(\tilde\Omega\) and \(\tilde\Omega'\) are superdiffeomorphic to each other, that is, there exist a diffeomorphism \(\phi_{\bar0}:\Omega'\to\Omega\) such that \(\frac{\partial\phi_{\bar0}(q')}{\partial q'}\) is continuous in \(\Omega'\) with \(\det\bigl(\frac{\partial\phi_{\bar0}(q')}{\partial q'}\bigr)>0\), and a map \(\phi_{\bar1}:\Omega'\times R^n_{\mathrm{od}}\ni(q',\eta')\to\phi_{\bar1}(q',\eta')\in R^n_{\mathrm{od}}\) which is supersmooth w.r.t. \(\eta'\) with \(\det\bigl(\frac{\partial\phi_{\bar1}(q',\eta')}{\partial\eta'}\bigr)\neq0\). Put

′ ′ ′ ′ ′ ′ ′ ′ ′ ′ ′ ′ ′ ′ ′ M = X = (x ,θ ) X = γ φ(q , η ), (q , η ) Ω˜ where φ(q , η ) = (φ¯(q ), φ¯(q , η )). { | ◦ ∈ } 0 1 For a given path γ : Ω˜ Rm|n, we define a path γ φ : Ω˜ ′ Rm|n. Then, we have → ◦ → RVV dxdθ u(x,θ) = RVV dx′dθ′ u(x′,θ′). − ˜ − ˜ ′ ZZ γ(Ω) ZZ γ◦φ(Ω ) Proof. By definition, we have

RVV dxdθ u(x,θ)= dη dq sdet J(γ)(q, η)u(γ(q, η)) − γ(Ω)˜ Rn Ω ZZ Z od  Z  and

RVV dx′dθ′ u(x′,θ′)= dη′ dq′ sdet J(γ φ)(q′, η′)u(γ φ(q′, η′)) . − γ◦φ(Ω˜ ′) Rn Ω′ ◦ ◦ ZZ Z od  Z  88 5.ELEMENTARYINTEGRALCALCULUSONSUPERSPACE

Using ′ ′ ′ ′ ′ ′ ′ ′ γ φ(q , η ) = (γ¯(φ¯(q ), φ¯(q , η )), γ¯(φ¯(q ), φ¯(q , η ))), ◦ 0 0 1 1 0 1 J(γ φ)(q′, η′)= J(γ)(φ(q′, η′)J(φ)(q′, η′), ◦ ′ ′ ′ ∂φ¯(q , η ) ∂φ¯(q ) sdet J(φ)(q′, η′) = det −1 1 det 0 , ∂η′ ∂q′     we have ′ ′ ′ ′ ′ ∂φ0¯(q ) −1 ∂φ1¯(q , η ) sdet J(γ φ)(q , η ) = det det sdet J(γ)(q, η) ′ . ′ ′ q=φ0¯(q ) ◦ ∂q ∂η ′ ′     η=φ1¯(q ,η )

Remarking the order of integration, we have

dη′ dq′ sdet J(γ φ)(q′, η′)u(γ φ(q′, η′)) Rn Ω′ ◦ ◦ Z od  Z  ′ ′ ′ ′ ∂φ0¯(q ) ′ −1 ∂φ1¯(q , η ) = dq det dη det sdet J(γ)(q, η)u(γ(q, η)) ′ ′ ′ q=φ0¯(q ) Ω′ ∂q Rn ∂η ′ ′ Z   Z od   η=φ1¯(q ,η )   

= dq dη sdet J(γ)(q, η)u(γ(q, η)) = dqdη sdet J(γ)(q, η)u(γ(q, η)) .  Ω Rn Ω˜ Z  Z od  ZZ  Finally, we prove our goal:

Theorem 5.4.1 (CVF = change of variables formula). Let a supersmooth diffeomorphism \(\varphi\) be given from a foliated singular manifold \(\mathcal N(\delta,\Omega)\subset R^{m|n}\) onto a neighbourhood \(\mathcal O\) of another foliated singular manifold \(\mathcal M(\gamma,\Omega)\subset R^{m|n}\):
(5.4.3) \[ \varphi:(y,\omega)\to(x,\theta)\quad\text{with}\quad x=\varphi_{\bar0}(y,\omega),\ \theta=\varphi_{\bar1}(y,\omega). \]
That is, \(\mathcal M=\varphi(\mathcal N)\) and \(\operatorname{sdet}J(\varphi)\neq0\). Moreover, we assume that \(\delta=\varphi^{-1}\circ\gamma\) satisfies \(\operatorname{sdet}J(\delta)\neq0\). Then, for any integrable function \(u\in C_{SS}(\mathcal O:\mathbb R)\), the CVF holds:
(5.4.4) \[ \mathrm{RVV}\text{-}\iint_{\mathcal M}dxd\theta\,u(x,\theta)=\mathrm{RVV}\text{-}\iint_{\varphi^{-1}(\mathcal M)}dyd\omega\,\operatorname{sdet}J(\varphi)(y,\omega)\cdot u(\varphi(y,\omega)). \]
Remark 5.4.2. An analogous result is proved in [139] on the superspace \(B_L^{m|n}\) based on a Banach-Grassmann algebra, assuming that the set \(\{x\in B_{L,\bar0}^m\mid x=\gamma(q,\vartheta),\ q\in\Omega\}\) is independent of \(\vartheta\in B_{L,\bar1}^n\).

Remark 5.4.3. Formulas (5.2.4) and (5.4.4) have the same form, but their underlying definitions (5.2.1) and (5.4.1) are very different from each other! This difference is related to the primitive question "How should one consider the body of a supermanifold?" (see R. Catenacci, C. Reina and P. Teofilatto [23]). Though we do not develop a theory of supermanifolds with charts based on \(R^{m|n}\) in these notes, in Chapter 8 we consider the simplest case \(R^{m|n}\).

5.4.0.1. Proof of Theorem 5.4.1 – change of variable formula under integral sign. We want to prove the following diagram: γ u(x,θ) (Ω˜, dqdη) (M, dxdθ) RVV dxdθ u(x,θ) R −−−−→ −−−−→ − M ∈ ϕ RR ˜ x (Ω, dqdη ) (N, dydω ) RVV −1 M dydω sdet J (ϕ)(y,ω)u(ϕ(y,ω)) R. −−−−→δ  −−−−−→ϕ∗u(y,ω) − ϕ ( ) ∈ RR 5.4. MODIFICATION OF ROGERS, VLADIMIROV AND VOLOVICH’S APPROACH 89

By definition, we have paths Ω Rn (q, η) γ(q, η) = (x,θ), × od ∋ → Ω Rn (q, η) γ (q, η) = (y,ω), × od ∋ → 1 which are related each other

(x,θ)= γ(q, η)= ϕ(y,ω)= ϕ(γ (q, η)), γ = ϕ−1 γ. 1 1 ◦ We define pull-back of a “superform” as

v = dxdθ u(x,θ) ϕ∗v = dydω sdet J(ϕ)(y,ω) u(ϕ(y,ω)). →

Then, we have

Claim 5.4.1.

(5.4.5) RVV ϕ∗v = RVV v. − −1 ˜ − ˜ ZZ ϕ ◦γ(Ω) ZZ γ(Ω)

Proof. Since J(ϕ−1 γ)= J(γ) J(ϕ−1) which yields ◦ · sdet J(ϕ−1 γ)(q, η)(sdet J(ϕ)(y,ω) = sdet J(γ)(q, η), ◦ −1 (y,ω)=ϕ ◦γ(q,η)

and by the definitions of path(contour) and integral, we have

RVV ϕ∗v − −1 ˜ ZZϕ ◦γ(Ω) = dϑ dq sdet J(ϕ−1 γ)(q,ϑ)(sdet J(ϕ)(y,ω) u(ϕ(y,ω)) Rn Ω ◦ · −1 Z od  Z (y,ω)=ϕ ◦γ(q,ϑ)

= dϑ dq sdet J(γ)(q,ϑ) u(γ(q,ϑ)) = RVV v. Rn Ω · − γ(Ω)˜ Z od  Z  ZZ we have the claim. //

Now, we interpret (5.4.5) as change of variables: Since we may denote integrals as

RVV v = RVV dxdθ u(x,θ), − ˜ − ZZ γ(Ω) ZZ M and RVV ϕ∗v = RVV dydω sdet J(ϕ)(y,ω)u(ϕ(y,ω)), − −1 ˜ − −1 ZZ ϕ ◦γ(Ω) ZZ ϕ M we have

\[ \mathrm{RVV}\text{-}\iint_{\mathcal M}dxd\theta\,u(x,\theta)=\mathrm{RVV}\text{-}\iint_{\varphi^{-1}\mathcal M}dyd\omega\,\operatorname{sdet}J(\varphi)(y,\omega)\,u(\varphi(y,\omega)).\qquad\square \]
Remark 5.4.4. It is fair to say that new definitions without inconsistency for the CVF are introduced by M.J. Rothstein [120] and M.R. Zirnbauer [147], but they are not so easy to understand, at least for me.

Resolution of inconsistency: Here, we resolve the inconsistency derived from the naive definition of the Berezin integral by applying the modified Rogers-Vladimirov-Volovich definition above.

Resolution of inconsistency in Example 5.2.1: From Theorem 5.4.1, we interpret as follows:

For \(\Omega=(0,1)\), we are given \(\tilde\Omega=\Omega\times R^2_{\mathrm{od}}\). Defining a map \(\gamma:\tilde\Omega\to\mathcal M\) as

\[ \gamma:\tilde\Omega\ni(q,\vartheta)\to(x,\theta)=(\gamma_{\bar0}(q,\vartheta),\gamma_{\bar1}(q,\vartheta))=\gamma(q,\vartheta), \]
we may regard \(\mathcal M=\{(x,\theta)\in R^{1|2}\mid\pi_{\mathrm B}(x)\in\Omega,\ \theta\in R^2_{\mathrm{od}}\}\) as a foliated singular manifold \(\gamma(\tilde\Omega)\) in \(R^{1|2}\). We are given another foliated singular manifold \(\mathcal N=\delta(\tilde\Omega)\) in \(R^{1|2}\) such that they are super-differentiably isomorphic:
\[ \varphi:\delta(\tilde\Omega)\ni(y,\omega)\to\varphi(y,\omega)=(x,\theta)\in\gamma(\tilde\Omega), \]
with

x = ϕ0¯(y,ω)= y + ω1ω2φ(y), (θ1 = ϕ1¯,1(y,ω)= ω1, θ2 = ϕ1¯,2(y,ω)= ω2, and moreover

−1 δ = ϕ γ : (q,ϑ) (q ϑ ϑ φ(q),ϑ) = (δ¯(q,ϑ), δ¯(q,ϑ)) = (y,ω). ◦ → − 1 2 0 1 Then, N = ϕ−1(M) and

′ 1+ ω1ω2φ (y) 0 0 1 0 0 J(ϕ)(y,ω)= ω2φ(y) 1 0 , J(γ)(q,ϑ)= 0 1 0 ,  ω φ(y) 0 1 0 0 1 − 1   ′   1 ϑ1ϑ2φ (q) 0 0 J(δ)(q,ϑ)= − ϑ φ(q) 1 0 .  − 2  ϑ1φ(q) 0 1   In this case, for u(x,θ)= u0¯(x)+ θ1θ2u1¯(x), we have

RVV dxdθ u(x,θ)= dϑ dq sdet J(γ)(q,ϑ)u(γ(q,ϑ)) − M R2 Ω ZZ Z od  Z  1 1 ∂ ∂ = dq dϑ u(q,ϑ)= dq u(q,ϑ) 0 R2 0 ∂ϑ2 ∂ϑ1 Z Z od Z ϑ=0

1 = dq u1¯(q), Z0 and

RVV dydω (ϕ∗u)(y,ω) − ZZN

= dqdϑ sdet J(δ)(q,ϑ) sdet J(ϕ)(y,ω)u(ϕ(y,ω)) (y,ω)=δ(q,ϑ) Ω ZZe (5.4.6) 1   = dq dϑ sdet J(δ)(q,ϑ) sdet J(ϕ)(y,ω)u(ϕ(y,ω)) (y,ω)=δ(q,ϑ) 0 R2 Z  Z od  1   = dq dϑ u(q,ϑ). 0 R2 Z Z od 5.4. MODIFICATION OF ROGERS, VLADIMIROV AND VOLOVICH’S APPROACH 91

Therefore, without any condition on support of integrand u, we get the following:

RVV dxdθ u(x,θ) − ZZM = dqdϑ sdet J(γ)(q,ϑ) u(γ(q,ϑ)) Ω · ZZe = dqdϑ sdet J(δ)(q,ϑ) sdet J(ϕ)(y,ω) u(ϕ(y,ω)) Ω · ZZe (y,ω)=δ(q,ϑ)  

= RVV dydω sdet J(ϕ)(y,ω) u(ϕ(y,ω)) − −1 · ZZϕ (N) = RVV dydω (ϕ∗u)(y,ω).  − ZZN Remark 5.4.5. In order to recognize this phenomena and for future use, we calculate more precisely (please sensitive to the underlined parts): sdet J(ϕ)(y,ω) u(ϕ(y,ω)) · ′ = (1+ ω1ω2φ (y))[u0¯(y + ω1ω2φ(y)) + ω1ω2u1¯(y + ω1ω2φ(y))] (5.4.7) ′ ′ = (1+ ω1ω2φ (y))[u0¯(y)+ ω1ω2(φ(y)u0¯(y)+ u1¯(y))] ′ = u0¯(y)+ ω1ω2[(φ(y)u0¯(y)) + u1¯(y)], and putting (y,ω)= δ(q,ϑ), then we have sdet J(δ)(q,ϑ) sdet J(ϕ)(y,ω) u(ϕ(y,ω)) · (y,ω)=δ(q,ϑ) ′ ′ = (1 ϑ ϑ φ (q)) u¯(y)+ω ω [(φ(y)u¯(y)) + u¯(y)] − 1 2 0 1 2 0 1 y=q−ϑ1ϑ2φ(q), (5.4.8) ω1=ϑ1,ω2=ϑ2 ′ ′  ′ = (1 ϑ ϑ φ (q)) u¯(q) ϑ ϑ φ(q)u (q) + ϑ ϑ [(φ(q)u ¯(q)) + u¯(q)] − 1 2 0 − 1 2 0¯ 1 2 0 1 ′ ′ = u¯(q)+ ϑ1ϑ2[(φ(q)u¯(q)) + u¯(q) (φ(q)u¯(q)) ]= u¯(q)+ ϑ1ϑ2u¯(q). 0 0 1 − 0 0 1 Or, since u(ϕ(y,ω))(y,ω)=δ(q,ϑ) = u(q,ϑ) and sdet J(δ)(q,ϑ) sdet J(ϕ)(δ(q,ϑ)) = (1 ϑ ϑ φ′(q))(1 + ϑ ϑ φ′(q)) = 1, · − 1 2 1 2 we get the result.

From these calculations, one sees that the inconsistent term in (5.4.7) comes from \(\omega_1\omega_2(\phi(y)u_{\bar0}(y))'\).

Resolution of inconsistency in Example 5.2.2: [An incosistency derived from the diagonaliza- tion of matrix Q mentioned in Chapter 3] From (3.3.5), we define a path γ from (q, η) to (x,θ) and γ˜ from (q, η) to (y,ω) −1 (x,θ) = (γ¯(q, η), γ¯(q, η)) = γ(q, η) and (y,ω) =γ ˜(q, η)= ϕ γ(q, η). 0 1 ◦ Then, η η iη η y = q + 1 2 , y = q 1 2 , 1 1 q iq 2 2 − q iq 1 − 2 1 − 2  η1 η2  ω1 = i , ω2 = i ,  − q iq q iq 1 − 2 1 − 2 we have  −2 −2 −2 −2 1 η1η2(q1 iq2) iη1η2(q1 iq2) iη1(q1 iq2) iη2(q1 iq2) iη− η (q −iq )−2 1+ η η (q− iq )−2 η (q −iq )−2 − η (q −iq )−2 J(˜γ)(q, η)= 1 2 1 2 1 2 1 2 1 1 2 2 1 2  η (q −iq )−1 iη (q −iq )−1 i(q − iq )−1 − 0−  2 1 − 2 − 2 1 − 2 − 1 − 2  η (q iq )−1 iη (q iq )−1 0 i(q iq )−1   − 1 1 − 2 1 1 − 2 1 − 2    92 5.ELEMENTARYINTEGRALCALCULUSONSUPERSPACE and sdet J(˜γ)(q, η)= (q iq )−2, therefore − 1 − 2 −(x2+x2−2θ θ ) −(q2 +q2−2η η ) dQ e 1 2 1 2 = dqdη sdet J(γ)(q, η)e 1 2 1 2 , ZZ ZZ 2 2 2 2 dQ˜ e−(y1 +y2 ) = dqdη sdet J(˜γ)(q, η) sdet (ϕ)(y,ω) e−(y1 +y2 ) . ZZ ZZ  (y,ω)=˜γ(q,η)

By this calculation, we have no inconsistency.

5.5. Supersymmetric transformation in superspace – as an example of change of variables

Stimulated probably by the success of QED, Berezin and Marinov claim in their paper [11] that one should "treat bosons and fermions on an equal footing", as mentioned before. As an objective of these lecture notes, we give a basic idea of this "equal footing". To do this, we need to develop an integration theory which admits change of variables under the integral sign. In particular, we use transformations mixing even and odd variables.

a In the following, we assume fa(xB) C, that is,fa(x) Cev, therefore, we denote also θ fa(x) a ∈ ∈ by fa(x)θ . Let x = (x , ,x ) Rm Rm|0 and (θ ,θ ) C2 . Taking ε C , γ > 0 and λ, µ Rm 1 · · · m ∈ ⊂ 1 2 ∈ od ∈ od ∈ ⊂ Rm|0, we define transformation τ(λ, µ) as x y = x + 2µεθ + 2λεθ , that is, y = x + 2µεθ + 2λ εθ −→ 1 2 j j 1 j 2 θ ω = θ + γλ xε, with λ x = m λ x  1 −→ 1 1 · · j=1 j j   θ2 ω2 = θ2 γµ xε P −→ − · By the definition of Grassmann continuation, for any f C∞(Rm : C), we get ∈ m ∂ f(x + 2µεθ1 + 2λωθ2)= f(x) + 2µ fεθ1 + 2λ fεθ2, with µ = µj . ·∇ ·∇ ·∇ ∂xj Xj=1 Therefore, (for a = (a , a ) 0, 1 2, putting 0¯ = (0, 0), 1¯ = (1, 0), 2¯ = (0, 1), 3¯ = (1, 1)), for 1 2 ∈ { } a u(y,ω)= ua(y)ω = u0¯(y)+ u1¯(y)ω1 + u2¯(y)ω2 + u3¯(y)ω1ω2, |Xa|≤2 remarking (θ + γλ xε)(θ γµ xε)= θ θ + γλ xεθ θ γµ xε 1 · 2 − · 1 2 · 2 − 1 · and calculating slightly, we have (τ ∗(λ, µ)u)(x,θ)= u(τ(λ, µ)(x,θ))

(5.5.1) \[
\begin{aligned}
=u(x,\theta)+\bigl[&(\gamma\lambda\cdot x\,u_{\bar1}(x)-\gamma\mu\cdot x\,u_{\bar2}(x))\\
&+(-2\nabla u_{\bar0}(x)\cdot\mu+\gamma\mu\cdot x\,u_{\bar3}(x))\theta_1+(-2\nabla u_{\bar0}(x)\cdot\lambda+\gamma\lambda\cdot x\,u_{\bar3}(x))\theta_2\\
&+2(\nabla u_{\bar1}(x)\cdot\lambda-\nabla u_{\bar2}(x)\cdot\mu)\theta_2\theta_1\bigr]\varepsilon.
\end{aligned}
\]
Definition 5.5.1. A function \(u\in C_{SS}(R^{m|2}:\mathbb C)\) is called supersymmetric if, for any \(\lambda,\mu\in\mathbb R^m\subset R^{m|0}\), it satisfies
\[ (\tau^*(\lambda,\mu)u)(x,\theta)=u(x,\theta)\;(=(\tau^*(0,0)u)(x,\theta)). \]

Proposition 5.5.1 (Proposition 4.1 of KLP [89]). The following conditions are equivalent for \(u\in C_{SS}(R^{m|2}:\mathbb C)\):
(i) \(u\in C_{SS}(R^{m|2}:\mathbb C)\) is supersymmetric.
(ii) \(u_{\bar1}(x)=u_{\bar2}(x)=0\) and moreover
(5.5.2) \[ \nabla u_{\bar0}(x)=\frac2\gamma\,x\,u_{\bar3}(x),\quad\text{i.e.}\quad\frac{\partial}{\partial x_j}u_{\bar0}(x)=\frac2\gamma\,x_j\,u_{\bar3}(x). \]

(iii) There exists a function \(\varphi(\cdot)\in C^\infty([0,\infty):\mathbb C)\) satisfying
\[ u(x,\theta)=\varphi\Bigl(x^2-\frac4\gamma\theta_1\theta_2\Bigr)=\varphi(x^2)-\frac4\gamma\varphi'(x^2)\theta_1\theta_2. \]
Proof. [(i)\(\Rightarrow\)(ii)] If \(u\) is supersymmetric, then the coefficient of \(\varepsilon\) on the right-hand side of (5.5.1) should vanish for any \(\lambda,\mu\in\mathbb R^m\). This implies (5.5.2).
[(ii)\(\Rightarrow\)(iii)] Restricting (5.5.2) to \(\mathbb R^m\), we get that \(u_{\bar0}(q)\) depends only on \(|q|^2=q\cdot q\), that is, there exists a function \(\varphi(\cdot)\in C^\infty([0,\infty):\mathbb C)\) such that \(u_{\bar0}(q)=\varphi(|q|^2)\). Since the derivative of the Grassmann continuation of a function equals the Grassmann continuation of its derivative, (iii) follows.
[(iii)\(\Rightarrow\)(i)] Obvious. \(\square\)
Proposition 5.5.2. Let \(u\in C_{SS}(R^{m|2}:\mathbb C)\) with \(u_a(\cdot)\) integrable for each \(a\). Then, for any \(\tau=\tau(\lambda,\mu)\), we have

\[ \int_{R^{m|2}}dxd\theta_2d\theta_1\,(\tau^*u)(x,\theta_1,\theta_2)=\int_{R^{m|2}}dxd\theta_2d\theta_1\,u(x,\theta_1,\theta_2). \]
Proof. Integrating w.r.t. \(\theta\), we have

\[ \int_{R^{m|2}}dxd\theta_2d\theta_1\,u(x,\theta_1,\theta_2)=-\int_{R^{m|0}}dx\,u_{\bar3}(x), \]
\[ \int_{R^{m|2}}dxd\theta_2d\theta_1\,(\tau^*u)(x,\theta_1,\theta_2)=-\int_{R^{m|0}}dx\,u_{\bar3}(x)-2\int_{R^{m|0}}dx\,(\nabla u_{\bar1}(x)\cdot\lambda-\nabla u_{\bar2}(x)\cdot\mu)\varepsilon. \]
On the other hand, from integrability,
\[ \int_{R^{m|0}}dx\,\nabla u_{\bar j}(x)=0,\quad \bar j=\bar1,\bar2, \]
so we get the result. \(\square\)

Lemma 5.5.1. Let \(u\in C_{SS}(R^{2|2}:\mathbb C)\) be supersymmetric and integrable. Then,
\[ \int_{R^{2|2}}dxd\theta_2d\theta_1\,u(x,\theta_1,\theta_2)=\frac{4\pi}\gamma\,u_{\bar0}(0). \]
Proof. From the previous proposition, there exists a function \(\varphi(\cdot)\in C^\infty([0,\infty):\mathbb C)\) such that \(u(x,\theta_1,\theta_2)=\varphi\bigl(x^2+\frac4\gamma\bar\theta\theta\bigr)\), and in particular \(u_{\bar0}(x)=\varphi(x^2)\). Since this is integrable, \(\lim_{t\to\infty}\varphi(t)=0\) and

\[
\int_{R^{2|2}}dxd\theta_2d\theta_1\,u(x,\theta_1,\theta_2)=-\frac4\gamma\int_{R^{2|0}}dx\,\varphi'(x^2)
=-\frac{8\pi}\gamma\int_0^\infty r\,dr\,\varphi'(r^2)=-\frac{4\pi}\gamma\int_0^\infty ds\,\varphi'(s)=\frac{4\pi}\gamma\varphi(0)=\frac{4\pi}\gamma u_{\bar0}(0).\qquad\square
\]
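The constant \(4\pi/\gamma\) in Lemma 5.5.1 can be sanity-checked numerically for a concrete integrable choice, say \(\varphi(t)=e^{-t}\) and \(\gamma=1\) (both are my own test data): in polar coordinates, \(-\frac4\gamma\int_{\mathbb R^2}\varphi'(|x|^2)\,dx=-\frac{8\pi}\gamma\int_0^\infty r\,\varphi'(r^2)\,dr=\frac{4\pi}\gamma\varphi(0)\).

```python
import math

gamma_c = 1.0                     # the constant gamma > 0 (illustrative value)
phi = lambda t: math.exp(-t)      # integrable profile with phi(0) = 1
dphi = lambda t: -math.exp(-t)    # phi'

# -(8*pi/gamma) * Int_0^oo r * phi'(r^2) dr  via the midpoint rule;
# the tail beyond r = R is negligible for this phi
n, R = 200000, 40.0
h = R / n
radial = 0.0
for k in range(n):
    r = (k + 0.5) * h
    radial += r * dphi(r * r) * h

lhs = -(8.0 * math.pi / gamma_c) * radial
rhs = (4.0 * math.pi / gamma_c) * phi(0.0)   # value claimed in Lemma 5.5.1
assert abs(lhs - rhs) < 1e-4
```

Here \(\int_0^\infty r\,\varphi'(r^2)\,dr=-\tfrac12\), so both sides equal \(4\pi\).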

CHAPTER 6

Efetov’s method in Random Matrix Theory and beyond

In the 1950s, physicists obtained a great deal of experimental data from collisions of neutrons with Uranium-238 and other heavy nuclei, and, examining these data, found it very hard to grasp their meaning. At that time, E. Wigner posed the working hypothesis, or Ansatz, that the high resonances in these experiments behave like eigenvalues of a matrix of large size. In other words, we quote from M.L. Mehta [103]:

Consider a large matrix whose elements are random variables with given probability laws. Then, what can one say about the probabilities of a few of its eigenvalues or of a few of its eigenvectors?

We give another quotation from Y.V. Fyodorov [52]:

Wigner suggested that fluctuations in positions of compound nuclei resonances can be described in terms of statistical properties of eigenvalues of very large real symmetric matrices with independent, identically distributed entries. The rationale behind such a proposal was the idea that in the situation when it is hardly possible to understand in detail the individual spectra associated with any given nucleus composed of many strongly interacting quantum particles, it may be reasonable to look at the corresponding systems as "black boxes" and adopt a kind of statistical description, not unlike the thermodynamic approach to classical matter.

We should remark that there exist mathematical works concerning the distribution of the zeros of the Riemann zeta function from this point of view.

In any case, in Random Matrix Theory (=RMT), Wigner's semicircle law is a well-known cornerstone.

We report mathematical refinements of this law as an application of superanalysis. That is, using Efetov’s idea, we rewrite the average of the empirical measure of the eigenvalue distribu- tion of the Hermitian matrices in a compact form. Careful calculations give not only the precise convergence rate of that law, but also the precise rate of the edge mobility.

Remark 6.0.1. The usage of the method of steepest descent by physicists is not so mathematically rigorous in the sense of de Bruijn's criteria, because we have no general method of choosing the steepest descent path for the integral considered.


6.1. Results – outline

Let $U_N$ be the set of Hermitian $N\times N$ matrices, which is identified with $\mathbb R^{N^2}$ as a topological space. On this set, we introduce a probability measure $d\mu_N(H)$ by
$$ d\mu_N(H) = \prod_{k=1}^N d(\Re H_{kk})\prod_{j<k} d(\Re H_{jk})\,d(\Im H_{jk})\,P_{N,J}(H). \tag{6.1.1} $$

Putting
$$ \langle f\rangle_N = \langle f(\cdot)\rangle_N = \int_{U_N} d\mu_N(H)\,f(H), $$
we get

Theorem 6.1.1 (Wigner's semi-circle law).
$$ \lim_{N\to\infty}\langle\rho_N(\lambda)\rangle_N = w_{sc}(\lambda) = \begin{cases} (2\pi J^2)^{-1}\sqrt{4J^2-\lambda^2} & \text{for } |\lambda|<2J,\\ 0 & \text{for } |\lambda|>2J. \end{cases} \tag{6.1.3} $$

Remark. By definition, the limit (6.1.3) is interpreted as
$$ \lim_{N\to\infty}\int_{U_N} d\mu_N(H)\,\Big\langle \phi(\cdot),\,N^{-1}\sum_{\alpha=1}^N\delta(\cdot-E_\alpha(H))\Big\rangle = \langle\phi,w_{sc}\rangle = \int_{\mathbb R} d\lambda\,\phi(\lambda)w_{sc}(\lambda) $$
for any $\phi\in C_0^\infty(\mathbb R)=\mathcal D(\mathbb R)$. Here $\langle\cdot,\cdot\rangle$ stands for the duality between $\mathcal D'(\mathbb R)$ and $\mathcal D(\mathbb R)$. We need more interpretation to give a meaning to $\int_{U_N} d\mu_N(H)\,N^{-1}\sum_{\alpha=1}^N\delta(\cdot-E_\alpha(H))$, which will be given in §2???. Or, $H\mapsto\rho_N(\lambda;H)\,d\lambda$ is considered a measure (on $\mathbb R$)-valued random variable on $U_N$, and $\langle\rho_N(\lambda)\rangle_N\,d\lambda$ is considered a family of probability measures on $\mathbb R$ which is tight. Though there exist several methods to prove this fact, we explain a new derivation using the odd variables introduced by K.B. Efetov [41]. We follow mainly Fyodorov [52] and E. Brézin [17] (see also P.A. Mello [104], M.R. Zirnbauer [147]).

Moreover, as a byproduct of this new treatment, we get

Theorem 6.1.2 (A refined version of Wigner's semi-circle law). For each λ with |λ| < 2J, when N → ∞, we have
$$ \langle\rho_N(\lambda)\rangle_N = \frac{\sqrt{4J^2-\lambda^2}}{2\pi J^2} - \frac{(-1)^N J}{\pi(4J^2-\lambda^2)}\cos\Big(N\Big[\frac{\lambda\sqrt{4J^2-\lambda^2}}{2J^2}+2\arcsin\Big(\frac{\lambda}{2J}\Big)\Big]\Big)N^{-1}+O(N^{-2}). \tag{6.1.4} $$

When λ satisfies |λ| > 2J, there exist constants C±(λ) > 0 and k±(λ) > 0 such that
$$ \langle\rho_N(\lambda)\rangle_N \le C_\pm(\lambda)\exp[-k_\pm(\lambda)N] \tag{6.1.5} $$

with $k_\pm(\lambda)\to0$ and $C_\pm(\lambda)\to\infty$ for $\lambda\searrow 2J$ or $\lambda\nearrow-2J$, respectively.

Theorem 6.1.3 (The spectrum edge problem). Let $z\in[-1,1]$. We have
$$ \langle\rho_N(2J-zN^{-2/3})\rangle_N = N^{-1/3}f(z/J)+O(N^{-2/3})\quad\text{as } N\to\infty, $$
$$ \langle\rho_N(-2J+zN^{-2/3})\rangle_N = N^{-1/3}f(z/J)+O(N^{-2/3})\quad\text{as } N\to\infty, \tag{6.1.6} $$
where
$$ f(w) = \frac{1}{4\pi^2J}\big(\mathrm{Ai}'(w)^2-\mathrm{Ai}''(w)\,\mathrm{Ai}(w)\big),\qquad \mathrm{Ai}(w) = \int_{\mathbb R} dx\,\exp\Big[-\frac i3x^3+iwx\Big]. $$
(A) One of the key expressions obtained by introducing new auxiliary variables is

$$ \langle\rho_N(\lambda)\rangle_N = \pi^{-1}\Im\int_{\mathcal Q} dQ\,\{((\lambda-i0)I_2-Q)^{-1}\}_{bb}\exp[-N\mathcal L(Q)], \tag{6.1.7} $$
where $I_n$ stands for the $n\times n$ identity matrix and
$$ \mathcal L(Q) = \mathrm{str}\,[(2J^2)^{-1}Q^2+\log((\lambda-i0)I_2-Q)], $$
$$ \mathcal Q = \Big\{Q = \begin{pmatrix}x_1&\rho_1\\ \rho_2&ix_2\end{pmatrix}\;\Big|\;x_1,x_2\in\mathfrak R_{ev},\ \rho_1,\rho_2\in\mathfrak R_{od}\Big\}\cong\mathfrak R^{2|2},\qquad dQ = \frac{dx_1dx_2}{2\pi}\,d\rho_1d\rho_2, \tag{6.1.8} $$
$$ \{((\lambda-i0)I_2-Q)^{-1}\}_{bb} = \frac{(\lambda-i0-x_1)(\lambda-i0-ix_2)+\rho_1\rho_2}{(\lambda-i0-x_1)^2(\lambda-i0-ix_2)}. $$
Here in (6.1.7), the parameter N appears in only one place. This formula is formidably charming but not yet directly justified, like Feynman's expression of certain quantum objects applying his notorious measure.

(B) In the physics literature, for example in [52], [147], it is claimed without proof that one may apply the method of steepest descent to (6.1.7) when N → ∞. More precisely, with
$$ \delta\mathcal L(Q)\tilde Q = \frac{d}{d\epsilon}\mathcal L(Q+\epsilon\tilde Q)\Big|_{\epsilon=0}, $$
they seek solutions of
$$ \delta\mathcal L(Q) = \mathrm{str}\Big[\frac{Q}{J^2}-\frac1{\lambda-Q}\Big] = 0. $$
As a candidate of effective saddle points, they take¹
$$ Q_c = \frac12\Big(\lambda+\sqrt{\lambda^2-4J^2}\Big)I_2, $$
and they have
$$ \lim_{N\to\infty}\langle\rho_N(\lambda)\rangle_N = \pi^{-1}\Im\{(\lambda-Q_c)^{-1}\}_{bb} = w_{sc}(\lambda). $$

Problem 6.1.1. Neither the expression (6.1.7) nor the applicability of the saddle point method to it is so clear. The formulas obtained are not yet mathematically well-defined, but they are charming; unfortunately, for the time being we mathematicians may not use them directly. For example, the expression (6.1.7) is mathematically verified only for λ − iǫ (ǫ > 0), and we probably need some new integration theory which admits taking the limit ǫ → 0 under the integral sign. Can we justify the physicists' procedures by using this new integration theory? More precisely, under what condition do we have the following equality?
$$ \lim_{N\to\infty}\lim_{\epsilon\to0}\Big\langle\frac1N\mathrm{tr}\,\frac1{(\lambda-i\epsilon)I_N-H}\Big\rangle_N = \lim_{\epsilon\to0}\lim_{N\to\infty}\Big\langle\frac1N\mathrm{tr}\,\frac1{(\lambda-i\epsilon)I_N-H}\Big\rangle_N. \tag{6.1.9} $$
If this assertion is true, may we justify the physicists' argument of the "saddle point method"?

¹For what reason?

Remark 6.1.1. (i) For mathematical rigour, we dare to lose such a beautiful expression as (6.1.7), but we have the two formulae (6.2.21) and (6.2.22) below which lead to our conclusion. (ii) Even for integrals on $\mathbb R^m$, it is not so simple to apply the saddle point method, that is, to take an appropriately deformed "path" in $\mathbb C^m$, as is explained in de Bruijn [30]. (iii) The set $(U_N, d\mu_N(\cdot))$ is called the GUE = Gaussian Unitary Ensemble. Other ensembles may be treated analogously, as indicated in [147], but are not treated here.

6.2. The derivation of (6.1.7) with λ − iǫ (ǫ > 0 fixed) and its consequences

It is well known that
$$ \delta(q) = \frac1\pi\lim_{\epsilon\to0}\Im\frac1{q-i\epsilon} = \frac1{2\pi i}\lim_{\epsilon\to0}\Big(\frac1{q-i\epsilon}-\frac1{q+i\epsilon}\Big) = \frac1\pi\lim_{\epsilon\to0}\frac{\epsilon}{q^2+\epsilon^2}\quad\text{in }\mathcal D'(\mathbb R), $$
that is, for any $\phi\in C_0^\infty(\mathbb R)$,
$$ \pi^{-1}\Im\int_{\mathbb R}dq\,\frac{\phi(q)}{q-i\epsilon} = \pi^{-1}\int_{\mathbb R}dq\,\frac{\epsilon\phi(q)}{q^2+\epsilon^2} = \pi^{-1}\int_{\mathbb R}dq\,\frac{\phi(\epsilon q)}{1+q^2}\to\phi(0) = \langle\phi,\delta\rangle\quad\text{as }\epsilon\to0. $$
Therefore, for any fixed $\phi\in C_0^\infty(\mathbb R)$, we have
$$ \int_{U_N}d\mu_N(H)\Big\langle\phi(\cdot),\frac1N\sum_{\alpha=1}^N\delta(\cdot-E_\alpha(H))\Big\rangle \overset{\mathrm{def}}{=} \int_{U_N}d\mu_N(H)\lim_{\epsilon\to0}\int_{\mathbb R}d\lambda\,\phi(\lambda)\frac1{\pi N}\sum_{\alpha=1}^N\frac{\epsilon}{(\lambda-E_\alpha(H))^2+\epsilon^2} $$
$$ = \lim_{\epsilon\to0}\int_{U_N}d\mu_N(H)\int_{\mathbb R}d\lambda\,\phi(\lambda)\frac1{\pi N}\sum_{\alpha=1}^N\frac{\epsilon}{(\lambda-E_\alpha(H))^2+\epsilon^2}\quad\text{(by Lebesgue's dominated convergence theorem)} $$
$$ = \lim_{\epsilon\to0}\int_{\mathbb R}d\lambda\,\phi(\lambda)\int_{U_N}d\mu_N(H)\frac1{\pi N}\sum_{\alpha=1}^N\frac{\epsilon}{(\lambda-E_\alpha(H))^2+\epsilon^2}\quad\text{(by Fubini's theorem).} $$
The second equality is guaranteed by the fact that for any $\phi\in C_0^\infty(\mathbb R)$, any $\epsilon>0$ and $H\in U_N$,
$$ \Big|\int_{\mathbb R}d\lambda\,\phi(\lambda)\frac1{\pi N}\sum_{\alpha=1}^N\frac{\epsilon}{(\lambda-E_\alpha(H))^2+\epsilon^2}\Big| \le \max|\phi(\lambda)|. $$
Here we used the fact $\int_{\mathbb R}d\lambda\,\epsilon(\lambda^2+\epsilon^2)^{-1} = \pi$. The third equality holds because
$$ \Big|\phi(\lambda)\frac1{\pi N}\sum_{\alpha=1}^N\frac{\epsilon}{(\lambda-E_\alpha(H))^2+\epsilon^2}\Big| \le \epsilon^{-1}|\phi(\lambda)|\qquad\Big(\because\ \frac1\pi\frac{\epsilon}{a^2+\epsilon^2}\le\frac1\epsilon\Big), $$
and the right-hand side is integrable w.r.t. the product measure $d\lambda\,d\mu_N(H)$ for any fixed $\epsilon>0$. In order to check whether we may take the limit before integration w.r.t. $d\lambda$ in the last line above, we calculate the following quantity as explicitly as possible:
$$ g(\lambda,\epsilon,N) = \frac1{\pi N}\Im\int_{U_N}d\mu_N(H)\sum_{\alpha=1}^N\frac1{\lambda-i\epsilon-E_\alpha(H)}. \tag{6.2.1} $$
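The approximation of δ by the Poisson kernel used above is easy to illustrate numerically; this is a minimal sketch (our own helper names, plain midpoint quadrature) pairing $\pi^{-1}\epsilon/(q^2+\epsilon^2)$ with $\phi(q)=e^{-q^2}$ and letting ǫ shrink.

```python
import math

def poisson_pair(eps, phi, R=20.0, n=200000):
    """Midpoint rule for the pairing  ∫ phi(q) * eps / (pi (q^2 + eps^2)) dq  over [-R, R]."""
    h = 2 * R / n
    total = 0.0
    for k in range(n):
        q = -R + (k + 0.5) * h
        total += phi(q) * eps / (math.pi * (q * q + eps * eps))
    return total * h

phi = lambda q: math.exp(-q * q)
vals = [poisson_pair(eps, phi) for eps in (1.0, 0.1, 0.01)]
print(vals)   # increases toward phi(0) = 1 as eps decreases
```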

We claim in this section that (i) $g(\lambda,\epsilon,N)$ exists as a function of λ for any $\epsilon>0$ and $N\in\mathbb N$, and (ii) $\lim_{\epsilon\to0}g(\cdot,\epsilon,N)$ exists in $\mathcal D'(\mathbb R)$ for any $N\in\mathbb N$; it is denoted by $\langle\rho_N(\lambda)\rangle_N$. Now, we put
$$ z_j = x_j+iy_j,\ \bar z_j = x_j-iy_j,\ x_j,y_j\in\mathfrak R_{ev};\qquad \theta_k,\bar\theta_k\in\mathfrak R_{od} = \mathfrak C_{od}, $$
$$ X = {}^t(z,\theta),\quad z = {}^t(z_1,\cdots,z_N),\quad \theta = {}^t(\theta_1,\cdots,\theta_N), $$
$$ X^* = (z^*,\theta^*),\quad z^* = (\bar z_1,\cdots,\bar z_N),\quad \theta^* = (\bar\theta_1,\cdots,\bar\theta_N). $$
Here, $\theta_k$ and $\bar\theta_k$ are considered as two different odd variables. The following key formula is well known:

Lemma 6.2.1. Put µ = λ − iǫ (ǫ > 0). Then
$$ \mathrm{tr}\,\frac1{\mu I_N-H} = \sum_{\alpha=1}^N\frac1{\mu-E_\alpha(H)} = i\int_{\mathfrak C^{N|2N}}\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\prod_{k=1}^N d\theta_kd\bar\theta_k\,(z^*\cdot z)\exp[-iX^*(I_2\otimes(\mu I_N-H))X]. \tag{6.2.2} $$
To prove this lemma, we need the following lemma.

Lemma 6.2.2. Let Γ be the diagonal matrix with diagonal $(\gamma_1,\cdots,\gamma_N)$, $\gamma_j\in\mathbb R$. Putting $(z^*\cdot z) = \sum_{j=1}^N\bar z_jz_j = |z|^2$, we have
$$ i\int_{\mathfrak C^{N|2N}}\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\prod_{k=1}^N d\theta_kd\bar\theta_k\,(z^*\cdot z)\exp[-iX^*(I_2\otimes(\Gamma-i\epsilon I_N))X] = \sum_{j=1}^N\frac1{\gamma_j-i\epsilon}. \tag{6.2.3} $$
Proof. Identifying $z_j = x_j+iy_j$, $\bar z_j = x_j-iy_j$, $dz_j\wedge d\bar z_j = 2i\,dx_j\wedge dy_j$, using polar coordinates $(r_j,\omega_j)\in\mathbb R_+\times S^1$ and denoting $|z_j|^2 = x_j^2+y_j^2 = r_j^2$, $0\le\omega_j\le2\pi$, we get
$$ i\int_{\mathfrak C^{1|0}}\frac{dz_jd\bar z_j}{2\pi i}\,|z_j|^2e^{-i(\gamma_j-i\epsilon)|z_j|^2} = i\int_0^\infty\!\!\int_0^{2\pi}\frac{r_jdr_jd\omega_j}{\pi}\,r_j^2e^{-i(\gamma_j-i\epsilon)r_j^2} = \frac{i}{(\epsilon+i\gamma_j)^2} = \frac1{\epsilon+i\gamma_j}\cdot\frac1{\gamma_j-i\epsilon}. $$
Analogously,
$$ \int_{\mathfrak C^{N-1|0}}\prod_{k\ne j}\frac{dz_kd\bar z_k}{2\pi i}\,e^{-i(\gamma_k-i\epsilon)|z_k|^2} = \prod_{k\ne j}\frac1{\epsilon+i\gamma_k} = (\epsilon+i\gamma_j)\prod_{k=1}^N\frac1{\epsilon+i\gamma_k}. $$
Remarking
$$ i\int_{\mathfrak C^{N|0}}\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\,(z^*\cdot z)\exp[-iz^*(\Gamma-i\epsilon I_N)z] = \Big(\sum_{j=1}^N\frac1{\gamma_j-i\epsilon}\Big)\prod_{j=1}^N\frac1{\epsilon+i\gamma_j}, $$
and
$$ \int_{\mathfrak C^{0|2N}}\prod_{k=1}^N d\theta_kd\bar\theta_k\,\exp[-i\theta^*(\Gamma-i\epsilon I_N)\theta] = \prod_{k=1}^N(\epsilon+i\gamma_k), $$
we get the result (6.2.3) readily. □
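The scalar bosonic building block of this proof can be checked numerically for fixed γ and ǫ > 0: after the angular integration it reduces to $2i\int_0^\infty r^3e^{-i(\gamma-i\epsilon)r^2}\,dr = i/(\epsilon+i\gamma)^2$. A sketch (helper name ours):

```python
import cmath

def damped_gaussian_moment(gamma, eps, R=12.0, n=120000):
    """Midpoint rule for  2i * ∫_0^R r^3 exp(-i (gamma - i eps) r^2) dr."""
    a = 1j * (gamma - 1j * eps)          # equals eps + i*gamma
    h = R / n
    s = 0 + 0j
    for k in range(n):
        r = (k + 0.5) * h
        s += r ** 3 * cmath.exp(-a * r * r)
    return 2j * s * h

gamma, eps = 1.3, 0.7
lhs = damped_gaussian_moment(gamma, eps)
rhs = 1j / (eps + 1j * gamma) ** 2       # closed form  i/(eps + i*gamma)^2
print(lhs, rhs)
```

The ǫ > 0 damping is what makes the oscillatory integral absolutely convergent, which is exactly the role it plays in the lemma.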

Proof of Lemma 6.2.1. By diagonalization of $\mu I_N-H$, we reduce Lemma 6.2.1 to Lemma 6.2.2. In fact, taking G such that $GG^* = G^*G = I_N$, $GHG^* = \Gamma$, and defining a change of variables
$$ \tilde z = G^*z,\quad \tilde{\bar z} = z^*G,\quad \tilde\theta = G^*\theta,\quad \tilde{\bar\theta} = \theta^*G,\qquad dz = d\tilde z,\ d\bar z = d\tilde{\bar z},\ d\theta = d\tilde\theta,\ d\bar\theta = d\tilde{\bar\theta}, $$
$$ (z^*,\theta^*)\begin{pmatrix}G(\mu I_N-H)G^*&0\\0&G(\mu I_N-H)G^*\end{pmatrix}\begin{pmatrix}z\\\theta\end{pmatrix} = z^*G(\mu I_N-H)G^*z+\theta^*G(\mu I_N-H)G^*\theta = \tilde z^*(\mu I_N-\Gamma)\tilde z+\tilde\theta^*(\mu I_N-\Gamma)\tilde\theta, $$
we reduce it to the diagonal case. (Even if we use the naive definition of integration, every linear transformation is permitted under the integral sign; see §1 of the last chapter.) □

Lemma 6.2.3. For µ = λ − iǫ (ǫ > 0),
$$ \Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = i\int\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\prod_{k=1}^N d\theta_kd\bar\theta_k\,(z^*\cdot z)\exp[-iX^*(I_2\otimes\mu I_N)X]\times\exp\Big[-\frac{J^2}{2N}\sum_{j,k=1}^N(\bar z_jz_k+\bar\theta_j\theta_k)(\bar z_kz_j+\bar\theta_k\theta_j)\Big]. \tag{6.2.4} $$

Proof. By definition, we have

$$ \Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = \int_{U_N}d\mu_N(H)\,i\int\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\prod_{k=1}^N d\theta_kd\bar\theta_k\,(z^*\cdot z)\exp[-iX^*(I_2\otimes(\mu I_N-H))X]. \tag{6.2.5} $$
As $X^*(I_2\otimes H)X = \sum H_{jk}(\bar z_jz_k+\bar\theta_j\theta_k)$, we have
$$ \Big\langle\exp\Big[\pm i\sum_{j,k=1}^N H_{jk}(\bar z_jz_k+\bar\theta_j\theta_k)\Big]\Big\rangle_N = \exp\Big[-\frac{J^2}{2N}\sum_{j,k=1}^N(\bar z_jz_k+\bar\theta_j\theta_k)(\bar z_kz_j+\bar\theta_k\theta_j)\Big]. \tag{6.2.6} $$
After changing the order of integration and substituting (6.2.6) into (6.2.5), we get (6.2.4). □

There are at least two approaches from (6.2.4) to Wigner's law. Method (I) permits us to take ǫ → 0 rather easily and leads to a formula which is not so simple-looking but is calculable; the other, (II), yields the beautiful formula (6.1.7) formally, but in order to take ǫ → 0 rigorously in that formula we rewrite it in terms of Hermite polynomials, and at that point the beauty of the formula is lost.

(I) The following calculation is proposed by Brézin [17, 18]:

Using
$$ \sum_{j,k=1}^N(\bar z_jz_k+\bar\theta_j\theta_k)(\bar z_kz_j+\bar\theta_k\theta_j) = (z^*\cdot z)^2+2(\theta^*\cdot z)(z^*\cdot\theta)-(\theta^*\cdot\theta)^2, $$
we have
$$ \exp\Big[\frac{J^2}{2N}(\theta^*\cdot\theta)^2\Big] = \Big(\frac N{2\pi J^2}\Big)^{1/2}\int_{-\infty}^\infty d\tau\,\exp\Big[-\tau(\theta^*\cdot\theta)-\frac N{2J^2}\tau^2\Big]. \tag{6.2.7} $$

∵) This equality is non-trivial if we adopt the naive definition of integration. Formally, putting $\gamma = N/J^2$, by
$$ -\tau(\theta^*\cdot\theta)-\frac\gamma2\tau^2 = -\frac\gamma2\Big[\Big(\tau+\frac{\theta^*\cdot\theta}\gamma\Big)^2-\frac{(\theta^*\cdot\theta)^2}{\gamma^2}\Big] $$
we get
$$ \int_{-\infty}^\infty d\tau\,\exp\Big[-\tau(\theta^*\cdot\theta)-\frac\gamma2\tau^2\Big] = \exp\Big[\frac{(\theta^*\cdot\theta)^2}{2\gamma}\Big]\int_{-\infty}^\infty d\tau\,\exp\Big[-\frac\gamma2\Big(\tau+\frac{\theta^*\cdot\theta}\gamma\Big)^2\Big] = \exp\Big[\frac{(\theta^*\cdot\theta)^2}{2\gamma}\Big]\int_{-\infty}^\infty ds\,\exp\Big[-\frac\gamma2s^2\Big], $$
and therefore it seems the above equality (6.2.7) follows. But for $s = \tau+\gamma^{-1}(\theta^*\cdot\theta)$, may we regard "$d\tau = ds$"? (Though the RVV-integral is applicable, we dare to prove this fact as follows with only the naive definition.) We may regard the integrand as a $\mathbb C$-valued function of the real variable τ; that is, expanding in $(\theta^*\cdot\theta)$, we may integrate. In fact, since $(\theta^*\cdot\theta)^{N+1} = 0$,
$$ \sqrt{\frac\gamma{2\pi}}\int_{-\infty}^\infty d\tau\,e^{-\gamma\tau^2/2}\sum_{j=0}^N\frac{(-\tau(\theta^*\cdot\theta))^j}{j!} = \sqrt{\frac\gamma{2\pi}}\sum_{j=0}^N\frac{(\theta^*\cdot\theta)^j}{j!}\int_{-\infty}^\infty d\tau\,e^{-\gamma\tau^2/2}(-\tau)^j. $$
Using
$$ \int_{-\infty}^\infty d\tau\,\tau^{2\ell+1}e^{-\gamma\tau^2/2} = 0,\qquad \int_{-\infty}^\infty d\tau\,\tau^{2\ell}e^{-\gamma\tau^2/2} = \sqrt{\frac{2\pi}\gamma}\,\frac{(2\ell-1)!}{2^{\ell-1}\gamma^\ell(\ell-1)!}, $$
the right-hand side of the above equation becomes
$$ \sqrt{\frac\gamma{2\pi}}\sum_{\ell=0}^{[N/2]}\frac{(\theta^*\cdot\theta)^{2\ell}}{(2\ell)!}\int_{-\infty}^\infty d\tau\,e^{-\gamma\tau^2/2}\tau^{2\ell} = \exp\Big[\frac{(\theta^*\cdot\theta)^2}{2\gamma}\Big].\ /\!/ $$
For comparison, we recall the ordinary integral case: by $x^2+2ix\xi = (x+i\xi)^2+\xi^2$, we have
$$ \int_{\mathbb R}dx\,e^{-ix\xi}e^{-x^2/2} = e^{-\xi^2/2}\int_{\mathbb R}dx\,e^{-(x+i\xi)^2/2} = e^{-\xi^2/2}\int_{\mathbb R}dy\,e^{-y^2/2} $$
$$ = \int_{\mathbb R}dx\,\Big(\sum_{n=0}^\infty\frac{(-ix\xi)^n}{n!}\Big)e^{-x^2/2} = \sum_{m=0}^\infty\frac{(-1)^m\xi^{2m}}{(2m)!}\int_{\mathbb R}dx\,x^{2m}e^{-x^2/2}. $$
Even in this case, we need to explain why the second equality and the last one hold, but we take these for granted.

Substituting this relation into (6.2.5), we get

$$ \Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = i\int\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\prod_{k=1}^N d\theta_kd\bar\theta_k\,(z^*\cdot z)\exp\Big[-iX^*(I_2\otimes\mu I_N)X-\frac{J^2}{2N}\big((z^*\cdot z)^2+2(\theta^*\cdot z)(z^*\cdot\theta)\big)\Big] $$
$$ \times\Big(\frac N{2\pi J^2}\Big)^{1/2}\int_{-\infty}^\infty d\tau\,\exp\Big[-\tau(\theta^*\cdot\theta)-\frac N{2J^2}\tau^2\Big]. $$

Proposition 6.2.1. We have the following formula:
$$ \Big\langle\mathrm{tr}\,\frac1{(\lambda-i0)I_N-H}\Big\rangle_N = i\frac1{(N-1)!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}\int_0^\infty ds\,s^N\exp\Big[-\frac N{2J^2}(2i\lambda s+s^2)\Big]\int_{-\infty}^\infty d\tau\,(\tau+i\lambda)^{N-1}(\tau+i\lambda+s)\exp\Big[-\frac N{2J^2}\tau^2\Big] \tag{6.2.8} $$
$$ = i\frac1{(N-1)!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}\iint_{\mathbb R_+\times\mathbb R}ds\,d\tau\,\big(1+(\tau+i\lambda)^{-1}s\big)\exp\Big[-N\Big(\frac1{2J^2}(\tau^2+2i\lambda s+s^2)-\log s(\tau+i\lambda)\Big)\Big]. $$

Proof. Since
$$ -i\mu(\theta^*\cdot\theta)-\frac{J^2}N(\theta^*\cdot z)(z^*\cdot\theta)-\tau(\theta^*\cdot\theta) = -\sum_{a,b}\bar\theta_a\Big((\tau+i\mu)\delta_{ab}+\frac{J^2}Nz_a\bar z_b\Big)\theta_b, $$
using Lemma 2.3 and Lemma 6.2.5 below, we get
$$ \int\prod_{k=1}^N d\theta_kd\bar\theta_k\,\exp\Big[-i\mu(\theta^*\cdot\theta)-\frac{J^2}N(\theta^*\cdot z)(z^*\cdot\theta)-\tau(\theta^*\cdot\theta)\Big] = (\tau+i\mu)^{N-1}\Big(\tau+i\mu+\frac{J^2}N(z^*\cdot z)\Big). \tag{6.2.9} $$
Using the expression (6.2.9), we have
$$ \Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = i\int\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\,(z^*\cdot z)\exp\Big[-i\mu(z^*\cdot z)-\frac{J^2}{2N}(z^*\cdot z)^2\Big]\times\Big(\frac N{2\pi J^2}\Big)^{1/2}\int_{-\infty}^\infty d\tau\,(\tau+i\mu)^{N-1}\Big(\tau+i\mu+\frac{J^2}N(z^*\cdot z)\Big)\exp\Big[-\frac N{2J^2}\tau^2\Big]. $$
Identifying $\mathbb C^N = \mathbb R^{2N}$ by $z_j = x_j+iy_j$, $\bar z_j = x_j-iy_j$, $dz_j\wedge d\bar z_j = 2i\,dx_j\wedge dy_j$, and using polar coordinates $(r,\omega)\in\mathbb R_+\times S^{2N-1}$ with $\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i} = \prod_{j=1}^N\frac{dx_jdy_j}\pi = \pi^{-N}r^{2N-1}dr\,d\omega$, $\int d\omega = \mathrm{vol}(S^{2N-1}) = 2\pi^N/(N-1)!$, we get
$$ \Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = i\frac1{(N-1)!}\int_0^\infty dr^2\,r^{2N}\exp\Big[-i\mu r^2-\frac{J^2}{2N}r^4\Big]\times\Big(\frac N{2\pi J^2}\Big)^{1/2}\int_{-\infty}^\infty d\tau\,(\tau+i\mu)^{N-1}\Big(\tau+i\mu+\frac{J^2}Nr^2\Big)\exp\Big[-\frac N{2J^2}\tau^2\Big]. $$
Changing the independent variable as $r^2 = (N/J^2)\tilde r$ and making ǫ → 0, i.e. µ → λ − i0, we get the result. Here, this procedure of making ǫ → 0 under the integral sign is admitted because of Lebesgue's dominated convergence theorem. □

Remark 6.2.1. The formula (6.2.8) is equal to Brézin's equality (2.16) of [17], where he takes λ + iǫ instead of our choice λ − iǫ. On the other hand, he probably miscopies his equality in (44) of [18]; more precisely, the term (1 + xy) in (44) should be (1 + (x − iz)⁻¹y).

Now, we prepare a technical lemma:

Lemma 6.2.4. For any ℓ > 0 and n = 0, 1, 2, 3, …,
$$ \int_{-\infty}^\infty dt\,e^{-\ell t^2}t^{2n+1} = 0,\qquad \int_0^\infty dt\,e^{-\ell t^2}t^{2n+1} = \frac{n!}{2\ell^{n+1}}, \tag{6.2.10} $$
$$ \int_{-\infty}^\infty dt\,e^{-\ell t^2}t^{2n} = \ell^{-n-\frac12}\Gamma\Big(\frac{2n+1}2\Big) = \ell^{-n-\frac12}\pi^{\frac12}\frac{(2n)!}{n!\,2^{2n}}. \tag{6.2.11} $$
Let $\delta_0>0$, $d_0>0$. For $\tau_N = d_0N^{-\gamma}$ such that $0\le\gamma\le1/2$, we have
$$ \int_{\tau_N}^\infty dt\,e^{-\delta_0Nt^2} < (2\delta_0d_0N^{1-\gamma})^{-1}e^{-\delta_0d_0^2N^{1-2\gamma}}. \tag{6.2.12} $$
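The odd-moment formula (6.2.10) is easy to confirm numerically; a sketch with our own midpoint-rule helper (the tail beyond the cutoff R is negligible for the parameters used):

```python
import math

def odd_moment(ell, n, R=15.0, steps=150000):
    """Midpoint approximation of  ∫_0^∞ t^(2n+1) exp(-ell t^2) dt."""
    h = R / steps
    return sum(((k + 0.5) * h) ** (2 * n + 1) * math.exp(-ell * ((k + 0.5) * h) ** 2)
               for k in range(steps)) * h

ell = 1.5
for n in range(4):
    exact = math.factorial(n) / (2 * ell ** (n + 1))   # right-hand side of (6.2.10)
    print(n, odd_moment(ell, n), exact)
```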

Proof. The first two are well known. As $t>\tau_N$, we have
$$ \delta_0N(t^2-\tau_N^2) = \delta_0N(t-\tau_N)(t+\tau_N) > 2\delta_0N\tau_N(t-\tau_N). $$
Therefore, we get
$$ \int_{\tau_N}^\infty dt\,e^{-\delta_0Nt^2} < e^{-\delta_0N\tau_N^2}\int_{\tau_N}^\infty dt\,e^{-2\delta_0d_0N^{1-\gamma}(t-\tau_N)} = (2\delta_0d_0N^{1-\gamma})^{-1}e^{-\delta_0d_0^2N^{1-2\gamma}}.\ \square $$
Then, we have

Corollary 6.2.1. For λ = 0, we get readily
$$ \langle\rho_N(0)\rangle_N = \frac1{\pi J}\Big[1-\frac14(-1)^NN^{-1}+\frac1{32}N^{-2}+\frac5{128}(-1)^NN^{-3}+O(N^{-4})\Big]. \tag{6.2.13} $$

Proof. Using (6.2.10) and (6.2.11), we have
$$ \langle\rho_N(0)\rangle_N = \frac1{\pi N}\Im\Big\langle\mathrm{tr}\,\frac1{-i0\,I_N-H}\Big\rangle_N = \frac1{\pi N!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}\int_0^\infty ds\,s^N\exp\Big[-\frac N{2J^2}s^2\Big]\int_{-\infty}^\infty d\tau\,\tau^{N-1}(\tau+s)\exp\Big[-\frac N{2J^2}\tau^2\Big] $$
$$ = \frac1{\pi N!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}\times\begin{cases}\dfrac12\Big(\dfrac N{2J^2}\Big)^{-N-1}\Gamma\Big(\dfrac{N+1}2\Big)^2 & N\ \text{even},\\[2ex] \dfrac12\Big(\dfrac N{2J^2}\Big)^{-N-1}\Gamma\Big(\dfrac N2\Big)\Gamma\Big(\dfrac{N+2}2\Big) & N\ \text{odd},\end{cases} $$
which we may calculate explicitly by the Stirling formula. □
In proving (6.2.9) above, we have also used

Lemma 6.2.5. Let $M = (M_{ab})$ with $M_{ab} = \alpha\delta_{ab}+\beta z_a\bar z_b$. Then, we have
$$ \det M = \alpha^{N-1}(\alpha+\beta|z|^2),\qquad |z|^2 = z^*\cdot z. $$

Proof. Let u satisfy $Mu = \gamma u$. Then, we have
$$ \bar z\cdot Mu = \gamma\,\bar z\cdot u,\qquad \bar u\cdot Mu = \gamma\,\bar u\cdot u. \tag{6.2.14} $$
From the first equation above, we get
$$ (\gamma-\alpha-\beta|z|^2)\sum_{j=1}^N u_j\bar z_j = 0. $$
If $\sum_{j=1}^N u_j\bar z_j\ne0$, then $\gamma = \alpha+\beta|z|^2$. On the other hand, if $\sum_{j=1}^N u_j\bar z_j = 0$, the second equation in (6.2.14) implies that $\gamma = \alpha$. Taking the multiplicities into account, we have the desired result. □
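Lemma 6.2.5 can be spot-checked numerically: for $M = \alpha I+\beta zz^*$ the determinant should equal $\alpha^{N-1}(\alpha+\beta|z|^2)$. A sketch with a small hand-rolled complex determinant (all names ours; the specific α, β, z are arbitrary test values):

```python
def cdet(M):
    """Determinant via Gaussian elimination with partial pivoting (complex entries ok)."""
    A = [row[:] for row in M]
    n = len(A)
    d = 1 + 0j
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[p][i]) == 0:
            return 0j
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

alpha, beta = 2.0 + 0.5j, 1.0 - 0.25j
z = [1 + 2j, -0.5j, 3.0 + 0j, 0.25 + 1j]
N = len(z)
M = [[alpha * (j == k) + beta * z[j] * z[k].conjugate() for k in range(N)]
     for j in range(N)]
z2 = sum(abs(w) ** 2 for w in z)                 # |z|^2
lhs = cdet(M)
rhs = alpha ** (N - 1) * (alpha + beta * z2)     # the lemma's closed form
print(lhs, rhs)
```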

(II) In order to get the expression (6.1.7), we proceed as follows. Putting
$$ A_X = \begin{pmatrix}\sum_{j=1}^N z_j\bar z_j & \sum_{j=1}^N z_j\bar\theta_j\\ \sum_{j=1}^N\theta_j\bar z_j & \sum_{j=1}^N\theta_j\bar\theta_j\end{pmatrix}, $$
we have
$$ \mathrm{str}\,A_X^2 = \sum_{j,k=1}^N(\bar z_jz_k+\bar\theta_j\theta_k)(\bar z_kz_j+\bar\theta_k\theta_j). $$
On the other hand, the following is known as the Hubbard-Stratonovich formula:

Lemma 6.2.6. Let A be any even 2 × 2 supermatrix. For $Q\in\mathcal Q$ given in (6.1.8), we have
$$ \exp\Big[-\frac{J^2}{2N}\mathrm{str}\,A^2\Big] = \int_{\mathcal Q}dQ\,\exp\Big[-\frac N{2J^2}\mathrm{str}\,Q^2\pm i\,\mathrm{str}(QA)\Big]. \tag{6.2.15} $$

Proof. Let $A = \begin{pmatrix}a&\theta_1\\\theta_2&b\end{pmatrix}$ with $a,b\in\mathfrak R_{ev}$ and $\theta_1,\theta_2\in\mathfrak R_{od}$. For any γ > 0, we claim
$$ \int_{\mathcal Q}dQ\,\exp\Big[-\frac12\mathrm{str}(\gamma Q\pm i\gamma^{-1}A)^2\Big] = 1. $$
As we have readily
$$ \mathrm{str}(\gamma Q\pm i\gamma^{-1}A)^2 = \gamma^2(x_1^2+x_2^2+2\rho_1\rho_2)\pm2i(x_1a+\rho_1\theta_2-\rho_2\theta_1-ix_2b)-\gamma^{-2}(a^2-b^2+2\theta_1\theta_2), $$
we get
$$ \int d\rho_1d\rho_2\,\exp[-\gamma^2\rho_1\rho_2\mp i(\rho_1\theta_2-\rho_2\theta_1)+\gamma^{-2}\theta_1\theta_2] = (\gamma^2-\theta_1\theta_2)(1+\gamma^{-2}\theta_1\theta_2) = \gamma^2, $$
$$ \int\frac{dx_1dx_2}{2\pi}\exp\Big[-\frac{\gamma^2}2(x_1^2+x_2^2)\mp i(x_1a-ix_2b)+\frac{\gamma^{-2}}2(a^2-b^2)\Big] = \int\frac{dx_1dx_2}{2\pi}\exp\Big[-\frac12(\gamma x_1\pm i\gamma^{-1}a)^2-\frac12(\gamma x_2\pm\gamma^{-1}b)^2\Big] = \gamma^{-2}.\ \square $$
Substituting (6.2.15) with $A = A_X$ into (6.2.4), noting $\mathrm{str}(QA_X) = X^*(Q\otimes I_N)X$, carrying out the integral over X after changing the order of integration, we have

$$ i\int\prod_{j=1}^N\frac{dz_jd\bar z_j}{2\pi i}\prod_{k=1}^N d\theta_kd\bar\theta_k\,(z^*\cdot z)\exp[-iX^*((\mu I_2-Q)\otimes I_N)X] = \sum_{j=1}^N\big(\{(\mu I_2-Q)\otimes I_N\}^{-1}\big)_{bb,jj}\,\mathrm{sdet}^{-1}(i(\mu I_2-Q)\otimes I_N). $$
Therefore, we have

Lemma 6.2.7. For µ = λ − iǫ (ǫ > 0),
$$ \Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = \int_{\mathcal Q}dQ\,\sum_{j=1}^N\big(\{(\mu I_2-Q)\otimes I_N\}^{-1}\big)_{bb,jj}\,\mathrm{sdet}^{-1}(i(\mu I_2-Q)\otimes I_N). \tag{6.2.16} $$
Here, $(C)_{bb,jj}$ is the j-th diagonal element of the boson-boson block of the (even) supermatrix C.

Remarking
$$ \big(\{(\mu I_2-Q)\otimes I_N\}^{-1}\big)_{bb,jj} = \{(\mu I_2-Q)^{-1}\}_{bb}\ \text{ for any } j = 1,2,\cdots,N,\qquad \mathrm{sdet}^{-1}(i(\mu I_2-Q)\otimes I_N) = \mathrm{sdet}^{-N}(\mu I_2-Q), $$
$$ \mathrm{str}(AB) = \mathrm{str}(BA),\quad \mathrm{str}(A+B) = \mathrm{str}\,A+\mathrm{str}\,B,\quad \log(\mathrm{sdet}^\ell A) = \ell\,\mathrm{str}(\log A)\ \text{ for }\ell\in\mathbb Z, $$
we have
$$ \Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = \int_{\mathcal Q}dQ\,N\{(\mu I_2-Q)^{-1}\}_{bb}\exp[-N\mathcal L(\mu;Q)] \tag{6.2.17} $$
with
$$ \mathcal L(\mu;Q) = \mathrm{str}[(2J^2)^{-1}Q^2+\log(\mu I_2-Q)], $$
$$ \{(\mu I_2-Q)^{-1}\}_{bb,11} = \frac{\mu-ix_2}{(\mu-x_1)(\mu-ix_2)-\rho_1\rho_2} = \frac{(\mu-x_1)(\mu-ix_2)+\rho_1\rho_2}{(\mu-x_1)^2(\mu-ix_2)}, $$
$$ \mathcal Q = \Big\{Q = \begin{pmatrix}x_1&\rho_1\\\rho_2&ix_2\end{pmatrix}\;\Big|\;x_1,x_2\in\mathfrak R_{ev},\ \rho_1,\rho_2\in\mathfrak R_{od}\Big\}. $$
Remark 6.2.2. If we could take ǫ → 0 directly in (6.2.17), we would have the formula (6.1.7). We claim that at least symbolically we may do that.

Lemma 6.2.8. For µ = λ − iǫ (ǫ > 0),
$$ \frac1N\Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = \iint_{\mathbb R^2}\frac{dx_1dx_2}{2\pi}\,\frac{N(\mu-x_1-ix_2)}{J^2(\mu-x_1)(\mu-ix_2)}\exp[-N\Phi(x_1,x_2;\mu)], \tag{6.2.18} $$
where
$$ \Phi(x_1,x_2;\mu) = \frac{x_1^2+x_2^2}{2J^2}+\log\frac{\mu-x_1}{\mu-ix_2}. $$
Proof. As the integrand in (6.2.17) is represented by
$$ \frac{(\mu-x_1)(\mu-ix_2)+\rho_1\rho_2}{(\mu-x_1)^2(\mu-ix_2)}\exp\Big[-N\Big\{\frac1{2J^2}(x_1^2+x_2^2+2\rho_1\rho_2)+\log\frac{\mu-x_1}{\mu-ix_2}-\frac{\rho_1\rho_2}{(\mu-x_1)(\mu-ix_2)}\Big\}\Big], $$
we have
$$ \int d\rho_1d\rho_2\,\frac{(\mu-x_1)(\mu-ix_2)+\rho_1\rho_2}{(\mu-x_1)^2(\mu-ix_2)}\exp\Big[-N\rho_1\rho_2\Big\{\frac1{J^2}-\frac1{(\mu-x_1)(\mu-ix_2)}\Big\}\Big] = \frac{-1}{(\mu-x_1)^2(\mu-ix_2)}+\frac N{\mu-x_1}\Big\{\frac1{J^2}-\frac1{(\mu-x_1)(\mu-ix_2)}\Big\}. \tag{6.2.19} $$
Remarking $(\mu-x_1)^{-2} = \partial_{x_1}(\mu-x_1)^{-1}$, by integration by parts we have
$$ \int\frac{dx_1dx_2}{2\pi}\,\frac{-1}{(\mu-x_1)^2(\mu-ix_2)}\exp[-N\Phi(x_1,x_2;\mu)] = \int\frac{dx_1dx_2}{2\pi}\,\frac{N}{(\mu-x_1)(\mu-ix_2)}\Big(\frac{x_1}{J^2}-\frac1{\mu-x_1}\Big)\exp[-N\Phi(x_1,x_2;\mu)], $$
which yields (6.2.18). □

Remark 6.2.3. As the right-hand side of (6.2.18) may be rewritten as
$$ \frac1N\Big\langle\mathrm{tr}\,\frac1{\mu I_N-H}\Big\rangle_N = \frac N{2\pi J^2}\iint_{\mathbb R^2}dx_1dx_2\,\frac{(\mu-x_1-ix_2)(\mu-ix_2)^{N-1}}{(\mu-x_1)^{N+1}}\exp\Big[-N\frac{x_1^2+x_2^2}{2J^2}\Big], \tag{6.2.20} $$
there is no singularity in the integrand when $\Im\mu\ne0$.

Using the fact that for any real smooth integrable function f,
$$ \lim_{\epsilon\to0}\Im\int_{\mathbb R}dx\,(\lambda-i\epsilon-x)^{-1}f(x) = \lim_{\epsilon\to0}\int_{\mathbb R}dx\,\frac{\epsilon}{(\lambda-x)^2+\epsilon^2}f(x) = \pi f(\lambda), $$
and integrating by parts based on $\partial_{x_1}^\ell(\mu-x_1)^{-1} = \ell!(\mu-x_1)^{-\ell-1}$, we have
$$ \lim_{\epsilon\to0}\Im\int_{\mathbb R}dx_1\,(\lambda-i\epsilon-x_1)^{-\ell-1}\exp\Big[-N\frac{x_1^2}{2J^2}\Big] = \pi\frac{(-1)^\ell}{\ell!}\partial_\lambda^\ell\exp\Big[-N\frac{\lambda^2}{2J^2}\Big]. $$
Using the Hermite polynomial $H_\ell(x)$ defined by

$$ H_\ell(x) = (-1)^\ell e^{x^2/2}\partial_x^\ell e^{-x^2/2} = \sum_{k=0}^{[\ell/2]}\frac{(-1)^k\ell!\,x^{\ell-2k}}{2^kk!(\ell-2k)!} = \frac1{\sqrt{2\pi}}\int_{-\infty}^\infty dt\,e^{-t^2/2}(x\mp it)^\ell, $$
with
$$ H_\ell(\gamma x) = \frac{\gamma^{\ell+1}}{\sqrt{2\pi}}\int_{-\infty}^\infty dt\,e^{-\gamma^2t^2/2}(x\mp it)^\ell, $$
we have
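The explicit sum formula above agrees with the standard three-term recurrence for this (probabilists') Hermite polynomial, $H_{\ell+1}(x) = xH_\ell(x)-\ell H_{\ell-1}(x)$; a sketch comparing the two (helper names ours):

```python
import math

def hermite_rec(l, x):
    """Probabilists' Hermite H_l(x) via the three-term recurrence."""
    h0, h1 = 1.0, x
    if l == 0:
        return h0
    for m in range(1, l):
        h0, h1 = h1, x * h1 - m * h0
    return h1

def hermite_sum(l, x):
    """The explicit sum formula used in the text."""
    return sum((-1) ** k * math.factorial(l) * x ** (l - 2 * k)
               / (2 ** k * math.factorial(k) * math.factorial(l - 2 * k))
               for k in range(l // 2 + 1))

for l in range(8):
    for x in (-1.5, 0.0, 0.7, 2.0):
        assert abs(hermite_rec(l, x) - hermite_sum(l, x)) < 1e-9
print("recurrence and sum formula agree")
```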

Lemma 6.2.9.
$$ \pi\frac{(-1)^\ell}{\ell!}\partial_\lambda^\ell\exp\Big[-N\frac{\lambda^2}{2J^2}\Big] = \frac1{\ell!}\sqrt{\frac\pi2}\Big(\frac N{J^2}\Big)^{(2\ell+1)/2}\exp\Big[-N\frac{\lambda^2}{2J^2}\Big]\int_{\mathbb R}dt\,(\lambda\mp it)^\ell\exp\Big[-N\frac{t^2}{2J^2}\Big]. \tag{6.2.21} $$
Proof. Using Bell's polynomial, we have
$$ \pi\frac{(-1)^\ell}{\ell!}\partial_\lambda^\ell\exp\Big[-N\frac{\lambda^2}{2J^2}\Big] = \pi\sum_{k=0}^{[\ell/2]}\frac{(-1)^k2^{-k}}{k!(\ell-2k)!}\Big(\frac N{J^2}\Big)^{\ell-k}\lambda^{\ell-2k}\exp\Big[-N\frac{\lambda^2}{2J^2}\Big] = \frac\pi{\ell!}\Big(\frac N{J^2}\Big)^{\ell/2}H_\ell\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big)\exp\Big[-N\frac{\lambda^2}{2J^2}\Big].\ \square $$
On the other hand,

$$ \int_{\mathbb R}dx_2\,(\mu-ix_2)^\ell\exp\Big[-N\frac{x_2^2}{2J^2}\Big] = \sqrt{2\pi}\Big(\frac N{J^2}\Big)^{-(\ell+1)/2}H_\ell\Big(\Big(\frac N{J^2}\Big)^{1/2}\mu\Big), $$
and
$$ \int_{\mathbb R}dx_2\,(-ix_2)(\mu-ix_2)^\ell\exp\Big[-N\frac{x_2^2}{2J^2}\Big] = -\ell\frac{J^2}N\int_{\mathbb R}dx_2\,(\mu-ix_2)^{\ell-1}\exp\Big[-N\frac{x_2^2}{2J^2}\Big] = -\ell\sqrt{2\pi}\Big(\frac N{J^2}\Big)^{-(\ell+2)/2}H_{\ell-1}\Big(\Big(\frac N{J^2}\Big)^{1/2}\mu\Big). $$
Therefore, we have
$$ \langle\rho_N(\lambda)\rangle_N = \frac N{2\pi^2J^2}\lim_{\epsilon\to0}(\Im K_1+\Im K_2), $$
where
$$ K_1 = \iint_{\mathbb R^2}dx_1dx_2\,(\mu-x_1)^{-N}(\mu-ix_2)^{N-1}\exp\Big[-N\frac{x_1^2+x_2^2}{2J^2}\Big], $$
$$ K_2 = \iint_{\mathbb R^2}dx_1dx_2\,(\mu-x_1)^{-N-1}(-ix_2)(\mu-ix_2)^{N-1}\exp\Big[-N\frac{x_1^2+x_2^2}{2J^2}\Big]. $$

Moreover, we get
$$ \lim_{\epsilon\to0}\Im K_1 = \frac\pi{(N-1)!}\Big(\frac N{J^2}\Big)^{(N-1)/2}H_{N-1}\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big)\exp\Big[-N\frac{\lambda^2}{2J^2}\Big]\times\sqrt{2\pi}\Big(\frac N{J^2}\Big)^{-N/2}H_{N-1}\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big), $$
$$ \lim_{\epsilon\to0}\Im K_2 = \frac\pi{N!}\Big(\frac N{J^2}\Big)^{N/2}H_N\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big)\exp\Big[-N\frac{\lambda^2}{2J^2}\Big]\times(-1)(N-1)\sqrt{2\pi}\Big(\frac N{J^2}\Big)^{-(N+1)/2}H_{N-2}\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big). $$
Combining these, we have proved

Proposition 6.2.2. For any $\lambda\in\mathbb R$, we have
$$ \langle\rho_N(\lambda)\rangle_N = \frac1{\sqrt{2\pi}J(N-1)!}\exp\Big[-\frac N{2J^2}\lambda^2\Big]\Big[\sqrt N\,H_{N-1}^2\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big)-\frac{N-1}{\sqrt N}H_N\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big)H_{N-2}\Big(\Big(\frac N{J^2}\Big)^{1/2}\lambda\Big)\Big] \tag{6.2.22} $$
$$ = \frac1{2\pi J^2}\Big(\frac N{J^2}\Big)^{1/2}\frac1{2\pi(N-1)!}\Big(\frac N{J^2}\Big)^N\iint_{\mathbb R^2}dt\,ds\,\exp[-N\phi_\pm(t,s,\lambda)]\,a_\pm(t,s,\lambda;N), $$
where
$$ \phi_\pm(t,s,\lambda) = \frac1{2J^2}(t^2+s^2+\lambda^2)-\log(\lambda\mp it)(\lambda\mp is), \tag{6.2.23} $$
$$ a_\pm(t,s,\lambda;N) = \frac1{(\lambda\mp it)(\lambda\mp is)}-\frac12(1-N^{-1})\Big[\frac1{(\lambda\mp it)^2}+\frac1{(\lambda\mp is)^2}\Big]. $$

6.3. The proof of the semi-circle law and beyond that

To prove the semi-circle law, we need to apply the saddle point method, which is established only in the ordinary space $\mathbb R^m$. As we mentioned before, even in the ordinary case it seems difficult to ascertain whether we may apply that method suitably. This point is stressed in the old book of de Bruijn [30], pp. 77-78:

The saddle point method, due to B. Riemann and P. Debye, is one of the most important and most powerful methods in asymptotics. ……(omission)…… Any special application of the saddle point method consists of two stages. (i) The stage of exploring, conjecturing and scheming, which is usually the most difficult one. It results in choosing a new integration path, made ready for application of (ii). (ii) The stage of carrying out the method. Once the path has been suitably chosen, this second stage is, as a rule, rather a matter of routine, although it may be complicated. It essentially depends on the Laplace method of Chapter 4.

……(omission)…… Most authors dealing with special applications do not go into the trouble of explaining what arguments led to their choice of path. The main reason is that it is always very difficult to say why a certain possibility is tried and others are discarded, especially since this depends on personal imagination and experience.

In the following, we explain only the rough idea of the proofs of certain claims; the precise proofs are given in A. Inoue and Y. Nomura [82].

Now, we study the asymptotic behavior of the following integral w.r.t. N:
$$ \Big\langle\mathrm{tr}\,\frac1{(\lambda-i0)I_N-H}\Big\rangle_N = i\frac1{(N-1)!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}(I_1+I_2), $$
$$ I_1 = \int_0^\infty ds\,s^N\exp\Big[-\frac N{2J^2}(2i\lambda s+s^2)\Big]\int_{-\infty}^\infty d\tau\,(\tau+i\lambda)^N\exp\Big[-\frac N{2J^2}\tau^2\Big], \tag{6.3.1} $$
$$ I_2 = \int_0^\infty ds\,s^{N+1}\exp\Big[-\frac N{2J^2}(2i\lambda s+s^2)\Big]\int_{-\infty}^\infty d\tau\,(\tau+i\lambda)^{N-1}\exp\Big[-\frac N{2J^2}\tau^2\Big]. $$
Theorem 6.3.1. Let |λ| < 2J. Putting $\theta = \arg\tau_+$, $\tau_+ = 2^{-1}(-i\lambda+\sqrt{4J^2-\lambda^2})$, we have
$$ I_1+I_2 = 2\pi e^{-N}J^{2(N+1)}\Big(e^{-i\theta}N^{-1}+\Big[\frac{e^{-i\theta}}{12}-(-1)^N\frac{e^{-iN(\sin2\theta+2\theta)}}{4\cos^2\theta}\Big]N^{-2}+O(N^{-3})\Big). \tag{6.3.2} $$
For the proof, see Appendix A.4 of [82].

Remarking the Stirling formula
$$ (N-1)! = e^{-N}N^{N-1/2}\sqrt{2\pi}\Big(1+\frac1{12}N^{-1}+\frac1{288}N^{-2}-\frac{139}{51840}N^{-3}-\frac{571}{2488320}N^{-4}+O(N^{-5})\Big), $$
that is,
$$ \frac1{(N-1)!} = \frac{e^NN^{-N+1/2}}{\sqrt{2\pi}}\Big(1-\frac1{12}N^{-1}+\frac1{288}N^{-2}+O(N^{-3})\Big), \tag{6.3.3} $$
we get
$$ \Big\langle\mathrm{tr}\,\frac1{(\lambda-i0)I_N-H}\Big\rangle_N = i\frac1{(N-1)!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}(I_1+I_2) $$
$$ = i\frac{N^2}J\Big(e^{-i\theta}N^{-1}+\Big[\frac{e^{-i\theta}}{12}-(-1)^N\frac{e^{-iN(\sin2\theta+2\theta)}}{4\cos^2\theta}\Big]N^{-2}+O(N^{-3})\Big)\times\Big(1-\frac1{12}N^{-1}+O(N^{-2})\Big) $$
$$ = i\frac NJ\Big(e^{-i\theta}-\frac{(-1)^Ne^{-iN(\sin2\theta+2\theta)}}{4\cos^2\theta}N^{-1}+O(N^{-2})\Big). $$
Therefore, we have proved the first part of Theorem 6.1.2. □
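The reciprocal Stirling series used here can be tested against `math.factorial`; note that the standard reciprocal series has second-order coefficient $+1/288$. A sketch (helper name ours):

```python
import math

def inv_factorial_stirling(N, terms=3):
    """Reciprocal Stirling series for 1/(N-1)!  (standard coefficients)."""
    base = math.exp(N) * N ** (-N + 0.5) / math.sqrt(2 * math.pi)
    corr = 1.0
    if terms >= 2:
        corr -= 1.0 / (12 * N)
    if terms >= 3:
        corr += 1.0 / (288 * N * N)
    return base * corr

for N in (5, 10, 20, 40):
    exact = 1.0 / math.factorial(N - 1)
    rel_err = abs(inv_factorial_stirling(N) - exact) / exact
    print(N, rel_err)   # relative error shrinks like N**-3
```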

The relation (6.1.5) for |λ| ≥ 2J is proved analogously. That is, we have

Theorem 6.3.2. Let λ > 2J. There exist constants k(λ) > 0 and C(λ) > 0 such that

$$ I_1+I_2 = J^{2N}e^{-N}\big(K(N)+\text{purely imaginary part}\big)\quad\text{with } |K(N)| \le C(\lambda)N^{-\frac12}e^{-k(\lambda)N}. $$

See Appendix A.4 of [82] for the proof.

Substituting this estimate into the definition of $\langle\rho_N(\lambda)\rangle_N$, we get
$$ \langle\rho_N(\lambda)\rangle_N = \Im\frac1{\pi N}\Big\langle\mathrm{tr}\,\frac1{(\lambda-i0)I_N-H}\Big\rangle_N = \Im\,i\frac1{\pi N!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}(I_1+I_2) $$
$$ = \Im\,i\frac1{\pi N!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}J^{2N}e^{-N}\big(K(N)+\text{purely imaginary part}\big) = \frac1{\pi N!}\Big(\frac N{2\pi J^2}\Big)^{1/2}\Big(\frac N{J^2}\Big)^{N+1}J^{2N}e^{-N}K(N). $$
Applying the Stirling formula to the last line of the above, we get the estimate (6.1.5). □

6.4. Edge mobility

To study the asymptotic behavior of $\langle\rho_N(2J-zN^{-2/3})\rangle_N$, or $\langle\rho_N(-2J+zN^{-2/3})\rangle_N$, for |z| ≤ 1 as N → ∞, we use in this section the formula (6.2.22):
$$ \langle\rho_N(2J-zN^{-2/3})\rangle_N = \frac{N^{N+1/2}}{(2\pi)^{3/2}(N-1)!\,J^{2N+1}}\iint_{\mathbb R^2}dt\,ds\,a_+(t,s,2J-zN^{-2/3};N)\exp[-N\phi_+(t,s,2J-zN^{-2/3})], \tag{6.4.1} $$
$$ \langle\rho_N(-2J+zN^{-2/3})\rangle_N = \frac{N^{N+1/2}}{(2\pi)^{3/2}(N-1)!\,J^{2N+1}}\iint_{\mathbb R^2}dt\,ds\,a_-(t,s,-2J+zN^{-2/3};N)\exp[-N\phi_-(t,s,-2J+zN^{-2/3})], \tag{6.4.2} $$
where
$$ \phi_\pm(t,s,\lambda) = \frac1{2J^2}(t^2+s^2+\lambda^2)-\log(\lambda\mp it)(\lambda\mp is),\qquad a_\pm(t,s,\lambda;N) = \frac{2(\lambda\mp it)(\lambda\mp is)-(1-N^{-1})[(\lambda\mp it)^2+(\lambda\mp is)^2]}{2(\lambda\mp it)^2(\lambda\mp is)^2}. \tag{6.4.3} $$
Proposition 6.4.1. For |z| ≤ 1, we have
$$ \langle\rho_N(2J-zN^{-2/3})\rangle_N = \frac{N^{-1/3}}{8\pi^2J^5}\iint_{\mathbb R^2}dx\,dy\,(x-y)^2\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big]+O(N^{-2/3}). \tag{6.4.4} $$

The integral on the right-hand side should be interpreted as an oscillatory integral.

Proof. In this proof, we abbreviate the subscript + of $\phi_+$ and $a_+$. Put $u = N^{-1/3}$. For $\lambda = 2J-zu^2$, using the change of variables $s = -iJ+yu$, $t = -iJ+xu$, we have

$$ \varphi(x,y,z;u) = \phi(-iJ+xu,-iJ+yu,2J-zu^2) = h(u)-\log g(u), $$
where
$$ h(u) = \frac1{2J^2}\big((-iJ+xu)^2+(-iJ+yu)^2+(2J-zu^2)^2\big) = 1-\frac{i(x+y)}Ju+\frac{x^2+y^2-4zJ}{2J^2}u^2+\frac{z^2}{2J^2}u^4, $$
$$ g(u) = (2J-zu^2-i(-iJ+xu))(2J-zu^2-i(-iJ+yu)) = J^2-iJ(x+y)u-(xy+2zJ)u^2+iz(x+y)u^3+z^2u^4. $$
Analogously, we put

$$ \alpha(x,y,z;u) = a(-iJ+xu,-iJ+yu,2J-zu^2;u^{-3}) = \frac{(x-y)^2u^2+u^3\big[2J^2-2iJ(x+y)u-(x^2+y^2+4zJ)u^2+2i(x+y)zu^3+2z^2u^4\big]}{2g^2(u)}. $$

Using the Taylor expansion of $\varphi(x,y,z;u)$ w.r.t. u at u = 0, we get
$$ \varphi(x,y,z;u) = 1-\log J^2+\frac i{3J^3}[x^3+y^3-3zJ(x+y)]u^3+R(u), $$
with
$$ R(u) = \frac{u^4}{3!}\int_0^1d\tau\,(1-\tau)^3\varphi^{(4)}(x,y,z;\tau u),\qquad \varphi^{(4)}(x,y,z;u) = \frac{4!z^2}{g(u)}-\frac{3(g''(u)^2+g'(u)g^{(3)}(u))}{g(u)^2}+\frac{12g'(u)^2g''(u)}{g(u)^3}-\frac{6g'(u)^4}{g(u)^4}, $$
$$ |\partial_x^k\partial_y^\ell R(u)| \le C_{k,\ell}u^4\quad\text{for } u\ge0,\ x,y\in\mathbb R,\ |z|\le1,\ k+\ell\le2. $$
Moreover,
$$ e^{-NR(u)} = 1+S(u),\qquad S(u) = -u^{-2}\int_0^1d\tau\,R'(\tau u), $$
$$ R'(u) = \frac{2u^3}3\int_0^1d\tau\,(1-\tau)^3\varphi^{(4)}(x,y,z;\tau u)+\frac{u^4}6\int_0^1d\tau\,(1-\tau)^3\tau\,\varphi^{(5)}(x,y,z;\tau u). $$
Therefore, we have

$$ \exp[-N\varphi(x,y,z;N^{-1/3})] = e^{-N}J^{2N}\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big]e^{-NR(N^{-1/3})}. $$

On the other hand, as we have

$$ g(u)^{-2} = J^{-4}-2u\int_0^1d\tau\,g'(\tau u)g(\tau u)^{-3}, $$

$$ \alpha(x,y,z;u) = \frac{(x-y)^2}{2J^4}u^2+A(u),\qquad A(u) = -u^3(x-y)^2\int_0^1d\tau\,g'(\tau u)g(\tau u)^{-3}, $$
with

$$ |\partial_x^k\partial_y^\ell A(u)| \le C_{k,\ell}u^3\quad\text{for } u\ge0,\ x,y\in\mathbb R,\ |z|\le1,\ k+\ell\le2. $$

Combining these, we get

$$ \iint_{\mathbb R^2}dt\,ds\,\exp[-N\phi(t,s,2J-zN^{-2/3})]\,a(t,s,2J-zN^{-2/3};N) = N^{-2/3}\iint_{\mathbb R^2}dx\,dy\,\exp[-N\varphi(x,y,z;N^{-1/3})]\,\alpha(x,y,z;N^{-1/3}) $$
$$ = e^{-N}J^{2N}N^{-2/3}\iint_{\mathbb R^2}dx\,dy\,\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big]\big(1+S(N^{-1/3})\big)\Big(\frac{(x-y)^2}{2J^4}N^{-2/3}+A(N^{-1/3})\Big) \tag{6.4.5} $$
$$ = e^{-N}J^{2N}\Big(N^{-4/3}\iint_{\mathbb R^2}dx\,dy\,\frac{(x-y)^2}{2J^4}\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big]+O(N^{-5/3})\Big). $$
Here, we applied the lemma below to
$$ f(x,y) = A(u),\quad A(u)S(u),\quad S(u)\frac{(x-y)^2}{2J^4}u^2, $$
for getting the last term $O(N^{-5/3})$.

Moreover, we may rewrite the above (6.4.5) using the Stirling formula to get
$$ \langle\rho_N(2J-zN^{-2/3})\rangle_N = \frac{N^{-1/3}}{8\pi^2J^5}\iint_{\mathbb R^2}dx\,dy\,\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big](x-y)^2+O(N^{-2/3}).\ \square $$
Lemma 6.4.1. If f satisfies $|\partial_x^k\partial_y^\ell f(x,y)|\le C_{k,\ell}$, we have
$$ \Big|\iint_{\mathbb R^2}dx\,dy\,f(x,y)\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big]\Big| \le C < \infty. $$

Proof. We use Lax's technique (combining integration by parts with $\partial_qe^{i\phi(q)} = i\partial_q\phi(q)e^{i\phi(q)}$ where $|\partial_q\phi(q)|\ne0$) to estimate the oscillatory integrals, noting
$$ (1-\partial_x^2-\partial_y^2)\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big] = \Phi(x,y)\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big] $$
with
$$ \Phi(x,y) = 1+\frac{2z^2}{J^4}+\frac{2i(x+y)}{J^3}-\frac{2z(x^2+y^2)}{J^5}+\frac{x^4+y^4}{J^6}. $$
Therefore, we have

$$ \iint_{\mathbb R^2}dx\,dy\,f(x,y)\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big] = \iint_{\mathbb R^2}dx\,dy\,(1-\partial_x^2-\partial_y^2)\Big[\frac{f(x,y)}{\Phi(x,y)}\Big]\exp\Big[-\frac i{3J^3}(x^3+y^3-3zJ(x+y))\Big]. $$
By the assumption, $(1-\partial_x^2-\partial_y^2)(f(x,y)/\Phi(x,y))$ is integrable w.r.t. dxdy, so we get the desired result. □
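The mechanism behind Lax's technique — integration by parts against a nonvanishing phase derivative produces decay — can be illustrated in one dimension, where $\int_0^1f(x)e^{iNx}dx = O(N^{-1})$ for smooth $f$. A sketch (helper names ours):

```python
import cmath

def osc_integral(N, f, a=0.0, b=1.0, steps=100000):
    """Midpoint rule for  ∫_a^b f(x) exp(i N x) dx."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) * cmath.exp(1j * N * (a + (k + 0.5) * h))
               for k in range(steps)) * h

f = lambda x: 1.0 / (1.0 + x * x)   # smooth amplitude; phase x has no critical point
vals = {N: abs(osc_integral(N, f)) for N in (10, 100, 1000)}
for N in sorted(vals):
    print(N, vals[N], vals[N] * N)   # vals[N]*N stays bounded: O(1/N) decay
```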

Using the Airy function defined by
$$ \mathrm{Ai}(z) = \int_{\mathbb R}dx\,\exp\Big[-\frac i3x^3+izx\Big] = \overline{\int_{\mathbb R}dx\,\exp\Big[\frac i3x^3-izx\Big]} = \overline{\mathrm{Ai}(z)}\quad\text{for } z\in\mathbb R, $$
we have
$$ \int_{\mathbb R}dx\,\exp\Big[-\frac{ix^3}{3J^3}+\frac{izx}{J^2}\Big] = J\,\mathrm{Ai}\Big(\frac zJ\Big),\qquad \int_{\mathbb R}dx\,x\exp\Big[-\frac{ix^3}{3J^3}+\frac{izx}{J^2}\Big] = -iJ^2\,\mathrm{Ai}'\Big(\frac zJ\Big),\qquad \int_{\mathbb R}dx\,x^2\exp\Big[-\frac{ix^3}{3J^3}+\frac{izx}{J^2}\Big] = -J^3\,\mathrm{Ai}''\Big(\frac zJ\Big). $$
And we get
$$ \langle\rho_N(2J-zN^{-2/3})\rangle_N = N^{-1/3}f(zJ^{-1})+O(N^{-2/3}), $$
where
$$ f(z) = \frac1{4\pi^2J}\big(\mathrm{Ai}'(z)^2-\mathrm{Ai}''(z)\,\mathrm{Ai}(z)\big).\ \square $$
Corollary 6.4.1. For |z| ≤ 1, we have
$$ \langle\rho_N(-2J+zN^{-2/3})\rangle_N = N^{-1/3}f(zJ^{-1})+O(N^{-2/3}). $$
Remark 6.4.1. Though Brézin and Kazakov applied Brézin's formula (2.7) to obtain the analogous statement, we cannot follow their proof (48) of [18].

======Mini Column 3: On GOE, GUE and GSE ======

In the following, hoping to spark the audience's curiosity about RMT, we mention the Gaussian ensembles classified by F. Dyson. The Dyson index β is defined as the number of real components in the matrix elements belonging to each ensemble. Though I have glanced not only at the book by Mehta [103] but also at the survey paper by J. Verbaarschot [137], a mere glance is not sufficient to comprehend this interesting branch.

Definition 6.4.1. A set of N × N Hermitian matrices $H = (H_{jk})$ whose matrix elements are distributed with the probability
$$ P(H)\,dH = Z_{\beta N}^{-1}e^{-(N\beta/4)\,\mathrm{tr}\,H^2}\,dH $$
is called a Wigner-Dyson ensemble. Especially:

(1) When $H = (H_{jk})$ is a real symmetric matrix, its index is β = 1, with volume element

$dH = \prod_{k\le j}dH_{kj}$. This is called the Gaussian Orthogonal Ensemble (=GOE).
(2) When $H = (H_{jk})$ is a complex Hermitian matrix with $H_{kj} = H_{kj}^{(0)}+iH_{kj}^{(1)}$, $H_{kj}^{(0)},H_{kj}^{(1)}\in\mathbb R$, its index is β = 2, with volume element $dH = \prod_{k\le j}dH_{kj}^{(0)}\prod_{k<j}dH_{kj}^{(1)}$. This is called the Gaussian Unitary Ensemble (=GUE).

Remark 6.4.3. The following facts are known:

(1) GOE is invariant under the similarity transformation $\mathcal T_1(O): H\mapsto{}^tOHO$ by O belonging to O(N) = the real orthogonal group,
(2) GUE is invariant under the similarity transformation $\mathcal T_2(U): H\mapsto U^{-1}HU$ by U belonging to U(N) = the unitary group,
(3) GSE is invariant under the similarity transformation $\mathcal T_4(S): H\mapsto{}^tSHS$ by S belonging to Sp(N) = the symplectic group.

Let $\lambda_1,\cdots,\lambda_N$ be the eigenvalues of an N × N Hermitian matrix. We check the relation between their differentials $d\lambda_j$ and the Lebesgue measure dH. In case H ∈ GOE, since the number of independent components of H equals N(N+1)/2, we have ℓ = N(N+1)/2 − N = N(N−1)/2 independent variables $\{\mu_m\}$ besides the $\lambda_j$. Because
$$ \mathrm{tr}\,H^2 = \sum_{j=1}^N\lambda_j^2 $$
and, putting the Jacobian of the change of variables as
$$ J(\lambda,\mu) = \Big|\det\frac{\partial(H_{11},H_{12},\cdots,H_{NN})}{\partial(\lambda_1,\cdots,\lambda_N,\mu_1,\cdots,\mu_\ell)}\Big|, $$
we have
$$ dH = J(\lambda,\mu)\prod_{j=1}^Nd\lambda_j\prod_{k=1}^\ell d\mu_k. $$
We need to calculate
$$ \iint d\lambda\,d\mu\,J(\lambda,\mu) = \int d\lambda\Big(\int d\mu\,J(\lambda,\mu)\Big)\ ? $$
In other words, find the reason for the appearance of the difference product of the eigenvalues (the Vandermonde determinant)?
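The difference product asked about here is exactly the Vandermonde determinant, $\det(x_j^k)_{j,k=0}^{N-1} = \prod_{j<k}(x_k-x_j)$, which can be checked exactly over the rationals; a sketch (helper names ours):

```python
from fractions import Fraction

def vdet(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(v) for v in row] for row in M]
    n, d = len(A), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if A[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

xs = [Fraction(1), Fraction(2), Fraction(5), Fraction(-3)]
N = len(xs)
V = [[xs[j] ** k for k in range(N)] for j in range(N)]   # Vandermonde matrix
prod = Fraction(1)
for j in range(N):
    for k in range(j + 1, N):
        prod *= xs[k] - xs[j]                            # difference product
print(vdet(V), prod)   # equal
```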

[Report problem 6-1]: Give a precise description of the above representation using eigenvalues (see pp. 55-69 in Mehta [103] or Theorem 5.22 of Deift [32] for β = 2).²

Theorem 6.4.1 (Theorem 3.3.1 of [103]). Let $x_1,\cdots,x_N$ be the eigenvalues of a Hermitian matrix belonging to GOE (β = 1), GUE (β = 2) or GSE (β = 4). Then, the joint probability density of $x_1,\cdots,x_N$ is given by
$$ P_{N\beta}(x_1,\cdots,x_N) = Z_{N\beta}^{-1}e^{-(\beta/2)\sum_{j=1}^Nx_j^2}\prod_{j<k}|x_j-x_k|^\beta. $$
Let
$$ \Delta(x) = \Delta(x_1,\cdots,x_N) = \begin{cases}\prod_{j<k}(x_j-x_k) & \text{if } N>1,\\ 1 & \text{if } N = 1,\end{cases} $$

²Frankly speaking, atlom has not enough patience to understand these facts; therefore he leaves their explanation to younger people, expecting the young to stimulate the old.
and

$$ \Phi(x) = \Phi(x_1,\cdots,x_N) = |\Delta(x)|^{2\gamma}\prod_{j=1}^Nx_j^{\alpha-1}(1-x_j)^{\beta-1}. $$

$$ \Re\alpha>0,\qquad \Re\beta>0,\qquad \Re\gamma>-\min\Big\{\frac1N,\ \frac{\Re\alpha}{N-1},\ \frac{\Re\beta}{N-1}\Big\}, $$

$$ I(\alpha,\beta,\gamma,N) = \int_0^1\!\!\cdots\!\int_0^1dx\,\Phi(x) = \prod_{j=0}^{N-1}\frac{\Gamma(1+\gamma+j\gamma)\,\Gamma(\alpha+j\gamma)\,\Gamma(\beta+j\gamma)}{\Gamma(1+\gamma)\,\Gamma(\alpha+\beta+(N+j-1)\gamma)}. $$
======= End of Mini Column 3 =======
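Selberg's formula can be spot-checked for N = 2, α = β = γ = 1, where the integral is simply $\iint_{[0,1]^2}(x-y)^2\,dx\,dy = 1/6$; a sketch (helper names ours, real parameters only):

```python
import math

def selberg(alpha, beta, gamma, N):
    """Right-hand side of Selberg's formula (real parameters)."""
    val = 1.0
    for j in range(N):
        val *= (math.gamma(1 + gamma + j * gamma) * math.gamma(alpha + j * gamma)
                * math.gamma(beta + j * gamma)) \
               / (math.gamma(1 + gamma) * math.gamma(alpha + beta + (N + j - 1) * gamma))
    return val

def direct_N2(alpha, beta, gamma, n=400):
    """Midpoint double integral of |x-y|^(2 gamma) x^(a-1)(1-x)^(b-1) y^(a-1)(1-y)^(b-1)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        wx = x ** (alpha - 1) * (1 - x) ** (beta - 1)
        for j in range(n):
            y = (j + 0.5) * h
            total += abs(x - y) ** (2 * gamma) * wx * y ** (alpha - 1) * (1 - y) ** (beta - 1)
    return total * h * h

print(selberg(1, 1, 1, 2), direct_N2(1, 1, 1))   # both ≈ 1/6
```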

6.5. Relation between RMT and Painlevé transcendents

It has been shown rather recently that there is a mysterious connection between RMT, combinatorics and Painlevé functions. Borrowing the description of Tracy and Widom [133], [134], we explain our problem.

Let $U_N$ be the set of unitary N × N matrices with Haar measure. Denoting the (real) eigenvalues of $U\in U_N$ as $\lambda_1\le\lambda_2\le\cdots\le\lambda_N$ with maximum $\lambda_N = \lambda_N(U)$, we consider the probability $P(\lambda_N(U)<s)$.

Theorem 6.5.1 ([134]).

$$\lim_{N\to\infty}P_2\Big(\big(\lambda_N(U)-\sqrt{2N}\big)\sqrt{2}N^{1/6}\le s\Big)=F_2(s)=\exp\Big(-\int_s^\infty(x-s)q(x)^2dx\Big),$$
where $q$ satisfies the Painlevé II equation

$$q''=sq+2q^3,\qquad q(s)\sim\mathrm{Ai}(s)=\frac{1}{2\pi}\int_{\mathbb{R}}dx\,\exp\Big[-\frac{i}{3}x^3+isx\Big]\quad\text{when } s\to\infty.$$

The following theorem, which bears a curious resemblance to the above, was proved by J. Baik, P. Deift and K. Johansson [7]: Putting the uniform probability measure $P$ on the symmetric group $\mathfrak{S}_N$, we denote by $\ell_N=\ell_N(\sigma)$ the length of the longest increasing subsequence of each $\sigma\in\mathfrak{S}_N$. Then, we have

Theorem 6.5.2 ([7]).

$$\lim_{N\to\infty}P\Big(\frac{\ell_N-2\sqrt{N}}{N^{1/6}}\le s\Big)=F_2(s).$$
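The quantity $\ell_N(\sigma)$ is cheap to compute by patience sorting, so the leading-order behaviour $\mathbb{E}\,\ell_N\approx 2\sqrt{N}$ in the theorem can be observed directly (the sample sizes below are arbitrary; fluctuations are of order $N^{1/6}$):

```python
import bisect
import random

def lis_length(perm):
    """Length of the longest increasing subsequence (patience sorting, O(n log n))."""
    piles = []
    for x in perm:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

assert lis_length([3, 1, 4, 1, 5, 9, 2, 6]) == 4

random.seed(0)
N, trials = 400, 50
mean = sum(lis_length(random.sample(range(N), N)) for _ in range(trials)) / trials
# E ℓ_N ≈ 2√N = 40, with fluctuations of order N^{1/6}
assert abs(mean - 2 * N ** 0.5) < 3 * N ** (1 / 6)
```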

Table of Painlevé equations:
(I) $w''=6w^2+s$,
(II) $w''=2w^3+sw+\alpha$,
(III) $w''=\dfrac{w'^2}{w}-\dfrac{w'}{s}+\dfrac{\alpha w^2+\beta}{s}+\gamma w^3+\dfrac{\delta}{w}$,
(IV) $w''=\dfrac{w'^2}{2w}+\dfrac{3w^3}{2}+4sw^2+2(s^2-\alpha)w+\dfrac{\beta}{w}$,
(V) $w''=\Big(\dfrac{1}{2w}+\dfrac{1}{w-1}\Big)w'^2-\dfrac{w'}{s}+\dfrac{(w-1)^2}{s^2}\Big(\alpha w+\dfrac{\beta}{w}\Big)+\dfrac{\gamma w}{s}+\dfrac{\delta w(w+1)}{w-1}$,
(VI) $w''=\dfrac{1}{2}\Big(\dfrac{1}{w}+\dfrac{1}{w-1}+\dfrac{1}{w-s}\Big)w'^2-\Big(\dfrac{1}{s}+\dfrac{1}{s-1}+\dfrac{1}{w-s}\Big)w'+\dfrac{w(w-1)(w-s)}{s^2(s-1)^2}\Big(\alpha+\dfrac{\beta s}{w^2}+\dfrac{\gamma(s-1)}{(w-1)^2}+\dfrac{\delta s(s-1)}{(w-s)^2}\Big)$.

Problem 6.5.1. (i) Can we find another explanation of the above theorems by finding "slowness variables", analogously to Efetov's reproof of Wigner's semi-circle law? (ii) Moreover, the results of, for example, C. Itzykson and J.B. Zuber [84], D. Bessis, C. Itzykson and J.B. Zuber [13] or A. Matytsin [102] should be viewed from our point of view, but I do not have enough intelligence to appreciate these works. By the way, the article by A. Zvonkin [148] seems a nice guide in this direction.

Remark 6.5.1. (i) Finishing these lecture notes, I am almost drowned in papers on RMT found by searching the internet. For young researchers, T. Tao [131] may be recommended. But this is also too thick to begin with; therefore take a look at X. Zeng and Z. Hou [146]. (ii) There are also papers concerned with whether Matytsin's procedure [102], which tries to generalize the Itzykson-Zuber formula, has some relation to Functional Derivative Equations or not. At least, my life in heaven or hell will be full of mathematical problems to consider.

CHAPTER 7

Fundamental solution of the free Weyl equation à la Feynman

Because the integration theory was not yet completed when the lecture was delivered, this chapter is chosen because here we only use the "naive" definition of the integral on $\mathbb{R}^{3|2}$.

Let $V$ be a representation space and let a function $\psi(t,q):\mathbb{R}\times\mathbb{R}^3\to V$ be given satisfying
$$i\hbar\frac{\partial}{\partial t}\psi(t,q)=\hat H\psi(t,q),\quad \hat H=\hat H(-i\hbar\partial_q)=c\sigma_j\Big(\frac{\hbar}{i}\frac{\partial}{\partial q_j}\Big),\qquad \psi(0,q)=\psi(q).\eqno(7.0.1)$$
Here, $c$ and $\hbar$ are positive constants, and the summation with respect to $j=1,2,3$ is abbreviated. Put $I$ as the identity map from $V$ to $V$, and define maps $\{\sigma_i:V\to V\}$ satisfying

(7.0.2) σjσk + σkσj = 2δjkI for j, k = 1, 2, 3, (Clifford relation)

(7.0.3) σ1σ2 = iσ3, σ2σ3 = iσ1, σ3σ1 = iσ2.

Especially when we put $V=\mathbb{C}^2$ and $\psi(t,q)={}^t(\psi_1(t,q),\psi_2(t,q))$, we have the so-called Pauli matrices:
$$\sigma_1=\begin{pmatrix}0&1\\1&0\end{pmatrix},\quad \sigma_2=\begin{pmatrix}0&-i\\i&0\end{pmatrix},\quad \sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\quad I=I_2=\begin{pmatrix}1&0\\0&1\end{pmatrix}.\eqno(7.0.4)$$
The equation (7.0.1) is called the free Weyl equation, and we want to construct a fundamental solution of it by modifying Feynman's procedure. (Modification is really necessary because Feynman could not proceed as he wished at that time.)
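The Clifford relation (7.0.2) and the products (7.0.3) for the Pauli matrices can be verified mechanically; a minimal check with bare 2×2 complex matrices:

```python
# 2×2 complex matrices as nested lists
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
sigma = [s1, s2, s3]

# (7.0.2): σjσk + σkσj = 2 δjk I
for j in range(3):
    for k in range(3):
        anti = add(mul(sigma[j], sigma[k]), mul(sigma[k], sigma[j]))
        assert anti == scal(2 if j == k else 0, I2)

# (7.0.3): σ1σ2 = iσ3, σ2σ3 = iσ1, σ3σ1 = iσ2
assert mul(s1, s2) == scal(1j, s3)
assert mul(s2, s3) == scal(1j, s1)
assert mul(s3, s1) == scal(1j, s2)
```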

Before this, we recall a very primitive and well-known method mentioned in Chapter 1. Applying formally the Fourier transformation (which contains a parameter $\hbar$) with respect to $q\in\mathbb{R}^3$ to (7.0.1), we get

$$i\hbar\frac{\partial}{\partial t}\hat\psi(t,p)=H\hat\psi(t,p)\quad\text{where}\quad H=H(p)=c\sigma_j p_j=c\begin{pmatrix}p_3&p_1-ip_2\\p_1+ip_2&-p_3\end{pmatrix}.$$
As $H^2=c^2|p|^2I_2$ by (7.0.2), we easily have
$$e^{-i\hbar^{-1}tH}\hat\psi(p)=\big(\cos(c\hbar^{-1}t|p|)I_2-ic^{-1}|p|^{-1}\sin(c\hbar^{-1}t|p|)H\big)\hat\psi(p).\eqno(7.0.5)$$
In other words, denoting $\gamma_t=c\hbar^{-1}t|p|$,
$$\begin{pmatrix}\hat\psi_1(t,p)\\ \hat\psi_2(t,p)\end{pmatrix}=e^{-i\hbar^{-1}tH}\hat\psi(p)=\frac{1}{|p|}\begin{pmatrix}|p|\cos\gamma_t-ip_3\sin\gamma_t&-i(p_1-ip_2)\sin\gamma_t\\ -i(p_1+ip_2)\sin\gamma_t&|p|\cos\gamma_t+ip_3\sin\gamma_t\end{pmatrix}\begin{pmatrix}\hat\psi_1(p)\\ \hat\psi_2(p)\end{pmatrix}.\eqno(7.0.6)$$
Therefore, we have


Proposition 7.0.1. For any $t\in\mathbb{R}$,
$$e^{-i\hbar^{-1}t\hat H}\psi(q)=(2\pi\hbar)^{-3/2}\int_{\mathbb{R}^3}dp\,e^{i\hbar^{-1}qp}e^{-i\hbar^{-1}tH}\hat\psi(p)=\int_{\mathbb{R}^3}dq'\,E(t,q,q')\psi(q')\eqno(7.0.7)$$
with
$$E(t,q,q')=(2\pi\hbar)^{-3}\int_{\mathbb{R}^3}dp\,e^{i\hbar^{-1}(q-q')p}\big[\cos(c\hbar^{-1}t|p|)I_2-ic^{-1}|p|^{-1}\sin(c\hbar^{-1}t|p|)H(p)\big].\eqno(7.0.8)$$

Remark 7.0.2. This calculation doesn't work when (7.0.1) is changed to
$$\hat H(t)=c\sigma_j\Big(\frac{\hbar}{i}\frac{\partial}{\partial q_j}-\frac{e}{c}A_j(t,q)\Big)+eA_0(t,q)\quad(\text{sum over }j=1,2,3).\eqno(7.0.9)$$
We don't insist on this equation having physical meaning, because we add the minimal electromagnetic coupling to (7.0.1) quite formally. But it seems mathematically interesting to seek a solution of this toy model represented as Feynman suggested below.
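The closed form (7.0.5) is just the half-angle formula for $e^{-i\hbar^{-1}tH}$ with $H^2=c^2|p|^2I_2$. A numeric sanity check comparing a truncated exponential series with the cosine/sine formula (the values of $t$, $p$ are arbitrary; $c=\hbar=1$ for simplicity):

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

c_, hbar, t = 1.0, 1.0, 0.7
p1, p2, p3 = 0.3, -0.4, 0.5
H = [[c_ * p3, c_ * (p1 - 1j * p2)], [c_ * (p1 + 1j * p2), -c_ * p3]]
absp = math.sqrt(p1 ** 2 + p2 ** 2 + p3 ** 2)

# truncated Taylor series of exp(−i t H / ħ)
X = scal(-1j * t / hbar, H)
term = [[1, 0], [0, 1]]
series = [[1, 0], [0, 1]]
for n in range(1, 40):
    term = scal(1 / n, mul(term, X))
    series = add(series, term)

g = c_ * t * absp / hbar                      # γ_t
closed = add(scal(math.cos(g), [[1, 0], [0, 1]]),
             scal(-1j * math.sin(g) / (c_ * absp), H))
for i in range(2):
    for j in range(2):
        assert abs(series[i][j] - closed[i][j]) < 1e-10
```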

Here, we give another formula for a solution of (7.0.1).

Theorem 7.0.3 (Path-integral representation of a solution for the free Weyl equation).

$$\psi(t,q)=\flat\Big[(2\pi\hbar)^{-3/2}\bar k\iint_{\mathbb{R}^{3|2}}d\xi d\pi\,\mathcal{D}^{1/2}(t,\bar x,\bar\theta,\xi,\pi)e^{i\hbar^{-1}\mathcal{S}(t,\bar x,\bar\theta,\xi,\pi)}(\mathcal{F}\sharp\psi)(\xi,\pi)\Big]\Big|_{\bar x_{\mathrm B}=q}.\eqno(7.0.10)$$
Here, $\mathcal{S}(t,\bar x,\bar\theta,\xi,\pi)$ and $\mathcal{D}(t,\bar x,\bar\theta,\xi,\pi)$ are solutions of the Hamilton-Jacobi and continuity equations, (7.2.13) and (7.2.16) respectively, and $\sharp$, $\flat$ are given in (7.2.1) below.

7.1. A strategy of constructing parametrices

Taking the free Weyl equation as the simplest model for constructing parametrices for the Dirac equation (1.2.5) in Chapter 1 or the Weyl equation (7.0.9) above, we explain our strategy, which is a super extension of what we explained before.
(1) Is it possible to extract a "symbol" corresponding to a given system of PDOs?
(2) To define the "symbol", we need to represent the matrix structure by differential operators acting on superspace. For example, to regard the $2\times2$ matrix structure as differential operators, we decompose
$$\begin{pmatrix}a&c\\d&b\end{pmatrix}=\frac{a+b}{2}\begin{pmatrix}1&0\\0&1\end{pmatrix}+\frac{a-b}{2}\begin{pmatrix}1&0\\0&-1\end{pmatrix}+\frac{c+d}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix}+i\frac{c-d}{2}\begin{pmatrix}0&-i\\i&0\end{pmatrix},$$
and attaching differential operators to each $\sigma_j$, we get the symbol of that system of PDOs.
(3) For the Hamiltonian $\mathcal{H}(x,\xi,\theta,\pi)$, we construct a solution $\mathcal{S}(t,x,\xi,\theta,\pi)$ of the Hamilton-Jacobi equation with given initial data.
(4) Calculating the Hessian of $\mathcal{S}(t,x,\xi,\theta,\pi)$ w.r.t. $\xi,\pi$, we define its determinant below, called the super Van Vleck determinant:
$$\mathcal{D}(t,x,\xi,\theta,\pi)=\mathrm{sdet}\begin{pmatrix}\dfrac{\partial^2\mathcal{S}}{\partial x_j\partial\xi_k}&\dfrac{\partial^2\mathcal{S}}{\partial x_j\partial\pi_b}\\[2mm]\dfrac{\partial^2\mathcal{S}}{\partial\theta_a\partial\xi_k}&\dfrac{\partial^2\mathcal{S}}{\partial\theta_a\partial\pi_b}\end{pmatrix}\quad\text{with } j,k=1,2,3,\ a,b=1,2.$$

(5) Finally, defining an operator

$$(\mathcal{U}(t)u)(x,\theta)=(2\pi\hbar)^{-3/2}\bar k\iint_{\mathbb{R}^{3|2}}d\xi d\pi\,\mathcal{D}^{1/2}(t,\bar x,\bar\theta,\xi,\pi)e^{i\hbar^{-1}\mathcal{S}(t,\bar x,\bar\theta,\xi,\pi)}(\mathcal{F}u)(\xi,\pi),$$
we check its properties and show that it is the desired parametrix (or fundamental solution in this free Weyl case).

7.2. Sketchy proofs of the procedure mentioned above

(1) A "spinor" $\psi(t,q)={}^t(\psi_1(t,q),\psi_2(t,q)):\mathbb{R}\times\mathbb{R}^3\to\mathbb{C}^2$ is identified with an even supersmooth function $u(t,x,\theta)=u_0(t,x)+u_1(t,x)\theta_1\theta_2:\mathbb{R}\times\mathbb{R}^{3|2}\to\mathbb{C}_{\mathrm{ev}}$ as follows:
$$\mathbb{C}^2\ni\begin{pmatrix}\psi_1\\\psi_2\end{pmatrix}\ \overset{\sharp}{\underset{\flat}{\rightleftarrows}}\ u(\theta)=u_0+u_1\theta_1\theta_2\in\mathbb{C}_{\mathrm{ev}}\quad\text{with}\quad u_0=u(0)=\psi_1,\ \ u_1=\partial_{\theta_2}\partial_{\theta_1}u(\theta)\big|_{\theta=0}=\psi_2.\eqno(7.2.1)$$

Here, the functions $u_0(t,x)$, $u_1(t,x)$ are the Grassmann continuations of $\psi_1(t,q)$, $\psi_2(t,q)$, respectively.
(2) The Pauli matrices $\{\sigma_j\}$ have the differential operator representations
$$\sigma_1\Big(\theta,\frac{\bar\lambda}{i}\frac{\partial}{\partial\theta}\Big)=i\bar\lambda^{-1}\theta_1\theta_2+i\bar\lambda\frac{\partial^2}{\partial\theta_1\partial\theta_2},\quad
\sigma_2\Big(\theta,\frac{\bar\lambda}{i}\frac{\partial}{\partial\theta}\Big)=-\bar\lambda^{-1}\theta_1\theta_2+\bar\lambda\frac{\partial^2}{\partial\theta_1\partial\theta_2},\quad
\sigma_3\Big(\theta,\frac{\bar\lambda}{i}\frac{\partial}{\partial\theta}\Big)=1-\theta_1\frac{\partial}{\partial\theta_1}-\theta_2\frac{\partial}{\partial\theta_2},\eqno(7.2.2)$$
satisfying (7.0.2) and (7.0.3). Here, we take an arbitrarily chosen parameter $\bar\lambda\in\mathbb{C}^\times=\mathbb{C}-\{0\}$.

Remark 7.2.1. Only for $|\bar\lambda|=1$ are the $\{\flat\,\sigma_j(\theta,-i\bar\lambda\partial_\theta)\,\sharp\}$ unitary matrices. From here, we take $\bar\lambda=i$ to have the matrix representation (7.0.4) for (7.2.2).
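On the even part spanned by $\{1,\theta_1\theta_2\}$, multiplication by $\theta_1\theta_2$, the derivative $\partial^2/\partial\theta_1\partial\theta_2$, and $1-\theta_1\partial_{\theta_1}-\theta_2\partial_{\theta_2}$ reduce to $2\times2$ matrices in the coordinates $(u_0,u_1)$. The sketch below checks that one consistent realization of such operator combinations (left-derivative conventions and normalizations are assumptions of this sketch, not necessarily the author's exact ones) satisfies the Clifford relations for every $\bar\lambda\neq0$ and reduces to the Pauli matrices at $\bar\lambda=i$:

```python
# Coordinates (u0, u1) of u = u0 + u1 θ1θ2.  With left derivatives,
# ∂²/∂θ1∂θ2 (θ1θ2) = −1, so the building blocks act as 2×2 matrices:
MULT = [[0, 0], [1, 0]]     # multiplication by θ1θ2
DER = [[0, -1], [0, 0]]     # ∂²/∂θ1∂θ2
NUM = [[1, 0], [0, -1]]     # 1 − θ1 ∂/∂θ1 − θ2 ∂/∂θ2

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lc(a, A, b, B):          # linear combination a·A + b·B
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

def sigma_ops(lam):
    # hypothetical normalization chosen so that lam = 1j gives (7.0.4)
    s1 = lc(1j / lam, MULT, 1j * lam, DER)
    s2 = lc(-1 / lam, MULT, lam, DER)
    return [s1, s2, [row[:] for row in NUM]]

for lam in (1j, 2.0, 0.5 - 1.5j):
    s = sigma_ops(lam)
    for j in range(3):
        for k in range(3):
            P1, P2 = mul(s[j], s[k]), mul(s[k], s[j])
            for a in range(2):
                for b in range(2):
                    want = 2 if (j == k and a == b) else 0
                    assert abs(P1[a][b] + P2[a][b] - want) < 1e-12
    prod = mul(s[0], s[1])   # σ1σ2 = iσ3
    for a in range(2):
        for b in range(2):
            assert abs(prod[a][b] - 1j * s[2][a][b]) < 1e-12

s = sigma_ops(1j)            # reduces to the Pauli matrix σ1 of (7.0.4)
assert abs(s[0][0][1] - 1) < 1e-12 and abs(s[0][1][0] - 1) < 1e-12
```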

(3) Using (7.2.2), the differential operator $\hat H$ given by (7.0.1) is identified with

$$\hat{\mathcal H}_0\Big(\frac{\hbar}{i}\frac{\partial}{\partial x},\theta,\frac{\partial}{\partial\theta}\Big)=-ic\hbar\Big(\theta_1\theta_2-\frac{\partial^2}{\partial\theta_1\partial\theta_2}\Big)\frac{\partial}{\partial x_1}+c\hbar\Big(\theta_1\theta_2+\frac{\partial^2}{\partial\theta_1\partial\theta_2}\Big)\frac{\partial}{\partial x_2}-ic\hbar\Big(1-\theta_1\frac{\partial}{\partial\theta_1}-\theta_2\frac{\partial}{\partial\theta_2}\Big)\frac{\partial}{\partial x_3},\eqno(7.2.3)$$
where the "ordinary symbol" of the operator (7.2.3) is considered as

$$\mathcal{H}_0(\xi,\theta,\pi)=c(\xi_1+i\xi_2)\theta_1\theta_2+c\bar k^{-2}(\xi_1-i\xi_2)\pi_1\pi_2+c\xi_3[1-i\bar k^{-1}(\theta_1\pi_1+\theta_2\pi_2)].\eqno(7.2.4)$$

Therefore, the super-version of the Weyl equation is given by
$$i\hbar\frac{\partial}{\partial t}u(t,x,\theta)=\hat{\mathcal H}_0\Big(\frac{\hbar}{i}\frac{\partial}{\partial x},\theta,\frac{\partial}{\partial\theta}\Big)u(t,x,\theta),\qquad u(0,x,\theta)=u(x,\theta).\eqno(7.2.5)$$
On the other hand, the "complete Weyl symbol" of the differential operator on the right-hand side of (7.2.3) is given by

$$\mathcal{H}(\xi,\theta,\pi)=c(\xi_1+i\xi_2)\theta_1\theta_2+c\bar k^{-2}(\xi_1-i\xi_2)\pi_1\pi_2-ic\bar k^{-1}\xi_3(\theta_1\pi_1+\theta_2\pi_2).\eqno(7.2.6)$$

Here, $\bar k\in\mathbb{R}^\times$ or $\bar k\in i\mathbb{R}^\times$ ($\mathbb{R}^\times=\mathbb{R}\setminus\{0\}$) is a parameter introduced for the Fourier transformations of even and odd variables:
$$(F_e v)(\xi)=(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^{m|0}}dx\,e^{-i\hbar^{-1}\langle x|\xi\rangle}v(x),\qquad (\bar F_e w)(x)=(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^{m|0}}d\xi\,e^{i\hbar^{-1}\langle x|\xi\rangle}w(\xi),$$
$$(F_o v)(\pi)=\bar k^{n/2}\iota_n\int_{\mathbb{R}^{0|n}}d\theta\,e^{-i\bar k^{-1}\langle\theta|\pi\rangle}v(\theta),\qquad (\bar F_o w)(\theta)=\bar k^{n/2}\iota_n\int_{\mathbb{R}^{0|n}}d\pi\,e^{i\bar k^{-1}\langle\theta|\pi\rangle}w(\pi),$$
which are explained more precisely later in §2 of Chapter 9. Here,
$$\langle\eta|y\rangle=\sum_{j=1}^m\eta_j y_j,\qquad \langle\rho|\omega\rangle=\sum_{k=1}^n\rho_k\omega_k,\qquad \iota_n=e^{-\pi in(n-2)/4}.$$
Moreover,

$$(\mathcal{F}u)(\xi,\pi)=c_{m,n}\int_{\mathbb{R}^{m|n}}dX\,e^{-i\hbar^{-1}\langle X|\Xi\rangle}u(X)=\sum_a[(F_e u_a)(\xi)][(F_o\theta^a)(\pi)],$$
$$(\bar{\mathcal{F}}v)(x,\theta)=c_{m,n}\int_{\mathbb{R}^{m|n}}d\Xi\,e^{i\hbar^{-1}\langle X|\Xi\rangle}v(\Xi)=\sum_a[(\bar F_e v_a)(x)][(\bar F_o\pi^a)(\theta)]$$
with
$$\langle X|\Xi\rangle=\langle x|\xi\rangle+\hbar\bar k^{-1}\langle\theta|\pi\rangle\in\mathbb{R}_{\mathrm{ev}},\qquad c_{m,n}=(2\pi\hbar)^{-m/2}\bar k^{n/2}\iota_n.$$
More explicitly, we have

Example 7.2.1 (n = 1).

$$e^{i\pi/4}\bar k^{1/2}\int_{\mathbb{R}^{0|1}}d\theta\,e^{-i\bar k^{-1}\theta\pi}(u_0+u_1\theta)=e^{i\pi/4}\bar k^{1/2}(u_1-i\bar k^{-1}u_0\pi),$$
$$e^{i\pi/4}\bar k^{1/2}\int_{\mathbb{R}^{0|1}}d\pi\,e^{i\bar k^{-1}\theta\pi}(u_1-i\bar k^{-1}u_0\pi)=u_0+u_1\theta.$$

Example 7.2.2 ($n=2$). For $u(\theta)=u_0+\theta_1\theta_2u_1$ and $v(\pi)=\pi_1v_1+\pi_2v_2$ with $u_0,u_1,v_1,v_2\in\mathbb{C}$, we have
$$(F_o u)(\pi)=\bar k\int_{\mathbb{R}^{0|2}}d\theta\,e^{-i\bar k^{-1}\langle\theta|\pi\rangle}u(\theta)=\bar k(u_1+\bar k^{-2}\pi_1\pi_2u_0),$$
$$(\bar F_o v)(\theta)=\bar k\int_{\mathbb{R}^{0|2}}d\pi\,e^{i\bar k^{-1}\langle\theta|\pi\rangle}v(\pi)=\bar k(-i\bar k^{-1}\theta_2v_1+i\bar k^{-1}\theta_1v_2),$$
$$\bar F_o(F_o u)(\theta)=\bar k\int_{\mathbb{R}^{0|2}}d\pi\,e^{i\bar k^{-1}\langle\theta|\pi\rangle}[\bar k(u_1+\bar k^{-2}\pi_1\pi_2u_0)]=u_0+\theta_1\theta_2u_1=u(\theta),$$
$$F_o(\bar F_o v)(\pi)=\bar k\int_{\mathbb{R}^{0|2}}d\theta\,e^{-i\bar k^{-1}\langle\theta|\pi\rangle}[\bar k(-i\bar k^{-1}\theta_2v_1+i\bar k^{-1}\theta_1v_2)]=\pi_1v_1+\pi_2v_2=v(\pi).$$
(A) From here, we mention Jacobi's method for constructing a solution of the Hamilton-Jacobi equation:

(4) Consider the classical mechanics, i.e. the Hamilton flow, corresponding to $\mathcal{H}(\xi,\theta,\pi)$:
$$\frac{d}{dt}x_j=\frac{\partial\mathcal{H}(\xi,\theta,\pi)}{\partial\xi_j},\quad \frac{d}{dt}\xi_k=-\frac{\partial\mathcal{H}(\xi,\theta,\pi)}{\partial x_k}=0\quad\text{for } j,k=1,2,3,$$
$$\frac{d}{dt}\theta_a=-\frac{\partial\mathcal{H}(\xi,\theta,\pi)}{\partial\pi_a},\quad \frac{d}{dt}\pi_b=-\frac{\partial\mathcal{H}(\xi,\theta,\pi)}{\partial\theta_b}\quad\text{for } a,b=1,2.\eqno(7.2.7)$$

Proposition 7.2.1 (existence). Under the above setting, for any initial data $(x(0),\xi(0),\theta(0),\pi(0))=(x,\xi,\theta,\pi)\in\mathbb{R}^{6|4}$, (7.2.7) has a unique solution $(x(t),\xi(t),\theta(t),\pi(t))$.

Remark 7.2.2. (i) The solution of the above Proposition is denoted by $x(t)$ or $x(t,x,\xi,\theta,\pi)$, etc. (ii) Instead of $\mathbb{R}^{3|2}\times\mathbb{R}^{3|2}$, we regard $\mathbb{R}^{6|4}$ as the cotangent space $\mathcal{T}^*\mathbb{R}^{3|2}$ of $\mathbb{R}^{3|2}$.
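Returning to the odd Fourier transformations of Example 7.2.2: they can be checked mechanically with a tiny Grassmann-algebra implementation. The Berezin-integral and ordering conventions below are one consistent choice (an assumption of this sketch), chosen so the stated formulas come out; the value of $\bar k$ is arbitrary:

```python
# Grassmann algebra on θ1, θ2, π1, π2 (indices 1..4), elements stored as
# {frozenset of indices: coefficient} in the canonical order θ1<θ2<π1<π2.
def gmul(u, v):
    out = {}
    for mu, cu in u.items():
        for mv, cv in v.items():
            if mu & mv:
                continue                       # repeated generator → 0
            seq = sorted(mu) + sorted(mv)
            inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
                      if seq[i] > seq[j])      # sign = parity of the sort
            key = frozenset(seq)
            out[key] = out.get(key, 0) + (-1) ** inv * cu * cv
    return {m: c for m, c in out.items() if c != 0}

def gadd(u, v):
    out = dict(u)
    for m, c in v.items():
        out[m] = out.get(m, 0) + c
    return out

def gscal(a, u):
    return {m: a * c for m, c in u.items()}

def gexp(u, order=4):                          # exp of a nilpotent even element
    out, term = {frozenset(): 1}, {frozenset(): 1}
    for n in range(1, order + 1):
        term = gscal(1 / n, gmul(term, u))
        out = gadd(out, term)
    return out

def berezin(u, pair):                          # ∫dη: coefficient of η1η2
    s = frozenset(pair)
    return {m - s: c for m, c in u.items() if s <= m}

th1, th2, pi1, pi2 = ({frozenset([i]): 1} for i in (1, 2, 3, 4))
k = 2.0                                        # the parameter k̄
u0, u1 = 1.3, -0.7
u = gadd({frozenset(): u0}, gscal(u1, gmul(th1, th2)))
pairing = gadd(gmul(th1, pi1), gmul(th2, pi2))  # ⟨θ|π⟩

Fo_u = gscal(k, berezin(gmul(gexp(gscal(-1j / k, pairing)), u), (1, 2)))
assert abs(Fo_u.get(frozenset(), 0) - k * u1) < 1e-12          # k̄ u1
assert abs(Fo_u.get(frozenset([3, 4]), 0) - u0 / k) < 1e-12    # k̄·k̄^{-2} u0 π1π2

back = gscal(k, berezin(gmul(gexp(gscal(1j / k, pairing)), Fo_u), (3, 4)))
assert abs(back.get(frozenset(), 0) - u0) < 1e-12              # F̄o(Fo u) = u
assert abs(back.get(frozenset([1, 2]), 0) - u1) < 1e-12
```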

Inverse mapping:

Proposition 7.2.2 (inverse). For any (t,ξ,π), the map defined by

$$(x,\theta)\mapsto(\bar x=x(t,x,\xi,\theta,\pi),\ \bar\theta=\theta(t,x,\xi,\theta,\pi))$$
is supersmooth from $\mathbb{R}^{3|2}$ to $\mathbb{R}^{3|2}$. Its inverse map, defined by
$$(\bar x,\bar\theta)\mapsto(x=y(t,\bar x,\xi,\bar\theta,\pi),\ \theta=\omega(t,\bar x,\xi,\bar\theta,\pi)),$$
satisfies
$$\begin{cases}\bar x=x(t,y(t,\bar x,\xi,\bar\theta,\pi),\xi,\omega(t,\bar x,\xi,\bar\theta,\pi),\pi),\quad \bar\theta=\theta(t,y(t,\bar x,\xi,\bar\theta,\pi),\xi,\omega(t,\bar x,\xi,\bar\theta,\pi),\pi),\\ x=y(t,x(t,x,\xi,\theta,\pi),\xi,\theta(t,x,\xi,\theta,\pi),\pi),\quad \theta=\omega(t,x(t,x,\xi,\theta,\pi),\xi,\theta(t,x,\xi,\theta,\pi),\pi).\end{cases}\eqno(7.2.8)$$

Action integral: For notational simplicity, we introduce the following shorthand symbols in this chapter:
$$\gamma_t=\gamma(t,\xi)=c\bar k^{-1}t|\xi|,\quad \zeta=\xi_1+i\xi_2,\quad \bar\zeta=\xi_1-i\xi_2,\quad |\xi|^2=\sum_{j=1}^3\xi_j^2=|\zeta|^2+\xi_3^2,$$
$$\delta(t)=\delta(t,\xi)=|\xi|\cos\gamma_t+i\xi_3\sin\gamma_t,\qquad \bar\delta(t)=\bar\delta(t,\xi)=|\xi|\cos\gamma_t-i\xi_3\sin\gamma_t.\eqno(7.2.9)$$
Putting
$$\mathcal{S}_0(t,x,\xi,\theta,\pi)=\int_0^t ds\,\{\langle\dot x(s)|\xi(s)\rangle+\langle\dot\theta(s)|\pi(s)\rangle-\mathcal{H}(\xi(s),\theta(s),\pi(s))\},\eqno(7.2.10)$$
and

$$\mathcal{S}(t,\bar x,\xi,\bar\theta,\pi)=\big[\langle x|\xi\rangle+\hbar\bar k^{-1}\langle\theta|\pi\rangle+\mathcal{S}_0(t,x,\xi,\theta,\pi)\big]\Big|_{x=y(t,\bar x,\xi,\bar\theta,\pi),\ \theta=\omega(t,\bar x,\xi,\bar\theta,\pi)},\eqno(7.2.11)$$

we have

Proposition 7.2.3 (phase function). The above defined $\mathcal{S}(t,\bar x,\xi,\bar\theta,\pi)$ is represented as
$$\mathcal{S}(t,\bar x,\xi,\bar\theta,\pi)=\langle\bar x|\xi\rangle+\bar\delta(t)^{-1}\big[\hbar\bar k^{-1}|\xi|\langle\bar\theta|\pi\rangle-\bar k\zeta\sin\gamma_t\,\bar\theta_1\bar\theta_2-\bar k^{-1}(2\hbar\bar k^{-1}-1)\bar\zeta\sin\gamma_t\,\pi_1\pi_2\big].\eqno(7.2.12)$$
Moreover, if $\hbar=\bar k$, then it satisfies the Hamilton-Jacobi equation:
$$\frac{\partial}{\partial t}\mathcal{S}(t,\bar x,\xi,\bar\theta,\pi)+\mathcal{H}\Big(\frac{\partial\mathcal{S}}{\partial\bar x},\bar\theta,\frac{\partial\mathcal{S}}{\partial\bar\theta}\Big)=0,\qquad \mathcal{S}(0,\bar x,\xi,\bar\theta,\pi)=\langle\bar x|\xi\rangle+\langle\bar\theta|\pi\rangle.\eqno(7.2.13)$$
Then, we put

$$\mathcal{D}(t,\bar x,\xi,\bar\theta,\pi)=\mathrm{sdet}\begin{pmatrix}\dfrac{\partial^2\mathcal{S}}{\partial\bar x\partial\xi}&\dfrac{\partial^2\mathcal{S}}{\partial\bar x\partial\pi}\\[2mm]\dfrac{\partial^2\mathcal{S}}{\partial\bar\theta\partial\xi}&\dfrac{\partial^2\mathcal{S}}{\partial\bar\theta\partial\pi}\end{pmatrix}.\eqno(7.2.14)$$
Here, "sdet" stands for the super-determinant.

Proposition 7.2.4 (amplitude function). By calculation, we have
$$\mathcal{D}(t,\bar x,\xi,\bar\theta,\pi)=(\hbar^{-1}\bar k)^2|\xi|^{-2}\bar\delta(t)^2.\eqno(7.2.15)$$
Moreover, if $\hbar=\bar k$, then it satisfies the continuity equation (or the 0th part of the transport equation):
$$\frac{\partial\mathcal{D}}{\partial t}+\frac{\partial}{\partial\bar x}\Big(\mathcal{D}\frac{\partial\mathcal{H}}{\partial\xi}\Big)+\frac{\partial}{\partial\bar\theta}\Big(\mathcal{D}\frac{\partial\mathcal{H}}{\partial\pi}\Big)=0,\qquad \mathcal{D}(0,\bar x,\xi,\bar\theta,\pi)=1.\eqno(7.2.16)$$
In the above, the independent variables of $\mathcal{D}$ are $(t,\bar x,\xi,\bar\theta,\pi)$; those of $\partial\mathcal{H}/\partial\xi$ or $\partial\mathcal{H}/\partial\pi$ are $(\mathcal{S}_{\bar x},\bar\theta,\mathcal{S}_{\bar\theta})$.

(B) There exists another method (in the next section) to solve (7.2.13) with the initial data
$$\mathcal{S}(0,\bar x,\xi,\bar\theta,\pi)=\langle\bar x|\xi\rangle+\hbar\bar k^{-1}\langle\bar\theta|\pi\rangle,$$
for any $\hbar$ and $\bar k$. Curiously, by that method we obtain a solution slightly different from (7.2.12), which however coincides with it when $\hbar=\bar k$ (I haven't found the reason why!).

(5) Quantization: Using these classical quantities $\mathcal{S}$ and $\mathcal{D}$, we obtain a new representation of the desired solution as follows: In this paragraph, we rewrite the variables from $\mathbb{R}^{6|4}\ni(\bar x,\xi,\bar\theta,\pi)$ to $(\bar x,\bar\theta,\xi,\pi)\in\mathbb{R}^{3|2}\times\mathbb{R}^{3|2}=\mathcal{T}^*\mathbb{R}^{3|2}$. We define an operator

$$(\mathcal{U}(t)u)(\bar x,\bar\theta)=(2\pi\hbar)^{-3/2}\bar k\iint d\xi d\pi\,\mathcal{D}^{1/2}(t,\bar x,\bar\theta,\xi,\pi)e^{i\hbar^{-1}\mathcal{S}(t,\bar x,\bar\theta,\xi,\pi)}(\mathcal{F}u)(\xi,\pi),\eqno(7.2.17)$$
where $\mathcal{F}$ stands for the Fourier transformation defined for functions on the superspace. The function $u(t,\bar x,\bar\theta)=(\mathcal{U}(t)u)(\bar x,\bar\theta)$ will be shown to be the desired solution of (7.2.5) if $\hbar=\bar k$. It is shown that
$$\hat{\mathcal H}_0\Big(\frac{\hbar}{i}\frac{\partial}{\partial x},\theta,\frac{\partial}{\partial\theta}\Big)=\hat{\mathcal H}^W=\hat{\mathcal H}^W\Big(\frac{\hbar}{i}\frac{\partial}{\partial x},\theta,\frac{\partial}{\partial\theta}\Big),\eqno(7.2.18)$$
where the left-hand side is given by (7.2.3) with $\bar\lambda=i$ and $\hat{\mathcal H}^W$ is a (Weyl type) pseudo-differential operator with symbol $\mathcal{H}(\xi,\theta,\pi)$ defined by
$$(\hat{\mathcal H}^Wu)(x,\theta)=(2\pi\hbar)^{-3}\bar k^2\iint d\xi d\pi dy d\omega\,e^{i\hbar^{-1}\langle x-y|\xi\rangle+i\bar k^{-1}\langle\theta-\omega|\pi\rangle}\mathcal{H}\Big(\xi,\frac{\theta+\omega}{2},\pi\Big)u(y,\omega).\eqno(7.2.19)$$

Proposition 7.2.5. (1) For $t\in\mathbb{R}$, $\mathcal{U}(t)$ is a well-defined unitary operator in $\mathcal{L}^2_{SS,\mathrm{ev}}(\mathbb{R}^{3|2})$ if $\hbar=\bar k$.
(2) (i) $\mathbb{R}\ni t\mapsto\mathcal{U}(t)\in B(\mathcal{L}^2_{SS,\mathrm{ev}}(\mathbb{R}^{3|2}),\mathcal{L}^2_{SS,\mathrm{ev}}(\mathbb{R}^{3|2}))$ is continuous.
(ii) $\mathcal{U}(t)\mathcal{U}(s)=\mathcal{U}(t+s)$ for any $t,s\in\mathbb{R}$.
(iii) For $u\in\mathcal{C}_{SS,\mathrm{ev},0}(\mathbb{R}^{3|2})$, we put $u(t,\bar x,\bar\theta)=(\mathcal{U}(t)u)(\bar x,\bar\theta)$. Then, it satisfies
$$i\hbar\frac{\partial}{\partial t}u(t,\bar x,\bar\theta)=\hat{\mathcal H}^Wu(t,\bar x,\bar\theta),\qquad u(0,\bar x,\bar\theta)=u(\bar x,\bar\theta).\eqno(7.2.20)$$

Finally, we interpret the above theorem with $\hbar=\bar k$ using the identification maps
$$\sharp:L^2(\mathbb{R}^3:\mathbb{C}^2)\to\mathcal{L}^2_{SS,\mathrm{ev}}(\mathbb{R}^{3|2})\quad\text{and}\quad \flat:\mathcal{L}^2_{SS,\mathrm{ev}}(\mathbb{R}^{3|2})\to L^2(\mathbb{R}^3:\mathbb{C}^2).\eqno(7.2.21)$$
That is, remarking $\flat\hat{\mathcal H}^W\sharp\psi=\hat H\psi$ and putting $U(t)\psi=\flat\,\mathcal{U}(t)\,\sharp\psi$, we have

Proposition 7.2.6. (1) For $t\in\mathbb{R}$, $U(t)$ is a well-defined unitary operator in $L^2(\mathbb{R}^3:\mathbb{C}^2)$.
(2) (i) $\mathbb{R}\ni t\mapsto U(t)\in B(L^2(\mathbb{R}^3:\mathbb{C}^2),L^2(\mathbb{R}^3:\mathbb{C}^2))$ is continuous.
(ii) $U(t)U(s)=U(t+s)$ for any $t,s\in\mathbb{R}$.
(iii) Put $\bar\lambda=i$. For $\psi\in C_0^\infty(\mathbb{R}^3:\mathbb{C}^2)$, we put $\psi(t,q)=\flat(\mathcal{U}(t)\sharp\psi)\big|_{\bar x_{\mathrm B}=q}$. Then, it satisfies
$$i\hbar\frac{\partial}{\partial t}\psi(t,q)=\hat H\psi(t,q),\qquad \psi(0,q)=\psi(q).\eqno(7.2.22)$$

Corollary 7.2.1. $\hat H$ is an essentially self-adjoint operator in $L^2(\mathbb{R}^3:\mathbb{C}^2)$.

Remark 7.2.3. Since the free Weyl equation is simple, it is not necessary to use the Fréchet-Grassmann algebra with countably infinite Grassmann generators, because I can construct the "classical quantities" explicitly. This point is exemplified in the last chapter by the construction of the Hamilton flow for the Weyl equation with electromagnetic potentials. Odd variables are symbolically important for presenting the position in the matrix structure and for keeping the operations consistent. In this case, odd and even variables are rather separated without interaction, and therefore simple!

Claim 7.2.1. (7.2.17) reduces to (7.0.5) after integration w.r.t. dπ.

Proof of this claim will be given later.

7.3. Another construction of the solution for H-J equation

In the above explanation, the most essential part is to define a "phase" function satisfying the Hamilton-Jacobi equation. We introduce another new method here to seek the reason why and when we need to put $\hbar=\bar k$ ($\hbar$ has a physical meaning but $\bar k$ is artificially introduced).

Claim 7.3.1. Let $\mathcal{H}(\xi,\theta,\pi)$ be given by (7.2.6) as the Hamiltonian function corresponding to the free Weyl equation. A solution of the Hamilton-Jacobi equation with $a\neq0$,
$$\mathcal{S}_t+\mathcal{H}(\mathcal{S}_x,\theta,\mathcal{S}_\theta)=0\quad\text{with}\quad \mathcal{S}(0,x,\xi,\theta,\pi)=\langle x|\xi\rangle+a\langle\theta|\pi\rangle,\eqno(7.3.1)$$
can be constructed without solving the Hamilton equations.

Remark 7.3.1. Though to quantize we need to take $a=\hbar\bar k^{-1}$ and finally $\hbar\bar k^{-1}=1$, to clarify the $\hbar$-dependence in the "super version of classical mechanics", we prefer to introduce the parameter $a$.

Assuming that the solution $\mathcal{S}(t,x,\xi,\theta,\pi)$ of (7.3.1) is even supersmooth, we expand it as

$$\mathcal{S}(t,x,\xi,\theta,\pi)=\mathcal{S}_{\bar0\bar0}+\mathcal{S}_{\theta_2\theta_1}\theta_1\theta_2+\mathcal{S}_{\pi_1\theta_1}\theta_1\pi_1+\mathcal{S}_{\pi_2\theta_2}\theta_2\pi_2+\mathcal{S}_{\pi_2\theta_1}\theta_1\pi_2+\mathcal{S}_{\pi_1\theta_2}\theta_2\pi_1+\mathcal{S}_{\pi_2\pi_1}\pi_1\pi_2+\mathcal{S}_{\pi_2\pi_1\theta_2\theta_1}\theta_1\theta_2\pi_1\pi_2.\eqno(7.3.2)$$
Here, $\mathcal{S}_{\theta_2\theta_1}=\mathcal{S}_{\theta_2\theta_1}(t,x,\xi)=\mathcal{S}_{\theta_2\theta_1}(t,x,\xi,0,0)$, etc., and we abbreviate $\bar0=(0,0)$, $\bar1=(1,1)$, $\mathcal{S}_{\bar1\bar1}=\mathcal{S}_{\pi_2\pi_1\theta_2\theta_1}$.

Lemma 7.3.1. Let $\mathcal{S}$ be a solution of (7.3.1). Put
$$\tilde{\mathcal H}_*=\mathcal{H}_*(\mathcal{S}_x(t,x,\xi,\theta,\pi),\theta,\mathcal{S}_\theta(t,x,\xi,\theta,\pi))\quad\text{and}\quad \tilde{\mathcal H}^0_*=\tilde{\mathcal H}_*\big|_{\theta=\pi=0};
$$

then the terms $\mathcal{S}_{**}$ in (7.3.2) satisfy
$$\mathcal{S}_{\bar0\bar0,t}+\tilde{\mathcal H}^0=0\quad\text{with}\quad \mathcal{S}_{\bar0\bar0}(0,x,\xi)=\langle x|\xi\rangle,\eqno(7.3.3)$$
$$\mathcal{S}_{\theta_2\theta_1,t}+\tilde{\mathcal H}^0_{\pi_2\pi_1}\mathcal{S}_{\theta_2\theta_1}^2+(\tilde{\mathcal H}^0_{\pi_1\theta_1}+\tilde{\mathcal H}^0_{\pi_2\theta_2})\mathcal{S}_{\theta_2\theta_1}+\tilde{\mathcal H}^0_{\theta_2\theta_1}=0\quad\text{with}\quad \mathcal{S}_{\theta_2\theta_1}(0,x,\xi)=0,\eqno(7.3.4)$$
$$\mathcal{S}_{\pi_1\theta_1,t}+(\tilde{\mathcal H}^0_{\pi_1\theta_1}+\mathcal{S}_{\theta_2\theta_1}\tilde{\mathcal H}^0_{\pi_2\pi_1})\mathcal{S}_{\pi_1\theta_1}=0\quad\text{with}\quad \mathcal{S}_{\pi_1\theta_1}(0,x,\xi)=a,\eqno(7.3.5)$$
$$\mathcal{S}_{\pi_2\theta_2,t}+(\tilde{\mathcal H}^0_{\pi_2\theta_2}+\mathcal{S}_{\theta_2\theta_1}\tilde{\mathcal H}^0_{\pi_2\pi_1})\mathcal{S}_{\pi_2\theta_2}=0\quad\text{with}\quad \mathcal{S}_{\pi_2\theta_2}(0,x,\xi)=a,\eqno(7.3.6)$$
$$\mathcal{S}_{\pi_2\theta_1,t}+(\tilde{\mathcal H}^0_{\pi_1\theta_1}+\mathcal{S}_{\theta_2\theta_1}\tilde{\mathcal H}^0_{\pi_2\pi_1})\mathcal{S}_{\pi_2\theta_1}=0\quad\text{with}\quad \mathcal{S}_{\pi_2\theta_1}(0,x,\xi)=0,\eqno(7.3.7)$$
$$\mathcal{S}_{\pi_1\theta_2,t}+(\tilde{\mathcal H}^0_{\pi_2\theta_2}+\mathcal{S}_{\theta_2\theta_1}\tilde{\mathcal H}^0_{\pi_2\pi_1})\mathcal{S}_{\pi_1\theta_2}=0\quad\text{with}\quad \mathcal{S}_{\pi_1\theta_2}(0,x,\xi)=0.\eqno(7.3.8)$$

Proof: Restricting (7.3.1) to $\theta=\pi=0$, we get (7.3.3). Differentiating (7.3.1) w.r.t. $\theta_1$ and then $\theta_2$ and restricting to $\theta=\pi=0$, we get the Riccati type ODE (7.3.4) with parameter $(x,\xi)$. In fact, from the differential formula for composite functions, and remarking $\partial_{\theta_j}\mathcal{S}_{\theta_j}=0$ and $\mathcal{H}_{\xi_k\xi_j}=0$, we get
$$\partial_{\theta_1}\tilde{\mathcal H}=\frac{\partial\mathcal{S}_{x_j}}{\partial\theta_1}\tilde{\mathcal H}_{\xi_j}+\tilde{\mathcal H}_{\theta_1}+\frac{\partial\mathcal{S}_{\theta_2}}{\partial\theta_1}\tilde{\mathcal H}_{\pi_2},$$
$$\partial_{\theta_2}\partial_{\theta_1}\tilde{\mathcal H}=\frac{\partial^2\mathcal{S}_{x_j}}{\partial\theta_2\partial\theta_1}\tilde{\mathcal H}_{\xi_j}-\frac{\partial\mathcal{S}_{x_j}}{\partial\theta_1}\partial_{\theta_2}\tilde{\mathcal H}_{\xi_j}+\partial_{\theta_2}\tilde{\mathcal H}_{\theta_1}+\frac{\partial\mathcal{S}_{\theta_2}}{\partial\theta_1}\partial_{\theta_2}\tilde{\mathcal H}_{\pi_2}$$
with
$$\partial_{\theta_2}\tilde{\mathcal H}_{\xi_j}=\tilde{\mathcal H}_{\theta_2\xi_j}+\frac{\partial\mathcal{S}_{\theta_1}}{\partial\theta_2}\tilde{\mathcal H}_{\pi_1\xi_j},\quad
\partial_{\theta_2}\tilde{\mathcal H}_{\theta_1}=\frac{\partial\mathcal{S}_{x_j}}{\partial\theta_2}\tilde{\mathcal H}_{\xi_j\theta_1}+\tilde{\mathcal H}_{\theta_2\theta_1}+\frac{\partial\mathcal{S}_{\theta_1}}{\partial\theta_2}\tilde{\mathcal H}_{\pi_1\theta_1},\quad
\partial_{\theta_2}\tilde{\mathcal H}_{\pi_2}=\frac{\partial\mathcal{S}_{x_j}}{\partial\theta_2}\tilde{\mathcal H}_{\xi_j\pi_2}+\tilde{\mathcal H}_{\theta_2\pi_2}+\frac{\partial\mathcal{S}_{\theta_1}}{\partial\theta_2}\tilde{\mathcal H}_{\pi_1\pi_2}.$$
Remarking also $\tilde{\mathcal H}^0_{\xi_j}=0$, $\tilde{\mathcal H}^0_{\xi_j\theta_k}=0$, etc., and restricting $\partial_{\theta_2}\partial_{\theta_1}\tilde{\mathcal H}$ to $\theta=\pi=0$, we have (7.3.4). Analogously, from
$$\partial_{\pi_1}\partial_{\theta_1}\tilde{\mathcal H}=\frac{\partial^2\mathcal{S}_{x_j}}{\partial\pi_1\partial\theta_1}\tilde{\mathcal H}_{\xi_j}-\frac{\partial\mathcal{S}_{x_j}}{\partial\theta_1}\partial_{\pi_1}\tilde{\mathcal H}_{\xi_j}+\partial_{\pi_1}\tilde{\mathcal H}_{\theta_1}+\frac{\partial\mathcal{S}_{\theta_2}}{\partial\theta_1}\partial_{\pi_1}\tilde{\mathcal H}_{\pi_2}+\frac{\partial^2\mathcal{S}_{\theta_2}}{\partial\pi_1\partial\theta_1}\tilde{\mathcal H}_{\pi_2},$$
and $\mathcal{H}_{\pi_2\theta_1}=0=\mathcal{H}_{\pi_1\theta_2}$ with
$$\partial_{\pi_1}\tilde{\mathcal H}_{\xi_j}=\frac{\partial\mathcal{S}_{\theta_k}}{\partial\pi_1}\tilde{\mathcal H}_{\pi_k\xi_j},\quad
\partial_{\pi_1}\tilde{\mathcal H}_{\theta_1}=\frac{\partial\mathcal{S}_{x_j}}{\partial\pi_1}\tilde{\mathcal H}_{\xi_j\theta_1}+\frac{\partial\mathcal{S}_{\theta_k}}{\partial\pi_1}\tilde{\mathcal H}_{\pi_k\theta_1},\quad
\partial_{\pi_1}\tilde{\mathcal H}_{\pi_2}=\frac{\partial\mathcal{S}_{x_j}}{\partial\pi_1}\tilde{\mathcal H}_{\xi_j\pi_2}+\frac{\partial\mathcal{S}_{\theta_1}}{\partial\pi_1}\tilde{\mathcal H}_{\pi_1\pi_2},$$
we get (7.3.5). The other equations (7.3.6)-(7.3.8) are obtained analogously. $\square$

Lemma 7.3.2. Regarding x and ξ as parameters in (7.2.9) and solving ODEs in the above lemma, we get

$$\mathcal{S}_{\bar0\bar0}(t,x,\xi)=\langle x|\xi\rangle,\eqno(7.3.9)$$
$$\mathcal{S}_{\theta_2\theta_1}=\frac{-\bar k\zeta\sin\gamma_t}{\bar\delta(t)},\eqno(7.3.10)$$
$$\mathcal{S}_{\pi_1\theta_1}=\mathcal{S}_{\pi_2\theta_2}=a\frac{|\xi|}{\bar\delta(t)},\eqno(7.3.11)$$
$$\mathcal{S}_{\pi_1\theta_2}=\mathcal{S}_{\pi_2\theta_1}=0.\eqno(7.3.12)$$

Proof. From (7.3.2), using $\mathcal{S}_{\theta_j}\big|_{\theta=\pi=0}=0$, we get $\tilde{\mathcal H}^0=0$ and so (7.3.9) is obvious. Putting this into (7.3.4), we get

$$\tilde{\mathcal H}^0_{\pi_2\pi_1}=c\bar k^{-2}(\mathcal{S}_{x_1}-i\mathcal{S}_{x_2})\big|_{\theta=\pi=0}=c\bar k^{-2}(\xi_1-i\xi_2)=c\bar k^{-2}\bar\zeta,$$
$$\tilde{\mathcal H}^0_{\pi_1\theta_1}=-ic\bar k^{-1}\mathcal{S}_{x_3}\big|_{\theta=\pi=0}=\tilde{\mathcal H}^0_{\pi_2\theta_2}=-ic\bar k^{-1}\xi_3,$$
$$\tilde{\mathcal H}^0_{\theta_2\theta_1}=c(\mathcal{S}_{x_1}+i\mathcal{S}_{x_2})\big|_{\theta=\pi=0}=c(\xi_1+i\xi_2)=c\zeta,$$
and (7.3.4) becomes

$$\mathcal{S}_{\theta_2\theta_1,t}+c\bar k^{-2}\bar\zeta\,\mathcal{S}_{\theta_2\theta_1}^2-2ic\bar k^{-1}\xi_3\,\mathcal{S}_{\theta_2\theta_1}+c\zeta=0\quad\text{with}\quad \mathcal{S}_{\theta_2\theta_1}(0,x,\xi)=0.\eqno(7.3.13)$$
Solving an ODE of Riccati type: For a given ODE

$$y'=q_0(t)+q_1(t)y+q_2(t)y^2,$$
assuming $q_2\neq0$ and putting $v=q_2(t)y$, we define
$$P=q_1+\frac{q_2'}{q_2},\qquad Q=q_2q_0;$$
then $v=q_2(t)y$ satisfies
$$v'=v^2+P(t)v+Q(t).$$

Moreover, writing $v=-\dfrac{u'}{u}$ and differentiating w.r.t. $t$, we get
$$u''-P(t)u'+Q(t)u=0.$$
Solving this for $u$ and putting $y=-\dfrac{u'}{q_2u}$, we obtain a solution of the Riccati ODE.
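The linearization above can be demonstrated numerically: integrate the Riccati ODE directly, integrate the associated linear second-order ODE, and compare $y=-u'/(q_2u)$. The coefficient functions below are hypothetical (chosen only so that $q_2\neq0$ and no blow-up occurs on the interval):

```python
import math

# hypothetical coefficients with q2(t) ≠ 0 on [0, 0.5]
def q0(t): return math.cos(t)
def q1(t): return 0.3 * t
def q2(t): return 1.0 + 0.5 * math.sin(t)
def dq2(t): return 0.5 * math.cos(t)          # q2'
def P(t): return q1(t) + dq2(t) / q2(t)
def Q(t): return q2(t) * q0(t)

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [a + h / 2 * b for a, b in zip(y, k1)])
    k3 = f(t + h / 2, [a + h / 2 * b for a, b in zip(y, k2)])
    k4 = f(t + h, [a + h * b for a, b in zip(y, k3)])
    return [a + h / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def f_ric(t, s):                               # y' = q0 + q1 y + q2 y²
    return [q0(t) + q1(t) * s[0] + q2(t) * s[0] ** 2]

def f_lin(t, s):                               # u'' − P u' + Q u = 0 as a system
    return [s[1], P(t) * s[1] - Q(t) * s[0]]

y0, h, T = 0.2, 1e-3, 0.5
ric = [y0]
lin = [1.0, -q2(0.0) * y0]                     # u(0) = 1, u'(0) = −q2(0) y(0)
for i in range(int(T / h)):
    t = i * h
    ric = rk4_step(f_ric, t, ric, h)
    lin = rk4_step(f_lin, t, lin, h)
y_via_linear = -lin[1] / (q2(T) * lin[0])
assert abs(ric[0] - y_via_linear) < 1e-5
```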

Problem 7.3.1. If $q_2(t)$ has a zero, is there any explicit formula for the solution of the Riccati ODE?

Using this, we calculate (7.3.13). Putting $q_0=-c\zeta$, $q_1=2ic\bar k^{-1}\xi_3$, $q_2=-c\bar k^{-2}\bar\zeta$, we get
$$\ddot u-2ic\bar k^{-1}\xi_3\dot u+c^2\bar k^{-2}|\zeta|^2u=0$$
and, defining $\lambda_\pm=ic\bar k^{-1}(\xi_3\pm|\xi|)$, we have
$$u(t)=\alpha e^{\lambda_+t}+\beta e^{\lambda_-t}.$$

Therefore, using
$$\mathcal{S}_{\theta_2\theta_1}(0)=\frac{\dot u(0)}{c\bar k^{-2}\bar\zeta u(0)}=0,\quad\text{i.e.}\quad \dot u(0)=0=ic\bar k^{-1}[\alpha(\xi_3+|\xi|)+\beta(\xi_3-|\xi|)],$$
we finally get the desired result (7.3.10),

$$\mathcal{S}_{\theta_2\theta_1}(t)=\frac{i\bar k}{\bar\zeta}\cdot\frac{\alpha(\xi_3+|\xi|)e^{ic\bar k^{-1}|\xi|t}+\beta(\xi_3-|\xi|)e^{-ic\bar k^{-1}|\xi|t}}{\alpha e^{ic\bar k^{-1}|\xi|t}+\beta e^{-ic\bar k^{-1}|\xi|t}}=\frac{-\bar k\zeta\sin\gamma_t}{|\xi|\cos\gamma_t-i\xi_3\sin\gamma_t}.$$
Using this result and putting
$$w_0(t,x,\xi)=\tilde{\mathcal H}^0_{\pi_1\theta_1}+\mathcal{S}_{\theta_2\theta_1}\tilde{\mathcal H}^0_{\pi_2\pi_1}=\tilde{\mathcal H}^0_{\pi_2\theta_2}+\mathcal{S}_{\theta_2\theta_1}\tilde{\mathcal H}^0_{\pi_2\pi_1}=\frac{\bar\delta'(t)}{\bar\delta(t)}=(\log\bar\delta(t))',\eqno(7.3.14)$$
we get

$$\mathcal{S}_{\pi_1\theta_1}(t,x,\xi)=a\,e^{-\int_0^tds\,w_0(s,x,\xi)}=a\frac{|\xi|}{\bar\delta(t)}=\mathcal{S}_{\pi_2\theta_2}(t,x,\xi),\eqno(7.3.15)$$
$$\mathcal{S}_{\pi_2\theta_1}(t,x,\xi)=\mathcal{S}_{\pi_1\theta_2}(t,x,\xi)=0.\qquad\square\eqno(7.3.16)$$
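The closed form (7.3.10) can be spot-checked against the Riccati equation (7.3.13) numerically, using a central finite difference for the time derivative. The parameter values are arbitrary ($c=\bar k=1$, some real $\xi$):

```python
import math

c, kbar = 1.0, 1.0
xi1, xi2, xi3 = 0.3, -0.7, 0.5
zeta, zbar = complex(xi1, xi2), complex(xi1, -xi2)
axi = math.sqrt(xi1 ** 2 + xi2 ** 2 + xi3 ** 2)       # |ξ|

def S(t):
    g = c * t * axi / kbar                            # γ_t
    dbar = axi * math.cos(g) - 1j * xi3 * math.sin(g) # δ̄(t)
    return -kbar * zeta * math.sin(g) / dbar          # S_{θ2θ1}(t), eq. (7.3.10)

t, h = 0.8, 1e-6
dS = (S(t + h) - S(t - h)) / (2 * h)                  # central difference ≈ S'
# residual of (7.3.13): S' + c k̄⁻² ζ̄ S² − 2i c k̄⁻¹ ξ3 S + c ζ
residual = dS + c * kbar ** -2 * zbar * S(t) ** 2 - 2j * c * xi3 * S(t) / kbar + c * zeta
assert abs(residual) < 1e-8
assert abs(S(0.0)) < 1e-15                            # initial condition S(0) = 0
```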

Lemma 7.3.3. The terms $\mathcal{S}_{\pi_2\pi_1}$ and $\mathcal{S}_{\bar1\bar1}$ satisfy the following:
$$\mathcal{S}_{\pi_2\pi_1,t}+\mathcal{S}_{\pi_1\theta_1}\mathcal{S}_{\pi_2\theta_2}\tilde{\mathcal H}^0_{\pi_2\pi_1}=0\quad\text{with}\quad \mathcal{S}_{\pi_2\pi_1}(0,x,\xi)=0,\eqno(7.3.17)$$
$$\mathcal{S}_{\bar1\bar1,t}+2w_0\mathcal{S}_{\bar1\bar1}+w_1=0\quad\text{with}\quad \mathcal{S}_{\bar1\bar1}(0,x,\xi)=0.\eqno(7.3.18)$$
Here, we put

$$w_1=w_1(t,x,\xi)=(\mathcal{S}_{\theta_2\theta_1}\mathcal{S}_{\pi_2\pi_1,x_j}-\mathcal{S}_{\pi_1\theta_1}\mathcal{S}_{\pi_2\theta_2,x_j})\tilde{\mathcal H}^0_{\xi_j\pi_1\theta_1}+(\mathcal{S}_{\theta_2\theta_1}\mathcal{S}_{\pi_2\pi_1,x_j}-\mathcal{S}_{\pi_1\theta_1,x_j}\mathcal{S}_{\pi_2\theta_2})\tilde{\mathcal H}^0_{\xi_j\pi_2\theta_2}$$
$$+\big[\mathcal{S}_{\theta_2\theta_1}^2\mathcal{S}_{\pi_2\pi_1,x_j}+\big(\mathcal{S}_{\pi_1\theta_1}\mathcal{S}_{\pi_2\theta_2}\mathcal{S}_{\theta_2\theta_1,x_j}-\mathcal{S}_{\theta_2\theta_1}(\mathcal{S}_{\pi_1\theta_1}\mathcal{S}_{\pi_2\theta_2})_{x_j}\big)\big]\tilde{\mathcal H}^0_{\xi_j\pi_2\pi_1}+\mathcal{S}_{\pi_2\pi_1,x_j}\tilde{\mathcal H}^0_{\xi_j\theta_2\theta_1}.$$

Proof. Differentiating (7.3.1) w.r.t. $\pi_1$ and then $\pi_2$, restricting to $\theta=\pi=0$ and using $\mathcal{S}_{\pi_1\theta_2}=0$ together with
$$\partial_{\pi_2}\partial_{\pi_1}\tilde{\mathcal H}\Big|_{\theta=\pi=0}=\frac{\partial\mathcal{S}_{\theta_1}}{\partial\pi_1}\frac{\partial\mathcal{S}_{\theta_2}}{\partial\pi_2}\tilde{\mathcal H}_{\pi_2\pi_1}\Big|_{\theta=\pi=0}=\mathcal{S}_{\pi_1\theta_1}\mathcal{S}_{\pi_2\theta_2}\tilde{\mathcal H}^0_{\pi_2\pi_1},$$
we obtain (7.3.17). Next, (7.3.18) is obtained by differentiating (7.3.1) w.r.t. $\theta_1,\theta_2,\pi_1,\pi_2$ and restricting $\partial_{\pi_2}\partial_{\pi_1}\partial_{\theta_2}\partial_{\theta_1}\tilde{\mathcal H}$ to $\theta=\pi=0$ by the same (lengthy but straightforward) chain-rule computation as in the proof of Lemma 7.3.1, collecting the result into a coefficient of $\mathcal{S}_{\bar1\bar1}$, which yields $2w_0$, and a remainder, which yields $w_1$. Remarking that $\mathcal{S}_{**,x_j}=0$ for all the coefficient functions appearing in $w_1$ (they depend only on $(t,\xi)$), we have $w_1=0$. From (7.3.17) and (7.3.18),
$$\mathcal{S}_{\pi_2\pi_1}(t,x,\xi)=-a^2\bar k^{-1}\bar\zeta\frac{\sin\gamma_t}{\bar\delta(t)},\qquad \mathcal{S}_{\bar1\bar1}(t,x,\xi)=0.\qquad\square\eqno(7.3.19)$$

Finally, we have
$$\mathcal{S}(t,x,\theta,\xi,\pi)=\langle x|\xi\rangle+\bar\delta(t)^{-1}\big[a|\xi|\langle\theta|\pi\rangle-\bar k\zeta\sin\gamma_t\,\theta_1\theta_2-a^2\bar k^{-1}\bar\zeta\sin\gamma_t\,\pi_1\pi_2\big].\eqno(7.3.20)$$

Claim 7.3.2. $\mathcal{S}(t,x,\theta,\xi,\pi)$ defined above satisfies (7.3.1).

Proof. Indeed, as $\mathcal{S}_{x_j}=\xi_j$ for $j=1,2,3$, so that $\zeta=\mathcal{S}_{x_1}+i\mathcal{S}_{x_2}$, $\bar\zeta=\mathcal{S}_{x_1}-i\mathcal{S}_{x_2}$, and
$$\mathcal{S}_{\theta_1}=\bar\delta(t)^{-1}\big[a|\xi|\pi_1-\bar k\zeta\sin\gamma_t\,\theta_2\big],\qquad \mathcal{S}_{\theta_2}=\bar\delta(t)^{-1}\big[a|\xi|\pi_2+\bar k\zeta\sin\gamma_t\,\theta_1\big],$$
we get
$$\mathcal{S}_{\theta_1}\mathcal{S}_{\theta_2}=\bar\delta(t)^{-2}\big[(a|\xi|)^2\pi_1\pi_2-\bar ka|\xi|\zeta\sin\gamma_t\langle\theta|\pi\rangle+(\bar k\zeta\sin\gamma_t)^2\theta_1\theta_2\big],$$
$$\theta_1\mathcal{S}_{\theta_1}+\theta_2\mathcal{S}_{\theta_2}=\bar\delta(t)^{-1}\big[a|\xi|\langle\theta|\pi\rangle-2\bar k\zeta\sin\gamma_t\,\theta_1\theta_2\big].$$
Substituting these into $\mathcal{H}(\mathcal{S}_x,\theta,\mathcal{S}_\theta)$, we have
$$\mathcal{H}(\mathcal{S}_x,\theta,\mathcal{S}_\theta)=c\zeta\theta_1\theta_2+c\bar k^{-2}\bar\zeta\mathcal{S}_{\theta_1}\mathcal{S}_{\theta_2}-ic\bar k^{-1}\xi_3(\theta_1\mathcal{S}_{\theta_1}+\theta_2\mathcal{S}_{\theta_2})=c|\xi|^2\bar\delta^{-2}\big[\zeta\theta_1\theta_2-a\bar k^{-1}(|\xi|\sin\gamma_t+i\xi_3\cos\gamma_t)\langle\theta|\pi\rangle+a^2\bar k^{-2}\bar\zeta\pi_1\pi_2\big].\eqno(7.3.21)$$
On the other hand, since we easily get
$$\partial_t(\bar\delta(t)^{-1})=c\bar k^{-1}|\xi|\bar\delta(t)^{-2}(|\xi|\sin\gamma_t+i\xi_3\cos\gamma_t)\quad\text{and}\quad \partial_t(\bar\delta(t)^{-1}\sin\gamma_t)=c\bar k^{-1}|\xi|^2\bar\delta(t)^{-2},$$
we have
$$\mathcal{S}_t=ca\bar k^{-1}|\xi|^2\bar\delta(t)^{-2}(|\xi|\sin\gamma_t+i\xi_3\cos\gamma_t)\langle\theta|\pi\rangle-c\bar k^{-1}|\xi|^2\bar\delta(t)^{-2}\big[\bar k\zeta\theta_1\theta_2+a^2\bar k^{-1}\bar\zeta\pi_1\pi_2\big].\eqno(7.3.22)$$
Therefore, we have $\mathcal{S}_t+\tilde{\mathcal H}=0$. $\square$

(2) Theoretically, we may calculate even when the $A_j(q)$ are added, but we may not have such an explicit formula for $\mathcal{S}$ as (7.3.20). Especially when $A_j(t,q)$ depends also on $t$, we have only the existence of the solution of the Riccati-type ODE. In this case, we need to construct a solution following Jacobi's method explained in the previous section. Therefore, it is mysterious why, by such a method, we need the condition $\hbar=\bar k$ even in "classical mechanics"?!

From this, we get

Proposition 7.3.1. The Van Vleck determinant is calculated as
$$\mathcal{D}(t,x,\xi,\theta,\pi)=\mathrm{sdet}\begin{pmatrix}\dfrac{\partial^2\mathcal{S}}{\partial x\partial\xi}&\dfrac{\partial^2\mathcal{S}}{\partial x\partial\pi}\\[2mm]\dfrac{\partial^2\mathcal{S}}{\partial\theta\partial\xi}&\dfrac{\partial^2\mathcal{S}}{\partial\theta\partial\pi}\end{pmatrix}=a^{-2}|\xi|^{-2}\bar\delta(t)^2.$$
Moreover, for any $a\neq0$, it satisfies
$$\frac{\partial\mathcal{D}}{\partial t}+\frac{\partial}{\partial x}\Big(\mathcal{D}\frac{\partial\mathcal{H}}{\partial\xi}\Big)+\frac{\partial}{\partial\theta}\Big(\mathcal{D}\frac{\partial\mathcal{H}}{\partial\pi}\Big)=0,\qquad \mathcal{D}(0,x,\xi,\theta,\pi)=a^{-2}.\eqno(7.3.23)$$
Here, the independent variables in $\mathcal{D}$ are $(t,x,\xi,\theta,\pi)$; those for $\partial\mathcal{H}/\partial\xi$ and $\partial\mathcal{H}/\partial\pi$ are $(\mathcal{S}_x,\theta,\mathcal{S}_\theta)$.

Corollary 7.3.1. Putting $\mathcal{A}$ as the square root of $\mathcal{D}$, we have
$$\partial_t\mathcal{A}+\mathcal{A}_x\tilde{\mathcal H}_\xi+\mathcal{A}_\theta\tilde{\mathcal H}_\pi+\frac{1}{2}\mathcal{A}\big[\partial_x(\tilde{\mathcal H}_\xi)+\partial_\theta(\tilde{\mathcal H}_\pi)\big]=0.$$
In our case, we have
$$\partial_t\mathcal{A}+\frac{1}{2}\mathcal{A}\,\partial_\theta(\tilde{\mathcal H}_\pi)=0.\eqno(7.3.24)$$

Proof. Since $\mathcal{A}=\bar\delta(t)/(a|\xi|)$, $\mathcal{A}_{x_j}=0$, $\mathcal{A}_{\theta_k}=0$ and
$$\partial_{x_k}(\tilde{\mathcal H}_{\xi_k})=\mathcal{S}_{x_kx_j}\tilde{\mathcal H}_{\xi_j\xi_k}+\mathcal{S}_{x_k\theta_b}\tilde{\mathcal H}_{\pi_b\xi_k}=0,\qquad \partial_{\theta_a}(\tilde{\mathcal H}_{\pi_a})=\mathcal{S}_{\theta_ax_j}\tilde{\mathcal H}_{\xi_j\pi_a}+\tilde{\mathcal H}_{\theta_a\pi_a}+\mathcal{S}_{\theta_a\theta_b}\tilde{\mathcal H}_{\pi_b\pi_a},$$
we get the desired result. $\square$

7.4. Quantization

To compare with Jacobi's method in A. Inoue [71], we reproduce some calculations therein.

7.4.1. Feynman's Quantization. To quantize, it seems better to change the order of independent variables from $(x,\xi,\theta,\pi)$ to $(x,\theta,\xi,\pi)$. We "define" an operator

$$(\mathcal{U}(t)u)(x,\theta)=(2\pi\hbar)^{-3/2}\bar k\iint d\xi d\pi\,\mathcal{A}(t,\xi)e^{i\hbar^{-1}\mathcal{S}(t,x,\theta,\xi,\pi)}(\mathcal{F}u)(\xi,\pi)\quad\text{with}\quad \mathcal{A}(t,\xi)=\frac{\bar\delta(t)}{a|\xi|}.\eqno(7.4.1)$$
Following Feynman's idea, it seems natural to have

Claim 7.4.1 (From $\mathcal{H}(\xi,\theta,\pi)$ to $\hat{\mathcal H}^W(-i\hbar\partial_x,\theta,\partial_\theta)$). If $a=\hbar\bar k^{-1}=1$, we have
$$i\hbar\frac{\partial}{\partial t}(\mathcal{U}(t)u)(x,\theta)\Big|_{t=0}=\hat{\mathcal H}_0(-i\hbar\partial_x,\theta,\partial_\theta)u(x,\theta)=\hat{\mathcal H}^W(-i\hbar\partial_x,\theta,\partial_\theta)u(x,\theta),\eqno(7.4.2)$$
where $\hat{\mathcal H}_0$ is given in (7.2.3) with $\bar\lambda=i$. Moreover, for $\mathcal{H}(\xi,\theta,\pi)$ in (7.2.6), we put
$$\hat{\mathcal H}^W(-i\hbar\partial_x,\theta,\partial_\theta)u(x,\theta)=(2\pi\hbar)^{-3}\bar k^2\iint d\xi d\pi dx'd\theta'\,e^{i\hbar^{-1}\langle X-X'|\Xi\rangle}\mathcal{H}\Big(\xi,\frac{\theta+\theta'}{2},\pi\Big)(u_0(x')+u_1(x')\theta_1'\theta_2'),$$
for $X=(x,\theta)$, $X'=(x',\theta')$, $\Xi=(\xi,\pi)$, $\langle X|\Xi\rangle=\langle x|\xi\rangle+\hbar\bar k^{-1}\langle\theta|\pi\rangle$.

Proof. Remarking $\partial_x[\mathcal{A}(t,\xi)\mathcal{H}_\xi(\mathcal{S}_x,\theta,\mathcal{S}_\theta)]=0$ and applying the Hamilton-Jacobi (7.3.1) and continuity (7.3.24) equations, we have
$$\frac{\partial}{\partial t}(\mathcal{A}e^{i\hbar^{-1}\mathcal{S}})=(\mathcal{A}_t+i\hbar^{-1}\mathcal{A}\mathcal{S}_t)e^{i\hbar^{-1}\mathcal{S}}=-\mathcal{A}[\cdots]e^{i\hbar^{-1}\mathcal{S}},\eqno(7.4.3)$$
where
$$[\cdots]=\frac{1}{2}\partial_{\theta_a}(\tilde{\mathcal H}_{\pi_a})+i\hbar^{-1}\tilde{\mathcal H}=c\bar k^{-1}|\xi|\frac{|\xi|\sin\gamma_t+i\xi_3\cos\gamma_t}{\bar\delta(t)}+ic\hbar^{-1}\frac{|\xi|^2}{\bar\delta(t)^2}\big[\zeta\theta_1\theta_2-a\bar k^{-1}(|\xi|\sin\gamma_t+i\xi_3\cos\gamma_t)\langle\theta|\pi\rangle+a^2\bar k^{-2}\bar\zeta\pi_1\pi_2\big].\eqno(7.4.4)$$
Therefore,
$$\frac{\partial}{\partial t}(\mathcal{A}e^{i\hbar^{-1}\mathcal{S}})\Big|_{t=0}=\Big[-\frac{ic\bar k^{-1}}{a}(1-ia\hbar^{-1}\langle\theta|\pi\rangle)\xi_3-\frac{ic\hbar^{-1}\zeta}{a}\theta_1\theta_2-ica\hbar^{-1}\bar k^{-2}\bar\zeta\pi_1\pi_2\Big]e^{i\hbar^{-1}\mathcal{S}(0,x,\theta,\xi,\pi)}.$$

Since $\mathcal{S}(0,x,\theta,\xi,\pi)=\langle x|\xi\rangle+a\langle\theta|\pi\rangle$, we have
$$\int d\pi\,e^{i\hbar^{-1}\mathcal{S}(0,x,\theta,\xi,\pi)}(1-ia\hbar^{-1}\langle\theta|\pi\rangle)(\bar k^2\hat u_1+\hat u_0\pi_1\pi_2)=e^{i\hbar^{-1}\langle x|\xi\rangle}[\hat u_0-a^2\hbar^{-2}\bar k^2\hat u_1\theta_1\theta_2],$$
$$\int d\pi\,e^{i\hbar^{-1}\mathcal{S}(0,x,\theta,\xi,\pi)}\theta_1\theta_2(\bar k^2\hat u_1+\hat u_0\pi_1\pi_2)=e^{i\hbar^{-1}\langle x|\xi\rangle}\hat u_0\theta_1\theta_2,$$
$$\int d\pi\,e^{i\hbar^{-1}\mathcal{S}(0,x,\theta,\xi,\pi)}\pi_1\pi_2(\bar k^2\hat u_1+\hat u_0\pi_1\pi_2)=e^{i\hbar^{-1}\langle x|\xi\rangle}\bar k^2\hat u_1.$$
These imply that
$$\int d\pi\,\frac{\partial}{\partial t}(\mathcal{A}e^{i\hbar^{-1}\mathcal{S}})\Big|_{t=0}(\bar k^2\hat u_1+\hat u_0\pi_1\pi_2)=e^{i\hbar^{-1}\langle x|\xi\rangle}\Big[-\frac{ic\bar k^{-1}}{a}\xi_3(\hat u_0-a^2\hbar^{-2}\bar k^2\hat u_1\theta_1\theta_2)-\frac{ic\hbar^{-1}\zeta}{a}\hat u_0\theta_1\theta_2-iac\hbar^{-1}\bar\zeta\hat u_1\Big],$$
which proves, if $a=\hbar\bar k^{-1}=1$,

$$i\hbar\frac{\partial}{\partial t}(\mathcal{U}(t)u)(x,\theta)\Big|_{t=0}=(2\pi\hbar)^{-3/2}\int d\xi\,e^{i\hbar^{-1}\langle x|\xi\rangle}c\big[\xi_3(\hat u_0-\hat u_1\theta_1\theta_2)+\zeta\hat u_0\theta_1\theta_2+\bar\zeta\hat u_1\big]$$

$$=(\hat{\mathcal H}_0(-i\hbar\partial_x,\theta,\partial_\theta)u)(x,\theta).\qquad\square$$
More generally, we have

Claim 7.4.2. For any $t$, we have
$$i\hbar\frac{\partial}{\partial t}(\mathcal{U}(t)u)(x,\theta)=\hat{\mathcal H}_0(-i\hbar\partial_x,\theta,\partial_\theta)(\mathcal{U}(t)u)(x,\theta)=\hat{\mathcal H}^W(-i\hbar\partial_x,\theta,\partial_\theta)(\mathcal{U}(t)u)(x,\theta).$$
Proof 1 [Reduction to matrix form]. Putting
$$v_0(t,x)=(\mathcal{U}(t)u)(x,\theta)\big|_{\theta=0},\qquad v_1(t,x)=\partial_{\theta_2}\partial_{\theta_1}(\mathcal{U}(t)u)(x,\theta)\big|_{\theta=0},$$
we get

$$\mathcal{U}(t)u(x,\theta)=v_0(t,x)+v_1(t,x)\theta_1\theta_2,\eqno(7.4.5)$$
where
$$v_0(t,x)=(2\pi\hbar)^{-3/2}\int d\xi\,e^{i\hbar^{-1}\langle x|\xi\rangle}V_0(t,\xi),\qquad v_1(t,x)=(2\pi\hbar)^{-3/2}\int d\xi\,e^{i\hbar^{-1}\langle x|\xi\rangle}V_1(t,\xi),\eqno(7.4.6)$$
with
$$\begin{pmatrix}V_0(t,\xi)\\V_1(t,\xi)\end{pmatrix}=\frac{1}{a|\xi|}\begin{pmatrix}\bar\delta(t)&-i\hbar^{-1}\bar ka^2\bar\zeta\sin\gamma_t\\-i\hbar^{-1}\bar k\zeta\sin\gamma_t&a^2\hbar^{-2}\bar k^2\delta(t)\end{pmatrix}\begin{pmatrix}\hat u_0(\xi)\\\hat u_1(\xi)\end{pmatrix}.\eqno(7.4.7)$$
Therefore, when $a=\hbar\bar k^{-1}=1$, the above reduces to the matrix representation (7.0.6).

In fact, rewriting
$$\mathcal{U}(t)u(x,\theta)=(2\pi\hbar)^{-3/2}\int_{\mathbb{R}^{3|0}}d\xi\,\frac{\bar\delta(t)}{a|\xi|}\exp\{i\hbar^{-1}(\langle x|\xi\rangle-\bar k\bar\delta(t)^{-1}\zeta\sin\gamma_t\,\theta_1\theta_2)\}\times\int_{\mathbb{R}^{0|2}}d\pi\,\exp\{i\hbar^{-1}\bar\delta(t)^{-1}(a|\xi|\langle\theta|\pi\rangle-a^2\bar k^{-1}\bar\zeta\sin\gamma_t\,\pi_1\pi_2)\}(\bar k^2\hat u_1+\hat u_0\pi_1\pi_2),$$
and since

$$\int_{\mathbb{R}^{0|2}}d\pi\,\exp\{i\hbar^{-1}\bar\delta(t)^{-1}(a|\xi|\langle\theta|\pi\rangle-a^2\bar k^{-1}\bar\zeta\sin\gamma_t\,\pi_1\pi_2)\}(\bar k^2\hat u_1+\hat u_0\pi_1\pi_2)$$
$$=\hat u_0+\bar k^2\hat u_1\,\partial_{\pi_2}\partial_{\pi_1}\exp\{i\hbar^{-1}\bar\delta(t)^{-1}(a|\xi|\langle\theta|\pi\rangle-a^2\bar k^{-1}\bar\zeta\sin\gamma_t\,\pi_1\pi_2)\}\big|_{\pi=0}$$
$$=\hat u_0+\hat u_1\big(-i\hbar^{-1}\bar ka^2\bar\delta(t)^{-1}\bar\zeta\sin\gamma_t+a^2\hbar^{-2}\bar k^2\bar\delta(t)^{-2}|\xi|^2\theta_1\theta_2\big),$$
we have

$$(\mathcal{U}(t)u)(x,\theta)=(2\pi\hbar)^{-3/2}\int_{\mathbb{R}^{3|0}}d\xi\,\frac{1}{a|\xi|}\exp\{i\hbar^{-1}(\langle x|\xi\rangle-\bar k\bar\delta(t)^{-1}\zeta\sin\gamma_t\,\theta_1\theta_2)\}\times\big(\bar\delta(t)\hat u_0+\hat u_1(-i\hbar^{-1}\bar ka^2\bar\zeta\sin\gamma_t+a^2\hbar^{-2}\bar k^2\bar\delta(t)^{-1}|\xi|^2\theta_1\theta_2)\big).\ /\!/$$
Remark 7.4.1. The above formula (7.4.7) with $a=\hbar\bar k^{-1}=1$ gives the proof of Claim 7.2.1.

Proof 2 [Another direct calculation]. Interchanging the order of differentiation and integration under the integral sign, we get

∂ ∂ −1 i~ ( (t)u)(x,θ)= i~(2π~)−3/2~ dξdπ ( ei~ S )( u)(ξ,π) ∂t U ∂t A F ZZ −1 = i~(2π~)−3/2 dξ dπ [ ]ei~ S(~2uˆ +u ˆ π π ) − A · · · 1 0 1 2 Z  Z  where [ ] is given in (7.4.4). On the other hand, remarking Weyl quantization of θ π gives · · · h | i 1 θ ∂ , − a θa ˆW ( i~∂ ,θ,∂ )( (t)u)(x,θ) H − x θ U −1 = (2π~)−3/2~ dξdπ (ξ,θ,∂ )[ ei~ S ](~2uˆ +u ˆ π π ) H θ A 1 1 1 2 ZZ −1 = (2π~)−3/2 dξ dπ ei~ S (~2uˆ +u ˆ π π ) , A{· · · } 1 1 1 2 Z  Z  where

−1 −1 (ξ,θ,∂ )ei~ S = ei~ S H θ {· · · } with ∂2 ∂ ∂ (ξ,θ,∂ )= cζθ θ cζ¯ + cξ 1 θ θ H θ 1 2 − ∂θ ∂θ 3 − 1 ∂θ − 2 ∂θ 1 2  1 2  7.4. QUANTIZATION 131 and ia2~−1¯kζ sin γ ~−2 = cζθ θ + cζ¯ t + a2 ξ 2π π a¯kζ sin γ ξ θ π {· · · } 1 2 − δ¯(t) δ¯(t)2 | | 1 2 − t| |h | i  i~−1 +k ¯2ζ2 sin2 γθ θ + cξ 1 ξ θ π 2¯kζ sin γ θ θ 1 2 3 − δ¯(t) | |h | i− t 1 2 (7.4.8)    = c ξ δ¯(t)−1(ξ cos γ i ξ sin γ )   | | 3 t − | | t + cδ¯(t)−2 ξ 2 ζθ θ + ~−2ζπ¯ π | | 1 2 1 2 −1 −1 + i~ ( ξ3 cos γt + i ξ sin γt) θ π when a = ~¯k = 1. − | | h | i Comparing (7.4.4) and (7.4.8), we have i~[ ]= when a= ~¯k−1 = 1, which implies − · · · {· · · } ∂ ~ ∂ ∂ i~ (t)u(x,θ)= ˆW , θ, ( (t)u)(x,θ).  ∂tU H i ∂x ∂θ U   Remark 7.4.2. In calculating the right-hand side above, here we used the closed explicit form. But, in general, we must establish the composition formula of pseudo-differential operators of Weyl type and Fourier integral operators of above type (see, Theorem 4.5 of Inoue and Maeda [?]).

7.4.2. L2 boundedness. In order to prove the unitarity of the operator (t), rewriting (7.4.6) U with a = ~¯k−1 = 1, using the Parceval equality only w.r.t. x or ξ, we have

Proposition 7.4.1. (t)u = u in /2 (R3|2). ||U || || || LSS

Proof. From above calculation, we have δ¯(t) 2 V (t,ξ) 2 = | | (ˆu (ξ) iδ¯(t)−1ζ¯sin γ uˆ (ξ))(ˆu (ξ) iδ¯(t)−1ζ¯sin γ uˆ (ξ)), | 0 | ξ 2 0 − t 1 0 − t 1 | | δ¯(t) 2 V (t,ξ) 2 = | | ( iδ¯(t)−1ζ sin γ uˆ (ξ)+ δ¯(t)−1δ(t)ˆu (ξ))( iδ¯(t)−1ζ sin γ uˆ (ξ)+ δ¯(t)−1δ(t)ˆu (ξ)), | 1 | ξ 2 − t 0 1 − t 0 1 | | therefore

(t)u 2 = dξ( V (t,ξ) 2 + V (t,ξ) 2)= dξ( uˆ (ξ) 2 + uˆ (ξ) 2)= u 2.  ||U || | 0 | | 1 | | 0 | | 1 | || || Z Z

7.4.3. Continuity.

Proposition 7.4.2. lim (t)u u = 0 t→0 kU − k

Proof. Since

−1 ( (t)u)(x,θ) u(x,θ) = (2π~)−3/2 dξ ei~ hx|ξi[V (t,ξ) V (0,ξ) + (V (t,ξ) V (0,ξ))θ θ ], U − 0 − 0 1 − 1 1 2 Z we have

(t)u u 2 = V (t,ξ) V (0,ξ) 2 + V (t,ξ) V (0,ξ) 2 0 when t 0.  kU − k k 0 − 0 k k 1 − 1 k → → 132 7. FUNDAMENTAL SOLUTION OF FREE WEYL EQUATION A` LA FEYNMAN

Problem 7.4.1. Prove above Proposition using

( (t)u)(x,θ) u(x,θ) U − −1 −1 = (2π~)−3/2~ dξdπ [ (t,ξ)ei~ S(t,x,ξ,θ,π) (0,ξ)ei~ S(0,x,ξ,θ,π)]( u)(ξ,π) A −A F ZZ t d −1 = (2π~)−3/2~ dξdπ ds ( (s,ξ)ei~ S(s,x,ξ,θ,π)) ( u)(ξ,π) ds A F ZZ  Z0  t d −1 = (2π~)−3/2 dξ ds dπ ( (s,ξ)ei~ S(s,x,ξ,θ,π))( u)(ξ,π) . ds A F Z Z0  Z 

7.4.4. Evolutional property.

Proposition 7.4.3.
$$\mathcal{U}(t)\mathcal{U}(s)u = \mathcal{U}(t+s)u \quad\text{for any } u\in \mathcal{C}_{SS,0}(\mathfrak{R}^{3|2}).$$

Proof. Denoting $\mathcal{U}(s)u = u(s,x,\theta) = v_0(s,x) + v_2(s,x)\theta_1\theta_2$, we have
$$\begin{aligned}
\mathcal{U}(t)u(s,x,\theta) &= (2\pi\hbar)^{-3/2}\int d\xi\, e^{i\hbar^{-1}\langle x|\xi\rangle}\mathcal{A}(t,\xi)\mathcal{A}(s,\xi)\Big\{(\hat u_0 - i\delta(s)^{-1}\bar\zeta\sin\gamma_s\,\hat u_1) - i\delta(t)^{-1}\bar\zeta\sin\gamma_t\,\delta(s)^{-1}\big({-i}\zeta\sin\gamma_s\,\hat u_0 + \bar\delta(s)\hat u_1\big)\Big\}\\
&\quad + (2\pi\hbar)^{-3/2}\int d\xi\, e^{i\hbar^{-1}\langle x|\xi\rangle}\mathcal{A}(t,\xi)\mathcal{A}(s,\xi)\Big\{{-i}\delta(t)^{-1}\zeta\sin\gamma_t\big(\hat u_0 - i\delta(s)^{-1}\bar\zeta\sin\gamma_s\,\hat u_1\big) + \delta(t)^{-1}\bar\delta(t)\delta(s)^{-1}\big({-i}\zeta\sin\gamma_s\,\hat u_0 + \bar\delta(s)\hat u_1\big)\Big\}\theta_1\theta_2.
\end{aligned}$$
By simple calculation, we get
$$|\xi|^{-2}\big(\delta(t)\delta(s) - |\zeta|^2\sin\gamma_t\sin\gamma_s\big)\hat u_0 = |\xi|^{-1}\delta(t+s)\hat u_0,$$
$$-i\big(\delta(t)\bar\zeta\sin\gamma_s + \bar\delta(s)\bar\zeta\sin\gamma_t\big)\hat u_1 = -i\bar\zeta\,|\xi|^{-1}\sin\gamma_{t+s}\,\hat u_1,$$
and
$$|\xi|^{-1}\delta(t+s)\hat u_0 - i|\xi|^{-1}\bar\zeta\sin\gamma_{t+s}\,\hat u_1 = \mathcal{A}(t+s,\xi)\big[\hat u_0 - i\delta(t+s)^{-1}\bar\zeta\sin\gamma_{t+s}\,\hat u_1\big].$$

Analogously, the coefficient of $\theta_1\theta_2$ is calculated as
$$|\xi|^{-1}\big[{-i}\zeta\sin\gamma_{t+s}\,\hat u_0 + \bar\delta(t+s)\hat u_1\big].$$
Therefore, we have the evolutional property. $\Box$
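At fixed momentum $\xi$, the free Weyl propagator is the $2\times 2$ unitary matrix $U(t) = \cos(|\xi|t)I - i\sin(|\xi|t)\,\sigma\cdot\xi/|\xi|$, a standard closed form for $e^{-it\,\sigma\cdot\xi}$; the group law and unitarity just proved can be seen there directly. The particular $\xi$ and the normalization $\hbar = c = 1$ below are my own choices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

xi = np.array([1.0, -0.5, 2.0])         # fixed momentum
H = xi[0]*sx + xi[1]*sy + xi[2]*sz      # symbol of the free Weyl operator
r = np.linalg.norm(xi)

def U(t):
    # closed form of exp(-i t H) for H = sigma . xi, using (sigma.n)^2 = I
    return np.cos(r*t)*np.eye(2) - 1j*np.sin(r*t)*H/r

t, s = 0.7, 1.3
group = np.allclose(U(t) @ U(s), U(t + s))               # U(t)U(s) = U(t+s)
unitary = np.allclose(U(t).conj().T @ U(t), np.eye(2))   # isometry in L^2
```

The trigonometric addition formulas are exactly what the identities for $\delta$, $\bar\delta$ and $\sin\gamma$ encode in the super setting.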

7.5. Fundamental solution

Though we have proved $\mathcal{U}(t_2)\mathcal{U}(t_1) = \mathcal{U}(t_1+t_2)$ by using the explicit representation in this case, we reconsider this evolutional property as a product formula for FIOp, modifying the arguments in Chapter 10 of H. Kumano-go [91].

7.5.1. Super version of sharp products of phase functions. For the future study of the resolution of Feynman's problem, we proceed without using the explicit formula for the solution of the H-J equation as far as possible:

Let $H(t,X,\Xi)$ be given as $C^\infty(\mathbb{R}\times T^*\mathfrak{R}^{3|2} : \mathfrak{C})$.
Assumption (A) There exists $T > 0$ such that for any $T$

Assumption (B) For any $T$

$$\partial_s\mathcal{S}(t,s,X,\Xi) - \mathcal{H}\big(s,(-1)^{p(\Xi)}\partial_\Xi\mathcal{S}(t,s,X,\Xi),\Xi\big) = 0. \tag{7.5.3}$$
Putting
$$\varphi(t_2,t_1,t_0,X^2,\Xi^1,X^1,\Xi^0) = \mathcal{S}(t_2,t_1,X^2,\Xi^1) - \langle X^1|\Xi^1\rangle + \mathcal{S}(t_1,t_0,X^1,\Xi^0), \tag{7.5.4}$$
we need to get critical points $(\tilde X^1,\tilde\Xi^1)$ of $\varphi(t_2,t_1,t_0,X^2,\Xi^1,X^1,\Xi^0)$ w.r.t. $\Xi^1$, $X^1$, which satisfy
$$\partial_{X^1}\varphi(t_2,t_1,t_0,X^2,\tilde\Xi^1,\tilde X^1,\Xi^0) = 0, \qquad \partial_{\Xi^1}\varphi(t_2,t_1,t_0,X^2,\tilde\Xi^1,\tilde X^1,\Xi^0) = 0.$$

Assumption (C) For given $(t_2,t_1,t_0,X^2,\Xi^0)$, there exists a solution $(\tilde X^1,\tilde\Xi^1)$, denoted by $(\tilde X^1,\tilde\Xi^1) = (\tilde X^1(t_2,t_1,t_0,X^2,\Xi^0),\ \tilde\Xi^1(t_2,t_1,t_0,X^2,\Xi^0))$, which satisfies
$$\partial_{X^1}\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0) - \tilde\Xi^1 = 0, \qquad \partial_{\Xi^1}\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1) - (-1)^{p(\Xi^1)}\tilde X^1 = 0. \tag{7.5.5}$$

Proposition 7.5.1. Defining the $\#$- (called sharp) product as

$$[\mathcal{S}(t_2,t_1)\#\mathcal{S}(t_1,t_0)](X^2,\Xi^0) = \varphi(t_2,t_1,t_0,X^2,\Xi^1,X^1,\Xi^0)\Big|_{\substack{X^1=\tilde X^1(t_2,t_1,t_0,X^2,\Xi^0)\\ \Xi^1=\tilde\Xi^1(t_2,t_1,t_0,X^2,\Xi^0)}},$$
then
$$[\langle X|\Xi\rangle\#\mathcal{S}(t,s)](X^2,\Xi^0) = [\mathcal{S}(t,s)\#\langle X|\Xi\rangle](X^2,\Xi^0) = \mathcal{S}(t,s,X^2,\Xi^0), \tag{7.5.6}$$
$$[\mathcal{S}(t_2,t_1)\#\mathcal{S}(t_1,t_0)](X^2,\Xi^0) = \mathcal{S}(t_2,t_0,X^2,\Xi^0) \quad\text{for any } t_1 \text{ with } t_0\le t_1\le t_2. \tag{7.5.7}$$

Proof of (7.5.6): Put
$$\varphi = \langle X^2|\Xi^1\rangle - \langle X^1|\Xi^1\rangle + \mathcal{S}(t,s,X^1,\Xi^0).$$
Since
$$\partial_{X^1}\varphi(X^2,\Xi^1,X^1,\Xi^0) = -\Xi^1 + \partial_{X^1}\mathcal{S}(t,s,X^1,\Xi^0) = 0, \qquad \partial_{\Xi^1}\varphi(X^2,\Xi^1,X^1,\Xi^0) = (-1)^{p(\Xi^1)}(X^2 - X^1) = 0,$$
we have
$$(\tilde X^1,\tilde\Xi^1) = \big(X^2,\ \partial_{X^1}\mathcal{S}(t,s,X^2,\Xi^0)\big).$$
Therefore
$$\varphi(X^2,\Xi^1,X^1,\Xi^0)\Big|_{X^1=X^2,\ \Xi^1=\tilde\Xi^1} = \mathcal{S}(t,s,X^2,\Xi^0). \ //$$

Proof of (7.5.7): Substituting the relations derived from (7.5.5), i.e.
$$\tilde X^1 = (-1)^{p(\Xi^1)}\partial_{\Xi^1}\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1), \qquad \tilde\Xi^1 = \partial_{X^1}\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0), \tag{7.5.8}$$
into the definition, we have
$$[\mathcal{S}(t_2,t_1)\#\mathcal{S}(t_1,t_0)](X^2,\Xi^0) = \mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1) - \langle\tilde X^1|\tilde\Xi^1\rangle + \mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0). \tag{7.5.9}$$

Differentiating (7.5.9) w.r.t. $t_1$, we have
$$\begin{aligned}
\partial_{t_1}[\mathcal{S}(t_2,t_1)\#\mathcal{S}(t_1,t_0)] &= \big(\partial_{t_1}\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1) + \partial_{t_1}\tilde\Xi^1\cdot\partial_\Xi\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1)\big)\\
&\quad - \big(\partial_{t_1}\tilde X^1\cdot\tilde\Xi^1 + \tilde X^1\cdot\partial_{t_1}\tilde\Xi^1\big) + \partial_{t_1}\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0) + \partial_{t_1}\tilde X^1\cdot\partial_X\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0).
\end{aligned}$$
Remarking (7.5.8), we get
$$\partial_{t_1}\tilde\Xi^1\cdot\partial_\Xi\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1) - \big(\partial_{t_1}\tilde X^1\cdot\tilde\Xi^1 + \tilde X^1\cdot\partial_{t_1}\tilde\Xi^1\big) + \partial_{t_1}\tilde X^1\cdot\partial_X\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0) = 0.$$
From (7.5.2) and (7.5.3), that is,
$$\partial_{t_1}\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1) = \mathcal{H}\big(t_1,(-1)^{p(\Xi)}\partial_{\Xi^1}\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1),\tilde\Xi^1\big), \qquad \partial_{t_1}\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0) = -\mathcal{H}\big(t_1,\tilde X^1,\partial_X\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0)\big),$$
applying (7.5.8) once more, we have
$$\partial_{t_1}[\mathcal{S}(t_2,t_1)\#\mathcal{S}(t_1,t_0)](X^2,\Xi^0) = \partial_{t_1}\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1) + \partial_{t_1}\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0) = \mathcal{H}\big(t_1,(-1)^{p(\Xi^1)}\partial_\Xi\mathcal{S}(t_2,t_1,X^2,\tilde\Xi^1),\tilde\Xi^1\big) - \mathcal{H}\big(t_1,\tilde X^1,\partial_X\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0)\big) = 0.$$

Since $t_1$ is arbitrary, taking $t_1 = t_0$ and using the relations
$$\mathcal{S}(t_0,t_0,\tilde X^1,\Xi^0) = \langle\tilde X^1|\Xi^0\rangle, \qquad \tilde\Xi^1 = \partial_{X^1}\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0)\big|_{t_1=t_0} = \Xi^0,$$
we have
$$[\mathcal{S}(t_2,t_1)\#\mathcal{S}(t_1,t_0)](X^2,\Xi^0) = [\mathcal{S}(t_2,t_1)\#\mathcal{S}(t_1,t_0)](X^2,\Xi^0)\big|_{t_1=t_0} = \mathcal{S}(t_2,t_0,X^2,\Xi^0). \ //$$
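The mechanics of Proposition 7.5.1 can be seen in the simplest purely even analogue (my own toy choice, with no odd variables): for the free nonrelativistic phase $S(\tau,x,\xi) = x\xi - \tau\xi^2/2$, the critical point of $\varphi$ is available in closed form and the sharp product reproduces $S(\tau_1+\tau_2)$:

```python
# toy sharp product: phi = S(t2, x2, xi1) - x1*xi1 + S(t1, x1, xi0)
def S(tau, x, xi):
    return x*xi - tau*xi**2/2

t2, t1 = 0.8, 0.5
x2, xi0 = 1.7, -0.6

# critical point: d(phi)/dx1 = -xi1 + xi0 = 0,  d(phi)/dxi1 = x2 - t2*xi1 - x1 = 0
xi1 = xi0
x1 = x2 - t2*xi0

sharp = S(t2, x2, xi1) - x1*xi1 + S(t1, x1, xi0)

# stationarity check by central finite differences
h = 1e-6
phi = lambda X1, Xi1: S(t2, x2, Xi1) - X1*Xi1 + S(t1, X1, xi0)
dx = (phi(x1 + h, xi1) - phi(x1 - h, xi1)) / (2*h)
dxi = (phi(x1, xi1 + h) - phi(x1, xi1 - h)) / (2*h)
```

The value `sharp` agrees with `S(t1 + t2, x2, xi0)`, which is the content of (7.5.7) in this scalar setting.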


7.5.2. Super version of products of FIOp. Denoting
$$\mathcal{K}(t_1,t_0,X^1,\Xi^0) = \mathcal{A}(t_1,t_0,X^1,\Xi^0)\,e^{i\hbar^{-1}\mathcal{S}(t_1,t_0,X^1,\Xi^0)},$$
we put, for $v(X^1) = (\mathcal{U}(t_1,t_0)u)(X^1)$,
$$(\mathcal{U}(t_1,t_0)u)(X^1) = c_{3,2}\iint_{\mathfrak{R}^{3|2}} d\Xi^0\,\mathcal{K}(t_1,t_0,X^1,\Xi^0)\,(\mathcal{F}u)(\Xi^0),$$
$$(\mathcal{U}(t_2,t_1)v)(X^2) = c_{3,2}\iint_{\mathfrak{R}^{3|2}} d\Xi^1\,\mathcal{K}(t_2,t_1,X^2,\Xi^1)\,(\mathcal{F}v)(\Xi^1).$$
Changing the order of integration rather freely, we have
$$[\mathcal{U}(t_2,t_1)(\mathcal{U}(t_1,t_0)u)](X^2) = c_{3,2}\iint_{\mathfrak{R}^{3|2}} d\Xi^1\,\mathcal{K}(t_2,t_1,X^2,\Xi^1)\iint_{\mathfrak{R}^{3|2}} dX^1\, e^{-i\hbar^{-1}\langle X^1|\Xi^1\rangle}\Big(\iint_{\mathfrak{R}^{3|2}} d\Xi^0\,\mathcal{K}(t_1,t_0,X^1,\Xi^0)\,(\mathcal{F}u)(\Xi^0)\Big)$$
$$= c_{3,2}\iint_{\mathfrak{R}^{3|2}} d\Xi^0\,[\mathcal{K}(t_2,t_1)\star\mathcal{K}(t_1,t_0)](X^2,\Xi^0)\,(\mathcal{F}u)(\Xi^0),$$
where we put
$$\begin{aligned}
[\mathcal{K}(t_2,t_1)\star\mathcal{K}(t_1,t_0)](X^2,\Xi^0) &= c_{3,2}\iint_{\mathfrak{R}^{6|4}} d\Xi^1 dX^1\,\mathcal{K}(t_2,t_1,X^2,\Xi^1)\,e^{-i\hbar^{-1}\langle X^1|\Xi^1\rangle}\,\mathcal{K}(t_1,t_0,X^1,\Xi^0)\\
&= c_{3,2}\iint_{\mathfrak{R}^{6|4}} d\Xi^1 dX^1\,\mathcal{A}(t_2,t_1,X^2,\Xi^1)\mathcal{A}(t_1,t_0,X^1,\Xi^0)\,e^{i\hbar^{-1}\varphi(t_2,t_1,t_0,X^2,\Xi^1,X^1,\Xi^0)}
\end{aligned} \tag{7.5.10}$$
with $\varphi(t_2,t_1,t_0,X^2,\Xi^1,X^1,\Xi^0)$ defined in (7.5.4). Though we want to show, by direct calculation with a certain remainder term if necessary,

$$[\mathcal{K}(t_2,t_1)\star\mathcal{K}(t_1,t_0)](X^2,\Xi^0) = \mathcal{K}(t_2,t_0,X^2,\Xi^0), \tag{7.5.11}$$
this procedure is philosophically simple (because we know how to proceed in the ordinary Schrödinger case) but seems technically complicated even for the free Weyl equation (because the odd variables remain as "variable coefficients").

To prove this as generally as possible, we need to calculate the Taylor expansion of $\varphi$ defined in (7.5.4) at $(\tilde X^1,\tilde\Xi^1)$. Putting
$$(\eta_j,\varpi_a) = \Upsilon = \Xi^1 - \tilde\Xi^1 = (\xi^1_j - \tilde\xi^1_j,\ \pi^1_a - \tilde\pi^1_a), \qquad (y_k,\vartheta_b) = Y = X^1 - \tilde X^1 = (x^1_k - \tilde x^1_k,\ \theta^1_b - \tilde\theta^1_b),$$
$$\mathbf{a} = (\alpha,a),\quad \alpha = (\alpha_1,\alpha_2,\alpha_3),\quad a = (a_1,a_2),\quad |\mathbf{a}| = |\alpha| + |a|,\quad \mathbf{b} = (\beta,b),\ \text{etc.},$$
$$\partial_X^{\mathbf a}\partial_\Xi^{\mathbf b} = \partial_{x^1}^\alpha\partial_{\theta^1}^a\partial_{\xi^1}^\beta\partial_{\pi^1}^b \sim \partial_Y^{\mathbf a}\partial_\Upsilon^{\mathbf b} = \partial_y^\alpha\partial_\vartheta^a\partial_\eta^\beta\partial_\varpi^b,$$
$$\Phi(s\Upsilon,sY) = \Phi(s\eta_j,sy_k,s\varpi_a,s\vartheta_b) = \varphi(t_2,t_1,t_0,X^2,\tilde\Xi^1 + s\Upsilon,\tilde X^1 + sY,\Xi^0),$$
we have, abbreviating the dependence on $(X^2,\Xi^0)$,
$$\Phi(\Upsilon,Y) = \Phi(0,0) + \frac{d}{ds}\Phi(0,0) + \frac{1}{2!}\frac{d^2}{ds^2}\Phi(0,0) + R, \quad\text{with}\quad R = R(\Upsilon,Y) = \int_0^1 ds\,\frac{(1-s)^2}{2!}\frac{d^3}{ds^3}\Phi(s\Upsilon,sY).$$

Claim 7.5.1. By the definition of $(\tilde X^1,\tilde\Xi^1)$, we have
$$\Phi(0,0) = \mathcal{S}(t_2,t_0,X^2,\Xi^0), \tag{7.5.12}$$
$$\frac{d}{ds}\Phi(0,0) = \Upsilon\cdot\partial_\Upsilon\Phi(0,0) + Y\cdot\partial_Y\Phi(0,0), \tag{7.5.13}$$
$$\frac{d^2}{ds^2}\Phi(0,0) = \sum_{|\mathbf a|+|\mathbf b|=2}\Upsilon^{\mathbf a}Y^{\mathbf b}\,\partial_Y^{\mathbf b}\partial_\Upsilon^{\mathbf a}\Phi(0,0), \tag{7.5.14}$$
$$\frac{d^3}{ds^3}\Phi(s\Upsilon,sY) = \sum_{|\mathbf a|+|\mathbf b|=3}\Upsilon^{\mathbf a}Y^{\mathbf b}\,\partial_Y^{\mathbf b}\partial_\Upsilon^{\mathbf a}\Phi(s\Upsilon,sY). \tag{7.5.15}$$

Proof of Claim 7.5.1. By the definition of the $\#$-product, we have (7.5.12). By (7.5.5), (7.5.13) is obtained from
$$\frac{d}{ds}\Phi = \eta_j\Phi_{\eta_j} + y_k\Phi_{y_k} + \varpi_a\Phi_{\varpi_a} + \vartheta_b\Phi_{\vartheta_b}.$$
(7.5.14) is rewritten as
$$\begin{aligned}
\frac{d^2}{ds^2}\Phi &= \eta_j\big[\eta_{j'}\Phi_{\eta_{j'}\eta_j} + y_{k'}\Phi_{y_{k'}\eta_j} + \varpi_{a'}\Phi_{\varpi_{a'}\eta_j} + \vartheta_{b'}\Phi_{\vartheta_{b'}\eta_j}\big]\\
&\quad + y_k\big[\eta_{j'}\Phi_{\eta_{j'}y_k} + y_{k'}\Phi_{y_{k'}y_k} + \varpi_{a'}\Phi_{\varpi_{a'}y_k} + \vartheta_{b'}\Phi_{\vartheta_{b'}y_k}\big]\\
&\quad + \varpi_a\big[\eta_{j'}\Phi_{\eta_{j'}\varpi_a} + y_{k'}\Phi_{y_{k'}\varpi_a} + \varpi_{a'}\Phi_{\varpi_{a'}\varpi_a} + \vartheta_{b'}\Phi_{\vartheta_{b'}\varpi_a}\big]\\
&\quad + \vartheta_b\big[\eta_{j'}\Phi_{\eta_{j'}\vartheta_b} + y_{k'}\Phi_{y_{k'}\vartheta_b} + \varpi_{a'}\Phi_{\varpi_{a'}\vartheta_b} + \vartheta_{b'}\Phi_{\vartheta_{b'}\vartheta_b}\big]\\
&= (\eta_j, y_k, \varpi_a, \vartheta_b)\cdot N\cdot{}^t(\eta_{j'}, y_{k'}, \varpi_{a'}, \vartheta_{b'}),
\end{aligned}$$
where
$$N = \begin{pmatrix} \Phi_{\eta_{j'}\eta_j} & \Phi_{y_{k'}\eta_j} & -\Phi_{\varpi_{a'}\eta_j} & -\Phi_{\vartheta_{b'}\eta_j}\\ \Phi_{\eta_{j'}y_k} & \Phi_{y_{k'}y_k} & -\Phi_{\varpi_{a'}y_k} & -\Phi_{\vartheta_{b'}y_k}\\ \Phi_{\eta_{j'}\varpi_a} & \Phi_{y_{k'}\varpi_a} & \Phi_{\varpi_{a'}\varpi_a} & \Phi_{\vartheta_{b'}\varpi_a}\\ \Phi_{\eta_{j'}\vartheta_b} & \Phi_{y_{k'}\vartheta_b} & \Phi_{\varpi_{a'}\vartheta_b} & \Phi_{\vartheta_{b'}\vartheta_b} \end{pmatrix} = \begin{pmatrix} N_{BB} & N_{BF}\\ N_{FB} & N_{FF} \end{pmatrix}. \tag{7.5.16}$$
More precisely,
$$N_{BB} = \begin{pmatrix}\Phi_{\eta_{j'}\eta_j} & \Phi_{y_{k'}\eta_j}\\ \Phi_{\eta_{j'}y_k} & \Phi_{y_{k'}y_k}\end{pmatrix} = \begin{pmatrix} \Phi_{\eta_1\eta_1} & \Phi_{\eta_2\eta_1} & \Phi_{\eta_3\eta_1} & \Phi_{y_1\eta_1} & \Phi_{y_2\eta_1} & \Phi_{y_3\eta_1}\\ \Phi_{\eta_1\eta_2} & \Phi_{\eta_2\eta_2} & \Phi_{\eta_3\eta_2} & \Phi_{y_1\eta_2} & \Phi_{y_2\eta_2} & \Phi_{y_3\eta_2}\\ \Phi_{\eta_1\eta_3} & \Phi_{\eta_2\eta_3} & \Phi_{\eta_3\eta_3} & \Phi_{y_1\eta_3} & \Phi_{y_2\eta_3} & \Phi_{y_3\eta_3}\\ \Phi_{\eta_1y_1} & \Phi_{\eta_2y_1} & \Phi_{\eta_3y_1} & \Phi_{y_1y_1} & \Phi_{y_2y_1} & \Phi_{y_3y_1}\\ \Phi_{\eta_1y_2} & \Phi_{\eta_2y_2} & \Phi_{\eta_3y_2} & \Phi_{y_1y_2} & \Phi_{y_2y_2} & \Phi_{y_3y_2}\\ \Phi_{\eta_1y_3} & \Phi_{\eta_2y_3} & \Phi_{\eta_3y_3} & \Phi_{y_1y_3} & \Phi_{y_2y_3} & \Phi_{y_3y_3} \end{pmatrix},$$
$$N_{BF} = \begin{pmatrix}-\Phi_{\varpi_{a'}\eta_j} & -\Phi_{\vartheta_{b'}\eta_j}\\ -\Phi_{\varpi_{a'}y_k} & -\Phi_{\vartheta_{b'}y_k}\end{pmatrix} = \begin{pmatrix} -\Phi_{\varpi_1\eta_1} & -\Phi_{\varpi_2\eta_1} & -\Phi_{\vartheta_1\eta_1} & -\Phi_{\vartheta_2\eta_1}\\ -\Phi_{\varpi_1\eta_2} & -\Phi_{\varpi_2\eta_2} & -\Phi_{\vartheta_1\eta_2} & -\Phi_{\vartheta_2\eta_2}\\ -\Phi_{\varpi_1\eta_3} & -\Phi_{\varpi_2\eta_3} & -\Phi_{\vartheta_1\eta_3} & -\Phi_{\vartheta_2\eta_3}\\ -\Phi_{\varpi_1y_1} & -\Phi_{\varpi_2y_1} & -\Phi_{\vartheta_1y_1} & -\Phi_{\vartheta_2y_1}\\ -\Phi_{\varpi_1y_2} & -\Phi_{\varpi_2y_2} & -\Phi_{\vartheta_1y_2} & -\Phi_{\vartheta_2y_2}\\ -\Phi_{\varpi_1y_3} & -\Phi_{\varpi_2y_3} & -\Phi_{\vartheta_1y_3} & -\Phi_{\vartheta_2y_3} \end{pmatrix},$$
$$N_{FB} = \begin{pmatrix}\Phi_{\eta_{j'}\varpi_a} & \Phi_{y_{k'}\varpi_a}\\ \Phi_{\eta_{j'}\vartheta_b} & \Phi_{y_{k'}\vartheta_b}\end{pmatrix} = \begin{pmatrix} \Phi_{\eta_1\varpi_1} & \Phi_{\eta_2\varpi_1} & \Phi_{\eta_3\varpi_1} & \Phi_{y_1\varpi_1} & \Phi_{y_2\varpi_1} & \Phi_{y_3\varpi_1}\\ \Phi_{\eta_1\varpi_2} & \Phi_{\eta_2\varpi_2} & \Phi_{\eta_3\varpi_2} & \Phi_{y_1\varpi_2} & \Phi_{y_2\varpi_2} & \Phi_{y_3\varpi_2}\\ \Phi_{\eta_1\vartheta_1} & \Phi_{\eta_2\vartheta_1} & \Phi_{\eta_3\vartheta_1} & \Phi_{y_1\vartheta_1} & \Phi_{y_2\vartheta_1} & \Phi_{y_3\vartheta_1}\\ \Phi_{\eta_1\vartheta_2} & \Phi_{\eta_2\vartheta_2} & \Phi_{\eta_3\vartheta_2} & \Phi_{y_1\vartheta_2} & \Phi_{y_2\vartheta_2} & \Phi_{y_3\vartheta_2} \end{pmatrix},$$
$$N_{FF} = \begin{pmatrix}\Phi_{\varpi_{a'}\varpi_a} & \Phi_{\vartheta_{b'}\varpi_a}\\ \Phi_{\varpi_{a'}\vartheta_b} & \Phi_{\vartheta_{b'}\vartheta_b}\end{pmatrix} = \begin{pmatrix} \Phi_{\varpi_1\varpi_1} & \Phi_{\varpi_2\varpi_1} & \Phi_{\vartheta_1\varpi_1} & \Phi_{\vartheta_2\varpi_1}\\ \Phi_{\varpi_1\varpi_2} & \Phi_{\varpi_2\varpi_2} & \Phi_{\vartheta_1\varpi_2} & \Phi_{\vartheta_2\varpi_2}\\ \Phi_{\varpi_1\vartheta_1} & \Phi_{\varpi_2\vartheta_1} & \Phi_{\vartheta_1\vartheta_1} & \Phi_{\vartheta_2\vartheta_1}\\ \Phi_{\varpi_1\vartheta_2} & \Phi_{\varpi_2\vartheta_2} & \Phi_{\vartheta_1\vartheta_2} & \Phi_{\vartheta_2\vartheta_2} \end{pmatrix}.$$
Concerning (7.5.15), we do not give the details here. // $\Box$
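The expansion with exact integral remainder used for $\Phi$ is just the scalar Taylor formula $f(1) = f(0) + f'(0) + \tfrac12 f''(0) + \int_0^1 \tfrac{(1-s)^2}{2!} f'''(s)\,ds$. A numerical sanity check on a throwaway test function $f(s) = e^{2s}$ (my own choice; the integral is approximated by a fine trapezoidal rule):

```python
import math

f = lambda s: math.exp(2*s)
f3 = lambda s: 8*math.exp(2*s)        # third derivative of f
g = lambda s: (1 - s)**2/2 * f3(s)    # integrand of the remainder

N = 200000
h = 1.0/N
R = h*(g(0)/2 + sum(g(k*h) for k in range(1, N)) + g(1)/2)

lhs = f(1)
rhs = f(0) + 2 + 4/2 + R              # f'(0) = 2, f''(0) = 4
```

The same identity, applied coordinate-wise in $s$, is what produces the remainder $R(\Upsilon,Y)$ above.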

In the above, to get an even super-matrix corresponding to the Jacobian for $\Phi(\Upsilon,Y)$, we have changed the order of variables as
$$\mathfrak{R}^{3|2}\times\mathfrak{R}^{3|2}\ni(\Xi^1,X^1) = (\xi^1_k,\pi^1_b,x^1_j,\theta^1_a) \to (\xi^1_k,x^1_j,\pi^1_b,\theta^1_a) = (\Xi^1_B,X^1_B,\Xi^1_F,X^1_F)\in\mathfrak{R}^{6|4},$$
$$\mathfrak{R}^{3|2}\times\mathfrak{R}^{3|2}\ni(\Upsilon,Y) = (\eta_j,\varpi_a,y_k,\vartheta_b) \to (\eta_j,y_k,\varpi_a,\vartheta_b) = (\Upsilon_B,Y_B,\Upsilon_F,Y_F)\in\mathfrak{R}^{6|4}.$$
Then, we need to calculate

$$c_{3,2}\iint_{\mathfrak{R}^{6|4}} d\Xi^1 dX^1\,\mathcal{A}(t_2,t_1,\Xi^1)\mathcal{A}(t_1,t_0,\Xi^1)\,e^{i\hbar^{-1}\varphi(t_2,t_1,t_0,X^2,\Xi^1,X^1,\Xi^0)} = c_{3,2}\iint_{\mathfrak{R}^{6|4}} d\Upsilon dY\,\mathcal{A}(t_2,t_1,\tilde\Xi^1+\Upsilon)\mathcal{A}(t_1,t_0,\Xi^0)\,e^{i\hbar^{-1}[\Phi(0,0)+(1/2)\ddot\Phi(0,0)+R]}.$$

For the free Weyl case. Now, we return to our special case, in which
$$\begin{aligned}
\varphi &= \mathcal{S}(\tau_2,x^2,\theta^2,\xi^1,\pi^1) - \langle x^1|\xi^1\rangle - \langle\theta^1|\pi^1\rangle + \mathcal{S}(\tau_1,x^1,\theta^1,\xi^0,\pi^0)\\
&= \langle x^2|\xi^1\rangle + \mathrm{C}^1\langle\theta^2|\pi^1\rangle - \mathrm{D}^1\theta^2_1\theta^2_2 - \mathrm{E}^1\pi^1_1\pi^1_2 - \langle x^1|\xi^1\rangle - \langle\theta^1|\pi^1\rangle + \langle x^1|\xi^0\rangle + \mathrm{C}^0\langle\theta^1|\pi^0\rangle - \mathrm{D}^0\theta^1_1\theta^1_2 - \mathrm{E}^0\pi^0_1\pi^0_2,
\end{aligned}$$
where $\tau_2 = t_2 - t_1$, $\tau_1 = t_1 - t_0$, $\zeta^* = \xi^*_1 + i\xi^*_2$, $\bar\zeta^* = \xi^*_1 - i\xi^*_2$ with $* = 0,1$, and
$$\mathrm{C}^1 = \frac{|\xi^1|}{\bar\delta(\tau_2,|\xi^1|)},\quad \mathrm{D}^1 = \frac{\hbar\zeta^1\sin\gamma(\tau_2,|\xi^1|)}{\bar\delta(\tau_2,|\xi^1|)},\quad \mathrm{E}^1 = \frac{\bar\zeta^1\sin\gamma(\tau_2,|\xi^1|)}{\hbar\bar\delta(\tau_2,|\xi^1|)},$$
$$\mathrm{C}^0 = \frac{|\xi^0|}{\bar\delta(\tau_1,|\xi^0|)},\quad \mathrm{D}^0 = \frac{\hbar\zeta^0\sin\gamma(\tau_1,|\xi^0|)}{\bar\delta(\tau_1,|\xi^0|)},\quad \mathrm{E}^0 = \frac{\bar\zeta^0\sin\gamma(\tau_1,|\xi^0|)}{\hbar\bar\delta(\tau_1,|\xi^0|)}.$$
From (7.5.5), we may define $\tilde\Xi^1 = (\tilde\xi^1_k,\tilde\pi^1_b)$, $\tilde X^1 = (\tilde x^1_j,\tilde\theta^1_a)$ satisfying
$$\begin{cases}
\varphi_{x^1_j} = \partial_{x^1_j}\mathcal{S}(t_1,t_0,\tilde X^1,\Xi^0) - \tilde\xi^1_j = -\tilde\xi^1_j + \xi^0_j = 0,\\
\varphi_{\xi^1_k} = x^2_k + \mathrm{C}^1_{\xi^1_k}\langle\theta^2|\tilde\pi^1\rangle - \mathrm{D}^1_{\xi^1_k}\theta^2_1\theta^2_2 - \mathrm{E}^1_{\xi^1_k}\tilde\pi^1_1\tilde\pi^1_2 - \tilde x^1_k = 0,\\
\varphi_{\theta^1_1} = -\tilde\pi^1_1 + \mathrm{C}^0\pi^0_1 - \mathrm{D}^0\tilde\theta^1_2 = 0,\\
\varphi_{\theta^1_2} = -\tilde\pi^1_2 + \mathrm{C}^0\pi^0_2 + \mathrm{D}^0\tilde\theta^1_1 = 0,\\
\varphi_{\pi^1_1} = -\mathrm{C}^1\theta^2_1 - \mathrm{E}^1\tilde\pi^1_2 + \tilde\theta^1_1 = 0,\\
\varphi_{\pi^1_2} = -\mathrm{C}^1\theta^2_2 + \mathrm{E}^1\tilde\pi^1_1 + \tilde\theta^1_2 = 0.
\end{cases}$$
From these, we have
$$1 - \mathrm{D}^0\mathrm{E}^1_0 = \frac{|\xi^0|\,\bar\delta(\tau_1+\tau_2,|\xi^0|)}{\bar\delta(\tau_1,|\xi^0|)\,\bar\delta(\tau_2,|\xi^0|)} \quad\text{with}\quad \mathrm{C}^1_0 = \frac{|\xi^0|}{\bar\delta(\tau_2,|\xi^0|)},\ \ \mathrm{E}^1_0 = \frac{\bar\zeta^0\sin\gamma(\tau_2,|\xi^0|)}{\hbar\bar\delta(\tau_2,|\xi^0|)}.$$
Therefore, solving the last four equations, which form linear systems in the odd unknowns with determinant $1 - \mathrm{D}^0\mathrm{E}^1_0$, we obtain
$$\begin{cases}
\tilde\theta^1_1 = \tilde\theta^1_1(\tau_2,\tau_1,\theta^2,\xi^0,\pi^0) = \dfrac{1}{\bar\delta(\tau_1+\tau_2,|\xi^0|)}\big[\bar\delta(\tau_1,|\xi^0|)\theta^2_1 + \hbar^{-1}\bar\zeta^0\sin\gamma(\tau_2,|\xi^0|)\pi^0_2\big],\\[1mm]
\tilde\pi^1_2 = \tilde\pi^1_2(\tau_2,\tau_1,\theta^2,\xi^0,\pi^0) = \dfrac{1}{\bar\delta(\tau_1+\tau_2,|\xi^0|)}\big[\hbar\zeta^0\sin\gamma(\tau_1,|\xi^0|)\theta^2_1 + \bar\delta(\tau_2,|\xi^0|)\pi^0_2\big],\\[1mm]
\tilde\theta^1_2 = \tilde\theta^1_2(\tau_2,\tau_1,\theta^2,\xi^0,\pi^0) = \dfrac{1}{\bar\delta(\tau_1+\tau_2,|\xi^0|)}\big[\hbar^{-1}\bar\zeta^0\sin\gamma(\tau_2,|\xi^0|)\pi^0_1 + \bar\delta(\tau_1,|\xi^0|)\theta^2_2\big],\\[1mm]
\tilde\pi^1_1 = \tilde\pi^1_1(\tau_2,\tau_1,\theta^2,\xi^0,\pi^0) = \dfrac{1}{\bar\delta(\tau_1+\tau_2,|\xi^0|)}\big[\bar\delta(\tau_2,|\xi^0|)\pi^0_1 - \hbar\zeta^0\sin\gamma(\tau_1,|\xi^0|)\theta^2_2\big].
\end{cases} \tag{7.5.17}$$
For notational simplicity, we put
$$\begin{aligned}
\Phi(s\Upsilon,sY) &= \langle x^2|\tilde\xi^1+s\eta\rangle + \mathrm{C}^1(s)\langle\theta^2|\tilde\pi^1+s\varpi\rangle - \mathrm{D}^1(s)\theta^2_1\theta^2_2 - \mathrm{E}^1(s)(\tilde\pi^1_1+s\varpi_1)(\tilde\pi^1_2+s\varpi_2)\\
&\quad - \langle\tilde x^1+sy|\tilde\xi^1+s\eta\rangle - \langle\tilde\theta^1+s\vartheta|\tilde\pi^1+s\varpi\rangle\\
&\quad + \langle\tilde x^1+sy|\xi^0\rangle + \mathrm{C}^0\langle\tilde\theta^1+s\vartheta|\pi^0\rangle - \mathrm{D}^0(\tilde\theta^1_1+s\vartheta_1)(\tilde\theta^1_2+s\vartheta_2) - \mathrm{E}^0\pi^0_1\pi^0_2,
\end{aligned}$$
where
$$\mathrm{C}^1(s) = \frac{|\xi^0+s\eta|}{\bar\delta(\tau_2,|\xi^0+s\eta|)},\quad \mathrm{D}^1(s) = \frac{\hbar\big(\xi^0_1+s\eta_1 + i(\xi^0_2+s\eta_2)\big)\sin\gamma(\tau_2,|\xi^0+s\eta|)}{\bar\delta(\tau_2,|\xi^0+s\eta|)},\quad \mathrm{E}^1(s) = \frac{\big(\xi^0_1+s\eta_1 - i(\xi^0_2+s\eta_2)\big)\sin\gamma(\tau_2,|\xi^0+s\eta|)}{\hbar\bar\delta(\tau_2,|\xi^0+s\eta|)}.$$
Therefore, we have $\mathrm{C}^1(0) = \mathrm{C}^1_0$, $\mathrm{D}^1(0) = \mathrm{D}^1_0$, $\mathrm{E}^1(0) = \mathrm{E}^1_0$.

By the definition above, we have
$$\begin{aligned}
\frac{d}{ds}\Phi(s\Upsilon,sY) &= \langle x^2|\eta\rangle + \dot{\mathrm{C}}^1(s)\langle\theta^2|\tilde\pi^1+s\varpi\rangle + \mathrm{C}^1(s)\langle\theta^2|\varpi\rangle - \dot{\mathrm{D}}^1(s)\theta^2_1\theta^2_2\\
&\quad - \dot{\mathrm{E}}^1(s)(\tilde\pi^1_1+s\varpi_1)(\tilde\pi^1_2+s\varpi_2) - \mathrm{E}^1(s)\big[\varpi_1(\tilde\pi^1_2+s\varpi_2) + (\tilde\pi^1_1+s\varpi_1)\varpi_2\big]\\
&\quad - \langle y|\tilde\xi^1+s\eta\rangle - \langle\tilde x^1+sy|\eta\rangle - \langle\vartheta|\tilde\pi^1+s\varpi\rangle - \langle\tilde\theta^1+s\vartheta|\varpi\rangle\\
&\quad + \langle y|\xi^0\rangle + \mathrm{C}^0\langle\vartheta|\pi^0\rangle - \mathrm{D}^0\big[\vartheta_1(\tilde\theta^1_2+s\vartheta_2) + (\tilde\theta^1_1+s\vartheta_1)\vartheta_2\big],
\end{aligned}$$
$$\begin{aligned}
\frac{d^2}{ds^2}\Phi(s\Upsilon,sY) &= \ddot{\mathrm{C}}^1(s)\langle\theta^2|\tilde\pi^1+s\varpi\rangle + 2\dot{\mathrm{C}}^1(s)\langle\theta^2|\varpi\rangle - \ddot{\mathrm{D}}^1(s)\theta^2_1\theta^2_2\\
&\quad - \ddot{\mathrm{E}}^1(s)(\tilde\pi^1_1+s\varpi_1)(\tilde\pi^1_2+s\varpi_2) - 2\dot{\mathrm{E}}^1(s)\big[\varpi_1(\tilde\pi^1_2+s\varpi_2) + (\tilde\pi^1_1+s\varpi_1)\varpi_2\big]\\
&\quad - 2\mathrm{E}^1(s)\varpi_1\varpi_2 - 2\langle y|\eta\rangle - 2\langle\vartheta|\varpi\rangle - 2\mathrm{D}^0\vartheta_1\vartheta_2,
\end{aligned} \tag{7.5.18}$$
$$\begin{aligned}
\frac{d^3}{ds^3}\Phi(s\Upsilon,sY) &= \dddot{\mathrm{C}}^1(s)\langle\theta^2|\tilde\pi^1+s\varpi\rangle + 3\ddot{\mathrm{C}}^1(s)\langle\theta^2|\varpi\rangle - \dddot{\mathrm{D}}^1(s)\theta^2_1\theta^2_2\\
&\quad - \dddot{\mathrm{E}}^1(s)(\tilde\pi^1_1+s\varpi_1)(\tilde\pi^1_2+s\varpi_2) - 3\ddot{\mathrm{E}}^1(s)\big[\varpi_1(\tilde\pi^1_2+s\varpi_2) + (\tilde\pi^1_1+s\varpi_1)\varpi_2\big] - 6\dot{\mathrm{E}}^1(s)\varpi_1\varpi_2\\
&= r_0(s) + r^1_1(s)\varpi_1 + r^1_2(s)\varpi_2 + r_2(s)\varpi_1\varpi_2,
\end{aligned}$$
where
$$\begin{aligned}
r_0(s) &= \dddot{\mathrm{C}}^1(s)\langle\theta^2|\tilde\pi^1\rangle - \dddot{\mathrm{D}}^1(s)\theta^2_1\theta^2_2 - \dddot{\mathrm{E}}^1(s)\tilde\pi^1_1\tilde\pi^1_2,\\
r^1_1(s) &= s\dddot{\mathrm{C}}^1(s)\theta^2_1 + 3\ddot{\mathrm{C}}^1(s)\theta^2_1 + s\dddot{\mathrm{E}}^1(s)\tilde\pi^1_2 + 3\ddot{\mathrm{E}}^1(s)\tilde\pi^1_2,\\
r^1_2(s) &= s\dddot{\mathrm{C}}^1(s)\theta^2_2 + 3\ddot{\mathrm{C}}^1(s)\theta^2_2 - s\dddot{\mathrm{E}}^1(s)\tilde\pi^1_1 - 3\ddot{\mathrm{E}}^1(s)\tilde\pi^1_1,\\
r_2(s) &= -s^2\dddot{\mathrm{E}}^1(s) - 6s\ddot{\mathrm{E}}^1(s) - 6\dot{\mathrm{E}}^1(s).
\end{aligned}$$

Exercise 7.5.1. Show, by plugging (7.5.17) into $\Phi(0,0)$,
$$\begin{aligned}
\Phi(0,0) &= \langle x^2|\tilde\xi^1\rangle + \mathrm{C}^1(0)\langle\theta^2|\tilde\pi^1\rangle - \mathrm{D}^1(0)\theta^2_1\theta^2_2 - \mathrm{E}^1(0)\tilde\pi^1_1\tilde\pi^1_2\\
&\quad - \langle\tilde x^1|\tilde\xi^1\rangle - \langle\tilde\theta^1|\tilde\pi^1\rangle + \langle\tilde x^1|\xi^0\rangle + \mathrm{C}^0\langle\tilde\theta^1|\pi^0\rangle - \mathrm{D}^0\tilde\theta^1_1\tilde\theta^1_2 - \mathrm{E}^0\pi^0_1\pi^0_2\\
&= \mathcal{S}(t_2 - t_0, X^2, \Xi^0).
\end{aligned}$$
Proceed analogously to prove
$$\frac{d}{ds}\Phi(0,0) = 0.$$

For the terms in (7.5.18), using integration by parts, we get
$$\int_0^1 ds\,\frac{(1-s)^2}{2!}\frac{d^3}{ds^3}\Phi(s\Upsilon,sY) = R_0 + R^1_1\varpi_1 + R^1_2\varpi_2 + R_2\varpi_1\varpi_2$$
with
$$\begin{aligned}
R_0 = R_0(X^2,\eta,\Xi^0) &= \int_0^1 ds\,\frac{(1-s)^2}{2!}\big(\dddot{\mathrm{C}}^1(s)\langle\theta^2|\tilde\pi^1\rangle - \dddot{\mathrm{D}}^1(s)\theta^2_1\theta^2_2 - \dddot{\mathrm{E}}^1(s)\tilde\pi^1_1\tilde\pi^1_2\big)\\
&= \big(\mathrm{C}(1)-\mathrm{C}(0)-\dot{\mathrm{C}}(0)-\tfrac12\ddot{\mathrm{C}}(0)\big)\langle\theta^2|\tilde\pi^1\rangle - \big(\mathrm{D}(1)-\mathrm{D}(0)-\dot{\mathrm{D}}(0)-\tfrac12\ddot{\mathrm{D}}(0)\big)\theta^2_1\theta^2_2 - \big(\mathrm{E}(1)-\mathrm{E}(0)-\dot{\mathrm{E}}(0)-\tfrac12\ddot{\mathrm{E}}(0)\big)\tilde\pi^1_1\tilde\pi^1_2,\\
R^1_1 = R^1_1(X^2,\eta,\Xi^0) &= \int_0^1 ds\,\frac{(1-s)^2}{2!}\big(s\dddot{\mathrm{C}}^1(s)\theta^2_1 + 3\ddot{\mathrm{C}}^1(s)\theta^2_1 + s\dddot{\mathrm{E}}^1(s)\tilde\pi^1_2 + 3\ddot{\mathrm{E}}^1(s)\tilde\pi^1_2\big)\\
&= \big(\mathrm{C}(1)-\mathrm{C}(0)-\dot{\mathrm{C}}(0)\big)\theta^2_1 + \big(\mathrm{E}(1)-\mathrm{E}(0)-\dot{\mathrm{E}}(0)\big)\tilde\pi^1_2,\\
R^1_2 = R^1_2(X^2,\eta,\Xi^0) &= \int_0^1 ds\,\frac{(1-s)^2}{2!}\big(s\dddot{\mathrm{C}}^1(s)\theta^2_2 + 3\ddot{\mathrm{C}}^1(s)\theta^2_2 - s\dddot{\mathrm{E}}^1(s)\tilde\pi^1_1 - 3\ddot{\mathrm{E}}^1(s)\tilde\pi^1_1\big)\\
&= \big(\mathrm{C}(1)-\mathrm{C}(0)-\dot{\mathrm{C}}(0)\big)\theta^2_2 - \big(\mathrm{E}(1)-\mathrm{E}(0)-\dot{\mathrm{E}}(0)\big)\tilde\pi^1_1,\\
R_2 = R_2(X^2,\eta,\Xi^0) &= -\int_0^1 ds\,\frac{(1-s)^2}{2!}\big(s^2\dddot{\mathrm{E}}^1(s) + 6s\ddot{\mathrm{E}}^1(s) + 6\dot{\mathrm{E}}^1(s)\big) = -\mathrm{E}^1(1) + \mathrm{E}^1(0).
\end{aligned}$$
From (7.5.17), we have
$$\langle\theta^2|\tilde\pi^1\rangle = \frac{1}{\bar\delta(\tau_1+\tau_2,|\xi^0|)}\big(\bar\delta(\tau_2,|\xi^0|)\langle\theta^2|\pi^0\rangle - 2\hbar\zeta^0\sin\gamma(\tau_1,|\xi^0|)\theta^2_1\theta^2_2\big),$$
$$\tilde\pi^1_1\tilde\pi^1_2 = \frac{1}{\bar\delta^2(\tau_1+\tau_2,|\xi^0|)}\big(\bar\delta^2(\tau_2,|\xi^0|)\pi^0_1\pi^0_2 - \hbar\zeta^0\sin\gamma(\tau_2,|\xi^0|)\bar\delta(\tau_2,|\xi^0|)\langle\theta^2|\pi^0\rangle - \hbar^2(\zeta^0)^2\sin^2\gamma(\tau_2,|\xi^0|)\theta^2_1\theta^2_2\big).$$
Using the above, we have
$$\begin{aligned}
\frac{d^2}{ds^2}\Phi(0,0) &= \ddot{\mathrm{C}}^1(0)\langle\theta^2|\tilde\pi^1\rangle + 2\dot{\mathrm{C}}^1(0)\langle\theta^2|\varpi\rangle - \ddot{\mathrm{D}}^1(0)\theta^2_1\theta^2_2 - \ddot{\mathrm{E}}^1(0)\tilde\pi^1_1\tilde\pi^1_2 - 2\dot{\mathrm{E}}^1(0)\big[\varpi_1\tilde\pi^1_2 + \tilde\pi^1_1\varpi_2\big]\\
&\quad - 2\mathrm{E}^1(0)\varpi_1\varpi_2 - 2\langle y|\eta\rangle - 2\langle\vartheta|\varpi\rangle - 2\mathrm{D}^0\vartheta_1\vartheta_2\\
&= \Phi_2(X^2,\eta,\Xi^0) - 2\langle y|\eta\rangle + \Psi_2(\varpi,\vartheta,\theta^2,\eta,\pi^0,\xi^0)
\end{aligned} \tag{7.5.19}$$
with
$$\Phi_2(X^2,\eta,\Xi^0) = \ddot{\mathrm{C}}^1(0)\langle\theta^2|\tilde\pi^1\rangle - \ddot{\mathrm{D}}^1(0)\theta^2_1\theta^2_2 - \ddot{\mathrm{E}}^1(0)\tilde\pi^1_1\tilde\pi^1_2,$$
$$\Psi_2(\varpi,\vartheta,\theta^2,\eta,\pi^0,\xi^0) = 2\dot{\mathrm{C}}^1(0)\langle\theta^2|\varpi\rangle - 2\dot{\mathrm{E}}^1(0)\big[\varpi_1\tilde\pi^1_2 + \tilde\pi^1_1\varpi_2\big] - 2\mathrm{E}^1(0)\varpi_1\varpi_2 - 2\langle\vartheta|\varpi\rangle - 2\mathrm{D}^0\vartheta_1\vartheta_2.$$

From (7.5.19), we have
$$\begin{aligned}
&c_{3,2}\iint_{\mathfrak{R}^{6|4}} d\Upsilon dY\,\mathcal{A}(\tau_2,\tilde\Xi^1+\Upsilon)\mathcal{A}(\tau_1,\Xi^0)\,e^{i\hbar^{-1}[\Phi(0,0)+2^{-1}\ddot\Phi(0,0)+R]}\\
&= e^{i\hbar^{-1}\Phi(0,0)}\hbar^2\iint d\varpi d\vartheta\,\Big((2\pi\hbar)^{-3}\iint d\eta dy\,\frac{\bar\delta(\tau_1,|\xi^0|)}{|\xi^0|}\,\frac{\bar\delta(\tau_2,|\xi^0+\eta|)}{|\xi^0+\eta|}\\
&\hspace{25mm}\times e^{i\hbar^{-1}\big({-\langle y|\eta\rangle} + 2^{-1}\Phi_2(X^2,\eta,\Xi^0) + 2^{-1}\Psi_2(\varpi,\vartheta,\eta,\pi^0,\xi^0) + R^1_1\varpi_1 + R^1_2\varpi_2 + R_2(\eta)\varpi_1\varpi_2 + R_0\big)}\Big).
\end{aligned}$$
Remarking that
$$(2\pi\hbar)^{-3}\iint dyd\eta\, e^{-i\hbar^{-1}\langle y|\eta\rangle}g(\eta) = g(0)$$
holds for suitable functions $g$, and that
$$\dot{\mathrm{C}}^1(0)\big|_{\eta=0} = \ddot{\mathrm{C}}^1(0)\big|_{\eta=0} = \dddot{\mathrm{C}}^1(0)\big|_{\eta=0} = 0,\quad \dot{\mathrm{D}}^1(0)\big|_{\eta=0} = \ddot{\mathrm{D}}^1(0)\big|_{\eta=0} = \dddot{\mathrm{D}}^1(0)\big|_{\eta=0} = 0,\quad \dot{\mathrm{E}}^1(0)\big|_{\eta=0} = \ddot{\mathrm{E}}^1(0)\big|_{\eta=0} = \dddot{\mathrm{E}}^1(0)\big|_{\eta=0} = 0, \tag{7.5.20}$$
we have
$$\frac{d^2}{ds^2}\Phi(0,0)(X^2,\eta,\Xi^0)\Big|_{\eta=0} = -2\mathrm{E}^1(0)\varpi_1\varpi_2 - 2\langle\vartheta|\varpi\rangle - 2\mathrm{D}^0\vartheta_1\vartheta_2 \quad\text{and}\quad R\big|_{\eta=0} = 0.$$

Therefore
$$\begin{aligned}
&c_{3,2}\iint_{\mathfrak{R}^{6|4}} d\Upsilon dY\,\mathcal{A}(\tau_2,\tilde\Xi^1+\Upsilon)\mathcal{A}(\tau_1,\Xi^0)\,e^{i\hbar^{-1}[\Phi(0,0)+2^{-1}\ddot\Phi(0,0)+R]}\\
&= e^{i\hbar^{-1}\Phi(0,0)}\hbar^2\,\frac{\bar\delta(\tau_1,|\xi^0|)}{|\xi^0|}\,\frac{\bar\delta(\tau_2,|\xi^0|)}{|\xi^0|}\iint d\varpi d\vartheta\,\exp\big[{-\mathrm{E}^1(0)}\varpi_1\varpi_2 - \langle\vartheta|\varpi\rangle - \mathrm{D}^0\vartheta_1\vartheta_2\big]\\
&= e^{i\hbar^{-1}\mathcal{S}(\tau_1+\tau_2,X^2,\Xi^0)}\hbar^2\,\frac{\bar\delta(\tau_1,|\xi^0|)\,\bar\delta(\tau_2,|\xi^0|)}{|\xi^0|^2}\,\big(1-\mathrm{D}^0\mathrm{E}^1(0)\big)\\
&= e^{i\hbar^{-1}\mathcal{S}(\tau_1+\tau_2,X^2,\Xi^0)}\hbar^2\,\frac{\bar\delta(\tau_1+\tau_2,|\xi^0|)}{|\xi^0|}.
\end{aligned} \tag{7.5.21}$$
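The only genuinely "super" step in (7.5.21) is the Berezin integral over the odd variables $\varpi$, $\vartheta$, which simply extracts the top coefficient of the (finite) exponential series. A minimal Grassmann-algebra sketch of that step; the implementation and the sign/ordering conventions are my own, so the answer is fixed only up to an overall sign, and the scalars `D`, `E` stand in for $\mathrm{D}^0$ and $\mathrm{E}^1(0)$:

```python
# Grassmann algebra elements: dict {sorted tuple of generator indices: coefficient}
def gmul(a, b):
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            if set(ka) & set(kb):
                continue                       # repeated generator -> 0
            m = list(ka + kb)
            sign = 1
            for i in range(len(m)):            # parity of the sorting permutation
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            key = tuple(sorted(m))
            out[key] = out.get(key, 0.0) + sign*ca*cb
    return out

def gadd(a, b):
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0.0) + c
    return out

def gexp(x, nmax=4):
    # exp of a nilpotent element: finite series 1 + x + x^2/2! + ...
    acc, term = {(): 1.0}, {(): 1.0}
    for n in range(1, nmax + 1):
        term = {k: c/n for k, c in gmul(term, x).items()}
        acc = gadd(acc, term)
    return acc

# generators: 1 = theta_1, 2 = theta_2, 3 = varpi_1, 4 = varpi_2
D, E = 0.3, 0.5
X = {(3, 4): -E, (1, 3): -1.0, (2, 4): -1.0, (1, 2): -D}   # exponent of the Gaussian
top = gexp(X).get((1, 2, 3, 4), 0.0)   # Berezin integral = top coefficient (up to sign)
```

With these conventions the top coefficient comes out as $DE - 1$, i.e. $\pm(1 - \mathrm{D}^0\mathrm{E}^1(0))$, exactly the factor that collapses the product of amplitudes to $\bar\delta(\tau_1+\tau_2,|\xi^0|)/|\xi^0|$ in (7.5.21).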

In fact, to prove (7.5.20), note that
$$\frac{d}{ds}|\xi^0+s\eta| = \frac{\langle\xi^0+s\eta|\eta\rangle}{|\xi^0+s\eta|}, \qquad \frac{d^2}{ds^2}|\xi^0+s\eta| = \frac{|\eta|^2|\xi^0+s\eta|^2 - \langle\xi^0+s\eta|\eta\rangle^2}{|\xi^0+s\eta|^3},$$
and, putting $\gamma_\tau(s) = c\hbar^{-1}\tau|\xi^0+s\eta|$, we have
$$\frac{d}{ds}\gamma_\tau(s) = c\hbar^{-1}\tau\frac{d}{ds}|\xi^0+s\eta|, \qquad \frac{d^2}{ds^2}\gamma_\tau(s) = c\hbar^{-1}\tau\frac{d^2}{ds^2}|\xi^0+s\eta|,$$
therefore
$$\frac{d}{ds}|\xi^0+s\eta|\Big|_{\eta=0} = 0,\quad \frac{d^2}{ds^2}|\xi^0+s\eta|\Big|_{\eta=0} = 0,\quad \frac{d}{ds}\gamma_\tau(s)\Big|_{\eta=0} = 0,\quad \frac{d^2}{ds^2}\gamma_\tau(s)\Big|_{\eta=0} = 0.$$


Using the above,
$$\begin{aligned}
\frac{d}{ds}\bar\delta(\tau,|\xi^0+s\eta|) &= \Big(\frac{d}{ds}|\xi^0+s\eta|\Big)\cos\gamma_\tau(s) - |\xi^0+s\eta|\,\dot\gamma_\tau(s)\sin\gamma_\tau(s) - i\eta_3\sin\gamma_\tau(s) - i(\xi^0_3+s\eta_3)\dot\gamma_\tau(s)\cos\gamma_\tau(s),\\
\frac{d}{ds}\bar\delta(\tau,|\xi^0+s\eta|)\Big|_{\eta=0} &= 0,\\
\frac{d^2}{ds^2}\bar\delta(\tau,|\xi^0+s\eta|) &= \Big(\frac{d^2}{ds^2}|\xi^0+s\eta|\Big)\cos\gamma_\tau(s) - 2\Big(\frac{d}{ds}|\xi^0+s\eta|\Big)\dot\gamma_\tau(s)\sin\gamma_\tau(s) - |\xi^0+s\eta|\big[\ddot\gamma_\tau(s)\sin\gamma_\tau(s) + \dot\gamma_\tau^2(s)\cos\gamma_\tau(s)\big]\\
&\quad - 2i\eta_3\dot\gamma_\tau(s)\cos\gamma_\tau(s) - i(\xi^0_3+s\eta_3)\big[\ddot\gamma_\tau(s)\cos\gamma_\tau(s) - \dot\gamma_\tau^2(s)\sin\gamma_\tau(s)\big],\\
\frac{d^2}{ds^2}\bar\delta(\tau,|\xi^0+s\eta|)\Big|_{\eta=0} &= 0.
\end{aligned}$$

From these, we get
$$\begin{aligned}
\dot{\mathrm{C}}^1(s) &= \Big(\frac{d}{ds}|\xi^0+s\eta|\Big)\bar\delta^{-1}(\tau,|\xi^0+s\eta|) - |\xi^0+s\eta|\,\bar\delta^{-2}(\tau,|\xi^0+s\eta|)\frac{d}{ds}\bar\delta(\tau,|\xi^0+s\eta|),\\
\ddot{\mathrm{C}}^1(s) &= \Big(\frac{d^2}{ds^2}|\xi^0+s\eta|\Big)\bar\delta^{-1}(\tau,|\xi^0+s\eta|) + 2\Big(\frac{d}{ds}|\xi^0+s\eta|\Big)\frac{d}{ds}\bar\delta^{-1}(\tau,|\xi^0+s\eta|) + |\xi^0+s\eta|\frac{d^2}{ds^2}\bar\delta^{-1}(\tau,|\xi^0+s\eta|),\\
\dot{\mathrm{C}}^1(0)\big|_{\eta=0} &= 0, \qquad \ddot{\mathrm{C}}^1(0)\big|_{\eta=0} = 0.
\end{aligned}$$

Analogously, we can prove the other equalities in (7.5.20). //

CHAPTER 8

Supersymmetric Quantum Mechanics and Its Applications

8.1. What is SUSYQM

8.1.1. Another interpretation of the Atiyah-Singer index theorem. Seemingly stimulated by the physicist E. Witten's paper [143], the mathematician E. Getzler declared in the introduction of his paper [55] that

The Atiyah-Singer index theorem is nothing but the super version of Weyl's theorem on the asymptotic behavior w.r.t. time $t$ of $e^{t\Delta_g/2}$.

Here, $(M,g)$ is a compact $d$-dimensional Riemannian manifold and $\Delta_g$ is the Laplace-Beltrami operator corresponding to the Riemannian metric $g = g_{jk}(q)dq^jdq^k$. Though he declared this, he did not try to demonstrate this assertion directly in that paper.

Our goal in this chapter. We interpret his declaration and calculate the index for the simplest example, following the prescriptions of Witten and Getzler.

Roughly speaking, his declaration is sketched as follows: Let $K(t,q,q')$ be the kernel of the fundamental solution of the IVP
$$\frac{\partial}{\partial t}v(t,q) = \frac12\Delta_g v(t,q) \quad\text{with}\quad \lim_{t\to 0}v(t,q) = v(q).$$
That is,

′ ′ ′ t∆g /2 v(t,q)= dgq K(t,q,q )v(q ) = (e v)(q) ZM where

$$v(t,q) = \int_M d_gq'\,K(t,q,q')v(q') = (e^{t\Delta_g/2}v)(q)$$
where
$$g(q) = \det(g_{ij}(q)),\qquad d_gq = \sqrt{g(q)}\,dq,\qquad \Delta_g = d^*d = \frac{1}{\sqrt{g(q)}}\,\partial_{q^i}\big(\sqrt{g(q)}\,g^{ij}(q)\,\partial_{q^j}\big).$$
Then, Weyl's theorem states that $e^{t\Delta_g/2}$ belongs to trace class and
$$\mathrm{tr}\,(e^{t\Delta_g/2}) = \int_M d_gq\,K(t,q,q) \sim Ct^{-d/2}\int_M d_gq\cdot 1 \quad\text{when } t\to 0.$$
Here, $d$ is the exterior differential, $d^*$ is the adjoint of $d$ w.r.t. $d_gq$, and $\Delta_g = d^*d$, which is desired to be derived from $\sum_{j,k=1}^d g^{jk}(q)p_jp_k \in C^\infty(T^*M:\mathbb{R})$ by "quantization".

His claim goes as follows: Extend$^1$ $(M,g)$ superly, denoted by $(\tilde M,\tilde g)$, where $\tilde M$ is a supermanifold corresponding to $M$ and $\tilde g$ is a super Riemann metric extending $g$. In this case, $\Delta_{\tilde g}$ corresponds to the form Laplacian $dd^*+d^*d$ acting on differential forms on $(M,g)$; moreover, it has the supersymmetric structure.
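Weyl's asymptotics can be seen concretely on the circle $M = S^1 = \mathbb{R}/2\pi\mathbb{Z}$ with the flat metric (a minimal example of my own choosing): the eigenvalues of $\Delta$ are $-n^2$, $n\in\mathbb{Z}$, so $\mathrm{tr}\,e^{t\Delta/2} = \sum_{n\in\mathbb{Z}}e^{-tn^2/2}$, and Poisson summation gives the leading term $Ct^{-1/2}\mathrm{Vol}(S^1) = \sqrt{2\pi/t}$ with $C = (2\pi)^{-1/2}$, $d = 1$:

```python
import math

t = 0.01
# heat trace on the circle: sum over the spectrum {-n^2} of the Laplacian
heat_trace = sum(math.exp(-t*n*n/2) for n in range(-2000, 2001))
weyl = math.sqrt(2*math.pi/t)     # C t^{-d/2} Vol(S^1) with C = (2*pi)^{-1/2}, d = 1
rel_err = abs(heat_trace - weyl)/weyl
```

For small $t$ the agreement is essentially exact, since the Poisson-summation correction is of order $e^{-2\pi^2/t}$.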

$^1$ For simplicity, we only consider the case $M = \mathbb{R}^d$ in this section, together with a Riemann metric $g = g_{ij}(q)dq^idq^j$ on $\mathbb{R}^d$.


Therefore, calculating the trace of the kernel of "$e^{t(dd^*+d^*d)/2}$", we get the Witten index, which gives us a new proof of the Atiyah-Singer index theorem.

Remark 8.1.1. (0) For a given Riemann metric $g$ on $\mathbb{R}^d$, we calculate its super-extension $\tilde g$ in §6 of Chapter 9, which corresponds to the symbol of $dd^*+d^*d$.
(i) What occurs when we quantize the Lagrangian $(1/2)\sum_{j,k=1}^d g_{jk}(q)\dot q^j\dot q^k$ on $(M,g)$? If we quantize following Feynman's prescription with purely imaginary time, the quantized object deviates by $(1/12)R$ from $(1/2)\Delta_g$, with $R$ = the scalar curvature (see B. DeWitt [34], Inoue-Maeda [79]).
(ii) See also the recent work of Y. Miyanishi [105], where he constructs a parametrix for the Schrödinger equation on $S^2$ with the action integral deformed by $(1/12)R$ from $(1/2)\Delta_g$; note $R = 2$ for $S^2$. More precisely, he proceeds as follows:

Let $q$, $\bar q$ be two points on $S^2$, and let $\gamma_0\in C_{t,q,\bar q}$ be the shortest path between them, with length $d(q,\bar q)$. Taking a bump function $\chi$ with compact support contained in $\{d(q,\bar q)<\pi\}$, he defines an integral operator
$$U(t)u(\bar q) = \frac{1}{2\pi i\hbar}\int_{S^2} d_gq\,\chi(d(q,\bar q))\,A(t,q,\bar q)\,e^{i\hbar^{-1}(S(t,q,\bar q)+iRt/12)}u(q)$$
where
$$S(t,q,\bar q) = \frac{d(q,\bar q)^2}{2t} \quad\text{and}\quad A(t,q,\bar q) = g^{-1/2}(q)\,g^{-1/2}(\bar q)\,\Big|\det\frac{\partial^2S(t,q,\bar q)}{\partial q\partial\bar q}\Big|^{1/2}.$$
Then he asserts that, taking suitable products of these operators corresponding to the time-slicing method and restricting them to the "lower energy" part of $(1/2)\Delta_g$, they converge to the solution of
$$i\hbar\frac{\partial u(t,\bar q)}{\partial t} = \frac{\hbar^2}{2}\Delta_g u(t,\bar q) \quad\text{with}\quad \lim_{t\to 0}u(t,\bar q) = u(\bar q).$$
His definition of the integral operator is different from ours because he needs to introduce additionally the cut-off and to use not only the action integral but also the van Vleck determinant corresponding to the shortest path between two points$^2$. By the way, how should we understand the claim "put equal weights on every possible path" in the physics literature? From my point of view, if we consider "weights" as amplitudes, we need to use
$$\Big|\det\frac{\partial^2S(\gamma)}{\partial q\partial\bar q}\Big|^{1/2} \quad\text{for any } \gamma\in C_{t,q,\bar q},$$
or we need another phase factor for each path, as proposed in L. Schulman [121].

The usage of the projection onto the low-energy part corresponding to the spectral decomposition of $(1/2)\Delta_g$ makes us suspicious: "Is his procedure truly quantization?" For quantization should be carried out using only classical quantities. To overcome this point, it seems reasonable to present such a projector using classical objects, as in Fujiwara [48]. Moreover, he also adds the factor $g^{-1/2}(q)g^{-1/2}(\bar q)$ to permit the Copenhagen interpretation, that is, to consider time evolution in the intrinsic Hilbert space (= half-density bundle).

In spite of these, we have$^3$
$$i\hbar\frac{\partial}{\partial t}\int_{S^2} d_gq\,\Big|\det\frac{\partial^2S(t,q,\bar q)}{\partial q\partial\bar q}\Big|^{1/2}e^{i\hbar^{-1}S(t,q,\bar q)}u(q)\,\Big|_{t=0} = \frac{\hbar^2}{2}\Big(\Delta_g + \frac{1}{12}R\Big)u(\bar q).$$

$^2$ In the case considered here, we only have the unique classical trajectory!
$^3$ There is no problem with integrability or differentiation under the integral sign in this case.

That is, this guarantees Feynman's picture of quantization. Therefore, it seems more natural to consider separately$^4$ two things: one is the quantization process, and the other is the construction of the fundamental solution of the evolution equation having that quantized object as its infinitesimal generator.

Remark 8.1.2. As mentioned before, how do we interpret the saying "put equal weights on every possible path"? It is explained, for example, in D. V. Perepelitsa [109] as follows (with slight modification):

Feynman [44] posits that the contribution to the propagator from a particular trajectory is $\exp[i\hbar^{-1}S(\gamma)]$, where $\gamma = \gamma(\cdot)\in C_{t,q,\bar q}$. That is, every possible path contributes with equal amplitude to the propagator, but with a phase related to the classical action. Summing over all possible trajectories, we arrive at the propagator. The normalization constant $A(t)$ is independent of any individual path and therefore depends only on time.
$$U(\bar q,t;q,0) = A(t)\sum_{\gamma\in C_{t,q,\bar q}} e^{i\hbar^{-1}S(\gamma)}.$$

As I explained in Section 1.3 of Chapter 1, since there does not exist a full Feynman measure$^5$, we

"approximate" $D_F\gamma$ on $C_{t,q,\bar q}$ by a measure on $M$ with some density function, that is,

$$D(\gamma) = \Big|\det\frac{\partial^2\tilde S(t,q,\bar q)}{\partial q\partial\bar q}\Big|^{1/2} = D(t,q,\bar q), \quad\text{where}\quad S(\gamma) = S(t,q,\bar q) = \int_0^t ds\,L(\gamma(s),\dot\gamma(s)),$$
but even taking the classical trajectory $\gamma_c\in C_{t,q,\bar q}$ in $D(\gamma_c)$, it generally depends not only on $t$ but also on $(q,\bar q)$.

8.1.2. What is SUSYQM? In order to make clear what should be calculated, we cite the definition.

Definition 8.1.1 (p.120, H.L. Cycon, R.G. Froese, W. Kirsch and B. Simon [28]). Let $\mathcal{H}$ be a Hilbert space, let $H$ and $Q$ be self-adjoint operators, and let $P$ be a bounded self-adjoint operator in $\mathcal{H}$ such that
$$H = Q^2 \ge 0,\qquad P^2 = I,\qquad [Q,P]_+ = QP + PQ = 0.$$
Then we say that the system $(H,P,Q)$ has supersymmetry, or that it defines a SUSYQM (= supersymmetric Quantum Mechanics).

Under this circumstance, we may decompose
$$\mathcal{H} = \mathcal{H}_b \oplus \mathcal{H}_f \quad\text{where}\quad \mathcal{H}_f = \{u\in\mathcal{H} \mid Pu = -u\},\ \ \mathcal{H}_b = \{u\in\mathcal{H} \mid Pu = u\}.$$

$^4$ From Feynman's introduction and Fujiwara's procedure, we, or at least I, insist that obtaining the quantized object from Feynman's picture should also yield directly the fundamental solution of the Schrödinger equation by his method.
$^5$ Recall also that there does not exist a "functor" called full quantization; see Abraham-Marsden [2].

Using this decomposition and identifying an element $u = u_b + u_f\in\mathcal{H}$ with a vector $\begin{pmatrix}u_b\\ u_f\end{pmatrix}$, we have a representation
$$P = \begin{pmatrix}I_b & 0\\ 0 & -I_f\end{pmatrix} \quad\text{(or simply denoted by)}\quad \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}.$$
Since $P$ and $Q$ anti-commute and $Q$ is self-adjoint, $Q$ always has the form
$$Q = \begin{pmatrix}0 & A^*\\ A & 0\end{pmatrix} \quad\text{and}\quad H = \begin{pmatrix}A^*A & 0\\ 0 & AA^*\end{pmatrix}, \tag{8.1.1}$$
where $A$, called the annihilation operator, is an operator which maps $\mathcal{H}_b$ into $\mathcal{H}_f$, and its adjoint $A^*$, called the creation operator, maps $\mathcal{H}_f$ into $\mathcal{H}_b$. Thus, $P$ commutes with $H$, and $\mathcal{H}_b$ and $\mathcal{H}_f$ are invariant under $H$, i.e. $H\mathcal{H}_b\subset\mathcal{H}_b$ and $H\mathcal{H}_f\subset\mathcal{H}_f$. That is, there is a one-to-one correspondence between densely defined closed operators $A$ and self-adjoint operators $Q$ (supercharges) of the above form.
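In finite dimensions the whole structure can be assembled from an arbitrary matrix $A$; the block shapes below follow (8.1.1), while the matrix $A$ itself is a random stand-in of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))          # "annihilation" operator H_b -> H_f

Z = np.zeros((3, 3))
Q = np.block([[Z, A.T], [A, Z]])         # supercharge; A* = A^T in this real case
P = np.block([[np.eye(3), Z], [Z, -np.eye(3)]])
H = Q @ Q

algebra_ok = (np.allclose(Q @ P + P @ Q, 0)        # [Q, P]_+ = 0
              and np.allclose(P @ P, np.eye(6))    # P^2 = I
              and np.allclose(H[:3, :3], A.T @ A)  # block A*A
              and np.allclose(H[3:, 3:], A @ A.T)) # block AA*
positive = np.min(np.linalg.eigvalsh(H)) > -1e-10  # H = Q^2 >= 0
```

The block-diagonal form of `H` is exactly why the supersymmetric index of Definition 8.1.2 reduces to kernel dimensions of $A$ and $A^*$.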

Definition 8.1.2. We define the supersymmetric index of $H$, if it exists, by
$$\mathrm{ind}_s(H) \equiv \dim\big(\mathrm{Ker}(H\!\upharpoonright\!\mathcal{H}_b)\big) - \dim\big(\mathrm{Ker}(H\!\upharpoonright\!\mathcal{H}_f)\big) \in \bar{\mathbb{Z}} = \mathbb{Z}\cup\{\pm\infty\}.$$

On the other hand, we have

Definition 8.1.3. Let $X$, $Y$ be two Banach spaces and let $\mathcal{C}(X,Y)$ be the set of densely defined closed operators from $X$ to $Y$. $T\in\mathcal{C}(X,Y)$ is called Fredholm iff the range $R(T)$ of $T$ is closed in $Y$ and both $\ker T$ and $Y/R(T)$ are finite-dimensional. $T\in\mathcal{C}(X,Y)$ is called semi-Fredholm iff $R(T)$ is closed in $Y$ and at least one of $\ker T$ and $Y/R(T)$ is finite-dimensional. If the operator is semi-Fredholm, then the Fredholm index $\mathrm{ind}_F(T) = \dim(\ker T) - \dim(Y/R(T))$ exists in $\bar{\mathbb{Z}}$.

Corollary 8.1.1. If the operator $A$ is semi-Fredholm, we have the relation
$$\mathrm{ind}_s(H) = \mathrm{ind}_F(A) \equiv \dim(\mathrm{Ker}\,A) - \dim(\mathrm{Ker}\,A^*).$$
In order to check whether the supersymmetry is broken or unbroken, E. Witten introduced the so-called Witten index.

Definition 8.1.4. Let $(H,P,Q)$ be a SUSYQM with (8.1.1). (I) Putting, for $t>0$,
$$\Delta_t(H) = \mathrm{tr}\,\big(e^{-tA^*A} - e^{-tAA^*}\big) = \mathrm{str}\,e^{-tH},$$
we define, if the limit exists, the (heat kernel regulated) Witten index of $(H,P,Q)$ by
$$W_H = \lim_{t\to\infty}\Delta_t(H).$$
We define also the (heat kernel regulated) axial anomaly of $(H,P,Q)$ by
$$A_H = \lim_{t\to 0}\Delta_t(H).$$

(II) Putting, for $z\in\mathbb{C}\setminus[0,\infty)$,
$$\Delta_z(H) = -z\,\mathrm{tr}\,\big[(A^*A - z)^{-1} - (AA^* - z)^{-1}\big] = -z\,\mathrm{str}\,(H-z)^{-1},$$
we define the (resolvent regulated) Witten index of $(H,P,Q)$, if the limit exists, by
$$W_R = \lim_{\substack{z\to 0\\ |\Re z|\le C_0|\Im z|}}\Delta_z(H) \quad\text{for some } C_0 > 0.$$

Similarly, we define the (resolvent regulated) axial anomaly by
$$A_R = -\lim_{\substack{z\to\infty\\ |\Re z|\le C_1|\Im z|}}\Delta_z(H) \quad\text{for some } C_1 > 0.$$

We have

Theorem 8.1.1. Let $Q$ be a supercharge on $\mathcal{H}$. If $\exp(-tQ^2)$ is trace class for some $t>0$, then $Q$ is Fredholm and
$$\mathrm{ind}_t(Q)\ (\text{independent of } t) = \mathrm{ind}_F(Q) = \mathrm{ind}_s(H).$$

If $(Q^2-z)^{-1}$ is trace class for some $z\in\mathbb{C}\setminus[0,\infty)$, then $Q$ is Fredholm and
$$\mathrm{ind}_z(Q)\ (\text{independent of } z) = \mathrm{ind}_F(Q) = \mathrm{ind}_s(H).$$
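The $t$-independence in Theorem 8.1.1 is visible in the standard oscillator pair $\mathrm{spec}(A^*A) = \{0,2,4,\dots\}$, $\mathrm{spec}(AA^*) = \{2,4,6,\dots\}$ (this is the $\phi(q) = q$ case of Example 2 below): the supertrace telescopes to $1$ for every $t > 0$. A quick check with truncated spectral sums (the truncation level is my own choice):

```python
import math

def delta_t(t, nmax=5000):
    # str exp(-tH) computed from the two spectra
    tr_b = sum(math.exp(-2*n*t) for n in range(nmax))        # spec(A*A) = {2n}
    tr_f = sum(math.exp(-(2*n + 2)*t) for n in range(nmax))  # spec(AA*) = {2n+2}
    return tr_b - tr_f

vals = [delta_t(t) for t in (0.5, 1.0, 3.0)]
```

The sums telescope exactly, so each value equals $1 - e^{-2\,n_{\max}t} \approx 1$: the Witten index is $1$, independently of $t$.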


8.1.3. Examples of SUSYQM.

Example 1. [Witten [143]] Let $(M,g)$, $g = \sum_{i,j=1}^d g_{ij}(q)dq^idq^j$, be a $d$-dimensional smooth Riemannian manifold. We put $\Lambda(M) = \cup_{k=0}^d\Lambda^k(M)$ or $\Lambda_0(M) = \cup_{k=0}^d\Lambda_0^k(M)$, where
$$\Lambda^k(M) = \Big\{\omega = \sum_{1\le i_1<\cdots<i_k\le d}\omega_{i_1\cdots i_k}(q)\,dq^{i_1}\wedge\cdots\wedge dq^{i_k}\ \Big|\ \omega_{i_1\cdots i_k}(q)\in C^\infty(M:\mathbb{C})\Big\}.$$

Example 1′. [Witten's deformed Laplacian [143]] For any real-valued function $\phi$ on $M$, we put
$$d_\lambda = e^{-\lambda\phi}\,d\,e^{\lambda\phi}, \qquad d_\lambda^* = e^{\lambda\phi}\,d^*\,e^{-\lambda\phi},$$
where $\lambda$ is a real parameter. We have $d_\lambda^2 = 0 = d_\lambda^{*2}$. Put
$$Q_{1\lambda} = d_\lambda + d_\lambda^*, \qquad Q_{2\lambda} = i(d_\lambda - d_\lambda^*), \qquad H_\lambda = d_\lambda d_\lambda^* + d_\lambda^*d_\lambda.$$
Defining $P$ as before, we have the supersymmetric system $(H_\lambda,Q_{\alpha\lambda},P)$ on $\mathcal{H}$ for each $\alpha = 1,2$.

Now, we calculate $H_\lambda$ more explicitly:
$$H_\lambda = dd^* + d^*d + \lambda^2(d\phi)^2 + \lambda\sum_{i,j=1}^d\frac{D^2\phi}{Dq^iDq^j}\,[a^{i*},a^j].$$
Here, the annihilation and creation operators $a^j$ and $a^{j*}$, respectively, are defined as follows: For any $0\le\ell\le d$ and $q\in M$,
$$a_q^{i*}\,dq^{j_1}\wedge\cdots\wedge dq^{j_\ell} = dq^i\wedge dq^{j_1}\wedge\cdots\wedge dq^{j_\ell},$$
$$a_q^i\,dq^{j_1}\wedge\cdots\wedge dq^{j_\ell} = \sum_{k=1}^\ell(-1)^k g^{ij_k}(q)\,dq^{j_1}\wedge\cdots\wedge dq^{j_{k-1}}\wedge dq^{j_{k+1}}\wedge\cdots\wedge dq^{j_\ell}.$$
Then, these give mappings $\Lambda(T^*M)\to\Lambda(T^*M)$, and we get
$$[a_q^i,a_q^j]_+ = 0, \qquad [a_q^i,a_q^{j*}]_+ = g^{ij}(q), \qquad [a_q^{i*},a_q^{j*}]_+ = 0.$$
Moreover,
$$(d\phi)^2 = g^{ij}\frac{\partial\phi}{\partial q^i}\frac{\partial\phi}{\partial q^j}, \qquad \frac{D^2\phi}{Dq^iDq^j} = \nabla_i\nabla_j\phi, \qquad \frac{D\psi^i}{Dt} = \frac{d\psi^i}{dt} - \Gamma^i_{jk}\dot q^j\psi^k.$$
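The anticommutation relations above can be realized by explicit matrices on the exterior algebra. For two modes and the flat case $g^{ij} = \delta^{ij}$ (a toy specialization of my own), the standard Jordan-Wigner construction with a parity factor does the job:

```python
import numpy as np

a_ = np.array([[0., 1.], [0., 0.]])   # one-mode annihilation on span{1, dq}
I2 = np.eye(2)
Pz = np.diag([1., -1.])               # parity factor enforcing anticommutation

a1, a2 = np.kron(a_, I2), np.kron(Pz, a_)   # two fermionic modes on C^4
anti = lambda X, Y: X @ Y + Y @ X

car_ok = (np.allclose(anti(a1, a2), 0)            # [a^1, a^2]_+ = 0
          and np.allclose(anti(a1, a2.T), 0)      # [a^1, a^{2*}]_+ = 0
          and np.allclose(anti(a1, a1.T), np.eye(4))   # [a^1, a^{1*}]_+ = I
          and np.allclose(anti(a2, a2.T), np.eye(4))   # [a^2, a^{2*}]_+ = I
          and np.allclose(anti(a1, a1), 0))       # a^1 squares to zero
```

For a general metric, the relation $[a^i, a^{j*}]_+ = g^{ij}(q)$ simply rescales the pairing at each point $q$.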

[Notation]: For $\omega = \sum_{1\le i_1<\cdots<i_k\le d}\omega_{i_1\cdots i_k}(q)\,dq^{i_1}\wedge\cdots\wedge dq^{i_k}$, we put
$$(\nabla_j\omega)_{i_1\cdots i_k} = \frac{\partial\omega_{i_1\cdots i_k}}{\partial q^j} - \sum_{r,l}\Gamma^l_{ji_r}\,\omega_{i_1\cdots i_{r-1}li_{r+1}\cdots i_k}.$$
Then, we have
$$\nabla_j = \partial_{q^j} - \sum_{k,l,m}\Gamma^k_{jl}\,g_{km}\,a^{l*}a^m, \qquad d = \sum_{l=1}^d a^{l*}\nabla_l, \qquad d^* = -\sum_{l=1}^d a^l\nabla_l,$$
$$d^*\omega = \sum_{j,r}(-1)^{r-1}g^{ji_r}\Big[\frac{\partial\omega_{i_1\cdots i_k}(q)}{\partial q^j} - \sum_{l,s}\Gamma^l_{ji_s}\,\omega_{i_1\cdots i_{s-1}li_{s+1}\cdots i_k}(q)\Big]\,dq^{i_1}\wedge\cdots\wedge dq^{i_{r-1}}\wedge dq^{i_{r+1}}\wedge\cdots\wedge dq^{i_k}.$$

The most important thing is to consider the operator $H_\lambda$ as the one quantized from the Lagrangian
$$\mathcal{L}_\lambda = \int dt\,\Big(\frac12 g_{ij}\frac{dq^i}{dt}\frac{dq^j}{dt} + i\bar\psi_i\frac{D\psi^i}{Dt} + \frac14 R_{ijkl}\bar\psi^i\psi^k\bar\psi^j\psi^l - \lambda^2g^{ij}\frac{\partial\phi}{\partial q^i}\frac{\partial\phi}{\partial q^j} - \lambda\frac{D^2\phi}{Dq^iDq^j}\bar\psi^i\psi^j\Big).$$
Here, we used the summation convention, and $\psi^i$ and $\bar\psi^i$ are anti-commuting fields tangent to $M$, which become the creation and annihilation operators after quantization. After representing the solution of
$$\lambda^{-1}\frac{\partial}{\partial t}u(t) = \lambda^{-2}H_\lambda u(t),$$
and applying the SUSYQM structure, he concludes that the principal term above when $\lambda\to\infty$ ($\lambda^{-1}\sim\hbar$) is governed by the instantons or tunneling paths corresponding to $\mathcal{L}_\lambda$. That is, those paths are defined by the Lagrangian below:
$$\bar{\mathcal{L}}_\lambda = \frac12\int dt\,\Big(g_{ij}\frac{dq^i}{dt}\frac{dq^j}{dt} + \lambda^2g^{ij}\frac{\partial\phi}{\partial q^i}\frac{\partial\phi}{\partial q^j}\Big) = \frac12\int dt\,\Big|\frac{dq^i}{dt}\pm\lambda g^{ij}\frac{\partial\phi}{\partial q^j}\Big|^2 \mp\lambda\int dt\,\frac{d\phi}{dt}.$$

Using these paths with the physicists' steepest descent method, he may calculate the Witten index.

Example 2. (Deift [31]; p.123 of Cycon et al. [28]). Let $\mathcal{H} = L^2(\mathbb{R})\otimes\mathbb{C}^2 = L^2(\mathbb{R}:\mathbb{C}^2) = L^2(\mathbb{R})^2$, and let $\phi$ be a polynomial in $q$. Set $A = d/dq + \phi(q)$ and $A^* = -d/dq + \phi(q)$ with domains
$$D(A) = D(A^*) = \{u\in H^1(\mathbb{R}) \mid \phi u\in L^2(\mathbb{R})\},$$
$$Q = \begin{pmatrix}0 & -\frac{d}{dq}+\phi\\ \frac{d}{dq}+\phi & 0\end{pmatrix}, \qquad P = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}.$$
Then
$$D(AA^*) = D(A^*A) = \{u\in L^2(\mathbb{R}) \mid u'',\ \phi^2u,\ \phi'u\in L^2(\mathbb{R})\},$$
$$A^*A = -\frac{d^2}{dq^2} + \phi^2(q) - \phi'(q), \qquad AA^* = -\frac{d^2}{dq^2} + \phi^2(q) + \phi'(q),$$
and
$$H = Q^2 = \begin{pmatrix}-\frac{d^2}{dq^2}+\phi^2(q)-\phi'(q) & 0\\ 0 & -\frac{d^2}{dq^2}+\phi^2(q)+\phi'(q)\end{pmatrix}.$$
This $(H,Q,P)$ forms a SUSYQM in $\mathcal{H}$.

(i) Especially for $\phi(q) = q$, we have
$$\dim(\mathrm{Ker}\,A) = 1, \qquad \dim(\mathrm{Ker}\,A^*) = 0, \qquad \mathrm{ind}_F(A) = 1 = \mathrm{ind}_s(H).$$
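This count can be reproduced numerically: for $\phi(q) = q$, $A^*A = -d^2/dq^2 + q^2 - 1$ has a single zero mode (the Gaussian $e^{-q^2/2}$), while $AA^* = -d^2/dq^2 + q^2 + 1$ has spectrum starting near $2$. A finite-difference sketch, where the grid size and eigenvalue thresholds are my own choices:

```python
import numpy as np

N, L = 400, 8.0
q = np.linspace(-L, L, N)
h = q[1] - q[0]

# tridiagonal -d^2/dq^2 with Dirichlet boundary conditions
D2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N-1), 1)
      + np.diag(np.ones(N-1), -1)) / h**2

Hb = -D2 + np.diag(q**2 - 1)   # A*A for phi(q) = q
Hf = -D2 + np.diag(q**2 + 1)   # AA*

eb = np.linalg.eigvalsh(Hb)
ef = np.linalg.eigvalsh(Hf)

dim_ker_A = int(np.sum(np.abs(eb) < 0.5))       # one eigenvalue near 0
dim_ker_Astar = int(np.sum(np.abs(ef) < 0.5))   # none: spectrum starts near 2
index = dim_ker_A - dim_ker_Astar
```

The computed `index` is $1$, matching $\mathrm{ind}_F(A) = \mathrm{ind}_s(H)$ above.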

(ii) (Bollé et al. [15]). For real-valued $\phi$ with $\phi,\phi'\in L^\infty(\mathbb{R})$, we assume that
$$\lim_{q\to\pm\infty}\phi(q) = \phi_\pm\in\mathbb{R}, \qquad \phi_-^2\le\phi_+^2,$$
$$\int_{\mathbb{R}} dq\,(1+|q|^2)|\phi'(q)| < \infty \quad\text{and}\quad \int_{\mathbb{R}} dq\,(1+|q|^2)|\phi(q)-\phi_\pm| < \infty.$$
Then they assert that, for $z\in\mathbb{C}\setminus[0,\infty)$,
$$\Delta_z(H) = \frac12\big[\phi_+(\phi_+^2-z)^{-1/2} - \phi_-(\phi_-^2-z)^{-1/2}\big], \qquad W_R = \frac12\big[\mathrm{sign}(\phi_+) - \mathrm{sign}(\phi_-)\big] \quad\text{and}\quad A_R = 0.$$

Remark 8.1.3. Especially, physicists obtain the above result by calculating the quantity
$$\Delta_t(H) = \int dqd\psi d\bar\psi\,\Big[\int_{\{t\text{-periodic}\}} [dq][d\psi][d\bar\psi]\,e^{-\int_0^t ds\,L(q(s),\dot q(s),\psi(s),\bar\psi(s))}\Big],$$
but it seems difficult to make their procedure mathematically rigorous.

8.2. Wick rotation and PIM

Though arguments so far in this chapter mainly treat (abstract) heat equation, but we need special trick if we use Fourier transformation within path-integral method. Because, if we naively proceed as before, we may have 1 v(t, q¯)= dp D1/2(t, q,p¯ )eS(t,q,p¯ )vˆ(p) √2π Z 150 8. SUSYQM AND ITS APPLICATIONS and by Fourier inversion formula, we need to have at t = 0,

$$D^{1/2}(t,\bar q,p)\big|_{t=0} = 1, \qquad e^{S(t,\bar q,p)}\big|_{t=0} = e^{i\bar q p}.$$

The second requirement above seems strange because $S(t,\bar q,p) \in \mathbb R$; therefore we need to reconsider the procedure from scratch.

8.2.1. Quantization and Path-Integral Method. From a Lagrangian m (8.2.1) L(q, q˙)= q˙2 + A(q)q ˙ V (q) C∞(T R), 2 − ∈ by Legendre transformation, we get a Hamiltonian function 1 (8.2.2) H(q,p)= (p A(q))2 + V (q) C∞(T ∗R) 2m − ∈ whose hamilton flow is defined by

q˙ = Hp, q(0) q (8.2.3) with = . p˙ = H , p(0) p ( − q     Quantum mechanics. Substituting i~∂ into p in (8.2.2), we “quantize” this as − q ∂u(t,q) i~ = H(q, i~∂ )u(t,q), (8.2.4) ∂t − q   u(t,q) t=0 = u(q), with  1 ∂ 2 (8.2.5) H(q, i~∂ )= i~ A(q) + V (q). − q 2m − ∂q −   8.2.2. LPIM and HPIM. Feynman feels not good at this “quantization” because Bohr’s corresponding principle is not seen transparently. Therefore, he introduces LPI=Lagrangian Path- Integral as

L i~−1 R t ds L(q(s),q˙(s)) K (t, q,q¯ )= dF q e 0 , L (8.2.6) ZCt,q,q¯ with L = q( ) ([0,t] : R) q(0) = q, q(t) =q ¯ Ct,q,q¯ { · ∈ P | } which gives the solution of (8.2.4) by

u(t,q)= dq KL(t,q,q)u(q). Z How about HPI=Hamiltonian Path-Integral?

H i~−1 R t ds[˙q(s)p(s)−H(q(s),p(s))] K (t, q,q¯ )= dF q dF p e 0 , H (8.2.7) ZCt,q,q¯ with H = (q( ),p( )) ([0,t] : T ∗R) q(0) = q,q(t) =q,p ¯ ( ) : arbitrary . Ct,q,q¯ { · · ∈ P | · } Though Hamilton mechanics enlarges the scope of Lagrange mechanics, is such “extension” truely necessary for PI? From above representation, check at least when t 0 whether → (8.2.8) lim dq KH (t, q,q¯ )u(q)= u(¯q). t→0 Z 8.2. WICK ROTATION AND PIM 151

In other words,
$$\int_{C^H_{t,\bar q,q}} d_F q\, d_F p\; e^{i\hbar^{-1}\int_0^t ds\,[\dot q(s)p(s) - H(q(s),p(s))]} = \delta(\bar q - q)?$$
But this is doubtful, because if the paths $q(\cdot)$ or $p(\cdot)$ are smooth and $t \to 0$, then seemingly
$$\int_0^t ds\,[\dot q(s)p(s) - H(q(s),p(s))] \to 0?$$
Remark 8.2.1. Therefore, instead of (8.2.7) above, Kumano-go and Fujiwara [94] put

$$\int_{[0,t)} [\,p(s)\,dq(s) - H(q(s),p(s))\,ds\,]$$
to gain the "jump" $p(0)(q(t)-q(0))$, using the Lebesgue-Stieltjes integral for left-continuous $q(\cdot)$ on $(0,t]$ and right-continuous $p(\cdot)$ on $[0,t)$. That is, by definition,
$$\int_{[0,t)} p(s)\,dq(s) = \lim_{N\to\infty}\sum_{j=1}^N p(\theta_j)\,[q(t_j)-q(t_{j-1})] \quad\text{where } \theta_j \in [t_{j-1},t_j) \subset [0,t)$$
$$= \lim_{N\to\infty}\sum_{j=1}^N \Big[\,p(t_{j-1}+0)\,(q(t_j)-q(t_{j-1})) + \int_{t_{j-1}}^{t_j} ds\, p(s)\,\dot q(s)\Big].$$
Though it seems necessary to include discontinuous paths in $\mathcal P$, I am not sure whether such a choice of paths is adequate. This will be touched on in the last section.
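The left-endpoint Stieltjes sum above can be sketched numerically. In the following minimal check (illustrative paths of my own choosing, not taken from the text), a smooth pair $(q,p)$ reproduces $\int p(s)\dot q(s)\,ds$, while a path $q$ with a jump at $s = 0{+}$ contributes exactly the jump term $p(0)(q(0{+}) - q(0))$:

```python
import math

def stieltjes(p, q, t, N):
    """Left-endpoint Stieltjes sum  sum_j p(t_{j-1}) [q(t_j) - q(t_{j-1})]  on [0, t]."""
    h = t / N
    return sum(p(j * h) * (q((j + 1) * h) - q(j * h)) for j in range(N))

# Smooth paths: the sum converges to the ordinary integral of p(s) q'(s).
smooth = stieltjes(math.cos, math.sin, math.pi / 2, 20000)   # -> integral of cos^2 = pi/4
# Left-continuous q with a jump at s = 0+: only the j = 0 term survives,
# giving the "jump" contribution p(0) (q(0+) - q(0)) = 2.0.
jump = stieltjes(lambda s: 2.0 + s, lambda s: 0.0 if s == 0 else 1.0, 1.0, 1000)
```

This illustrates why the choice of one-sided continuity matters: for smooth paths the jump terms vanish in the limit, and only discontinuous paths feel them.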

By the way, since there exists no rigorous Feynman measure, one may define a "path-integral-like object" rather freely. I declare my opinion concerning the Path-Integral Method (not the Path-Integral).

My dogmatic opinion 1 : Path-Integral Method, if it is the one related to quantum mechanics, should be the representation of solution which exhibits the Bohr correspondence transparently!

My dogmatic opinion 2 : Broken-line approximation is a sort of Trotter-Kato formula, far from the PIM idea! Schematically,

$$e^{i\hbar^{-1}\int_0^t ds\,L(\gamma(s),\dot\gamma(s))}\, d_F\gamma \;\not\sim\; e^{i\hbar^{-1}\int_0^t ds\,V(\gamma(s))}\, e^{i\hbar^{-1}\int_0^t ds\,\frac12\dot\gamma^2(s)}\, d_F\gamma.$$
But a formulation applying this procedure seems meaningful for constructing a parametrix or a fundamental solution of an already known PDE. Feynman's procedure is not to derive a PDE but rather to offer a black box yielding physical explanations of experimental facts.

Claim 8.2.1. Under these dogmas, we explain why we need HPIM when the order of the spatial derivatives is one. This point of view was already explained in §2 of Chapter 1 concerning the IVP for (1.2.11).

Problem 8.2.1. 1. Reinterpret the papers of N. Kumano-go [92] and J. Le Rousseau [98] from the above point of view. 2. To find the representation of the solution of the Dirac equation under my dogma, we need HPIM with superanalysis.

$$i\hbar\frac{\partial}{\partial t}\psi(t,q) = \Big[c\,\alpha_j\Big(\frac{\hbar}{i}\frac{\partial}{\partial q_j} - \frac{e}{c}A_j(t,q)\Big) + mc^2\beta\Big]\psi(t,q).$$

8.2.3. Relation between quantum and heat. 152 8. SUSYQM AND ITS APPLICATIONS

8.2.3.1. Purely imaginary $\hbar$. It seems well known that for a solution $u(\hbar : t,q)$ of the IVP for the Schrödinger equation (8.2.4), bringing $\hbar$ to $-i$, that is, putting $v(t,q) = u(-i : t,q)$, we have
$$(8.2.9)\qquad \frac{\partial}{\partial t}v(t,q) = \Big[\frac{(\partial_q + A(q))^2}{2m} + V(q)\Big]v(t,q).$$
We pay something in turning Schrödinger into heat: in fact, comparing (8.2.9) with (8.2.14) below, we need to take $+A$ and $+V$ instead of $-A$ and $-V$! Going from heat to Schrödinger we pay even more, because in general an analytic semigroup is not extendable to imaginary time!

8.2.3.2. Purely imaginary mass. As another way of connecting Schrödinger to heat, E. Nelson [106] proposed to complexify the mass $m$ to an imaginary one $i\mu$, to have
$$(8.2.10)\qquad \hbar\frac{\partial}{\partial t}v(t,q) = \Big[-\frac{(-i\hbar\partial_q - A(q))^2}{2\mu} - iV(q)\Big]v(t,q) = \Big[\frac{(\hbar\partial_q - iA(q))^2}{2\mu} - iV(q)\Big]v(t,q)$$
with $\hbar = 1$ and $A = 0$.

8.2.3.3. Wick rotation [purely imaginary time]. Though $\hbar$ and $m$ may be considered as parameters, time $t$ is a variable on which the solution depends. Therefore we need some care in making time imaginary.

For a solution $u(t,q)$ of (8.2.4), taking $\theta \in [0,\pi/2]$ and $\tau \in \mathbb R$, we put
$$(8.2.11)\qquad w(\tau,q) = u(t,q)\big|_{t = e^{-i\theta}\tau} = u(e^{-i\theta}\tau, q).$$

Then it satisfies at least formally

$$(8.2.12)\qquad i\hbar\frac{\partial w(\tau,q)}{\partial\tau} = e^{-i\theta}H(q,-i\hbar\partial_q)\,w(\tau,q), \qquad w(\tau,q)\big|_{\tau=0} = u(q).$$
This implies that for initial data $w(0,q) = u(q)$,
$$(8.2.13)\qquad i\hbar\frac{\partial w(\tau,q)}{\partial\tau} = H(q,-i\hbar\partial_q)\,w(\tau,q) \quad (\theta = 0), \qquad \hbar\frac{\partial w(\tau,q)}{\partial\tau} = -H(q,-i\hbar\partial_q)\,w(\tau,q) \quad (\theta = \pi/2).$$
For a heat-type equation, we "assume in general" that all coefficients are real-valued and that it satisfies
$$(8.2.14)\qquad \frac{\partial}{\partial t}v(t,q) = \Big[\frac{(\partial_q - A(q))^2}{2m} - V(q)\Big]v(t,q),$$
but by Wick rotation from $u(t,q)$ of (8.2.4), we get
$$(8.2.15)\qquad \hbar\frac{\partial}{\partial t}v(t,q) = \Big[-\frac{(-i\hbar\partial_q - A(q))^2}{2m} - V(q)\Big]v(t,q) = \Big[\frac{(\hbar\partial_q - iA(q))^2}{2m} - V(q)\Big]v(t,q).$$
I feel a discrepancy! For even if $\hbar = 1$, we must change $A \to -iA$!
Therefore, starting from (8.2.2), after quantization we get (8.2.5), and from it we obtain three slightly different heat-type equations.
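The Wick rotation (8.2.11) can be checked on the explicitly solvable free case. In the sketch below (an illustrative special case of my own: $\hbar = m = 1$, $A = V = 0$, initial datum $e^{-q^2/2}$), substituting $t = e^{-i\pi/2}\tau = -i\tau$ into the free Schrödinger evolution yields exactly the real-valued heat evolution, as (8.2.13) with $\theta = \pi/2$ predicts:

```python
import cmath

def u_free(t, q):
    """Free Schrodinger evolution (hbar = m = 1) of u(0,q) = exp(-q^2/2):
    u(t,q) = (1 + i t)^{-1/2} exp(-q^2 / (2 (1 + i t)))."""
    a = 1.0 + 1j * t
    return cmath.exp(-q * q / (2.0 * a)) / cmath.sqrt(a)

def heat_free(tau, q):
    """Heat evolution of the same Gaussian: the width grows as 1 + tau."""
    a = 1.0 + tau
    return cmath.exp(-q * q / (2.0 * a)) / cmath.sqrt(a)

tau, q = 0.7, 1.3
w = u_free(-1j * tau, q)   # Wick rotation: t = e^{-i pi/2} tau = -i tau
```

Here the rotation costs nothing because $A = 0$; the discrepancy noted above ($A \to -iA$) only appears once a vector potential is present.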

Problem 8.2.2. From the right-hand operator in (8.2.12), we define its symbol

−1 −1 (8.2.16) H˜ (q,p)= e−iθH(q,p)= e−i~ qp[e−iθH(q, i~∂ )]ei~ qp . − q ~=0 ˜ Can we quantize H(q,p), so I called, by PIM? And ask whether a functor-like procedure PIM (Lagrangian or Hamiltonian) and a restriction of θ to θ = 0 or θ = π/2 are commutative under what condition? 8.2. WICK ROTATION AND PIM 153

A tentative conclusion. Applying the Lagrangian PIM, the answers are affirmative.

But for θ = π/2 with the Hamiltonian PIM, something curious occurs: it seems impossible to use the Fourier inversion formula directly, and even with a device we may obtain a heat-type equation with non-real coefficients when A(q) exists.

Question. Is it impossible in general to apply HPIM to heat-type equations? In particular, is it possible to include the vector potential in the Riemann metric by considering A(q) as stemming from the connection?

8.2.4. H-J equation.

8.2.4.1. Classical-flow. Putting H˜ (q,p)= e−iθH(q,p), we have ˜ ∂H(q,p) −iθ 1 q˙j = = e (pj Aj(q)), ∂pj m − qj(0) = q (CM)  with j .  ˜ pj(0) = p  ∂H(q,p) −iθ 1 j!  p˙j = = e (pj Aj(q))∂qj Aj(q) ∂qj V (q) − ∂qj m − −    8.2.4.2. Action integral(L). For the solution (q(σ, q,p),p(σ, q,p)) of (CM), we put τ (8.2.17) S˜ (τ,q,p)= dσ[q ˙(σ, q,p)p(σ, q,p) H˜ (q(σ, q,p),p(σ, q,p))]. 0 − Z0 Solving as p = ξ (τ, q,q¯ ) fromq ¯ = q (τ,q,p), by inverse function theorem, we define j j j j

˜L ˜L (8.2.18) S (τ, q,q¯ )= S0 (τ,q,p) . p=ξ(τ,q,q¯ )

Then, this satisfies
$$(8.2.19)\qquad \tilde S^L_\tau + \tilde H(\bar q, \tilde S^L_{\bar q}) = 0 \quad\text{with}\quad \lim_{\tau\to 0} 2\tau\,\tilde S^L(\tau,\bar q,q) = |\bar q - q|^2.$$

8.2.4.3. Continuity equation (L). For $\tilde D^L(\tau,\bar q,q) = \det\Big(-\dfrac{\partial^2 \tilde S^L}{\partial \bar q\,\partial q}\Big)$, we have
$$(8.2.20)\qquad \tilde D^L_\tau + \partial_{\bar q}\big(\tilde D^L\, \tilde H_p(\bar q, \tilde S^L_{\bar q}(\tau,\bar q,q))\big) = 0 \quad\text{with}\quad \lim_{\tau\to 0}\tau\,\tilde D^L(\tau,\bar q,q) = 1.$$
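For the free particle ($H(q,p) = p^2/2$, $A = V = 0$, $\theta = 0$ — an illustrative special case, not fixed in the text) one has $\tilde S^L(\tau,\bar q,q) = (\bar q - q)^2/(2\tau)$ and $\tilde D^L(\tau,\bar q,q) = 1/\tau$; a finite-difference sketch confirming the Hamilton-Jacobi equation (8.2.19) and the continuity equation (8.2.20):

```python
def S(t, qb, q):  # free-particle action generator
    return (qb - q) ** 2 / (2.0 * t)

def D(t, qb, q):  # van Vleck-type density: det(-S_{q,qb}) = 1/t
    return 1.0 / t

h = 1e-5
t, qb, q = 0.7, 1.2, 0.3
S_t  = (S(t + h, qb, q) - S(t - h, qb, q)) / (2 * h)
S_qb = (S(t, qb + h, q) - S(t, qb - h, q)) / (2 * h)
hj = S_t + 0.5 * S_qb ** 2                     # Hamilton-Jacobi residual for H = p^2/2

def flux(t, qb, q):                            # D * H_p(qb, S_qb), with H_p = p
    return D(t, qb, q) * (qb - q) / t

D_t  = (D(t + h, qb, q) - D(t - h, qb, q)) / (2 * h)
cont = D_t + (flux(t, qb + h, q) - flux(t, qb - h, q)) / (2 * h)   # continuity residual
```

Both residuals vanish to discretization accuracy, and $\tau\tilde D^L = 1$ holds exactly here.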

8.2.4.4. Action integral (H). Getting q = x (τ, q,p¯ ) as before, we put, j j

H (8.2.21) S˜ (τ, q,p¯ ) = [qp + S˜0(τ,q,p)] q=x(τ,q,p¯ )

which satisfies (8.2.22) S˜H + H˜ (¯q, S˜H )=0 with S˜H (0, q,p¯ )= q¯ p . t q¯ h | i Continuity equation (H). Putting ∂2S˜H D˜ H (τ, q,p¯ ) = det , ∂q∂p¯   we have ˜ H ˜ H ˜ ˜ ˜ H Dτ + ∂q¯(D Hp(¯q, Sq¯)) = 0 with D (τ, q,p¯ ) τ=0 = 1. Remark: No singularity at τ = 0!

154 8. SUSYQM AND ITS APPLICATIONS

8.2.5. Example: Harmonic Oscillator type.

8.2.5.1. Lagrangian and Hamiltonian functions. From Wick rotated Schr¨odinger equation, we have the symbol (8.2.16) H˜ (q,p) from which we have the Hamilton flow: Putting β2 = a2 + m2ω2, −iθ −iθ cos (e ωσ)=Cσ, sin(e ωσ)=Sσ, we have Hamilton flow 1 q(σ)= [(mωCσ aSσ)q +Sσp], 1 mω − withq ¯ = [(mωC aS )q +S p].  1 mω τ − τ τ  p(σ)= [ β2S q + (mωC + aS )p].  mω − σ σ σ We put  τ S˜ (τ,q,p)= dσ[q ˙(σ)p(σ) H˜ (q(σ),p(σ))] 0 − Z0 S = τ β2(mωC aS )q2 + (mωC + aS )p2 2β2S qp . 2m2ω2 − τ − τ τ τ − τ   8.2.5.2. Wick rotated Harmonic Oscillator (L). Fromq ¯ = q(τ,q,p), getting

mωq¯ (mωCτ aSτ )q p = ξ(τ, q,q¯ )= − − , Sτ we define L S˜ (τ, q,q¯ )= S˜0(τ,q,ξ(τ, q,q¯ )) mω(cos ωτ (¯q2 + q2) 2¯qq) a − + (¯q2 q2), (θ = 0), =  2sin ωτ 2 −  imω[cosh ωτ (¯q2 + q2) 2¯qq] a  − + (¯q2 q2), (θ = π/2), 2sinh ωτ 2 − which satisfies   m(¯q q)2 ˜L ˜ ˜L ˜L Sτ + H(¯q, Sq¯ ) = 0 with lim τS (τ, q,q¯ ) − = 0. τ→0 − 2   Defining mω 2 L , (θ = 0), ∂ S˜ (τ, q,q¯ ) mω sin ωτ D˜ L(τ, q,q¯ )= = = − ∂q∂q¯ S  imω τ  , (θ = π/2),  sinh ωτ we have  ˜ L ˜ L ˜ ˜ L ˜ ˜ ˜L Dτ + ∂q¯(D Hp) = 0 with lim τD (τ, q,q¯ )= m, Hp = Hp(¯q, Sq¯ ). τ→0 Finally, we put f f ˜ L 1 ˜L i~−1S˜L(τ,q,q¯ ) ˜ L [Uτ u](¯q)= dq A (τ, q,q¯ )e u(q)= dq K (τ, q,q¯ )u(q). √2πi~ R R Z Z 1 −1 ˜L K˜ L(τ, q,q¯ )= A˜L(τ, q,q¯ )ei~ S (τ,q,q¯ ). √2πi~ Then, not only ∂ i~ U˜ L u = [H˜ˆ (q, i~∂ )]u. (Feynman quantization), ∂τ τ − q τ=0 but also ∂ i~ U˜ L u = [H˜ˆ (q, i~∂ )]U˜ L u. ∂τ τ − q τ 8.2. WICK ROTATION AND PIM 155
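At θ = π/2 with a = 0 (and ħ = m = ω = 1, illustrative normalizations of mine), the kernel $\tilde K^L$ above reduces to the classical Mehler heat kernel. A quadrature sketch checking that it propagates the ground state $e^{-q^2/2}$ with the expected decay $e^{-\tau/2}$:

```python
import math

def mehler(tau, qb, q):
    """Imaginary-time harmonic-oscillator kernel (m = omega = 1)."""
    s, c = math.sinh(tau), math.cosh(tau)
    return math.exp(-(c * (qb * qb + q * q) - 2.0 * qb * q) / (2.0 * s)) \
        / math.sqrt(2.0 * math.pi * s)

def apply_kernel(tau, qb, v, L=8.0, N=4000):
    """Quadrature of  integral dq K(tau, qb, q) v(q)  over [-L, L]."""
    h = 2.0 * L / N
    return sum(mehler(tau, qb, -L + i * h) * v(-L + i * h) for i in range(N + 1)) * h

tau, qb = 0.5, 0.4
out = apply_kernel(tau, qb, lambda q: math.exp(-q * q / 2.0))
expected = math.exp(-tau / 2.0) * math.exp(-qb * qb / 2.0)  # ground-state eigenvalue 1/2
```

The decay rate $e^{-\tau/2}$ is exactly the lowest eigenvalue of $-\frac12\partial_q^2 + \frac12 q^2$, which is what makes this kernel useful for Witten-index computations.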

8.2.5.3. Wick rotated Harmonic Oscillator(H). Analogously defining

mωq¯ Sτ p q = x(τ, q,p¯ )= − , mωC aS τ − τ we get

H S˜ (τ, q,p¯ ) = [qp + S˜0(τ,q,p)] q=x(τ,q,p¯ )

sin ωτ(β 2q¯2 + p2) mω + qp¯ , (θ = 0), − 2(mω cos ωτ a sin ωτ) mω cos ωτ a sin ωτ =  − −  i sinh ωτ(β2q¯2 + p2) mω  + qp¯ , (θ = π/2), 2[mω cosh ωτ + ia sinh ωτ] mω cosh ωτ + ia sinh ωτ which satisfies   ˜H ˜ ˜H ˜H Sτ + H(¯q, Sq¯ ) = 0 with lim S (τ, q,p¯ ) =qp ¯ . τ→0 Defining mω ∂2S˜H (τ, q,p¯ ) , (θ = 0), ˜ H mω mω cos ωτ a sin ωτ D (τ, q,p¯ )= = =  mω− ∂q∂p¯ mωCτ aSτ , (θ = π/2), −  mω cosh ωτ + ia sinh ωτ we have  ˜ H ˜ H ˜ ˜ H ˜ ˜ H Dτ + ∂q¯(D Hp) = 0 with lim D (τ, q,p¯ )=1 where Hp = Hp(¯q, Sq¯ ). τ→0 Then, we put f f ˜ H 1 ˜H i~−1S˜H (τ,q,p¯ ) [Uτ u](¯q)= dp A (τ, q,p¯ )e uˆ(p) √2π~ R Z 1 −1 ˜H = dp dq A˜H (τ, q,p¯ ) ei~ (S (τ,q,p¯ )−qp)u(q)= dp K˜ H (τ, q,p¯ )u ˆ(p), 2π~ 2 ZZR ZR where 1 mω −1 ˜H K˜ H (τ, q,p¯ )= ei~ S (τ,q,p¯ ). √2π~ mωCτ aSτ r − Simply we have 1 −1 dp K˜ H (τ, q,p¯ )e−i~ qp = K˜ L(τ, q,q¯ ). 2π~ Z 8.2.6. Heat type. Try to get the PIM-like solution for (8.2.14), (8.2.15), (8.2.9) or (8.2.10) from (8.2.2) or else by HPIM!

Problem 8.2.3. Since the symbol (8.2.2) is obtained by substituting $p$ for $\partial_q$ in (8.2.14), we ask: what is the natural definition of the symbol for (8.2.14)?

8.2.6.1. LPIM works from (8.2.2). Solving Hamilton equation corresponding to (8.2.2) with q(s)= q(s,q,p) and p(s)= p(s,q,p), we put t φL(t, q,q¯ )= ds[q ˙(s)p(s) H(q(s),p(s))] − Z0 p=ξ(t,q,q¯ )

mω[cos ωt (¯q2 + q2) 2¯qq] a (¯q2 q2) (8.2.23) = − + − 2sin ωt 2 ∂2φL(t, q,q¯ ) mω and αL(t, q,q¯ )= = . − ∂q∂q¯ sin ωt s r 156 8. SUSYQM AND ITS APPLICATIONS

Putting operators as

1 L (8.2.24) [T Lv](¯q)= dq αL(t, q,q¯ )e−φ (t,q,q¯ )v(q) t √2π Z then 2 L 1 m m(¯q q) lim[Tt v](¯q) = lim dq exp − v(q)= v(¯q), t→0 t→0 √2π t {− 2t } Z r and ∂ 1 mω2 [T Lv](¯q) = [ (∂ aq¯)2 q¯2]v(¯q). ∂t t 2m q¯ − − 2 t=0

Moreover, we have ∂ 1 mω2 [T Lv](¯q) = [ (∂ aq¯)2 q¯2][T Lv](¯q). ∂t t 2m q¯ − − 2 t 8.2.6.2. HPIM doesn’t work directly from (8.2.2). Analogously, calculating sin ωt(β2q¯2 + p2) mωqp¯ φH (t, q,p¯ )= + , −2(mω cos ωt a sin ωt) mω cos ωt a sin ωt (8.2.25) − − ∂2φH (t, q,p¯ ) 1 with αH (t, q,p¯ )= = , s ∂q∂p¯ rcos ωt we put

1 H (8.2.26) [T H v](¯q)= dp αH (t, q,p¯ )e−φ (t,q,p¯ )vˆ(p). t √2π Z Unfortunately, 2 2 2 H 1 β q¯ + p lim[Tt v](¯q) = lim dp exp t qp¯ vˆ(p) = v(¯q), t→0 t→0 2π {− 2m − } 6 Z therefore, we need to modify above procedure from scratch.

Devices. (0) We propose to consider as Hamilton function, instead of (8.2.2), (p aq)2 mω2 H˜ (q,p)= i − + q2 . 2m 2   Then Hamilton flow is given by p aq a(p aq) mω2 H˜ = i − , H˜ = i[− − + q], p m − q − m 2 with β2 = a2 + m2ω2, −ia i ˜ m m ˜ 2 2 X = −iβ2 ia and X = ω .  m m  Devices. (I) We propose to consider, instead of (8.2.14), ∂ ( i∂ + iaq)2 mω2 (8.2.27) i v(t,q)= i[ − q q2]v(t,q), ∂t − 2m − 2 with symbol (p + iaq)2 mω2 H˜ (q,p)= i + q2 . − 2m 2   Now, we put γ = a2 m2ω2,C / = cosh ωs,S / = sinh ωs, − s s 8.2. WICK ROTATION AND PIM 157 and we have (mωC / + aS / )q iS / p q(s)= t t − t ,  mω  iγS /tq + (mωC / t aS /t)p  p(s)= − − . mω Therefore   i t φ˜ (t,q,p)= ds [p(s)2 + γq(s)2] 0 −2m (8.2.28) Z0 iS / = t [γ(mωC / + aS / )q2 + (mωC / aS / )p2 2iγS / qp]. −2m2ω2 t t t − t − t From

iS /t mωq¯ + iS /tp (8.2.29)q ¯ = C/ tq (iaq + p), q = x(t, q,p¯ )= , − mω mωC / t + aS /t we have

H φ˜ (t, q,p¯ ) = [qp + φ0(t,q,p)] q=x(t,q,p¯ )

iS / m2ω2 C/ 2 a2 S/2 (8.2.30) = t [γ(mωC / + aS / )q2 + (mωC / aS / )p2]+ t − t qp −2m2ω2 t t t − t m2ω2 q=x(t,q,p¯ )

mω iS / (m2ω2p2 γq¯2) t − = qp¯ + 2 2 . mωC / t + aS /t 2m ω (mωC / t + aS /t) Now define 1 ˜H mω vH (t, q¯)= dp α˜H (t, q,p¯ )eiφ (t,q,p¯ )vˆ(p) withα ˜H (t, q,p¯ )= √2π mωC / (8.2.31) Z r t mω mω a = dq exp (C/ (¯q2 + q2) 2¯qq)+ (¯q2 q2) v(q). 2π/C {− S/ t − 2 − } r t Z t Since mω lim φ˜H (t, q,p¯ ) =qp ¯ , lim = 1, t→0 t→0 mωC / t we get 1 lim vH (t, q¯)= dp eiqp¯ vˆ(p)= v(¯q) t→0 √2π Z and ∂ (∂ aq)2 mω2 vH (t,q) = [ q − + q2]vH (t,q). ∂t 2m 2 Therefore, in this case, we get the desired quantization if ω is replaced by iω. But concerning ± kernel convergence, I have’t checked it!

Devices. (II) We multiply i to both sides of (8.2.14) and replacing ∂q with p, we get something-like symbol as (p aq)2 mω2 H˜ (q,p)= i( − q2), 2m − 2 from which we may define Hamilton flow defined by p aq q˙ = H˜ = i − , p m q˙ q i a 1 or = X˜ , X˜ = −  a(p aq) p˙ p m γ a  p˙ = H˜ = i[− − mω2q],     −   − q − m −  158 8. SUSYQM AND ITS APPLICATIONS where γ = a2 m2ω2, X˜ 2 = ω2. − − Therefore, we get iS mωC iaS iS q(s)= C q + s ( aq + p)= s − s q + s p, s mω − mω mω iS iγS mωC + iaS p(s)= C p + s ( γq + ap)= − s q + s s p, s mω − mω mω and t i t φ˜ (t,q,p)= ds[qp ˙ H˜ (q,p)] = ds (p(s)2 γq(s)2) 0 − 2m − Z0 Z0 iS = t [ γ(mωC iaS )q2 + (mωC + iaS )p2 2iγS qp] 2m2ω2 − t − t t t − t because t S C t S2 ds cos (2ωs)= t t , ds sin(2ωs)= t . ω ω Z0 Z0 From mωC iaS iS mωq¯ iStp q¯ = t − t q + t p, q = − = x(t, q,p¯ ) mω mω mωCt iaSt we put − ˜H ˜ φ (t, q,p¯ ) = [qp + φ0(t,q,p)] q=x(t,q,p¯ ) iS m2ω2C2 + a2S2 = t [ γ(mωC iaS )q2 + (mωC + iaS )p2]+ t t qp {2m2ω2 − t − t t t m2ω2 } q=x(t,q,p¯ ) 2 2 iSt(γq¯ + p ) mω = − + qp¯ . 2(mωC iaS ) mωC iaS t − t t − t mω α˜H (t, q,p¯ )= . mωC iaS r t − t Then, we define

1 ˜H v˜H (t, q¯) = (T H v)(¯q)= dp α˜H eiφ (t,q,p¯ )( v)(p) t √2π F Z 1 ˜H = dpdqα˜H e−iφ (t,q,p¯ )−iqpv(q). 2π ZZ Then, we get lim v˜H (t,q) =v ˜H (0,q)= v(q), t→0 ∂ (∂ aq)2 mω2 v˜H (t,q) = [ q − q2]v(q), ∂t 2m − 2 t=0 H H H [T (T v)](¯ q) = (T v)(¯q), s t t+s ∂ (∂ aq)2 mω2 v˜H (t,q) = [ q − q2]˜vH (t,q). ∂t 2m − 2 8.2.7. Quantum field theory. Though N. Kumano-go and D. Fujiwara [94] is so interesting to give meaning not only to the formula of integration by parts but also the stationary method, etc, in -dimensional space. But I can’t appreciate the reason why they take as candidate paths left ∞ continuous or right-continuous one. For example, to apply this idea to field theory after replacing time-sclicing as standard subdivision, how to settle left or right continuity at each boundary of subdivision.

For reader’s sake, I cite a typical example of the so-called functional method of field theory. 8.2. WICK ROTATION AND PIM 159

It is known that for the Lagrangian
$$I(u) = \iint_{\mathbb R\times\mathbb R^3} dtdx\,\Big(\frac12 |u_t(t,x)|^2 - \frac12|\nabla u(t,x)|^2 - \frac{m^2}{2}|u(t,x)|^2 - \frac14|u(t,x)|^4\Big),$$
whose Euler equation is
$$(\Box + m^2)u(t,x) - |u(t,x)|^2 u(t,x) = 0,$$
we consider

$$Z(\phi) = \int d_F u\; e^{i\langle u|\phi\rangle + L(u)} \quad\text{with}\quad \langle u|\phi\rangle = \iint_{\mathbb R\times\mathbb R^3} dtdx\, u(t,x)\phi(t,x),$$
which satisfies the FDE (= Functional Derivative Equation)
$$(8.2.32)\qquad \Big[(\Box + m^2)\frac{\delta}{i\delta\phi(t,x)} + \frac{\delta^3}{(i\delta\phi(t,x))^3}\Big] Z(\phi) = 0.$$
How do we give a meaning to $Z(\phi)$ above by PIM? The time-slicing method in one dimension may be generalized to a standard subdivision like the Dodziuk procedure [38], but in this case, how do we define "right-continuous or left-continuous paths"?

Remark 8.2.2. To treat an FDE like (8.2.32), the first difficulty stems from giving a meaning to the higher-order derivative at the point $(t,x)$,
$$\frac{\delta^2}{\delta\phi(t,x)^2} Z(\phi).$$
See the example treated in Inoue [68] and also [66, 70]. But to extend the functional-analytic method (which has given such great success for PDEs) to FDEs, we need integration by parts, yet without a standard measure.

8.2.7.1. [H]: Path-integral method of heat type equation under H-formulation is impossible?

Problem 8.2.4. Whether we may represent the solution of a given PDE by using classical mechanical objects corresponding to it, is our problem.

On the other hand, we may compare this with the problem posed by H. Widom [142]: let $A^W$ be the self-adjoint operator obtained from a real-valued symbol $A(q,p)$ by Weyl quantization, with spectral resolution $A^W = \int dE_\lambda\,\lambda$. Taking a function $f$ in a suitable class, we may define, by the functional-calculus method, $f(A^W) = \int dE_\lambda f(\lambda)$. Whether $f(A^W)$ is a pseudo-differential operator, and how its symbol is represented, are discussed there. Moreover, given two self-adjoint operators $A^W$ and $B^W$ and a function $f$ of two variables, whether $f(A^W, B^W)$ gives a pseudo-differential operator is considered in R.S. Strichartz [128]. These considerations are applied to the system version of Egorov's theorem in §4 of Chapter 9. We need to remark on the physicists' usage of analytic continuation w.r.t. time $t$, because it is not so obvious whether an operator like $e^{tA^W}$ can be analytically continued to $e^{itA^W}$. (1) Let a non-negative function $H(q,p)$ of order 2 w.r.t. $p$ be given, having functions $S^L(t,\bar q,q)$ and $D^L(t,\bar q,q)$ on configuration space as solutions of the Hamilton-Jacobi and continuity equations, respectively. Taking the normalization constant $C_L$ and defining

L L L SL(t,q,q¯ ) (Tt v)(¯q)= v (t, q¯)= CL dq D (t, q,q¯ )e v(q), R Z q 160 8. SUSYQM AND ITS APPLICATIONS have we a parametrix of corresponding heat type equation ∂ vL(t, q¯)= HW (¯q, ∂ )vL(t, q¯) with vL(0,q)= v(q)? ∂t q¯ (2) Under the same setting as (1), we define functions SH (t, q,p¯ ) and DH (t, q,p¯ ) on phase space as solutions of Hamilton-Jacobi and continuity equation, respectively. Defining

H H H SH (t,q,p¯ ) (Tt v)(¯q)= v (t, q¯)= CH dp D (t, q,p¯ )e vˆ(p) R Z q whether we have a parametrix of corresponding heat type equation with suitable devices?

Remark 8.2.3. For (1), I not only show simple examples in the previous paragraphs, but also construct a quantized operator of the Riemann metric in Inoue and Maeda [79]. Concerning (2), I give two simple examples showing that it seems hard to get the desired results without some devices.

As is mentioned at the very beginning of this section, taking the Fourier transformation of • 1 v = ∂2v with v(0) = v, t 2 q we get readily 1 iqp¯ − 1 p2t v(t, q¯)= dp e 2 vˆ(p). √2 Z But, except normalization constant, it seems impossible to have a solution S(t, q,p¯ ) of Hamilton- Jacobi equation satisfying 2 1/2 iqp¯ − 1 p2t S(t,q,p¯ ) ∂ S e 2 = A(t, q,p¯ )e with A . ∼ ∂q∂p¯   Therefore, multiplying imaginary unit “i” to both sides of heat equation, • 1 iv = i ∂2v with v(0) = v t 2 q and substituting p into i∂ , we assign the Hamilton function H˜ (q,p)= i p2. Then, we have − q − 2 i S˜ (t,q,p)= p2t, q¯ = q ipt, 0 −2 − i S˜(t, q,p¯ )= qp + S˜ (t,q,p) =qp ¯ + p2t. 0 2 q=¯q+ipt

Putting

1 iS˜(t,q,p¯ ) 1 iqp¯ − 1 p2t v(t, q¯)= dp e vˆ(p)= dp e 2 vˆ(p), √2π R √2π R Z Z we have the desired expression. See also, Qi’s equation in 3, Chapter 9. § Unfortunately, this procedure doesn’t work for harmonic oscillator. In fact, since we have • i H˜ (q,p)= (p2 + ω2q2), −2 solutions of Hamilton equation are given by ˜ q˙ = Hp(q,p)= ip, q(s) q 0 i − = esX with X = − , 2 p(s) p iω2 0 ( p˙ = H˜q(q,p)= iω q −       there appeared the terms with cosh ωs, sinh ωs. To get the desired one with cos ωs, sin ωs, we need to complexify ω, but no philosophical evidence to do so. Only when ω 0, we have the desired → result obtained before. 8.3. SPIN ADDITION 161
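The free-heat device above can be checked by direct quadrature. The sketch below (initial datum $v(q) = e^{-q^2/2}$, so $\hat v(p) = e^{-p^2/2}$ in the unitary convention — an illustrative choice of mine) confirms that $(1/\sqrt{2\pi})\int dp\, e^{i\bar q p - p^2 t/2}\hat v(p)$ reproduces the heat-semigroup Gaussian $(1+t)^{-1/2}e^{-\bar q^2/(2(1+t))}$:

```python
import cmath, math

def v_heat(t, qb, L=10.0, N=4000):
    """(1/sqrt(2 pi)) * integral dp exp(i qb p - p^2 t / 2) vhat(p), vhat(p) = exp(-p^2/2)."""
    h = 2.0 * L / N
    total = sum(cmath.exp(1j * qb * p - p * p * t / 2.0) * math.exp(-p * p / 2.0)
                for p in (-L + i * h for i in range(N + 1)))
    return total * h / math.sqrt(2.0 * math.pi)

t, qb = 0.8, 0.6
val = v_heat(t, qb)
exact = math.exp(-qb * qb / (2.0 * (1.0 + t))) / math.sqrt(1.0 + t)
```

The imaginary part cancels by symmetry of the grid, consistent with the claim that this device yields a genuinely real heat evolution.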

8.3. Spin addition

Preparing a representation space $V$ with the scalar product $(\cdot,\cdot)$ and two bounded operators $b$ and $b^*$ such that $b^2 = (b^*)^2 = 0$, $bb^* + b^*b = 1$, instead of the ODE $H(q,\partial_q)$ (??) in $L^2(\mathbb R)$, we put

1 1 ∗ Q− = (∂q ωq)b, Q+ = (∂q + ωq)b √2 − √2 and we define ω H = H(q, ∂ )= Q2 = H(q, ∂ )I + [b, b∗], q q 2 ∗ Q = Q− + Q+, P = [b, b ], on H = L2(R) V . Since ⊗ 2 P = I, [Q, P]+ = QP + PQ = 0, (H, P, Q) gives SUSYQM on H = L2(R) V . Especially, taking ⊗ V = C2 with (u, v)= u v¯ + u v¯ for u = t(u , u ), v = t(v , v ) C2, 1 1 2 2 1 2 1 2 ∈ 0 0 0 1 1 0 b∗ = , b = , [b, b∗]= bb∗ b∗b = , 1 0 0 0 − 0 1      −  we get an ordinary matrix representation of H in H = L2(R) V . ⊗ We represent these by using an odd variable θ.

Putting Λ = u + u θ u , u C , we decompose { 0 1 | 0 1 ∈ } Λ = u u C , Λ = u θ u C , Λ = Λ Λ . b { 0 | 0 ∈ } f { 1 | 1 ∈ } b ⊕ f Then 0 0 u0 ∂ 0 1 u0 θ(u0 + u1θ)= u0θ , (u0 + u1θ)= u1 , ∼ 1 0 u1 ∂θ ∼ 0 0 u1       and ∂ ∂ 0 1 0 0 0 0 0 1 1 0 [b, b∗] θ θ = . ∼ ∂θ − ∂θ ∼ 0 0 1 0 − 1 0 0 0 0 1        −  Remark 8.3.1. Since u v (bu, v) = (u, b∗v) for u = 0 , v = 0 C2, u1 v1 ∈     we may have scalar product ( , ) in H = L2(R) V such that · · ⊗ ∂ (8.3.1) (u + u θ,θ(v + v θ)) = ( (u + u θ), v + v θ) 0 1 0 1 ∂θ 0 1 0 1 which permits integration by parts w.r.t. θ. Please refer (9.2.19), 2 in Chapter 9. §
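The correspondence $b^* \sim \theta\cdot$, $b \sim \partial/\partial\theta$ above can be verified directly on the 2×2 matrices; a minimal sketch using plain nested lists, no external libraries:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

b_star = [[0, 0], [1, 0]]   # multiplication by theta
b      = [[0, 1], [0, 0]]   # d/d theta

b_sq = mul(b, b)                               # nilpotency: b^2 = 0
car  = add(mul(b, b_star), mul(b_star, b))     # b b* + b* b = I (the CAR)
comm = add(mul(b, b_star),
           [[-x for x in row] for row in mul(b_star, b)])   # [b, b*] = diag(1, -1)
```

The commutator diag(1, −1) is exactly the matrix of $\theta\,\partial_\theta - \partial_\theta\,\theta$ appearing above.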

8.3.1. Classical Mechanics corresponding to H(q, ∂ ). Prepare ¯k C 0 and define q ∈ \ { } Fourier transformation for functions of θ R0|1 as ∈ −¯k−1θπ −1 R0|1 dθ e (u0 + u1θ)= u1 ¯k u0π, ¯k−1θπ −1 − −1 RR0|1 dπ e (u1 ¯k u0π)= ¯k (u0 + u1θ), ′ ¯k−1(θ−θ′)π − ′  ¯k 0|2 dπ dθ e (u0 + u1θ )= u0 + u1θ −R R  RR 162 8. SUSYQM AND ITS APPLICATIONS

Then the Weyl symbol of H(q, ∂q) is given by 1 ω2 (8.3.2) (x,ξ,θ,π)= ξ2 x2 ¯k−1ωθπ. H −2 − 2 − In fact, since ′ −1 ′ θ + θ 1 ¯k dπdθ′e¯k (θ−θ )π π(u + u θ′)= (u u θ), − 2 0 1 2 0 − 1 ZZ we get

¯k −1 ′ −1 ′ ˆ(x, ∂ ,θ,∂ )u(x,θ)= − dξdx′dπdθ′ei~ (x−x )ξ+¯k (θ−θ )π H x θ 2π~ ZZ x + x′ θ + θ′ ( , ξ, ,π)(u (x′)+ u (x′)θ′) ×H 2 2 0 1 ′ 1 i~−1(x−x′)ξ x + x ′ ′ = e H( ,ξ)(u0(x )+ u1(x )θ)+ ω(u0(x) u1(x)θ) 2π~ 2|0 2 − ZZR ω H(q, ∂ )I + [b, b∗] u. ∼ q 2   8.3.2. LH-formulation. In this case, since the Hamilton equation w.r.t. odd variables is given by θ˙ = ¯k−1ωθ, π˙ =k ¯−1ωπ, − we have −1 −1 θ(s)= e−¯k ωsθ, π(s)= e¯k ωsπ. Without using Fourier transform w.r.t. even variables, we may put

−1 (8.3.3) (t, x,¯ θ,x¯ ,π) = [θπ + S (t,x,ξ)] = e¯k ωtθπ¯ + S(t, x,x¯ ), S 0 ξ=··· ,θ=···

where ω / ω ω / (8.3.4) S(t, x,x¯ )= C x¯2 xx¯ + C x2. 2/ − / 2/ S S S Then (8.3.3) satisfies 2 ¯ (¯x x) ¯ (8.3.5) t + (¯x, x¯, θ, θ¯) = 0 with lim( − θπ) = 0. S H S S t→0 S− 2t − Moreover, putting ∂2S ∂x∂x¯ 0 ω ωt (t, x,¯ θ,x¯ ,π) = sdet − 2 = e , D 0 ∂ S sinh(ωt) − ∂θ∂π¯ ! we have ¯ (8.3.6) t + ∂x¯( ξ)+ ∂θ¯( π) = 0 with lim t (t, x,¯ θ,x,π) = 1 D DH DH t→0 D where = (¯x, , θ,¯ ¯) and = (¯x, , θ,¯ ¯). Hξ Hξ Sx¯ Sθ Hπ Hπ Sx¯ Sθ We put

LH ¯ 1 1/2 −S −1 t u(t, x,¯ θ)= dxdπ e (u1 ¯k u0π) V √2π R1|1 D − (8.3.7) Z ωt/2 L −ωt/2 L ¯ = e Vt u0(¯x)+ e Vt u1(¯x)θ. 8.3. SPIN ADDITION 163

8.3.3. HH-formulation. We define (8.3.8) (t, x,¯ θ,ξ¯ ,π) = [θπ + S (t,x,ξ)] = e−ωtθπ¯ + S(t, x,ξ¯ ), S 0 x=··· ,θ=··· which satisfies

¯ ¯ (8.3.9) t + (¯x, x¯, θ, θ¯) = 0 with lim =xξ ¯ + θπ. S H S S t→0 S Moreover, putting 2 ∂ S 0 ¯ ∂x∂ξ¯ 1 ωt (t, x,¯ θ,ξ,π) = sdet 2 = e , D 0 ∂ S cosh(ωt) − ∂θ∂π¯ ! we have ¯ (8.3.10) t + ∂x¯( ξ)+ ∂θ¯( π) = 0 with lim (t, x,¯ θ,ξ,π) = 1 D DH DH t→0 D where = (¯x, , θ,¯ ¯) and = (¯x, , θ,¯ ¯). Hξ Hξ Sx¯ Sθ Hπ Hπ Sx¯ Sθ

(8.3.11) (t, x,¯ θ,ξ¯ ,π) = [ θ π + S (t,x,ξ)] = e−ωtθπ¯ + S(t, x,ξ¯ ) S − 0 − x=··· ,θ=··· we put

v(t, x,¯ θ¯)= eωt/2V Lv (¯x)+ e−ωt/2V Lv (¯x)θ.¯ Vt t 0 t 1 Putting v(t,x,θ)= v(t,x,θ), we have Vt ∂ 1 ∂2 1 ω ∂ (8.3.12) v(t,x,θ)= ω2x2 + ,θ v(t,x,θ) with v(0,x,θ)= v(x,θ). ∂t 2 ∂x2 − 2 2 ∂θ −   From the kernel of , we may calculate the Witten index which is shown soon later. Vt 8.3.4. A generalization. For d d 1 ω2q2 H(q,p)= p2 j j C∞(T ∗Rd : R), 2 j − 2 ∈ Xj=1 Xj=1 we may extend it as d d d 1 ω2x2 (x,θ,ξ,π)= ξ2 j j + ω θ π . H 2 j − 2 j j j Xj=1 Xj=1 Xj=1 d (8.3.13) (t, x,¯ θ,x¯ ,π) = [ θ π + S (t,x,ξ)] = e−ωj tθ¯ π + S(t, x,x¯ ) S −h | i 0 − j j θ=··· ,ξ=··· j=1 X with d ωj cosh(ωjt) 2 2 ωj S(t, x,x¯ )= (¯xj + xj ) x¯jxj . 2 sinh(ωjt) − sinh(ωjt) Xj=1   We define 2 ∂ S 0 d ∂x∂x¯ ωj ωj t (t, x,¯ θ,ξ¯ ,π) = sdet − 2 = e D ∂ S sinh(ω t) 0 ∂θ∂π¯ ! j jY=1 and we have d d ωj d e−ωj tθ¯ π −S(t,x,x¯ ) 1/2e−S = eωt/2 ePj=1 j j , ω = ω . D v sinh(ω t) j uj=1 j j=1 uY X t 164 8. SUSYQM AND ITS APPLICATIONS

8.3.4.1. [H]:LH-formulation for the case d = 2 with u(t,x,θ)= u0(t,x)+u1(t,x)θ1 +u2(t,x)θ2 + u3(t,x)θ1θ2. Define the Fourier transformations by

dθ ehθ|πi(u + u θ + u θ + u θ θ )= u u π + u π u π π , 0 1 1 2 2 3 1 2 3 − 2 1 1 2 − 0 1 2 dπ e−hθ|πi(u u π + u π u π π )= (u + u θ + u θ + u θ θ ). (R 3 − 2 1 1 2 − 0 1 2 − 0 1 1 2 2 3 1 2 Then, we haveR

1 ¯ u(¯x, θ¯)= − dxdπ 1/2(t, x,x¯ , θ,π¯ )e−S(t,x,x¯ ,θ,π)(u (x) u (x)π + u (x)π u (x)π π ) Vt 2π D 3 − 2 1 1 2 − 0 1 2 Z ω ω = dx 1 2 e−S(t,x,x¯ )eωt/2 2π sinh(ω t) 2π sinh(ω t) Z r 1 2 (u (x) e−ω1tu (x)θ¯ e−ω2tu (x)θ¯ + e−ωtu (x)θ¯ θ¯ ). × 0 − 1 1 − 2 2 3 1 2 Since

ωt/2 (−ω1+ω2)t/2 ¯ (ω1−ω2)t/2 ¯ −ωt/2 ¯ ¯ dθ (e θ1θ2 + e θ1θ2 + e θ1θ2 + e θ1θ2)(u0 + u1θ1 + u2θ2 + u3θ1θ2) Z = eωt/2u e(−ω1+ω2)t/2u θ¯ e(ω1−ω2)t/2u θ¯ + e−ωt/2u θ¯ θ¯ 0 − 1 1 − 2 2 3 1 2 and ∂ ∂ P u = 1 2θ 1 2θ (u + u θ + u θ + u θ θ )= u u θ u θ + u θ θ , − 1 ∂θ − 2 ∂θ 0 1 1 2 2 3 1 2 0 − 1 1 − 2 2 3 1 2  1  2  we have the corresponding SUSYQM with

1 2 2 ∗ ∂ W (x)= (ω1x1 + ω2x2), bj = θj, bk = 2 ∂θk 2 2 2 ∂ W ∗ ∂ ∂ ∂ and [bj, bk]− = ω1 1 2θ1 + ω2 1 2θ2 = ωj[ ,θj]−. − ∂xj∂xk − ∂θ1 − ∂θ2 ∂θj j,kX=1     Xj=1 Moreover, we have

str = dxdθ (eωt/2 e(−ω1+ω2)t/2 e(ω1−ω2)t/2 + e−ωt/2)θ θ e−S(t,x,x) Vt − − 1 2 Z 1 = (eω1t/2 e−ω1t/2)(eω2t/2 e−ω2t/2) = 1. − − 2(cosh(ω t) 1) 2(cosh(ω t) 1) 1 − 2 − p p a 8.3.5. [H]:LH-formulation for general d with u(t,x,θ)= |a|≤d ua(t,x)θ . Since we have

d P d (θ π )bj = ( 1)|b|(|b|−1)/2θbπb with b = (b , , b ) 0, 1 d, b = b , j j − 1 · · · d ∈ { } | | j jY=1 Xj=1 θaθb = ( 1)τ(a,b)θa+b with τ(b, a) a b + τ(a, b) mod2, − ≡ | || | we get, with b = 1 a , a + b = 1˜ = (1, , 1), d = a + b , j − j · · · | | | | d d dθ ehθ|πiθa = dθ (1 + θ π )θa = dθ θa (θ π )bj = ( 1)|b|(|b|−1)/2+τ(a,b)πb, j j j j − Z Z jY=1 Z jY=1 ′ dπ e−hθ|πiπb = ( 1)|a|(|a|−1)/2+τ(b,a)θa, dπ e−hθ|πi dθ′ ehθ |πiθ′a = ( 1)d(d−1)/2θa. − − Z Z  Z  8.3. SPIN ADDITION 165

Therefore

d dθ ehθ|πi u θa = dθ u θa (θ π )bj = ( 1)|b|(|b|−1)/2+τ(a,b)u πb, a a j j − a Z |Xa|≤d Z |Xa|≤d jY=1 |Xa|≤d dπ e−hθ|πi ( 1)|b|(|b|−1)/2+τ(a,b)u πb = ( 1)d(d−1)/2 u θa. − a − a Z |Xa|≤d |Xa|≤d On the other hand, as

d d e−ωj tθ¯ π dπ ePj=1 j j ( 1)|b|(|b|−1)/2+τ(a,b)πb = dπ ( 1)|b|(|b|−1)/2+τ(a,b)πb (e−ωj tθ¯ π )aj − − j j Z Z jY=1 d = e−aj ωj t( 1)d(d−1)/2( 1)|a|+τ(a,b)θ¯a, − − jY=1 we have d(d−1)/2 ( 1) 1/2 −S(t,x,x¯ ,θ,π¯ ) |b|(|b|−1)/2+τ(a,b) b tu(¯x, θ¯)= − dxdπ (t, x,x¯ , θ,π¯ )e ( 1) u (x)π V (2π)d/2 D − a Z |Xa|≤d d ω d j −S(t,x,x¯ ) ωt/2 −aj ωj t |a|+τ(a,b) ¯a = dx e e e ( 1) ua(x)θ . 2π sinh(ωjt) − Z jY=1 r |Xa|≤d jY=1 Since d ′ ′ ′ ′ P (a −b )ωj t/2 a b a dθ e j=1 j j θ θ¯ uaθ ′ ′ Z |a |+X|b |=d |Xa|≤d d = e(bj −aj )ωj t/2( 1)|a|+τ(a,b)u (x)θ¯a. − a |Xa|≤d jY=1 and d ∂ a |b| a P u = 1 2θj uaθ = ( 1) uaθ , − ∂θj − jY=1   |Xa|≤d |Xa|≤d we have the corresponding SUSYQM as before.
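For $d = 2$ the odd Fourier transforms act on the coefficient vector $(u_0, u_1, u_2, u_3)$. Reading the maps off the displayed formulas of §8.3.4.1, both the forward transform $\int d\theta\, e^{\langle\theta|\pi\rangle}$ and the inverse $\int d\pi\, e^{-\langle\theta|\pi\rangle}$ are given by the same coefficient matrix, and the double transform is $(-1)^{d(d-1)/2} = -1$ times the identity, as stated above. A minimal sketch:

```python
def grassmann_ft(u):
    """Coefficient action of the odd Fourier transform on u0 + u1 t1 + u2 t2 + u3 t1 t2,
    read off from the d = 2 formula:  -> u3 - u2 p1 + u1 p2 - u0 p1 p2."""
    u0, u1, u2, u3 = u
    return (u3, -u2, u1, -u0)

u = (1.0, 2.0, -3.0, 5.0)
twice = grassmann_ft(grassmann_ft(u))   # should equal (-1)^{d(d-1)/2} u = -u for d = 2
```

This sign bookkeeping is exactly the $(-1)^{|b|(|b|-1)/2+\tau(a,b)}$ factor specialized to $d = 2$.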

Moreover, we have

d str = dxdθ (eωj t/2 e−ωj t/2)θ θ e−S(t,x,x) Vt − 1 · · · d Z jY=1 d 1 = (eωj t/2 e−ωj t/2) = 1. − d j=1 2(cosh(ω t) 1) Y j=1 j − qQ Putting v(t,x,θ)= v(t,x,θ), we have Vt d d d ∂ 1 ∂2 1 ω ∂ (8.3.14) v(t,x,θ)= ω2x2 + j ,θ v(t,x,θ) ∂t 2 ∂x2 − 2 j j 2 ∂θ j −  j=1 j j=1 j=1 j  X X X   d with v(0,x,θ)= v(x,θ). Here, we put ω = j=1 ωj. P 166 8. SUSYQM AND ITS APPLICATIONS
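The final cancellation $\mathrm{str}\,\mathcal V_t = 1$ rests on the elementary identity $2(\cosh x - 1) = 4\sinh^2(x/2)$, so each factor $(e^{\omega_j t/2} - e^{-\omega_j t/2})/\sqrt{2(\cosh(\omega_j t) - 1)}$ equals 1 for $t > 0$. A numerical sketch with arbitrary frequencies (my own test values):

```python
import math

def str_Vt(omegas, t):
    """Product over j of (e^{w t/2} - e^{-w t/2}) / sqrt(2 (cosh(w t) - 1)), t > 0."""
    val = 1.0
    for w in omegas:
        num = math.exp(w * t / 2.0) - math.exp(-w * t / 2.0)   # = 2 sinh(w t / 2)
        den = math.sqrt(2.0 * (math.cosh(w * t) - 1.0))        # = 2 sinh(w t / 2)
        val *= num / den
    return val

result = str_Vt([1.0, 2.5, 0.3], 0.7)
```

The $t$-independence of this supertrace is the hallmark of the Witten index.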

Problem 8.3.1. Extend the procedure in this chapter to the operator posed by M.S. Abdalla and U.A.T. Ramjit [1]: 1 m(t) H(q,p,t)= p2 + ω2q2, m(t)= m e2Γ(t), 2m(t) 2 0 0 Show the difference of solution of above operator from harmonic oscillator represented by Γ(t) and how it changes adding spin?

8.4. A simple example of supersymmetric extension

We consider the simplest 1-dimensional example. Let
$$(8.4.1)\qquad H(q,p) = \frac12 (p - A(q))^2 + V(q) \in C^\infty(T^*\mathbb R : \mathbb R)$$
be given with $A(q), V(q) \in C^\infty(\mathbb R : \mathbb R)$. Using the Legendre transformation
$$(8.4.2)\qquad \dot q = \frac{\partial H}{\partial p} = p - A(q), \qquad L(q,\dot q) = \dot q p - H(q,p),$$
we get a Lagrangian
$$L(q,\dot q) = \frac12 \dot q^2 + A(q)\dot q - V(q) \in C^\infty(T\mathbb R : \mathbb R).$$
Instead of a path $q : \mathbb R \ni t \to q(t) \in \mathbb R$, we consider a generalized path
$$(8.4.3)\qquad \Phi : \mathbb R \ni t \to \Phi(t) = x(t) + i(\rho_1\psi_2(t) - \rho_2\psi_1(t)) + i\rho_1\rho_2 F(t) \in R_{\mathrm{ev}}$$
with $\rho_\alpha \in R_{\mathrm{od}}$ ($\alpha = 1,2$) being odd parameters and
$$(8.4.4)\qquad x : \mathbb R \ni t \to x(t) \in R_{\mathrm{ev}}, \quad \psi_\alpha : \mathbb R \ni t \to \psi_\alpha(t) \in R_{\mathrm{od}} \;(\alpha = 1,2), \quad F : \mathbb R \ni t \to F(t) \in R_{\mathrm{ev}}.$$
Introducing operators
$$\mathcal D_\alpha = \frac{\partial}{\partial\rho_\alpha} - i\rho_\alpha\frac{\partial}{\partial t} \quad (\alpha = 1,2),$$
and $\epsilon_{\alpha\beta} = -\epsilon_{\beta\alpha}$, $\epsilon_{12} = 1$, we extend $L(q,\dot q)$ as
$$(8.4.5)\qquad \tilde{\mathcal L}_0 = -\frac14 (\mathcal D_\alpha\Phi)\,\epsilon_{\alpha\beta}\,(\mathcal D_\beta\Phi) + \frac{i}{2} A\,\rho_\alpha\epsilon_{\alpha\beta}\mathcal D_\beta\Phi - iW(\Phi).$$
In the above, $A(q)$ is extended from $q \in \mathbb R$ to $\Phi = x + i(\rho_1\psi_2 - \rho_2\psi_1) + i\rho_1\rho_2 F \in R_{\mathrm{ev}}$ by the Grassmann extension as

(8.4.6) A(Φ) = A(x)+ iA′(x)(ρ ψ ρ ψ + ρ ρ F )+ A′′(x)ρ ρ ψ ψ . 1 2 − 2 1 1 2 1 2 1 2 and W (Φ) is analogously extended from W (q) to W (Φ) whose relation to V (q) will be given later.

Remark 8.4.1. The following relation is worth noticing:
$$(8.4.7)\qquad \Big(\frac{\partial}{\partial\rho_\alpha} - i\rho_\alpha\frac{\partial}{\partial t}\Big)^2 = -i\frac{\partial}{\partial t} \quad\text{for each } \alpha = 1,2.$$

Now, we have
$$(8.4.8)\qquad \mathcal L_0' \stackrel{\mathrm{def}}{=} \int d\rho_2\, d\rho_1\, \tilde{\mathcal L}_0 = \frac12\dot x^2 + A(x)\dot x + \frac12 F^2 + \frac{i}{2}\big(\psi_2\dot\psi_2 - \dot\psi_1\psi_1\big) + W'(x)F - iW''(x)\psi_1\psi_2.$$
Assuming that the "auxiliary field $F$" should satisfy
$$(8.4.9)\qquad 0 = \frac{\delta\mathcal L_0'}{\delta F} = F + W',$$
we arrive at
$$(8.4.10)\qquad \mathcal L_0 = \frac12\dot x^2 + A(x)\dot x + \frac{i}{2}\big(\psi_2\dot\psi_2 - \dot\psi_1\psi_1\big) - \frac12 W'(x)^2 - iW''(x)\psi_1\psi_2.$$
This is the desired Lagrangian in the variables $x, \dot x, \psi_\alpha, \dot\psi_\alpha$, but the variables $\psi_\alpha, \dot\psi_\alpha$ are not independent of each other. In fact, they satisfy

ψ , ψ = ψ ψ + ψ ψ = 0, ψ , ψ˙ =0 and ψ˙ , ψ˙ = 0. { α β} α β β α { α β} { α β} To find out “independent Grassmann variables” in (8.4.10), we introduce new variables by the following two methods:

(I) Defining new variables as δ ξ = L0 =x ˙ + ax, δx˙ (8.4.11)   δ 0 i  φα = L = ψα for α = 1, 2, δψ˙α −2 we put   (x,ξ,ψ , ψ ) =xξ ˙ + ψ˙ φ H 1 2 α α −L0 1 1 i = (ξ ax)2 + W ′(x)2 + W ′′(x)ψ ǫ ψ . 2 − 2 2 α αβ β Rewriting the variables ψ1, ψ2 as θ,π, respectively, we get 1 1 (8.4.12) (x,ξ,θ,π)= (ξ ax)2 + W ′(x)2 + iW ′′(x)θπ. H 2 − 2

(II) In the above, we use the “real” odd variables ψα. We “complexify” these variables by putting 1 1 1 1 (8.4.13) ψ = (ψ1 + iψ2), ψ¯ = (ψ1 iψ2), i.e. ψ1 = (ψ + ψ¯), ψ2 = (ψ ψ¯), √2 √2 − √2 √2i − and then we rewrite as L0 1 i 1 (8.4.14) ¯ = x˙ 2 + A(x)x ˙ + (ψψ¯˙ + ψ¯ψ˙) W ′(x)2 W ′′(x)ψψ.¯ L0 2 2 − 2 − Introducing new variables as δ ¯ δ ¯ i δ ¯ i (8.4.15) ξ = L0 =x ˙ + A(x), φ = L0 = ψ,¯ φ¯ = L0 = ψ, δx˙ δψ˙ −2 δψ¯˙ −2 we put def (x,ξ,ψ, ψ¯) =xξ ˙ + ψφ˙ + ψ¯˙φ¯ ¯ H − L0 1 1 = (ξ A(x))2 + W ′(x)2 + W ′′(x)ψψ.¯ 2 − 2 168 8. SUSYQM AND ITS APPLICATIONS

Rewriting ψ and ψ¯ by θ and π, respectively, we get finally a function 1 1 (8.4.16) (x,ξ,θ,π)= (ξ A(x))2 + W ′(x)2 W ′′(x)θπ (R2|2 : R ). H 2 − 2 − ∈CSS ev Here, (x,θ) R1|1, (ξ,π) R1|1. ∈ ∈ Remark 8.4.2. (0) The difference between (8.4.12) and (8.4.16) is the existence of i in front of the term W ′′(x)θπ. This difference is rather significant when we consider Witten index for su- persymmetric quantum mechanics using the kernel representation of the corresponding evolution operator. (1) As there is no preference at this stage to take π and θ instead of θ and π, there is no significance of the sign in front of the terms iW ′′(x)θπ in (8.3.8) or W ′′(x)θπ in (8.4.16) in these cases. ± (2) We may regard (x,ξ,θ,π) as a Hamiltonian in (T ∗R1|1 : R ). H CSS ev (3) These Hamiltonians (8.4.12) and (8.4.16) are called supersymmetric extensions of (8.4.1) be- cause they give supersymmetric quantum mechanics after quantization (see 4). The procedure § above is author’s unmatured understanding of amalgam of physics papers such as F. Cooper and B. Freedman [27], A.C. Davis, A.J. Macfarlane, P.C. Popat and T.W. van Holten [29] etc. But supersymmetry in superspace Rm|n will be studied separately. (4) On the other hand, using the identification (1.13), we have ~ ~ ~ H± 2 b 0 ~,b (x, ∂x,θ,∂θ)=# − ♭ =#H ♭. H± 0 H~ + ~ b ±  ± 2  Moreover, in this case, the “complete Weyl symbol of the above ~(x, ∂ ,θ,∂ )” is calculated by H x θ −i~−1(xξ+θπ) ~ i~−1(xξ+θπ) 1 2 1 2 2 (8.4.17) ±(x,ξ,θ,π) = (e ±(x, ∂x,θ,∂θ)e = (ξ ax) b x +ibθπ. H H ~=0 2 − ± 2 1 2 + equals to (8.4.12) when A(q) = aq and W (x) = bx , and − is obtained from (8.4.16) with H 2 H A(q) = aq and W (x) = i bx2. These give the relation between W (q) and V (q). (See SUSYQM − 2 defined in 1.) § (5) Witten [143] considered as a quantum mechanical operator 1 1 0 1 1 0 (8.4.18) H(q, ∂ )= ∂2 + v(q) w(q) . q −2 q 0 1 − 2 0 1     −  This operator is supersymmetric when there exists a function ψ(q) such that 1 v(q)= ψ′(q)2, w(q)= ψ′′(q). 
Problem 8.4.1. A trial to prove the Atiyah-Singer index theorem by applying super analysis, due to S. Rempel and T. Schmitt [112], was communicated to me by A. Rogers long ago, but I am ashamed that I have not yet comprehended it well. Moreover, since I stumbled before appreciating the naturalness of the Douglis-Nirenberg weights for systems of PDEs, I am still far from Rempel and Schmitt's reformulation.

More recently, I found the paper of F.F. Voronov [141], though I have not yet fully appreciated it. Because Voronov [140, 141] uses a Banach-Grassmann algebra, the estimates by inequalities there, beyond the algebraic calculations, remain far from my understanding.

Problem 8.4.2. Proofs of "analytic torsion = Reidemeister torsion" have been given by J. Cheeger [24] and by D. Burghelea, L. Friedlander and T. Kappeler [22]. Reprove these in our context. CHAPTER 9

Miscellaneous

9.1. Proof of Berezin’s Theorem 5.2.1

To be self-contained, we give a precise proof following F.A. Berezin [10] and A. Rogers [119], because their proofs are not so easy to understand, at least for an old mathematician like the author.

First of all, we prepare

Lemma 9.1.1. Let $u(x,\theta)=\sum_{|a|\le n}\theta^a u_a(x)$ be supersmooth on $\mathcal{U}=U_{\mathrm{ev}}\times\mathbb{R}^n_{\mathrm{od}}$. If $\int_{U_{\mathrm{ev}}}dx\,u_a(x)$ exists for each $a$, then we have
\[
\mathrm{B}\!\!\iint_{\mathcal{U}}dxd\theta\,u(x,\theta)
=\int_{U_{\mathrm{ev}}}dx\Bigl(\int_{\mathbb{R}^n_{\mathrm{od}}}d\theta\,u(x,\theta)\Bigr)
=\int_{\mathbb{R}^n_{\mathrm{od}}}d\theta\Bigl(\int_{U_{\mathrm{ev}}}dx\,u(x,\theta)\Bigr).
\]
Proof. By the primitive definition of the integral, we have

\[
\mathrm{B}\!\!\iint_{\mathcal{U}}dxd\theta\,u(x,\theta)
=\int_{U_{\mathrm{ev}}}dx\Bigl(\int_{\mathbb{R}^n_{\mathrm{od}}}d\theta\,u(x,\theta)\Bigr)
=\int_{U_{\mathrm{ev}}}dx\,u_{\tilde{1}}(x),
\]
and
\[
\int_{\mathbb{R}^n_{\mathrm{od}}}d\theta\Bigl(\int_{U_{\mathrm{ev}}}dx\sum_{|a|\le n}\theta^a u_a(x)\Bigr)
=\int_{\mathbb{R}^n_{\mathrm{od}}}d\theta\sum_{|a|\le n}\theta^a\Bigl(\int_{U_{\mathrm{ev}}}dx\,u_a(x)\Bigr)
=\int_{U_{\mathrm{ev}}}dx\,u_{\tilde{1}}(x). \qquad\square
\]
(I) Now, we consider a simple case: let a linear coordinate change be given by
\[
(x,\theta)=(y,\omega)M,\qquad M=\begin{pmatrix}A&C\\ D&B\end{pmatrix},
\]
that is,
\[
x_i=\sum_{k=1}^m y_kA_{ki}+\sum_{\ell=1}^n\omega_\ell D_{\ell i}=x_i(y,\omega),\qquad
\theta_j=\sum_{k=1}^m y_kC_{kj}+\sum_{\ell=1}^n\omega_\ell B_{\ell j}=\theta_j(y,\omega),
\]
with even entries $A_{ki},B_{\ell j}$ and odd entries $C_{kj},D_{\ell i}$, and we have
\[
(9.1.1)\qquad
\mathrm{sdet}\,\frac{\partial(x,\theta)}{\partial(y,\omega)}
=\det A\;\mathrm{det}^{-1}(B-DA^{-1}C)
=\det(A-CB^{-1}D)\;\mathrm{det}^{-1}B
=\mathrm{sdet}\,M.
\]
Interchanging the order of integration and putting $\omega^{(1)}=\omega B$ and $y^{(1)}=yA$, we get

B dydω u(yA + ωD,yC + ωB)= dy dω u(yA + ωD,yC + ωB) − ZZ Z Z   = dy dω(1) det B u(yA + ω(1)B−1D,yC + ω(1)) · Z Z   = dω(1) det B dy u(yA + ω(1)B−1D,yC + ω(1)) Z Z   = dω(1) det B dy(1) det A−1 u(y(1) + ω(1)B−1D,y(1)A−1C + ω(1)) , · Z Z  169  170 9. MISCELLANEOUS that is, since ∂(y,ω) A−1 0 ∂(y,ω) = , sdet = det A−1 det B, ∂(y(1),ω(1)) 0 B−1 ∂(y(1),ω(1)) ·     we have B dydω u(yA + ωD,yC + ωB) − (9.1.2) ZZ ∂(y,ω) = B dy(1)dω(1) sdet u(y(1) + ω(1)B−1D,y(1)A−1C + ω(1)). − ∂(y(1),ω(1)) ZZ   Analogously, using Lemma 9.1.1 and by introducing change of variables as ∂(y(1),ω(1)) 1 A−1C y(2) = y(1),ω(2) = ω(1) + y(1)A−1C = sdet = sdet − = 1, ⇒ ∂(y(2),ω(2)) 0 1     we get

B dy(1)dω(1) u(y(1) + ω(1)B−1D,y(1)A−1C + ω(1)) − (9.1.3) ZZ ∂(y(1),ω(1)) = B dy(2)dω(2) sdet u(y(2) + (ω y(2)A−1C)B−1D,ω(2)). − ∂(y(2),ω(2)) − ZZ   Then by y(3) = y(2)(1 A−1CB−1D), ω(3) = ω(2) − ∂(y(2),ω(2)) (1 A−1CB−1D)−1 0 = sdet = sdet − = det −1(1 A−1CB−1D), ⇒ ∂(y(3),ω(3)) 0 1 −     we have B dy(2)dω(2) u(y(2) + (ω y(2)A−1C)B−1D,ω(2)) − − (9.1.4) ZZ ∂(y(2),ω(2)) = B dy(3)dω(3) sdet u(y(3) + ω(3)B−1D,ω(3)). − ∂(y(3),ω(3)) ZZ   Finally by ∂(y(3),ω(3)) 1 0 x = y(3) + ω(3)B−1D, θ = ω(3) = sdet = sdet = 1, ⇒ ∂(x,θ) B−1D 1   −  using det B det−1(A CB−1D) (det A det−1(B DA−1C)) = 1 from (9.1.1), we have, − · − ∂(y(3),ω(3)) (9.1.5) B dydω u(yA + ωD,yC + ωB) = sdet M −1 B dxdθ sdet u(x,θ). − · − ∂(x,θ) ZZ ZZ   Remark 9.1.1. For the linear change of variables, it is not necessary to assume the compact- ness of support for integrand using primitive definition of integration.
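The equality of the two expressions for the superdeterminant in (9.1.1) can be checked concretely in the smallest case $m=n=1$, using a toy Grassmann algebra on two generators. The class `G` below is our own illustration, not part of the text: it encodes $c_0+c_1e_1+c_2e_2+c_{12}e_1e_2$, and since every soul squares to zero here, inverting an even invertible element needs only a two-term geometric series.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class G:
    """c0 + c1*e1 + c2*e2 + c12*e1*e2 in the Grassmann algebra on e1, e2."""
    c0: float = 0.0
    c1: float = 0.0
    c2: float = 0.0
    c12: float = 0.0

    def __add__(self, o):
        return G(self.c0 + o.c0, self.c1 + o.c1, self.c2 + o.c2, self.c12 + o.c12)

    def __sub__(self, o):
        return G(self.c0 - o.c0, self.c1 - o.c1, self.c2 - o.c2, self.c12 - o.c12)

    def __mul__(self, o):
        # e1*e1 = e2*e2 = 0 and e2*e1 = -e1*e2
        return G(self.c0 * o.c0,
                 self.c0 * o.c1 + self.c1 * o.c0,
                 self.c0 * o.c2 + self.c2 * o.c0,
                 self.c0 * o.c12 + self.c12 * o.c0 + self.c1 * o.c2 - self.c2 * o.c1)

    def inv(self):
        # the soul is nilpotent (soul**2 = 0 in this algebra), so
        # (body + soul)^{-1} = body^{-1} - soul * body^{-2}
        b = 1.0 / self.c0
        return G(b, -self.c1 * b * b, -self.c2 * b * b, -self.c12 * b * b)

# m = n = 1: A, B even and invertible, C, D odd, with M = [[A, C], [D, B]]
A, B = G(2.0, 0.0, 0.0, 3.0), G(5.0, 0.0, 0.0, -1.0)
C, D = G(0.0, 1.0, 2.0, 0.0), G(0.0, 3.0, -1.0, 0.0)

lhs = A * (B - D * A.inv() * C).inv()   # det A * det^{-1}(B - D A^{-1} C)
rhs = (A - C * B.inv() * D) * B.inv()   # det(A - C B^{-1} D) * det^{-1} B
assert abs(lhs.c0 - rhs.c0) < 1e-12 and abs(lhs.c12 - rhs.c12) < 1e-12
assert abs(lhs.c1 - rhs.c1) < 1e-12 and abs(lhs.c2 - rhs.c2) < 1e-12
```

Both expressions give the same even Grassmann number, as (9.1.1) asserts.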

(II) (ii-a) If $H_1$ and $H_2$ are superdiffeomorphisms of open subsets of $\mathbb{R}^{m|n}$ such that the image of $H_1$ equals the domain of $H_2$, then
\[
\mathrm{Ber}(H_1)\cdot\mathrm{Ber}(H_2)=\mathrm{Ber}(H_2\circ H_1),
\qquad\text{where}\quad \mathrm{Ber}(H)(y,\omega)=\mathrm{sdet}\,J(H)(y,\omega).
\]
Here, for $H(y,\omega)=(x_k(y,\omega),\theta_l(y,\omega)):\mathbb{R}^{m|n}\to\mathbb{R}^{m|n}$, we put
\[
J(H)(y,\omega)=\begin{pmatrix}
\dfrac{\partial x_k(y,\omega)}{\partial y_i} & \dfrac{\partial\theta_l(y,\omega)}{\partial y_i}\\[6pt]
\dfrac{\partial x_k(y,\omega)}{\partial\omega_j} & \dfrac{\partial\theta_l(y,\omega)}{\partial\omega_j}
\end{pmatrix}
=\frac{\partial(x,\theta)}{\partial(y,\omega)}.
\]

(ii-b) Any superdiffeomorphism of an open subset of $\mathbb{R}^{m|n}$ may be decomposed as $H=H_2\circ H_1$ where
\[
(9.1.6)\qquad
\begin{cases}
H_1(y,\omega)=(h_1(y,\omega),\omega)=(\tilde{y},\tilde{\omega}) & \text{with } h_1:\mathbb{R}^{m|n}\to\mathbb{R}^{m|0},\\
H_2(\tilde{y},\tilde{\omega})=(\tilde{y},h_2(\tilde{y},\tilde{\omega})) & \text{with } h_2:\mathbb{R}^{m|n}\to\mathbb{R}^{0|n}.
\end{cases}
\]

Remark 9.1.2. (i) If H(y,ω) = (h1(y,ω), h2(y,ω)) is given by h1(y,ω) = yA + ωD and −1 h2(y,ω)= yC+ωB as above, putting H1(y,ω) = (yA+ωD,ω)=(˜y,ω) and H2(˜y,ω)=(˜y, yA˜ C+ ω(B DA−1C)), we have H = H H . In this case, we rewrite the procedures (9.1.2)–(9.1.5) as − 2 ◦ 1 B dydω u(yA + ωD,yC + ωB) − ZZ ∂(y,ω) = B dyd˜ ω˜ sdet u(˜y, (˜y ωD)A−1C + ωB) with y˜ = yA + ωD − ∂(˜y,ω) − ZZ   ∂(˜y,ω) = det A−1 B dxdθ sdet u(x,θ) with x =y, ˜ θ =yA ˜ −1C + ω(B DA−1C) · − ∂(x,θ) − ZZ   = det A−1 det(B DA−1C) B dxdθ u(x,θ). · − · − ZZ (ii) Analogously, putting H (y,ω) = (y,yC + ωB) = (y,θ) and H (y,θ) = (y(A CB−1D)+ 1 2 − θB−1D,θ), we have H = H H , and 2 ◦ 1 B dydω u(yA + ωD,yC + ωB) − ZZ ∂(y,ω) = B dydθ sdet u(y(A CB−1D)+ θB−1D,θ) with θ = yC + ωB − ∂(y,θ) − ZZ   ∂(y,θ) = det B B dxdθ sdet u(x,θ) with x = y(A CB−1D)+ CB−1θ · − ∂(x,θ) − ZZ   = det B det−1(A CB−1D) B dxdθ u(x,θ). · − · − ZZ

(iii) For any given superdiffeomorphism H(y,ω) = (h1(y,ω), h2(y,ω)), defining H1(y,ω) =

(h1(y,ω),ω)=(˜y,ω) and H2(˜y,ω)=(˜y, h˜2(˜y,ω)) such that h2(y,ω)= h˜2(h1(y,ω),ω), we have H = H H . Using the inverse function y = g(˜y,ω) ofy ˜ = h (y,ω), we put h˜ (˜y,ω) = h (g(˜y,ω),ω). 2 ◦ 1 1 2 2 We denote h (y,ω) = (h (y,ω)) = (h , , h ), etc. Then, for k,ℓ = 1, ,n, 1 1a 11 · · · 1n · · · m ∂h˜ ∂h ∂g ∂h 2ℓ = 2ℓ + i 2ℓ ∂ωk ∂ωk ∂ωk ∂yi Xi=1 with m ∂y˜ ∂h (g(y,ω),ω) ∂g ∂h ∂h 0= j = 1j = i 1j + 1j , ∂ωk ∂ωk ∂ωk ∂yi ∂ωk Xj=1 we get m ∂h˜ ∂h ∂h ∂h −1 ∂h 2ℓ = 2ℓ 1j 1j 2ℓ . ∂ωk ∂ωk − ∂ωk ∂yi ∂yi i,jX=1   Therefore,

∂h1 ∂h2 ∂h ∂h ∂h ∂h −1 ∂h ∂h ∂h˜ Ber H = sdet ∂y ∂y = det 1 det −1 2 1 1 2 = det 1 det −1 2 . ∂h1 ∂h2 ∂y · ∂ω − ∂ω ∂y ∂y ∂y · ∂ω ∂ω ∂ω !     172 9. MISCELLANEOUS

(III) For each type of superdiffeomorphisms H1 and H2, we prove the formula. (III-1) Let H(y,ω) = (h(y,ω),ω) where h = (h )m : Rm|n Rm|0. Then it is clear that j j=1 → m ∂h (y,ω) ∂hσ(i)(y,ω) Ber(H)(y,ω) = det j = sgn (σ) . ∂y ∂y i σ∈℘ i   Xm Yi=1 a For any u(x,θ)= |a|≤n θ ua(x), we put P B dxdθ u(x,θ)= dx dθ u(x,θ) = dxu1˜(x). − 0|n ZZU ZUev,B  ZR  ZUev,B On the other hand, we have

B dydω Ber(H)(y,ω)(u H)(y,ω) − ◦ ZZV ∂ ∂ ∂h (y,ω) = dy det j u(h(y,ω),ω) ∂ω · · ·∂ω ∂y ZπB(V) n 1   i   ω=0 (9.1.7) ∂hj (y, 0) = dy det u˜(h(y, 0)) ∂y 1 ZπB(V)   i   ∂ ∂ a ∂hj(y,ω) + dy ω ua(h(y,ω)) det . V ∂ωn · · · ∂ω1 ∂yi ZπB( )  |a|

Applying the standard integration on Rm to the first term of the rightest hand side above, we have ∂h (y, 0) dy det j u (h(y, 0)) = dx u (x) where U = H(V). ∂y 1˜ 1˜ ZπB(V)   i   ZUev,B Claim 9.1.1. The second term of the right hand side of (9.1.7) equals to the total derivatives a of even variables. More precisely, we have, for u(x,θ)= |a|≤n θ ua(x), m ∂ ∂ a P ∂ ω ua(h(y,ω)) Ber(H)(y,ω) = ( ) ∂ωn · · ·∂ω1 ∂yj ∗  |a|

As h (y,ω) R , we have j ∈ ev c hj(y,ω)= hj0˜(y)+ ω hj,c(y), |c|=evX≥2 ∂αu (h (y)) u (h(y,ω)) = u (h (y)) + ωch (y)u (h (y)) + x a 0˜ ( ωch (y))α, a a 0˜ j,c a,xj 0˜ α! j,c |c|=evX≥2 |αX|≥2 |c|=evX≥2 m ∂h (y,ω) ∂hσ(i)(y,ω) Ber(H)(y,ω) = det j = sgn (σ) ∂y ∂y i σ∈℘ i   Xm Yi=1 m m ∂h ˜(y) ∂hσ(j),c(y) ∂hσ(i),0˜(y) = det j,0 + sgn (σ) ωc ∂y ∂y ∂y i σ∈℘ j i   Xm Xj=1 |c|=evX≥2 i=1Y,i6=j m m ∂hσ(j),c (y) ∂hσ(k),c (y) ∂hσ(i),0˜(y) + sgn (σ) ωc 1 2 + etc. ∂y ∂y ∂y σ∈℘ j k i Xm j,kX=1 |cXj|=ev i=1Y,i6=j,k |c1+c2|=|c|≥4

Putting 1˜ a = b or = c + c , = c + c + c , etc, we have − 1 2 1 2 3 b (9.1.8) the coefficient of ω of fa(h(y,ω)) Ber(H)(y,ω)=I+II+III 9.1. PROOF OF BEREZIN’S THEOREM 5.2.1 173 where m m ∂hσ(i),0˜(y) I= h (y)u (h (y)) sgn (σ) , j,b a,xj 0˜ ∂y σ∈℘ i Xj=1 Xm Yi=1 m m ∂hσ(j),b(y) ∂hσ(i),0˜(y) II = u (h (y)) sgn (σ) , a 0˜ ∂y ∂y σ∈℘ j i Xm Xj=1 i=1Y,i6=j m m ∂hσ(j),c (y) ∂hσ(k),c (y) ∂hσ(i),0˜(y) III = u (h (y)) sgn (σ) 1 2 + etc. a 0˜ ∂y ∂y ∂y σ∈℘ j k i Xm j,kX=1 b=Xc1+c2 i=1Y,i6=j,k The term II is calculated as m m ∂ ∂hσ(i),0˜(y) II = f (h (y)) sgn (σ)h (y) A B ∂y a 0˜ σ(j),b ∂y − − j σ∈℘ i Xj=1  Xm i=1Y,i6=j  where m m m ∂h ˜(y) ∂hσ(i),0˜(y) A = k,0 u (h (y)) sgn (σ)h (y) , ∂y a,xk 0˜ σ(j),b ∂y j σ∈℘ i Xj=1  Xk=1  Xm i=1Y,i6=j m m ∂ ∂hσ(i),0˜(y) B = u (h (y)) sgn (σ)h (y) . a 0˜ σ(j),b ∂y ∂y σ∈℘ j i Xj=1 Xm  i=1Y,i6=j  We want to prove

Claim 9.1.2. (i) A =I, (ii) B = 0 and (iii) III= 0.

(i) To prove A = I, for each k = 1, ,m, we take all sums w.r.t. σ ℘ and j such that · · · ∈ m σ(j)= k. Then, relabeling in A, we have m m ∂hσ(j),0˜(y) ∂hσ(i),0˜(y) u (h (y)) sgn (σ)h (y) ∂y a,xk 0˜ k,b ∂y σ∈℘ j i Xm Xj=1 i=1Y,i6=j m ∂hσ(i),0˜(y) = u (h (y))h (y) sgn (σ) . a,xk 0˜ k,b ∂y σ∈℘ i Xm Yi=1 (ii) Take two permutations σ andσ ˜ in ℘m such that σ(i) =σ ˜(j), σ(j) =σ ˜(i), ,σ(k) =σ ˜(k) for k = i, j, and sgn (σ) sgn (˜σ)= 1. 6 − Then, m m ∂ ∂hσ(i),0˜(y) ∂ ∂hσ˜(i),0˜(y) sgn (σ)hσ(j),b(y) + sgn (˜σ)hσ,b˜ (y) = 0. ∂yj ∂yi ∂yj ∂yi  i=1Y,i6=j   i=1Y,i6=j  (iii) Interchanging the role of j, k and c1, c2 in III, we have III = 0. Others are treated analogously. Therefore, m ∂ I+II+III= A + B = ( ) ∂yj ∗ Xj=1 and we have proved the claim above.

Now, if we assume the compactness of the support of u (x) for a = 1,˜ then a | | 6 ∂ ˜ dy u (h(y,ω))∂1−a Ber(H)(y,ω) = 0. ∂y a ω ω=0 ZπB(U) i 

174 9. MISCELLANEOUS

(III-2) For $H(y,\omega)=(y,\phi(y,\omega))$ with $\phi(y,\omega)=(\phi_1(y,\omega),\cdots,\phi_n(y,\omega))\in\mathbb{R}^{0|n}$, we want to claim
\[
(9.1.9)\qquad
\mathrm{B}\!\!\iint_{\mathcal{V}}dxd\theta\,u(x,\theta)
=\mathrm{B}\!\!\iint_{\mathcal{U}}dyd\omega\,\det\Bigl(\frac{\partial\phi_i}{\partial\omega_j}\Bigr)^{-1}u(y,\phi(y,\omega)).
\]
By a proof analogous to that of (5.1.9) in Proposition 5.1.1, i.e. the odd change of variables formula, we obtain this readily. $\square$

Remark 9.1.3. How should we decompose a given superdiffeomorphism? Berezin decomposes it as in (9.1.6), but M. J. Rothstein [120] introduces another decomposition with arguments that are outside my comprehension¹. Moreover, there is another attempt by Zirnbauer [147] which I have not yet appreciated. Should I be ashamed?! Therefore, in Chapter 5 I take the understandable arguments of Rogers, Vladimirov and Volovich, with slight modification. In any case, the following decomposition of Rothstein, which is not proved here, is the key:

Proposition 9.1.1 (Proposition 3.1 of Rothstein [120]). Let superdiffeomorphism ϕ from V to U = ϕ(V) be given as

(9.1.10) x = x(y,ω)= ϕ0¯(y,ω), θ = θ(y,ω)= ϕ1¯(y,ω). We assume that the following:

∂ϕ0¯(y,ω) ∂ϕ1¯(y,ω) ∂xj ∂θl ∂y ∂y ∂yi ∂yi πB(sdet J(ϕ)(y,ω)) =0 with J(ϕ)(y,ω)= = ∂x . 6 ∂ϕ0¯(y,ω) ∂ϕ1¯(y,ω) j ∂θl ∂ω ∂ω ! ∂ωk ∂ωk ! Then, ϕ is decomposed uniquely by ϕ(1) : (y,ω) (˜y, ω˜) and ϕ(2) : (˜y, ω˜) (x, θ) satisfying → → (1) Z (1) (1) (1) (i) ϕ endows identical 2-gradings, that is, ϕ : (y,ω) (˜y, ω˜)=(˜ϕ0¯ (y), ϕ˜1¯ (y,ω)), and → (ii) ϕ(2) is derived from the following even and degree increasing derivation by Y(˜y,ω˜) (x, θ)= eY(˜y,ω˜) (˜y, ω˜)= ϕ(2)(˜y, ω˜) and (x, θ)= ϕ(2)(ϕ(1)(y,ω)) = ϕ(2) ϕ(1)(y,ω). ◦ Here, λ˜ (ϕ(V): Rm ), γ˜ (ϕ(V): Rn ) and j ∈CSS ev k ∈CSS od m ∂ n ∂ (9.1.11) = λ˜ (˜y, ω˜) + γ˜ (˜y, ω˜) . Y(˜y,ω˜) j ∂y˜ k ∂ω˜ j=1 j k X Xk=1

9.2. Function spaces and Fourier transformations

9.2.1. Function spaces. Let U Rm be a domain. We introduce following function spaces: ⊂ Definition 9.2.1 (Function spaces on U with values in C). ∞ I ∞ C (U : C)= u(q)= uI(q)σ uI(q) C (U : C) for any I , { ∈ ∈ I} I∈I ∞ X∞ ∞ C (U : C)= u(q) C (U : C) uI(q) C (U : C) for any I , 0 { ∈ ∈ 0 ∈ I} ∞ (U : C)= u(q) C (U : C) uI(q) (U : C) for any I , B { ∈ ∈B ∈ I} m ∞ m ˙(R : C)= u(q) C (U : C) uI(q) ˙(R : C) for any I , B { ∈ ∈ B ∈ I} m ∞ m m (R : C)= u(q) C (R : C ) uI(q) (R : C) , S { ∈ ∈ S } m ∞ m m M (R : C)= u(q) C (R : C) uI(q) M (R : C) . O { ∈ ∈ O }

1 Though following the principle of "quick response when something feels strange", I asked him by mail, but received no response. I do not know exactly, but perhaps he has changed from mathematician to a different occupation.

Remark 9.2.1. In the above, we use the notation given in L.Schwartz [125] That is, ˙ m m α (R : C)= u(q) (R : C) lim ∂q u(q) = 0 for any α , B { ∈B |q|→∞ | | } m ∞ m α 2 k/2 M (R : C)= u(q) C (R : C ) ∂ u(q) C(1 + q ) O { ∈ | q |≤ | | for some constants C > 0 and k > 0 . } For a superdomain U = U Rn Rm|n with U = π (U)= π (U ) Rm, we put: ev × od ⊂ B B ev ⊂

Definition 9.2.2 (Function spaces on U with values in C). Putting X = (x,θ), XB = q U, a ∈ ∂θ u(x, 0) = ua(x), (U : C)= u(X)= u (x)θa u (q) C∞(U : C) for any a , CSS { a a ∈ } |a|≤n X (U : C)= u(X) (U : C) u (q) C∞(U : C) for any a , CSS,0 { ∈CSS a ∈ 0 } SS(U : C)= u(X) SS(U : C) ua(q) (UB : C) for any a , B { ∈C ∈B } m|n m|n m ˙SS(R : C)= u(X) SS(R : C) ua(q) ˙(R : C) for any a , B { ∈C ∈ B } m|n m|n m SS(R : C)= u(X) SS(R : C) ua(q) (R : C) for any a , S { ∈C ∈ S } m|n m|n m M,SS(R : C)= u(X) SS(R : C) ua(q) M (R : C) for any a . O { ∈C ∈ O } Remark 9.2.2. If we consider function spaces whose member are homogeneous, we denote them by adding subindeces ev or od, i.e. (U : C)= (U : C ) or (U : C)= (U : C ) CSS,ev CSS ev CSS,od CSS od etc.

Definition 9.2.3 (Function spaces on U with value ‘real’). / (U )= u(x) (U : C) u(x ) C∞(U : C) , CSS ev { ∈CSS ev B ∈ B } a / (U)= u(X) SS(U : C) ∂ u(x, 0) / (Uev) for any a , CSS { ∈C θ ∈ CSS } a ∞ / (U)= u(X) SS(U : C) ∂ u(XB) C (UB : C) for any a , CSS,0 { ∈C θ ∈ 0 } a / (U)= u(X) SS(U : C) ∂ u(XB) (UB : C) for any a , BSS { ∈B θ ∈B } ˙ m|n m|n a m / (R )= u(X) SS(R : C) ∂ u(XB) ˙(R : C) for any a , BSS { ∈C θ ∈ B } m|n m|n a m / (R )= u(X) SS(R : C) ∂ u(XB) (R : C) for any a , SSS { ∈ S θ ∈ S } m|n m|n a m / (R )= u(X) SS(R : C) ∂ u(XB) M (R : C) for any a . OM,SS { ∈C θ ∈ O }

Definition 9.2.4 (Topology of function spaces) . We introduce seminorms in (U : C) for any integer k, I and a compact set K U , • CSS ∈I ⊂ B by defining a pk,K,I(u)= sup projI(DX u(XB)) . XB∈K,|a|≤k | | (U : C) with this topology will be denoted by (U : C). CSS ESS We say that u 0 in (U : C) when j iff for any I , there exists a compact • j → CSS,0 →∞ ∈I set KI UB such that ⊂ a (i) the support of projI(∂θ uj( , 0)) is contained in KI for any a and j, and a · (ii) sup projI(D u (X )) 0 as j for any a = (α, a). XB∈KI | X j B |→ →∞ We denote (U : C) the set (U : C) with this topology and call it as the space of test DSS CSS,0 functions on U. 176 9. MISCELLANEOUS

a We say that u 0 in (U : C) iff for any I and a, projI(D u (X )) converges • j → BSS ∈ I X j B uniformly to 0 on any compact set KI and they are bounded on Rm. For u (Rm|n : C) and any integer k and I , we put • ∈ SSS ∈I 2 j/2 a (9.2.1) pk,I(u)= sup (1 + XB ) projI(DX u(XB)) . m XB∈R ,|a|+j≤k | | | | The topology of spaces / (U), / (U), / (U) and / (Rm|n) is defined accordingly as • E SS DSS BSS SSS above.

9.2.2. Scalar products and norms.

Definition 9.2.5 (Hermitian conjugation).

For x = (x , ,x ) Rm and θ = (θ , ,θ ) Rn , we put • 1 · · · m ∈ ev 1 · · · n ∈ od I x¯ = (¯x1, , x¯m) with x¯j = xj,Iσ · · · I | X|=ev I where σ is defined in (2.1.15) and xj,I =the complex conjugate of xj,I in C.

a ¯an ¯a1 |a|(|a|−1)/2 ¯a ¯ I (9.2.2) θ = θn θ1 = ( 1) θ and θk = θk,Iσ · · · − I | X|=od where θk,I = the complex conjugate of θk,I in C. I ∞ m Let w(q)= I wI(q)σ C (R : C). We put • ∈I ∈ I (9.2.3) w(q)= P wI(q)σ with wI(q)= the complex conjugate of wI(q) in C. I X∈I For w˜(x) (U : C) with w(q) C∞(Rm : C), we put • ∈CSS ev ∈ (9.2.4) w˜(x) w˜(¯x). ≡ For v(θ)= θav (C), we put • |a|≤n a ∈ Pn (9.2.5) v(θ)=P θ¯av∗ =v ¯(θ¯) with v∗ = ( 1)|a|(|a|−1)/2((v ) + ( 1)|a|(v ) ). a a − a ev − a od |aX|≤n For u(X) (U : C), we put • ∈CSS ¯a ∗ ¯ ¯ ¯ (9.2.6) u(X)= θ ua(¯x) =u ¯(X) where X = (¯x, θ) |aX|≤n with u∗(¯x) = ( 1)|a|(|a|−1)/2((u (x)) + ( 1)|a|(u (x)) ), a − a ev − a od 1 1 u (x)= ∂αu (x )xα = ∂αu (x )¯xα, a α! q a B S α! q a B S α α X X where x¯ = xB +x ¯S.

H Remark 9.2.3. (1) The Grassmann continuation w˜(x) is expanded as w˜(x)= H w˜H(x)σ with P ∗ 1 α w˜H(x)= ( 1) ∂q wJ (xB)x I(1) x I(α1) x I(1) x I(αm) , − α! 1, 1 · · · 1, 1 2, 2 · · · m, m H J I(1) I(αm) = + 1X+···+ m I I(1) I(αj ) j = j +···+ j , α=(α1,··· ,αm) 9.2. FUNCTION SPACES AND FOURIER TRANSFORMATIONS 177 we get H 1 α α w˜(x)= w˜H(x)σ = ∂q w(xB)xS = w( )(¯x). H α! · X X This guarantees the naturalness of (9.2.3). g (2) The definiton of (9.2.4) and (9.2.5) follows from the relation θav = v θa for any v C. a a a ∈ Definition 9.2.6 (L2 spaces).

For u, w (U : C), we define the scalar product and the L2-norm by • ∈CSS,0 ev (9.2.7) (u, w)= dx u(x )w(x ) C and u 2 = (u, u) R. B B B ∈ k k ∈ ZπB(Uev) For v, w (C), we put • ∈ Pn (9.2.8) (v, w)= v w and v 2 = (v, v). a a k k |aX|≤n For u, w (U : C), we define also • ∈CSS,0 2 (9.2.9) (u, w)= dx ua(x)wa(x)= dxB ua(xB)wa(xB) with u = (u, u). Uev πB(Uev) k k |aX|≤n Z |aX|≤n Z Remark 9.2.4. (1) It is clear that if u, v are in / (U) or / (Rm|n), (u, v) C and CSS,0 SSS ∈ (u, u)= u 2 0. k k ≥ (2) In the following, we explain the derivation of the above scalar products: (i) If f(q), g(q) C∞(U : C), using δ-function symbolically, we may consider the standard scalar ∈ B product as

(f, g)= dq f(q)g(q)= dq′dq δ(q q′)f(q)g(q′) with U = π (U). − B B ZUB ZZUB,q′ ×UB,q On the other hand, for u(x) (U : C) and w(y) (U : C), we may regard u(x)w(y) ∈CSS,0 ev,x ∈CSS,0 ev,y ¯ as a function u(X)w(y) SS,0(Uev,X¯ Uev,y : C). Therefore, remarking (uw)(xB)= u(xB)w(xB)= τ(I;J,K) ∈C I × J K τ(I;J,K) I I J K ( 1) uJ(q)wK(q)σ with σ σ = ( 1) σ for I = J + K, we may define = + ∈I − − P (9.2.10) (u, w)= dydx¯ δ(¯x y)u(x)w(y)= dxB u(xB)w(xB) C. U U − Rm ∈ ZZ ev,y× ev,X¯ Z (ii) We introduce here a constant τ(a, b) for any multi-indeces a, b by

(9.2.11) θaθb = ( 1)τ(a,b)θa+b − from which we get easily

(9.2.12) τ(b, a) a b + τ(a, b) mod2. ≡ | || | Moreover, for any b and θ, θ,¯ π R , we get, by induction with respect to b , ∈ od | | n n (9.2.13) (π θ )bj = ( 1)|b|(|b|−1)/2πbθb and (θ¯ θ )bj = ( 1)|b|(|b|−1)/2θ¯bθb. j j − j j − jY=1 jY=1 (iii) By putting θ¯ θ = n θ¯ θ and h | i j=1 j j (9.2.14) P dθ = dθ¯ dθ¯ = ( 1)n(n−1)/2dθ¯ dθ¯ , 1 · · · n − n · · · 1 178 9. MISCELLANEOUS we get (9.2.8) from

hθ¯|θi a b (9.2.15) (u, w)= dθdθ e θ ua θ wb = ((ua)ev + (ua)od)wa = uawa. R0|n R0|n θ × ¯ a a a ZZ θ X Xb X X Here, we used equalities below and (9.2.13):

θau θbw = θaθb((u ) + ( 1)|a|+|b|(u ) )w , a b a ev − a od b (9.2.16)  n hθ¯|θi a b dθdθ e θ θ = δab = δa b .  0|n 0|n k k  R ×R¯ ZZ θ θ kY=1  In fact, taking up the top term w.r.t. θ¯,

n ¯ ehθ|θiθaθb ( 1)|a|(|a|−1)/2θ¯a (θ¯ θ )a¯j θb = ( 1)|a|(|a|−1)/2θ¯a( 1)|a¯|(|a¯|−1)/2θ¯a¯θa¯θb ∼ − j j − − j=1 Y = ( 1)|a|(|a|−1)/2+|a¯|(|a¯|−1)/2+τ(a,a¯)θ¯1˜θa¯θb, − and using (9.2.14), we have

¯ dθdθ ehθ|θiθaθb = ( 1)n(n−1)/2+|a|(|a|−1)/2+|a¯|(|a¯|−1)/2+τ(a,a¯) dθ θa¯θb R0|n×R0|n − R0|n ZZ θ θ¯ Z = ( 1)n(n−1)/2+|a|(|a|−1)/2+|a¯|(|a¯|−1)/2+τ(a,a¯)+τ(¯a,a)δ = δ − ab ab since n(n 1)/2+ a ( a 1)/2+ a¯ ( a¯ 1)/2+ τ(a, a¯)+ τ(¯a,a) 0 mod2. // − | | | |− | | | |− ≡ (iv) Finally, as we have

¯ dXdX δ(¯x x)ehθ|θiu(X)w(X) U U − ZZ X × X¯ hθ¯|θi (9.2.17) = dxdx¯ δ(¯x x) dθdθ e u(x,θ)w(x,θ) π (U ) ×π (U ) − R0|n×R0|n ZZ B ev x B ev x¯ (ZZ θ θ¯ )

= dxBua(xB)wa(xB)= (ua, wa), πB(Uev) |aX|≤n Z |aX|≤n our definition (9.2.9) seems canonical.
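The constant $\tau(a,b)$ of (9.2.11) simply counts the transpositions needed to merge $\theta^a\theta^b$ into $\theta^{a+b}$. The brute-force check below is our own illustration; it also verifies the parity relation (9.2.12) for all disjoint pairs of multi-indices with $n=4$.

```python
from itertools import product

def tau(a, b):
    # theta^a theta^b = (-1)^{tau(a,b)} theta^{a+b}: each pair (i in supp a,
    # j in supp b) with j < i forces one anticommutation, hence one sign
    return sum(1 for i, ai in enumerate(a) for j, bj in enumerate(b)
               if ai and bj and j < i)

assert tau((0, 1), (1, 0)) == 1          # theta2 theta1 = -theta1 theta2

n = 4
for a in product((0, 1), repeat=n):
    for b in product((0, 1), repeat=n):
        if any(ai and bi for ai, bi in zip(a, b)):
            continue                     # theta^a theta^b = 0 unless supports are disjoint
        # relation (9.2.12): tau(b, a) = |a||b| + tau(a, b) mod 2
        assert (tau(b, a) - sum(a) * sum(b) - tau(a, b)) % 2 == 0
```

The relation holds because $\tau(a,b)+\tau(b,a)$ counts every mixed pair of indices exactly once, giving $|a|\,|b|$ for disjoint supports.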

Lemma 9.2.1. Denoting by A∗ the dual of the operator A in the above scalar product, we get easily ~ ∗ ~ (9.2.18) ∂ = ∂ , (x )∗ = x , i xj i xj j j ∗  ∗ a ∗ a a ∗ t a (9.2.19) (∂θk ) = θk, (θk) = ∂θk , (Dθ ) = θ , (∂θ ) = (θ ).

Proof. For a, b 0, 1 n, we puta ˇ = (a , , a 1, , a ), ˆb = (b , , b + 1, , b ) ∈ { } k 1 · · · k − · · · n k 1 · · · k · · · n where ∂ θa = 0 with a for a = 0. Let v, w (C). Remarking (9.2.16), we have θk k ∈ Pn b ℓ (a) b ∂ (θau )(θ v )= u ( 1) k θaˇk (θ v ) θk a b a − b b ℓ (a) |aˇ |+|b| = θaˇk θ ( 1) k ((u ) + ( 1) k (u ) )v , − a ev − a od b and ˆ ˆ ˆ θau θ θbv = u θa( 1)ℓk(b)θbk v = ( 1)ℓk(b)θaθbk ((u ) + ( 1)|a|+|bk|(u ) )v , a k b a − b − a ev − a od b 9.2. FUNCTION SPACES AND FOURIER TRANSFORMATIONS 179 which yield hθ¯|θi a b (∂θ u, v)= dθdθ e ∂θ (θ ua) vbθ k 0|n 0|n k Rθ ×R¯ a ZZ θ X Xb = ( 1)lk(a)δ ((u ) + ( 1)|a|−1+|b|(u ) )v − aˇkb a ev − a od b Xa,b lk(b) |a|+|b|+1 = ( 1) δ ˆ ((ua)ev + ( 1) (ua)od)vb − abk − Xa,b hθ¯|θi a b = dθdθ e uaθ θkvbθ = (u, θkv). R0|n R0|n θ × ¯ a ZZ θ X Xb Repeating this arguments, we have other equalities readily. 
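The duality $(\partial_{\theta_k})^*=\theta_k$ of (9.2.19) can be seen directly on coefficients for $n=2$. The sketch below is our own and takes plain $\mathbb{C}$-valued coefficients, so the even/odd sign adjustments of (9.2.5) do not appear.

```python
import random

random.seed(0)
idx = [(0, 0), (1, 0), (0, 1), (1, 1)]        # exponents a = (a1, a2)
rnd = lambda: complex(random.random(), random.random())
v = {a: rnd() for a in idx}
w = {a: rnd() for a in idx}

def d_theta1(u):
    # left derivative: d/dtheta1 (theta1) = 1, d/dtheta1 (theta1 theta2) = theta2
    return {(0, 0): u[(1, 0)], (0, 1): u[(1, 1)], (1, 0): 0j, (1, 1): 0j}

def mul_theta1(u):
    # left multiplication: theta1 * 1 = theta1, theta1 * theta2 = theta1 theta2
    return {(1, 0): u[(0, 0)], (1, 1): u[(0, 1)], (0, 0): 0j, (0, 1): 0j}

def inner(u, w):
    # (u, w) = sum_a conj(u_a) w_a, the scalar product (9.2.8) for C-valued u_a
    return sum(u[a].conjugate() * w[a] for a in idx)

# (d_theta1 v, w) = (v, theta1 w): the duality (9.2.19) on coefficients
assert abs(inner(d_theta1(v), w) - inner(v, mul_theta1(w))) < 1e-12
```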

9.2.3. Distributions. Definition 9.2.7 (Distributions on U Rm with values in C). ⊂ Let Φ be a linear functional defined on (U : C) such that Φ(u ) 0 in C iff u 0 in • DSS j → j → (U : C). Then, we call this functional as a distribution on U, and the set composed CSS,0 of these is denoted by ′ (U : C). DSS ′ (U : C) stands for the set consisting of continuous linear functionals on (U : C). • ESS ESS ′ (U : C) stands for the set consisting of continuous linear functionals on (U : C). •BSS BSS Φ ′ (Rm|n : C) iff Φ is a continuous linear functional on (Rm|n : C). • ∈ SSS SSS /′ (U), /′ (U), /′ (U) and /′ (Rm|n) are defined analogously. • DSS ESS BSS SSS ′ I ′ SS(U : C)= Φ(q)= ΦI(q)σ ΦI(q) (U : C) for any I , D { I ∈ D ∈ I} X∈I ′ ′ ′ (U : C)= Φ(q) (U : C) ΦI(q) = projI(Φ(q)) (U : C) for any I , ESS { ∈ DSS ∈E ∈ I} ′ m ′ m|n ′ m (R : C)= Φ(q) (R : C) ΦI(q) (R : C) for any I . SSS { ∈ DSS ∈ S ∈ I} ′ Here, Φ (U : C) acts on u SS(U : C) by ∈ DSS ∈ D I Φ, u = ΦJ, uK σ . h i h i I∈I I=J+K X X  Other dualities are defined analogously. Definition 9.2.8 (Distributions on U with values in C). ′ (U : C)= Φ(X)= Φ (x)θa Φ (X )= ∂aΦ(x , 0) ′(U : C) for any a , DSS { a | a B θ B ∈ DSS B } a X ′ (U : C)= Φ(X) ′ (U : C) Φ (X ) ′(U : C) for any a , ESS { ∈ DSS | a B ∈ESS B } ′ (Rm|n : C)= Φ(X) ′ (Rm|n : C) Φ (X ) ′(Rm : C) for any a . SSS { ∈ DSS | a B ∈ SSS } Action of Φ ′ (U : C) on u (U : C) are defined by ∈ DSS ∈ DSS Φ, u = ∂aΦ(x,θ), ∂au(x,θ) . h i h θ θ i |aX|≤n Proposition 9.2.1. Let Φ be a continuous linear functional on (U : C). Then, Φ(X) is DSS represented by a ′ Φ(X)= Φ (x)θ where projI(Φ (x )) (U : C) for each I . a a B ∈ D B ∈I |aX|≤n 180 9. MISCELLANEOUS

Analogous results hold for any element of ′ (U : C), ′ (U : C) or ′ (Rm|n : C). ESS BSS SSS

The proof is omitted here.

Definition 9.2.9 (Sobolev spaces). Let k be a non-negative integer.

We define, for u, v (U : C), • ∈CSS,0 (9.2.20) ((u, v)) = (Da u, Da v) and u 2 = ((u, u)) . k X X k kk k |Xa|≤k For any u, v (Rm|n : C), we define • ∈ SSS (((u, v))) = ((1 + X 2)l/2Da u, (1 + X 2)l/2Da v) k | B| X | B| X (9.2.21) |a|X+l≤k and u 2 = (((u, u))) . ||| |||k k Now, we put ˜2 (U : C)= u (U : C) u < , LSS { ∈CSS,0 k k ∞} k ˜ (U : C)= u SS(U : C) u k < HSS { ∈C k k ∞} and taking the completion of these spaces with respect to corresponding norms, we get the desired spaces 2 (U : C) and k (U : C). The closure of (U : C) in k (U : C) is denoted by LSS HSS CSS,0 HSS k (U : C). The spaces /2 (U) and / k (U) are defined analogously. HSS,0 LSS HSS Remark. We should consider the integral in the last form as the one in the Lebesgue sense.

Definition 9.2.10. Let 1 r . ≤ ≤∞ m|n m|n a m / r (R )= u(X) / (R ) ∂ u(X ) r (R : C) for any a DL ,SS { ∈ SSS θ B ∈ DL } where

m ∞ m α r m r (R : C)= u(q) C (R : C) ∂ u(q) L (R : C) for any α . DL { ∈ q ∈ } m|n The topology on / r (R ) is defined by seminorms DL ,SS a p I (u)= projI(D u(X )) r . k, :r k X B kL |Xa|≤k Definition 9.2.11. ′ (Rm : C)= Φ(q) ′(Rm : C) (1 + q 2)k/2DαΦ(q) ′(Rm : C) OC { ∈ D | | q ∈B for any α and some k , } ′ m ′ m ′ m (R : C)= Φ (R : C) projI(φ(q)) (R : C) , OC { ∈ DSS ∈ OC } ′ m|n ′ m|n a ′ m (R : C)= Φ (R : C ) D Φ(XB) (R : C) . OC,SS { ∈ DSS X ∈ OC }

Remark 9.2.5. (1) The following assertions follow directly from definitions above and the standard distribution theory of [125]: ′ m|n m|n m|n (i) Φ C,SS(R : C) iff for any ϕ SS(R : C), Φ ϕ SS(R : C). ∈ O m|n ′ ′ m|n ∈ D ∗ ∈ S (ii) (/ 1 (R )) = / (R ). DL ,SS B (2) Though we don’t mention other properties on function or distribution spaces on U Rm|n, but ⊂ they will be almost comparable to those in standard case treated in [125]. (3) Sobolev inequalities should be studied separately. 9.2. FUNCTION SPACES AND FOURIER TRANSFORMATIONS 181

9.2.4. Fourier transformations, definitions and their basic properties. In this section, we borrow ideas from [10, 11, 35, 99] with necessary modifications.

9.2.4.1. Fourier transformations (even case). We introduce the Fourier and inverse Fourier transformations of functions of even variables. For $u(x),v(\xi)\in\mathcal{S}_{SS}(\mathbb{R}^{m|0}:\mathbb{C})$, we define
\[
(9.2.22)\qquad (F_e u)(\xi)=(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^{m|0}}dx\,e^{-i\hbar^{-1}\langle x|\xi\rangle}u(x),
\]
\[
(9.2.23)\qquad (\bar{F}_e v)(y)=(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^{m|0}}d\xi\,e^{i\hbar^{-1}\langle y|\xi\rangle}v(\xi).
\]
Remark 9.2.6. If $F$ stands for the standard Fourier transformation on $\mathcal{S}(\mathbb{R}^m:\mathbb{C})$, then it acts on $\mathcal{S}(\mathbb{R}^m:\mathbb{C})$ by $(Fu)(p)=\sum_{\mathbf{I}\in\mathcal{I}}(Fu_{\mathbf{I}})(p)\sigma^{\mathbf{I}}$ for $u(q)=\sum_{\mathbf{I}\in\mathcal{I}}u_{\mathbf{I}}(q)\sigma^{\mathbf{I}}$ with $u_{\mathbf{I}}(q)\in\mathcal{S}(\mathbb{R}^m:\mathbb{C})$. As $\tilde{u}(x)\in\mathcal{S}_{SS}(\mathbb{R}^{m|0}:\mathbb{C})$ is the Grassmann continuation of $u(q)\in\mathcal{S}(\mathbb{R}^m:\mathbb{C})$, we need to say $(\widetilde{Fu})(\xi)=(F_e\tilde{u})(\xi)$.

Proposition 9.2.2. Let u, v (Rm|0 : C). ∈ SSS f α −1 |α| α α |α| α (9.2.24) (Fe(∂x u))(ξ) = (i~ ) ξ (Feu)(ξ), (Fe(x u))(ξ) = (i~) ∂ξ (Feu)(ξ). −1 ′ −1 ′ (9.2.25) (F (ei~ hx|ξ iu))(ξ) = (F u)(ξ ξ′), (F (u(x x′)))(ξ)= e−i~ hx |ξi(F u)(ξ). e e − e − e (9.2.26) (F (u(tx)))(ξ)= t −m(F u)(t−1ξ) for t R× = R 0 . e | | e ∈ \ { }

(9.2.27) F¯eFeu = u and FeF¯ev = v. (9.2.28) (u, v) = (F u, F v) and F u = u . e e k e k k k

−m/2 −i~−1hx|ξi −m/2 (9.2.29) (Feδ)(ξ) = (2π~) dx δ(x)e = (2π~) . m|0 ZR Moreover, F : (Rm|0 : C) (Rm|0 : C) mapping satisfying e SSS → SSS 2 2 (9.2.30) projI((F u)(ξ )) C ~ projI(u(x )) for any I . ||| e B |||k ≤ m, ||| B |||k ∈I

Proof. As ∂αu˜(x) = (^∂αu)(x), if ξ = p Rm, we get the first part of (9.2.24) by x q ∈ α −m/2 −i~−1hq|pi α (Fe(∂x u˜))(ξ) ξ=p = (2π~) dq e ∂q u(q) | m ZR = (i~−1)|α|pα(F u)(p) = (i~−1)|α|ξα(F u˜)(ξ) . e |ξ=p The second equality in (9.2.24) is proved analogously and which shows that (F u˜)(ξ) (Rm|n : e ∈ SSS C). Other equalities in (9.2.24)-(9.2.28) are proved as same as the standard case. (9.2.29) follows by defining F δ, u˜ = δ, F u˜ = dp δ(p)(F u)(p) = (F u)(0) = dq u(q) = dx u˜(x) = h e i h e i Rm Rm Rm|0  (Feu˜)(0). (9.2.30) is a direct consequenceR of the standard theory of FourierR transformation.R ∗ ¯ −1 Remark 9.2.7. The Plancherel formula (9.2.28) stands for Fe = Fe = Fe .
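On the body, $F_e$ is just the standard $\hbar$-scaled Fourier transformation, so properties such as (9.2.27) and (9.2.28) can be checked numerically by quadrature. The grid sizes and the Gaussian test function below are our own choices for illustration.

```python
import numpy as np

hbar = 0.5
x = np.linspace(-15.0, 15.0, 2001)
dx = x[1] - x[0]
u = np.exp(-x**2 / (2.0 * hbar))         # Gaussian: the body of a supersmooth u(x)

def Fe(u_vals, xi):
    # (F_e u)(xi) = (2 pi hbar)^{-1/2} \int dx e^{-i <x|xi>/hbar} u(x), cf. (9.2.22)
    phase = np.exp(-1j * np.outer(xi, x) / hbar)
    return (2.0 * np.pi * hbar) ** -0.5 * phase @ u_vals * dx

# a Gaussian is mapped to a Gaussian: (F_e u)(xi) = exp(-xi^2 / (2 hbar))
xi = np.linspace(-3.0, 3.0, 61)
assert np.allclose(Fe(u, xi), np.exp(-xi**2 / (2.0 * hbar)), atol=1e-8)

# Plancherel (9.2.28): ||F_e u|| = ||u||, checked on a fine xi-grid
xi_full = np.linspace(-10.0, 10.0, 1001)
dxi = xi_full[1] - xi_full[0]
Fu = Fe(u, xi_full)
assert np.isclose((np.abs(Fu)**2).sum() * dxi, (u**2).sum() * dx, atol=1e-6)
```

The truncation to a finite grid is harmless here because the Gaussian and its transform decay far below the tolerances at the grid boundaries.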

9.2.4.2. Fourier transformations (odd case). For v(θ), w(π) (C), we define Fourier trans- ∈ Pn formations withk ¯ C× = C 0 as ∈ \ { } n/2 −i¯k−1hθ|πi (9.2.31) (Fov)(π) =k ¯ ιn dθ e v(θ), 0|n ZR n/2 i¯k−1hθ|πi (9.2.32) (F¯ow)(θ) =k ¯ ιn dπ e w(π) 0|n ZR 182 9. MISCELLANEOUS where we put

i when n 1 mod4, ≡ −iπn(n−2)/4 2 n n(n−1)/2 1 when n 2 mod4, ιn = e and ιn = i ( 1) = ≡ − i when n 3 mod4,  ≡ 1 when n 0 mod4. ≡  Remark 9.2.8. (1) Clearly, in (9.2.32), if we change the role of the variables π and θ, we get

Fo = F¯o. (2) Moreover, we may put differently as

n/2 −¯k−1hθ|πi (F˜ov)(π) =k ¯ n dθ e v(θ), 0|n (9.2.33) ZR ¯ n/2 ¯k−1hθ|πi (F˜ow)(θ) =k ¯ n dπ e w(π). 0|n ZR with 1 when n 1 mod4, ≡ iπn(n−1)/2 2 n(n−1)/2  1 when n 2 mod4, (9.2.34) n = e , n = ( 1) = − ≡ −  1 when n 3 mod4, − ≡ 1 when n 0 mod4. ≡  n  Proposition 9.2.3. Putting 1=˜ (1, , 1) and a¯ = 1˜ a, we have, for v C, · · · − a ∈ −i¯k−1hθ|πi az }| { −1 |a¯| |a¯|(|a¯|−1)/2+τ(a,a¯) a¯ (9.2.35) dθ e θ va = ( i¯k ) ( 1) π va. 0|n − − ZR Moreover, for v, w (C), we have the following: ∈ Pn a −1 |a| n|a| a (Fo(∂θ v))(π) = (i¯k ) ( 1) π (Fov)(π), (9.2.36) − (F (θav))(π) = (i¯k−1)|a|( 1)n|a|∂a(F v)(π). o − π o

−1 ′ (F (ei¯k hθ|π iv))(π) = (F v)(π π′), o o − (9.2.37) −1 ′ (F (v(θ θ′)))(π)= e−i¯k hθ |πi(F v)(π). o − o

(9.2.38) (F (v(sθ)))(π)= sn(F v)(s−1π) for s C×. o o ∈

(9.2.39) FoF¯ow = w and F¯oFov = v.

(9.2.40) (v, w) = (F v, F w) and F v = v if ¯k = 1. o o k o k k k

n/2 1˜ (9.2.41) (Foδ)(π) =k ¯ ιn for δ(θ)= θ .

Proof. We get, by the definition of integration w.r.t. θ and (9.2.13),

−1 −i¯k hθ|πi a a −1 a¯j dθ e θ va = dθ θ va ( i¯k θjπj) 0|n 0|n − ZR ZR = ( i¯k−1)|a¯|( 1)Y|a¯|(|a¯|−1)/2+τ(a,a¯)πa¯v . − − a 9.2. FUNCTION SPACES AND FOURIER TRANSFORMATIONS 183

Moreover, we get

i¯k−1hθ|πi ′ −i¯k−1hθ′|πi ′a dπ e dθ e θ va 0|n 0|n ZR ZR  i¯k−1hθ|πi −1 |a¯| |a¯|(|a¯|−1)/2+τ(a,a¯) a¯ = dπ e ( i¯k ) ( 1) π va 0|n − − ZR = ( i¯k−1)n( 1)|a|(|a|−1)/2+|a¯|(|a¯|−1)/2+τ(a,a¯)+τ(¯a,a)θav ,  − − a where we used (9.2.13) with b =a ¯. By the definition of ι , ι2 ( 1)n(n−1)/2 = 1 and (9.2.12), we n n − have the Fourier inversion formula (9.2.39). Or, we may prove directly this by changing the order of integration:

−1 ′ −1 ′ F¯ F w(θ) =k ¯nι2 dπdθ′e−i¯k hθ −θ|πiw(θ′) =k ¯nι2 ( 1)n dθ′ dπ e−i¯k hθ −θ|πi w(θ′) o o n n − ZZ Z Z  ˜ =k ¯nι2 ( 1)n dθ′(i¯k−1)n( 1)n(n−1)/2(θ′ θ)1w(θ′)= dθ′δ(θ′ θ)w(θ′). n − − − − Z Z

−1 −1 Remarking ∂ (e−i¯k hπ|θiv) = e−i¯k hπ|θi( i¯k−1π v + ∂ v), we get, after integration with respect θj − j θj to θ,

(F ∂ v)(π)= i¯k−1( 1)nπ F v(π) o θj − j o which proves the first equality of (9.2.36) when a = 1. Assuming the first equality of (9.2.36) | | holds for any a satisfying a = l, we apply the above for w(θ) = (∂av)(θ). Then, we get | | θ aˆ ( 1)ℓj (a)F ∂ j w = F ∂ w = i¯k−1( 1)nπ F w − o θ o θj − j o = (i¯k−1)|a|+1( 1)n(|a|+1)π πaF v = (i¯k−1)|aˆj |( 1)ℓj (a)+n|aˆj |πaˆj F v. − j o − o

As before, we puta ˆ = (a , , a , a + 1, a , , a ) and ℓ (a)= j−1 a . j 1 · · · j−1 j j+1 · · · n j k=1 k To prove the Plancherel formula (9.2.40), remarking that P

(F u)(π) =k ¯n/2ι (i¯k−1)|a¯|( 1)|a¯|(|a¯|−1)/2+τ(a,a¯)πa¯u , o n − a |aX|≤n ¯ ¯ ¯ ¯ ¯ (F u)(π) =k ¯n/2ι ( i¯k−1)|b|( 1)|b|(|b|−1)/2+τ(b,b)πbu′ , o n − − b |Xb|≤n we have

(F u, F u′) =k ¯n dπdπ ehπ¯|πi(i¯k−1)|a¯|( 1)|a¯|(|a¯|−1)/2+τ(a,a¯)πa¯u o o − a |a|≤Xn,|b|≤n ZZ ¯ ¯ ¯ ¯ ¯ ( i¯k−1)|b|( 1)|b|(|b|−1)/2+τ(b,b)πbu′ × − − b n−2|a¯| = ¯k uava. |aX|≤n This implies (9.2.40) fork ¯ = 1. Especially in case n = 2, a = 1, above holds for anyk ¯. Other | | equalities are proved by the analogous fashion so omitted.  184 9. MISCELLANEOUS

Example 9.2.1 (n = 2). For u(θ)= u +θ θ u and v(π)= π v +π v with u , u , v , v C, 0 1 2 1 1 1 2 2 0 1 1 2 ∈ we have

−i¯k−1hθ|πi −2 (Fou)(π) =k ¯ dθ e u(θ) =k ¯(u1 +k ¯ π1π2u0), 0|2 ZR i¯k−1hθ|πi −1 −1 (F¯ov)(θ) =k ¯ dπ e v(π) =k ¯( i¯k θ2v1 + i¯k θ1v2), 0|2 − ZR i¯k−1hθ|πi −2 F¯o(Fou)(θ) =k ¯ dπ e [¯k(u1 +k ¯ π1π2u0)] = u0 + θ1θ2u1 = u(θ), 0|2 ZR −i¯k−1hθ|πi −1 −1 Fo(F¯ov)(π) =k ¯ dθ e [¯k( i¯k θ2v1 + i¯k θ1v2)] = π1v1 + π2v2 = v(π). 0|2 − ZR Therefore, we get

′ hπ¯|πi ′ (Fou, Fou )= dπdπ e Fou(π)Fou (π) Z ∂ ∂ ∂ ∂ = [ehπ¯|πi(¯ku ¯k−1u π¯ π¯ )(¯ku +k ¯−1π π u ) ∂π ∂π ∂π¯ ∂π¯ 1 − 0 1 2 1 1 2 0 2 1 1 2 π=¯π=0 2 −2 =k ¯ u1u1 +k ¯ u0u0,

′ ′ ′ which implies that the Plancherel formula (Fou, Fou ) = (u, u ) for above u, u holds only when ¯k = 1. Analogously but for any ¯k = 0, 6 ′ hπ¯|πi ′ (Fov, Fov )= dπdπ e Fov(π)Fov (π) Z ∂ ∂ ∂ ∂ = [ehπ¯|πi(v π¯ v π¯ )(π v′ π v′ )] ∂π ∂π ∂π¯ ∂π¯ 1 1 − 2 2 1 1 − 2 2 2 1 1 2 π=¯π=0 ′ ′ ′ = v1v1 + v2v2 = (v, v ). //
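Encoding the even element $u(\theta)=u_0+\theta_1\theta_2u_1$ of Example 9.2.1 by its pair of coefficients, $F_o$ and $\bar F_o$ become two-line maps, and the inversion formula together with the failure of Plancherel for $\bar k\neq 1$ can be checked mechanically. The sketch below is our own, with $\bar k$ a nonzero real scalar.

```python
# even part of P_2: u(theta) = u0 + theta1 theta2 u1, encoded as the pair (u0, u1)
kbar = 2.0  # any nonzero scalar

def Fo(u0, u1):
    # from Example 9.2.1: (F_o u)(pi) = kbar*u1 + kbar^{-1} pi1 pi2 u0
    return (kbar * u1, u0 / kbar)

def Fo_bar(w0, w1):
    # the inverse transform acts by the same rule on coefficient pairs
    return (kbar * w1, w0 / kbar)

def inner(u, w):
    # (u, w) = conj(u0) w0 + conj(u1) w1 on coefficients, cf. (9.2.8)
    return u[0].conjugate() * w[0] + u[1].conjugate() * w[1]

u, w = (2.0 + 1.0j, -0.5j), (1.0 + 0.0j, 2.0 - 1.0j)

# inversion F_o_bar(F_o u) = u holds for every kbar != 0
r = Fo_bar(*Fo(*u))
assert abs(r[0] - u[0]) < 1e-12 and abs(r[1] - u[1]) < 1e-12

# Plancherel fails for kbar != 1: the coefficients are scaled by kbar and 1/kbar
assert abs(inner(Fo(*u), Fo(*w))
           - (kbar**2 * u[1].conjugate() * w[1]
              + u[0].conjugate() * w[0] / kbar**2)) < 1e-12

# ... and holds at kbar = 1, as in Example 9.2.1
kbar = 1.0
assert abs(inner(Fo(*u), Fo(*w)) - inner(u, w)) < 1e-12
```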

Example 9.2.2 (n = 3). Let u(θ)= u0 + θ1θ2u1 + θ2θ3u2 + θ1θ3u3, v(θ)= θ1v1 + θ2v2 + θ3v3 + θ θ θ v with u , u , u , u , v , v , v , v C, we have the following inversion formula: 1 2 3 4 0 1 2 3 1 2 3 4 ∈ 3/2 −i¯k−1hθ|πi (Fou)(π) =k ¯ ι3 dθ e u(θ) 0|3 ZR =k ¯3/2ι [i¯k−3π π π u i¯k−1π u i¯k−1π u + i¯k−1π u ], 3 3 2 1 0 − 3 1 − 1 2 2 3 3/2 i¯k−1hθ|πi F¯o(Fou)(θ) =k ¯ ι3 dπ e (Fou)(π) 0|3 ZR 3 2 i¯k−1hθ|πi −3 −1 −1 −1 =k ¯ ι3 dπ e [i¯k π3π2π1u0 i¯k π3u1 i¯k π1u2 + i¯k π2u3] 0|3 − − ZR =k ¯3ι2[ i¯k−3u + i¯k−3θ θ u + i¯k−3θ θ u + i¯k−3θ θ u ] 3 − 0 2 1 1 3 2 2 3 1 3 = u0 + θ1θ2u1 + θ2θ3u2 + θ1θ3u3.

Since (F u)(¯π)= (F u)(¯π) =k ¯3/2ι [ i¯k−3u π¯ π¯ π¯ + i¯k−1u π¯ + i¯k−1u π¯ i¯k−1u π¯ ], o o 3 − 0 1 2 3 1 3 2 1 − 3 2 (F u′)(π) =k ¯3/2ι [i¯k−3π π π u′ i¯k−1π u′ i¯k−1π u′ + i¯k−1π u′ ], o 3 3 2 1 0 − 3 1 − 1 2 2 3 and ′ 3 −6 ′ −2 ′ (Fou)(¯π)(Fou )(π) =k ¯ [¯k u0π¯1π¯2π¯3π3π2π1u0 +k ¯ u1π¯3π3u1 +k ¯−2u π¯ π u′ +k ¯−2u π¯ π u′ ]+ , 2 1 1 2 3 2 2 3 · · · 9.2. FUNCTION SPACES AND FOURIER TRANSFORMATIONS 185 where the term ( ) vanishes after integration w.r.t. dπdπ, we get · · · ′ hπ¯|πi ′ (Fou, Fou )= dπdπe (Fou)(¯π)(Fou )(π) 0|3 0|3 ZZR ×R −3 ′ ′ ′ ′ 3−2|a¯| ′ =k ¯ u0u0 +k ¯(u1u1 + u2u2 + u3u3)= ¯k uaua. |aX|=0,2 Therefore, the Plancherel formula holds for ¯k = 1.

9.2.4.3. Fourier transformations (mixed case). Putting

c = (2π~)−m/2¯kn/2ι and X Ξ = ~−1 x ξ +k ¯−1 θ π R , m,n n h | i h | i h | i∈ ev for any u(X)= θau (x), v(Ξ) = πbv (ξ) (Rm|n : C), we define a a b b ∈ SSS P P −ihX|Ξi ( u)(ξ,π)= cm,n dX e u(X) F m|n ZR −m/2 n/2 −i~−1hx|ξi−i¯k−1hθ|πi a = (2π~) ¯k ιn dxdθ e θ ua(x) Rm|0×R0|n ZZ x θ a (9.2.42) X −m/2 −i~−1hx|ξi n/2 −i¯k−1hθ|πi a = (2π~) dx e ¯k ιn dθ e θ ua(x) m|0 0|n Rx R |aX|≤n Z  Z θ  a = [(Foθ )(π)](Feua)(ξ). |aX|≤n

(9.2.43)
\[
(\bar{\mathcal{F}}v)(x,\theta) = c_{m,n}\int_{\mathbb{R}^{m|n}} d\Xi\, e^{i\langle X|\Xi\rangle} v(\Xi)
= (2\pi\hbar)^{-m/2}\bar k^{n/2}\iota_n \iint_{\mathbb{R}^{m|0}_\xi\times\mathbb{R}^{0|n}_\pi} d\xi d\pi\, e^{i\hbar^{-1}\langle x|\xi\rangle + i\bar k^{-1}\langle\theta|\pi\rangle}\sum_b \pi^b v_b(\xi)
\]
\[
= \sum_{|b|\le n}(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^{m|0}_\xi} d\xi\, e^{i\hbar^{-1}\langle x|\xi\rangle}\Big(\bar k^{n/2}\iota_n\int_{\mathbb{R}^{0|n}_\pi} d\pi\, e^{i\bar k^{-1}\langle\theta|\pi\rangle}\pi^b\Big)v_b(\xi)
= \sum_{|b|\le n} [(\bar F_o\pi^b)(\theta)](\bar F_e v_b)(x).
\]

Proposition 9.2.4. For any $u, v \in \mathscr{S}_{SS}(\mathbb{R}^{m|n} : \mathbb{C})$,
(9.2.44)
\[
(\mathcal{F}(D_X^a u))(\Xi) = (i\hbar^{-1})^{|\alpha|}(i\bar k^{-1})^{|a|}(-1)^{n|a|}\,\Xi^a(\mathcal{F}u)(\Xi), \qquad
(\mathcal{F}(X^a u))(\Xi) = (i\hbar)^{|\alpha|}(i\bar k)^{|a|}(-1)^{n|a|}\,D_\Xi^a(\mathcal{F}u)(\Xi).
\]
(9.2.45)
\[
(\mathcal{F}(e^{i\langle X|\Xi'\rangle}u))(\Xi) = (\mathcal{F}u)(\Xi - \Xi'), \qquad
(\mathcal{F}(u(X - X')))(\Xi) = e^{-i\langle\Xi|X'\rangle}(\mathcal{F}u)(\Xi).
\]
(9.2.46) $(\mathcal{F}u)(t\xi, s\pi) = |t|^{-m}s^{n}(\mathcal{F}u)(t^{-1}\xi, s^{-1}\pi)$ for $t \in \mathbb{R}^\times$, $s \in \mathbb{C}^\times$.
(9.2.47) $\bar{\mathcal{F}}\mathcal{F}u = u$ and $\mathcal{F}\bar{\mathcal{F}}v = v$.
(9.2.48) $(\mathcal{F}u, \mathcal{F}v) = (u, v)$ and $\|\mathcal{F}u\| = \|u\|$ for $\bar k = 1$.
If we define $\delta(X) = \delta(x)\delta(\theta)$, then
(9.2.49) $(\mathcal{F}\delta)(\Xi) = (F_e\delta)(\xi)(F_o\delta)(\pi) = c_{m,n}$.

$\mathcal{F} : \mathscr{S}_{SS}(\mathbb{R}^{m|n} : \mathbb{C}) \to \mathscr{S}_{SS}(\mathbb{R}^{m|n} : \mathbb{C})$ gives a continuous linear mapping satisfying
(9.2.50) $|||\mathrm{proj}_I((\partial_\pi^{\bar a}\mathcal{F}u)(\Xi_B))|||_k \le C_{m,n}|||\mathrm{proj}_I((\partial_\theta^{a}u)(X_B))|||_k$ for each $I \in \mathcal{I}$.

Proof. Combining above results, we have readily these statements. 

Remark 9.2.9. Since, by the formal definition of the $\delta$-function, we have
\[
\int_{\mathbb{R}^m} dp\, \overline{\hat u(p)}\hat v(p)
= (2\pi\hbar)^{-m}\int_{\mathbb{R}^m} dp\,\Big[\int_{\mathbb{R}^m} dq\, e^{i\hbar^{-1}qp}\bar u(q)\Big]\Big[\int_{\mathbb{R}^m} dq'\, e^{-i\hbar^{-1}q'p}v(q')\Big]
= \iint dqdq'\,\Big[(2\pi\hbar)^{-m}\int_{\mathbb{R}^m} dp\, e^{-i\hbar^{-1}\langle q'-q|p\rangle}\Big]\bar u(q)v(q')
= \int_{\mathbb{R}^m} dq\, \bar u(q)v(q),
\]
and, for $A = (A_{jk})$, putting
\[
\langle p, Ap\rangle = \sum_{j,k=1}^m p_j A_{jk} p_k, \qquad
\langle D_q, AD_q\rangle = \sum_{j,k=1}^m (-i\hbar\partial_{q_j})A_{jk}(-i\hbar\partial_{q_k}) \quad\text{with } D_{q_j} = -i\hbar\partial_{q_j},
\]
we have
\[
(2\pi\hbar)^{-m/2}\int_{\mathbb{R}^m} dp\, e^{i\hbar^{-1}\langle q|p\rangle}\langle p, Ap\rangle^\ell\,\hat u(p) = \langle D_q, AD_q\rangle^\ell u(q).
\]
Therefore, we want to ask whether the following claim holds or not:

Claim 9.2.1. Let $u, v \in \mathscr{S}_{SS}(\mathbb{R}^{m|n} : \mathbb{C})$. Then, we have
\[
\int_{\mathbb{R}^{m|n}} dX\, \bar u(X)v(X) = \int_{\mathbb{R}^{m|n}} d\Xi\, (\mathcal{F}\bar u)(\Xi)(\mathcal{F}v)(\Xi).
\]

But Claim 9.2.1 doesn't hold in general. For example, take $u(\theta) = u_0 + \theta_1\theta_2 u_1$ and $u'(\theta) = u'_0 + \theta_1\theta_2 u'_1$; then

\[
\int_{\mathbb{R}^{0|2}} d\theta\, \overline{(u_0 + \theta_1\theta_2 u_1)}(u'_0 + \theta_1\theta_2 u'_1) = -\int_{\mathbb{R}^{0|2}} d\pi\, \hat{\bar u}(\pi)\hat u'(\pi).
\]
In fact,
\[
\int_{\mathbb{R}^{0|2}} d\theta\, \overline{(u_0 + \theta_1\theta_2 u_1)}(u'_0 + \theta_1\theta_2 u'_1) = \bar u_0 u'_1 + \bar u_1 u'_0,
\]
\[
\overline{u(\cdot)}(\bar\theta) = \overline{u_0 + \theta_1\theta_2 u_1} = \bar u_0 + \bar u_1\overline{\theta_1\theta_2} = \bar u_0 - \bar\theta_1\bar\theta_2\bar u_1 = \bar u(\bar\theta),
\]
\[
\bar k\int_{\mathbb{R}^{0|2}} d\bar\theta\, e^{-i\bar k^{-1}\langle\bar\theta|\bar\pi\rangle}\bar u(\bar\theta) = \bar k^{-1}\bar\pi_1\bar\pi_2\bar u_0 - \bar k\bar u_1,
\qquad
\hat u(\bar\pi) = -\bar k^{-1}\pi_1\pi_2 u_0 - \bar k u_1,
\]
\[
\int_{\mathbb{R}^{0|2}} d\pi\, (-\bar k^{-1}\pi_1\pi_2\bar u_0 - \bar k\bar u_1)(\bar k^{-1}\pi_1\pi_2 u'_0 + \bar k u'_1) = -\bar u_0 u'_1 - \bar u_1 u'_0. \quad //
\]
But, for $v(\theta) = \theta_1 v_1 + \theta_2 v_2$ and $v'(\theta) = \theta_1 v'_1 + \theta_2 v'_2$, Claim 9.2.1 does hold:

\[
\int_{\mathbb{R}^{0|2}} d\theta\, \overline{(\theta_1 v_1 + \theta_2 v_2)}(\theta_1 v'_1 + \theta_2 v'_2) = \int_{\mathbb{R}^{0|2}} d\pi\, \overline{\hat v(\cdot)}(\pi)\hat v'(\pi).
\]
In fact,
\[
\int_{\mathbb{R}^{0|2}} d\theta\, \overline{(\theta_1 v_1 + \theta_2 v_2)}(\theta_1 v'_1 + \theta_2 v'_2)
= ((\bar v_1)_{\mathrm{ev}} - (\bar v_1)_{\mathrm{od}})v'_2 - ((\bar v_2)_{\mathrm{ev}} - (\bar v_2)_{\mathrm{od}})v'_1,
\]
\[
\overline{\theta_1 v_1 + \theta_2 v_2} = \bar v_1\bar\theta_1 + \bar v_2\bar\theta_2,
\]
\[
\bar k\int_{\mathbb{R}^{0|2}} d\bar\theta\, e^{-i\bar k^{-1}\langle\bar\theta|\bar\pi\rangle}(\bar v_1\bar\theta_1 + \bar v_2\bar\theta_2)
= -i\bar\pi_2((\bar v_1)_{\mathrm{ev}} - (\bar v_1)_{\mathrm{od}}) + i\bar\pi_1((\bar v_2)_{\mathrm{ev}} - (\bar v_2)_{\mathrm{od}}),
\]
\[
\overline{\hat v(\cdot)}(\pi) = i((\bar v_1)_{\mathrm{ev}} - (\bar v_1)_{\mathrm{od}})\pi_2 - i((\bar v_2)_{\mathrm{ev}} - (\bar v_2)_{\mathrm{od}})\pi_1,
\]
\[
\int_{\mathbb{R}^{0|2}} d\pi\, \big[i((\bar v_1)_{\mathrm{ev}} - (\bar v_1)_{\mathrm{od}})\pi_2 - i((\bar v_2)_{\mathrm{ev}} - (\bar v_2)_{\mathrm{od}})\pi_1\big](-i\pi_2 v'_1 + i\pi_1 v'_2)
= ((\bar v_1)_{\mathrm{ev}} - (\bar v_1)_{\mathrm{od}})v'_2 - ((\bar v_2)_{\mathrm{ev}} - (\bar v_2)_{\mathrm{od}})v'_1. \quad //
\]

9.3. Qi's example of weakly hyperbolic equation

In 1958, M.-Y. Qi considered the following IVP:
(9.3.1) $v_{tt} - L(t,\partial_q)v = 0$ with $L(t) = L(t,\partial_q) = t^2\partial_q^2 + (4k+1)\partial_q$ and $v(0,q) = \varphi(q)$, $v_t(0,q) = 0$.
In Dreher and Witt [39], the following claim is cited from Qi [110]:

Claim 9.3.1. For suitably chosen $c_{jk} \neq 0$,
\[
v(t,q) = \sum_{j=0}^k c_{jk}\, t^{2j}\varphi^{(j)}\Big(q + \frac{t^2}{2}\Big)
\]
gives the solution of (9.3.1).
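Claim 9.3.1 can be tested symbolically against the explicit coefficients $2^{2\ell}k!/((2\ell)!(k-\ell)!)$ of Theorem 9.3.1 below. A sketch using sympy; since (9.3.1) is linear with coefficients independent of $q$, it suffices to check it on exponentials $\varphi(q) = e^{aq}$ with symbolic $a$:

```python
import sympy as sp

t, q, a = sp.symbols('t q a')

def qi_solution(k):
    # claimed solution: sum_l 4^l k!/((2l)!(k-l)!) t^{2l} phi^{(l)}(q + t^2/2),
    # evaluated on phi(q) = exp(a*q), so phi^{(l)}(s) = a**l * exp(a*s)
    s = q + t**2 / 2
    return sum(4**l * sp.factorial(k) / (sp.factorial(2*l) * sp.factorial(k - l))
               * t**(2*l) * a**l * sp.exp(a*s) for l in range(k + 1))

def residual(k):
    # Qi's equation: v_tt - t^2 v_qq - (4k+1) v_q = 0
    v = qi_solution(k)
    return sp.simplify(v.diff(t, 2) - t**2 * v.diff(q, 2) - (4*k + 1) * v.diff(q))

assert residual(1) == 0 and residual(2) == 0
```

The initial conditions are also immediate: at $t = 0$ only the $\ell = 0$ term survives, giving $v(0,q) = \varphi(q)$, and every term of $v_t$ carries a factor of $t$.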

Though Qi uses knowledge of the Euler-Poisson equation and the Riemann-Liouville fractional integral, we generalize the method of characteristics to a system of PDOs using superanalysis, and readily obtain

Theorem 9.3.1.
\[
v(t,q) = (2\pi)^{-1/2}\int dp\, e^{iqp}e^{it^2p/2}\sum_{\ell=0}^k \frac{2^{2\ell}k!}{(2\ell)!(k-\ell)!}\, t^{2\ell}(ip)^\ell\hat\varphi(p)
= \sum_{\ell=0}^k \frac{2^{2\ell}k!}{(2\ell)!(k-\ell)!}\, t^{2\ell}\varphi^{(\ell)}\Big(q + \frac{t^2}{2}\Big).
\]

9.3.1. A systemization and superspace setting. Putting $w = v_t - tv_q$, we have
\[
\partial_t w = v_{tt} - v_q - tv_{tq} = t^2 v_{qq} + (4k+1)v_q - v_q - tv_{tq} = -tw_q + 4kv_q,
\]
then
(9.3.2) $i\partial_t U = H(t,\partial_q)U$, $U = U(t,q) = \begin{pmatrix} v(t,q) \\ w(t,q) \end{pmatrix}$ and $U(0,q) = U(q) = \begin{pmatrix} v(q) \\ w(q) \end{pmatrix}$,
with
\[
H(t,\partial_q) = i\begin{pmatrix} t\partial_q & 1 \\ 4k\partial_q & -t\partial_q \end{pmatrix}.
\]
Preparing odd variables $\theta_1, \theta_2$, we define an operator
(9.3.3)
\[
\mathcal{H}(t,\partial_x,\theta,\partial_\theta) = it\partial_x\Big(1 - \theta_1\frac{\partial}{\partial\theta_1} - \theta_2\frac{\partial}{\partial\theta_2}\Big) + 4ik\partial_x\,\theta_1\theta_2 - i\frac{\partial^2}{\partial\theta_1\partial\theta_2},
\]
which acts on $u(t,x,\theta) = v(t,x) + w(t,x)\theta_1\theta_2$. Then, we reformulate (9.3.2) as follows:
(9.3.4) $i\partial_t u(t,x,\theta) = \mathcal{H}(t,\partial_x,\theta,\partial_\theta)u(t,x,\theta)$ with $u(0,x,\theta) = v(x) + w(x)\theta_1\theta_2$.
Introducing the Fourier transformation w.r.t. odd variables, we have a supersmooth function
(9.3.5) $\mathcal{H}(t,\xi,\theta,\pi) = it\xi\langle\theta|\pi\rangle - 4k\xi\theta_1\theta_2 + i\pi_1\pi_2$,
which is the Hamilton function corresponding to $\mathcal{H}(t,\partial_x,\theta,\partial_\theta)$.

9.3.2. A solution of the Hamilton-Jacobi equation by direct method. We solve the following Hamilton-Jacobi equation directly:

(9.3.6)
\[
\mathcal{S}_t + it\mathcal{S}_x\langle\theta|\mathcal{S}_\theta\rangle - 4k\mathcal{S}_x\theta_1\theta_2 + i\mathcal{S}_{\theta_1}\mathcal{S}_{\theta_2} = 0, \qquad
\mathcal{S}(0,x,\xi,\theta,\pi) = \langle x|\xi\rangle + \langle\theta|\pi\rangle.
\]
Decomposing $\mathcal{S}(t,x,\xi,\theta,\pi)$ as
\[
\mathcal{S}(t,x,\xi,\theta,\pi) = \mathcal{S}(t,x,\xi,0,0) + X(t,x,\xi)\theta_1\theta_2 + Y(t,x,\xi)\theta_1\pi_1 + \tilde Y(t,x,\xi)\theta_2\pi_2
+ V(t,x,\xi)\theta_1\pi_2 + \tilde V(t,x,\xi)\theta_2\pi_1 + Z(t,x,\xi)\pi_1\pi_2 + W(t,x,\xi)\theta_1\theta_2\pi_1\pi_2,
\]
we calculate $X, Y, \tilde Y, V, \tilde V, Z, W$, which are shown to be independent of $x$.

(0) Taking $\theta = 0$, $\pi = 0$ in (9.3.6), we have $\mathcal{S}_t(t,x,\xi,0,0) = 0$ with $\mathcal{S}(0,x,\xi,0,0) = \langle x|\xi\rangle$, which gives $\mathcal{S}(t,x,\xi,0,0) = \langle x|\xi\rangle$.

(1) Differentiating (9.3.6) w.r.t. $\theta_1$ and $\theta_2$ and restricting to $\theta = \pi = 0$, we have
(9.3.7) $X_t = 4k\xi - 2it\xi X - iX^2$ with $X(0) = 0$, where $X = \partial_{\theta_2}\partial_{\theta_1}\mathcal{S}(t,x,\xi,\theta,\pi)\big|_{\theta=\pi=0}$.
Moreover, differentiating (9.3.7) once more w.r.t. $x$ and putting $\tilde X = X_x$, we get
\[
\tilde X_t + 2it\xi\tilde X + 2iX\tilde X = 0 \quad\text{with } \tilde X(0) = 0.
\]
Therefore $\tilde X(t) = 0$, which implies that $X(t,x,\xi)$ is independent of $x$.

To solve (9.3.7), we may associate the 2nd-order ODE
(9.3.8) $\ddot\varphi + 2it\xi\dot\varphi - 4ik\xi\varphi = 0$ with $\dot\varphi(0) = 0$,
from which we have a solution $X(t,\xi) = -i\dot\varphi(t)/\varphi(t)$ of (9.3.7).
For the sake of notational simplicity, we rewrite (9.3.8) as
(9.3.9) $\ddot\varphi + \alpha t\dot\varphi + \beta\varphi = 0$ with $\dot\varphi(0) = 0$, $\alpha = 2i\xi$, $\beta = -4ik\xi$.
This equation is solvable by a polynomial in $t$: putting $\varphi(t) = \sum_{j=0}^\infty c_j t^j$, we have
\[
\dot\varphi = \sum_{j=1}^\infty jc_j t^{j-1}, \qquad \ddot\varphi = \sum_{j=2}^\infty j(j-1)c_j t^{j-2}.
\]
Then the coefficients of $t^j$ are given by
\[
t^0:\ 2c_2 + \beta c_0 = 0 \implies c_2 = -\frac{\beta}{2}c_0,
\]
\[
t^1:\ 3\cdot 2\,c_3 + \alpha c_1 + \beta c_1 = 0 \implies c_3 = -\frac{1}{3\cdot 2}(\alpha + \beta)c_1,
\]
\[
t^2:\ 4\cdot 3\,c_4 + 2\alpha c_2 + \beta c_2 = 0 \implies c_4 = -\frac{1}{4\cdot 3}(2\alpha + \beta)c_2 = (-1)^2\frac{1}{4!}(2\alpha + \beta)\beta\,c_0,
\]
\[
t^3:\ 5\cdot 4\,c_5 + 3\alpha c_3 + \beta c_3 = 0, \quad\cdots
\]
\[
t^{2\ell-2}:\ 2\ell(2\ell-1)c_{2\ell} + ((2\ell-2)\alpha + \beta)c_{2\ell-2} = 0,
\]
\[
t^{2\ell-1}:\ (2\ell+1)2\ell\,c_{2\ell+1} + ((2\ell-1)\alpha + \beta)c_{2\ell-1} = 0, \quad\text{etc.}
\]
Since $\dot\varphi(0) = 0$ implies $c_1 = 0$, we have $c_{2\ell+1} = 0$ for any $\ell$. Moreover, putting $c_0 = 1$, i.e. $\varphi(0) = 1$, we have
\[
c_{2\ell} = \frac{(-1)^\ell(2\alpha)^\ell}{(2\ell)!}\Big(\frac{\beta}{2\alpha}\Big)_\ell \quad\text{where } (x)_\ell = x(x+1)\cdots(x+\ell-1).
\]
In our case, we have $\beta(2\alpha)^{-1} = -k$ and $(-k)_\ell = (-1)^\ell\frac{k!}{(k-\ell)!}$; therefore
\[
\varphi(t) = \sum_{\ell=0}^k c_{2\ell}t^{2\ell} = \sum_{\ell=0}^k \frac{(-1)^\ell(4i\xi)^\ell(-k)_\ell}{(2\ell)!}\,t^{2\ell}
= \sum_{\ell=0}^k \frac{4^\ell k!}{(2\ell)!(k-\ell)!}(i\xi)^\ell t^{2\ell}.
\]
(2) $Y = Y(t,\xi) = \partial_{\pi_1}\partial_{\theta_1}\mathcal{S}(t,x,\xi,0,0)$ satisfies

(9.3.10) Yt + itξY + iXY = 0 with Y (0) = 1.

From this, we have
\[
\frac{d}{dt}\log(Y\varphi) = -it\xi, \qquad Y(t) = \frac{e^{-it^2\xi/2}}{\varphi(t)}.
\]
The same relation holds for $\tilde Y = \partial_{\pi_2}\partial_{\theta_2}\mathcal{S}(t,x,\xi,0,0)$.
(3) $V = \partial_{\pi_2}\partial_{\theta_1}\mathcal{S}(t,x,\xi,0,0)$ satisfies

Vt + itξV + XV = 0 with V (0) = 0.

From this, $V(t) = 0$. Analogously, $\tilde V = \partial_{\pi_1}\partial_{\theta_2}\mathcal{S}(t,x,\xi,0,0) = 0$.
(4) For $Z = Z(t,\xi) = \partial_{\pi_2}\partial_{\pi_1}\mathcal{S}(t,x,\xi,0,0)$, we have
(9.3.11) $Z_t + iY^2 = 0$ with $Z(0) = 0$.

Therefore,
\[
Z(t) = -i\int_0^t ds\, Y^2(s) = -i\int_0^t ds\, \frac{e^{-is^2\xi}}{\varphi^2(s)}.
\]
(5) Putting $W = W(t,\xi) = \partial_{\pi_2}\partial_{\pi_1}\partial_{\theta_2}\partial_{\theta_1}\mathcal{S}(t,x,\xi,0,0)$, and remarking that $Z_x = \partial_x\partial_{\pi_2}\partial_{\pi_1}\mathcal{S}(t,x,\xi,0,0) = 0$, we have

(9.3.12) Wt + 2itξW + 2iXW = 0 with W (0) = 0.

Therefore, W = 0.
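The power-series solution $\varphi(t)$ of (9.3.9) obtained in step (1) is easy to confirm with a computer algebra system. A sketch with sympy, checking that the closed form solves (9.3.8) and terminates at degree $2k$ (here for $k = 3$):

```python
import sympy as sp

t, xi = sp.symbols('t xi')
k = 3
# phi(t) = sum_{l=0}^{k} 4^l k! / ((2l)! (k-l)!) * (i xi)^l * t^{2l}
phi = sum(4**l * sp.factorial(k) / (sp.factorial(2*l) * sp.factorial(k - l))
          * (sp.I * xi)**l * t**(2*l) for l in range(k + 1))
# the associated ODE (9.3.8): phi'' + 2 i t xi phi' - 4 i k xi phi = 0
ode = sp.expand(phi.diff(t, 2) + 2*sp.I*t*xi*phi.diff(t) - 4*sp.I*k*xi*phi)
assert ode == 0
assert phi.subs(t, 0) == 1 and phi.diff(t).subs(t, 0) == 0
assert sp.degree(phi, t) == 2*k          # the series terminates
```

Termination at $\ell = k$ is visible in the recursion: the ratio $c_{2\ell+2}/c_{2\ell}$ carries the factor $(k-\ell)$, which vanishes at $\ell = k$.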

Now, we define

\[
\mathcal{D}(t,\xi,\theta,\pi) = \operatorname{sdet}\begin{pmatrix}
\mathcal{S}_{x\xi} & \mathcal{S}_{x\pi_1} & \mathcal{S}_{x\pi_2} \\
\mathcal{S}_{\theta_1\xi} & \mathcal{S}_{\theta_1\pi_1} & \mathcal{S}_{\theta_1\pi_2} \\
\mathcal{S}_{\theta_2\xi} & \mathcal{S}_{\theta_2\pi_1} & \mathcal{S}_{\theta_2\pi_2}
\end{pmatrix}
= Y^{-2}(t,\xi) = \mathcal{A}^2(t,\xi,\theta,\pi).
\]

9.3.3. Quantization. Using the above-defined $\mathcal{S}$ and $\mathcal{A}$, we construct a function
\[
u(t,x,\theta) = (2\pi)^{-1/2}\int d\xi d\pi\, \mathcal{A}(t,x,\xi,\theta,\pi)e^{i\mathcal{S}(t,x,\xi,\theta,\pi)}\hat u(\xi,\pi).
\]
It is shown that this gives a solution of (9.3.4).

Since

\[
\int d\pi\, e^{iY\langle\theta|\pi\rangle + iZ\pi_1\pi_2}(\hat v(\xi)\pi_1\pi_2 + \hat w(\xi)) = \hat v(\xi) + (Y^2\theta_1\theta_2 + iZ)\hat w(\xi),
\]
we have

\[
(2\pi)^{-1/2}\int d\xi d\pi\, \mathcal{A}e^{i\mathcal{S}}(\hat v(\xi)\pi_1\pi_2 + \hat w(\xi))
= (2\pi)^{-1/2}\int d\xi\, \mathcal{A}e^{i\langle x|\xi\rangle + iX\theta_1\theta_2}\big[\hat v(\xi) + (Y^2\theta_1\theta_2 + iZ)\hat w(\xi)\big]
\]
\[
= (2\pi)^{-1/2}\int d\xi\, Y^{-1}e^{i\langle x|\xi\rangle}(1 + iX\theta_1\theta_2)\big(\hat v(\xi) + iZ\hat w(\xi) + Y^2\theta_1\theta_2\hat w(\xi)\big)
\]
\[
= (2\pi)^{-1/2}\int d\xi\, e^{i\langle x|\xi\rangle}Y^{-1}\big(\hat v(\xi) + iZ\hat w(\xi)\big)
+ (2\pi)^{-1/2}\int d\xi\, e^{i\langle x|\xi\rangle}Y^{-1}\big[iX(\hat v(\xi) + iZ\hat w(\xi)) + Y^2\hat w(\xi)\big]\theta_1\theta_2.
\]
Since $w = 0$, we have
\[
v(t,q) = (2\pi)^{-1/2}\int d\xi\, e^{i\langle x|\xi\rangle}e^{it^2\xi/2}\Big[\sum_{\ell=0}^k \frac{2^{2\ell}k!}{(2\ell)!(k-\ell)!}\,t^{2\ell}(i\xi)^\ell\hat\varphi(\xi)\Big]\Big|_{x=q}.
\]

9.4. An example of a system version of Egorov’s theorem – Bernardi’s question

It is well known that Egorov's theorem, concerning the conjugation of a ΨDO(=pseudo-differential operator) with FIOs(=Fourier integral operators), is a very powerful tool for the study of ΨDOs.

Using superanalysis, we extend that theorem to $2\times 2$ systems of PDOs(=partial differential operators) or ΨDOs. As a by-product, we give a new geometrical interpretation of the similarity transformation $e^{iH}Pe^{-iH}$ for any $2\times 2$ matrices $P$ and $H = H^*$.

9.4.1. Bernardi’s question. Remarking that

\[
(-\alpha\partial_x^2 + \beta x^2)e^{-i\gamma x^2/2} = \big[i\alpha\gamma + (\beta + \alpha\gamma^2)x^2\big]e^{-i\gamma x^2/2},
\]
\[
(c(x)\partial_x + \partial_x c(x))e^{-i\gamma x^2/2} = (c'(x) - 2ic(x)\gamma x)e^{-i\gamma x^2/2},
\]
we have
(9.4.1)
\[
\begin{pmatrix} e^{i\gamma x^2/2} & 0 \\ 0 & e^{i\tilde\gamma x^2/2} \end{pmatrix}
\begin{pmatrix} -\alpha\partial_x^2 + \beta x^2 & 2^{-1}(c(x)\partial_x + \partial_x c(x)) \\ 2^{-1}(d(x)\partial_x + \partial_x d(x)) & -\tilde\alpha\partial_x^2 + \tilde\beta x^2 \end{pmatrix}
\begin{pmatrix} e^{-i\gamma x^2/2} & 0 \\ 0 & e^{-i\tilde\gamma x^2/2} \end{pmatrix}
\]
\[
= \begin{pmatrix}
i\alpha\gamma + (\beta + \alpha\gamma^2)x^2 & 2^{-1}(c'(x) - 2i\tilde\gamma xc(x))e^{i(\gamma - \tilde\gamma)x^2/2} \\
2^{-1}(d'(x) - 2i\gamma xd(x))e^{-i(\gamma - \tilde\gamma)x^2/2} & i\tilde\alpha\tilde\gamma + (\tilde\beta + \tilde\alpha\tilde\gamma^2)x^2
\end{pmatrix}.
\]
In February 2001, Bernardi (as chairman of a session where I gave a talk) asked me whether it is possible to explain (9.4.1) using superanalysis. Especially, why do the terms $c'(x)$ and $d'(x)$ appear in the off-diagonal part?
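The two scalar identities at the start of this question are mechanical to verify. A sketch using sympy, with a generic smooth $c(x)$:

```python
import sympy as sp

x, alpha, beta, gamma = sp.symbols('x alpha beta gamma')
c = sp.Function('c')
g = sp.exp(-sp.I * gamma * x**2 / 2)

# (-alpha d^2/dx^2 + beta x^2) e^{-i gamma x^2/2}
#   = [i alpha gamma + (beta + alpha gamma^2) x^2] e^{-i gamma x^2/2}
lhs1 = -alpha * sp.diff(g, x, 2) + beta * x**2 * g
rhs1 = (sp.I * alpha * gamma + (beta + alpha * gamma**2) * x**2) * g
assert sp.simplify(lhs1 - rhs1) == 0

# (c(x) d/dx + d/dx c(x)) e^{-i gamma x^2/2}
#   = (c'(x) - 2 i c(x) gamma x) e^{-i gamma x^2/2}
lhs2 = c(x) * sp.diff(g, x) + sp.diff(c(x) * g, x)
rhs2 = (sp.diff(c(x), x) - 2 * sp.I * c(x) * gamma * x) * g
assert sp.simplify(lhs2 - rhs2) == 0
```

The second identity makes the origin of the $c'(x)$ term visible: it is produced by the Leibniz rule in $\partial_x c(x)$, before any conjugation by the Gaussian factors.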

9.4.2. An answer to Bernardi. We re-interpret (9.4.1) as follows: for $u, v \in C^\infty(\mathbb{R})$,
(9.4.2)
\[
\begin{pmatrix} e^{i\gamma x^2/2} & 0 \\ 0 & e^{i\tilde\gamma x^2/2} \end{pmatrix}
\begin{pmatrix} \alpha D^2 + \beta x^2 & 2^{-1}i(c(x)D + Dc(x)) \\ 2^{-1}i(d(x)D + Dd(x)) & \tilde\alpha D^2 + \tilde\beta x^2 \end{pmatrix}
\begin{pmatrix} e^{-i\gamma x^2/2} & 0 \\ 0 & e^{-i\tilde\gamma x^2/2} \end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix}
\]
\[
= \begin{pmatrix}
(\beta + \alpha\gamma^2)x^2 + i\alpha\gamma + 2i\gamma x\alpha\partial_x - \alpha\partial_x^2 & e^{i(\gamma - \tilde\gamma)x^2/2}(2^{-1}c'(x) + c(x)\partial_x - i\tilde\gamma xc(x)) \\
e^{-i(\gamma - \tilde\gamma)x^2/2}(2^{-1}d'(x) + d(x)\partial_x - i\gamma xd(x)) & (\tilde\beta + \tilde\alpha\tilde\gamma^2)x^2 + i\tilde\alpha\tilde\gamma + 2i\tilde\gamma x\tilde\alpha\partial_x - \tilde\alpha\partial_x^2
\end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix}.
\]

Since (9.4.2) with u = v = 1 gives (9.4.1), we should explain the meaning of (9.4.2) instead of (9.4.1).

Since we have

\[
\big((\beta + \alpha\gamma^2)x^2 + i\alpha\gamma + 2i\gamma x\alpha\partial_x - \alpha\partial_x^2\big)u(x)
= (2\pi)^{-1}\iint_{\mathbb{R}^2} d\xi dy\, e^{i(x-y)\xi}\,a\Big(\frac{x+y}{2},\xi\Big)u(y),
\]
\[
e^{2^{-1}i(\gamma - \tilde\gamma)x^2}\big(c'/2 + c\partial_x - i\tilde\gamma xc\big)u(x)
= (2\pi)^{-1}\iint_{\mathbb{R}^2} d\xi dy\, e^{i(x-y)\xi}\,c\Big(\frac{x+y}{2},\xi\Big)u(y)
\]
with Weyl symbols
\[
a(x,\xi) = \alpha(\xi - \gamma x)^2 + \beta x^2 = (\beta + \alpha\gamma^2)x^2 - 2\alpha\gamma x\xi + \alpha\xi^2, \qquad
c(x,\xi) = ie^{2^{-1}i(\gamma - \tilde\gamma)x^2}c(x)\Big(\xi - \frac{\gamma + \tilde\gamma}{2}x\Big),
\]
we get the Weyl symbol of the right-hand side of (9.4.2), given by
(9.4.3)
\[
\begin{pmatrix}
\alpha(\xi - \gamma x)^2 + \beta x^2 & ie^{i(\gamma - \tilde\gamma)x^2/2}c(x)(\xi - 2^{-1}(\gamma + \tilde\gamma)x) \\
ie^{-i(\gamma - \tilde\gamma)x^2/2}d(x)(\xi - 2^{-1}(\gamma + \tilde\gamma)x) & \tilde\alpha(\xi - \tilde\gamma x)^2 + \tilde\beta x^2
\end{pmatrix}
\]
\[
\sim \frac{\alpha + \tilde\alpha}{2}\xi^2 - (\alpha\gamma + \tilde\alpha\tilde\gamma)x\xi + \frac{(\beta + \alpha\gamma^2) + (\tilde\beta + \tilde\alpha\tilde\gamma^2)}{2}x^2
- i\Big[\frac{\alpha - \tilde\alpha}{2}\xi^2 - (\alpha\gamma - \tilde\alpha\tilde\gamma)x\xi + \frac{(\beta + \alpha\gamma^2) - (\tilde\beta + \tilde\alpha\tilde\gamma^2)}{2}x^2\Big]\langle\theta|\pi\rangle
\]
\[
+ ie^{-i(\gamma - \tilde\gamma)x^2/2}d(x)\Big(\xi - \frac{\gamma + \tilde\gamma}{2}x\Big)\theta_1\theta_2
+ ie^{i(\gamma - \tilde\gamma)x^2/2}c(x)\Big(\xi - \frac{\gamma + \tilde\gamma}{2}x\Big)\pi_1\pi_2.
\]

Superspace interpretation: On the other hand, putting

\[
\sigma(P)(x,\xi) = \begin{pmatrix} \alpha\xi^2 + \beta x^2 & ic(x)\xi \\ id(x)\xi & \tilde\alpha\xi^2 + \tilde\beta x^2 \end{pmatrix}, \qquad
\sigma(H)(x) = \begin{pmatrix} 2^{-1}\gamma x^2 & 0 \\ 0 & 2^{-1}\tilde\gamma x^2 \end{pmatrix},
\]
we have
(9.4.4)
\[
\sigma(\sharp P\flat)(x,\xi,\theta,\pi) = \sigma(\hat{\mathcal{P}})(x,\xi,\theta,\pi) = \mathcal{P}(x,\xi,\theta,\pi)
= \frac{\alpha + \tilde\alpha}{2}\xi^2 + \frac{\beta + \tilde\beta}{2}x^2
- i\Big[\frac{\alpha - \tilde\alpha}{2}\xi^2 + \frac{\beta - \tilde\beta}{2}x^2\Big]\langle\theta|\pi\rangle
+ id(x)\xi\theta_1\theta_2 + ic(x)\xi\pi_1\pi_2,
\]
\[
\sigma(\sharp H\flat)(x,\theta,\pi) = \sigma(\hat{\mathcal{H}})(x,\theta,\pi) = \mathcal{H}(x,\theta,\pi)
= \frac{(\gamma + \tilde\gamma)x^2}{4} - i\frac{(\gamma - \tilde\gamma)x^2}{4}\langle\theta|\pi\rangle.
\]

Therefore, we have
\[
\begin{cases}
\dot x = \mathcal{H}_\xi = 0, \\[0.5ex]
\dot\xi = -\mathcal{H}_x = -\dfrac{\gamma + \tilde\gamma}{2}x + i\dfrac{\gamma - \tilde\gamma}{2}x\langle\theta|\pi\rangle, \\[1ex]
\dot\theta_j = -\mathcal{H}_{\pi_j} = -i\dfrac{\gamma - \tilde\gamma}{4}x^2\theta_j \quad (j = 1, 2), \\[1ex]
\dot\pi_j = -\mathcal{H}_{\theta_j} = i\dfrac{\gamma - \tilde\gamma}{4}x^2\pi_j \quad (j = 1, 2),
\end{cases}
\quad\text{with } (x(0),\xi(0),\theta(0),\pi(0)) = (x,\xi,\theta,\pi),
\]
which yields the Hamilton flow corresponding to $\mathcal{H}$ as $\mathcal{C}(t)(x,\xi,\theta,\pi) = (x(t),\xi(t),\theta(t),\pi(t))$ with
\[
x(t) = x(t,x,\xi,\theta,\pi) = x, \qquad
\xi(t) = \xi(t,x,\xi,\theta,\pi) = \xi - \frac{\gamma + \tilde\gamma}{2}xt + i\frac{\gamma - \tilde\gamma}{2}xt\langle\theta|\pi\rangle,
\]
\[
\theta_j(t) = \theta_j(t,x,\xi,\theta,\pi) = e^{-i(\gamma - \tilde\gamma)tx^2/4}\theta_j, \qquad
\pi_j(t) = \pi_j(t,x,\xi,\theta,\pi) = e^{i(\gamma - \tilde\gamma)tx^2/4}\pi_j \quad (j = 1, 2).
\]

Putting operators
\[
\hat A = -i\partial_x - \frac{\gamma + \tilde\gamma}{2}xt, \qquad
\hat B = \frac{\gamma - \tilde\gamma}{2}xt, \qquad
\hat\sigma_3 = 1 - \theta_1\frac{\partial}{\partial\theta_1} - \theta_2\frac{\partial}{\partial\theta_2}, \qquad
\hat\xi(t) = \hat A - \hat B\hat\sigma_3,
\]
with Weyl symbols
\[
\sigma(\hat A)(x,\xi) = \xi - \frac{\gamma + \tilde\gamma}{2}xt, \qquad
\sigma(\hat B)(x,\xi) = \frac{\gamma - \tilde\gamma}{2}xt, \qquad
\sigma(\hat\sigma_3)(\theta,\pi) = -i\langle\theta|\pi\rangle,
\]
\[
\sigma(\hat\xi(t))(x,\xi,\theta,\pi) = \xi - \frac{\gamma + \tilde\gamma}{2}xt + i\frac{\gamma - \tilde\gamma}{2}xt\langle\theta|\pi\rangle, \qquad
\langle\theta(t)|\pi(t)\rangle = \langle\theta|\pi\rangle,
\]
we have the following:
\[
\sigma(\hat A\hat B + \hat B\hat A)(x,\xi) = (\gamma - \tilde\gamma)xt\Big(\xi - \frac{\gamma + \tilde\gamma}{2}xt\Big), \qquad
\sigma(\hat\sigma_3^2)(\theta,\pi) = 1.
\]

Remark 9.4.1. Let $a$, $b$ and $c$ be non-commutative or commutative "operators". For the monomials $p_2(x,y) = xy$ and $p_3(x,y,z) = xyz$, we define
\[
p_{2,s}(a,b) = \begin{cases} \frac{1}{2!}(ab + ba) & \text{if } [a,b] \neq 0, \\ ab & \text{if } [a,b] = 0, \end{cases}
\]
\[
p_{3,s}(a,b,c) = \begin{cases}
\frac{1}{3!}(abc + acb + bca + bac + cba + cab) & \text{if } a, b, c \text{ are mutually non-commutative}, \\
\frac{1}{2!}(abc + bca) & \text{if } [a,b] \neq 0 \text{ but } [a,c] = 0 \text{ and } [b,c] = 0, \\
abc & \text{if } a, b, c \text{ are mutually commutative}.
\end{cases}
\]
From these, we get
\[
\sigma\big(\hat\xi(t)^2\big)(x,\xi,\theta,\pi) = \xi^2 - (\gamma + \tilde\gamma)xt\xi + \frac{\gamma^2 + \tilde\gamma^2}{2}x^2t^2
+ i\Big(\xi - \frac{\gamma + \tilde\gamma}{2}xt\Big)(\gamma - \tilde\gamma)xt\,\langle\theta|\pi\rangle.
\]
In fact,
\[
\hat\xi(t)^2 = (\hat A - \hat B\hat\sigma_3)(\hat A - \hat B\hat\sigma_3) = \hat A^2 - (\hat A\hat B + \hat B\hat A)\hat\sigma_3 + \hat B^2\hat\sigma_3^2,
\]
with
\[
\hat A^2 + \hat B^2 = -\partial_x^2 + \Big(i\partial_x\frac{\gamma + \tilde\gamma}{2}xt + \frac{\gamma + \tilde\gamma}{2}xt\,i\partial_x\Big) + \frac{\gamma^2 + \tilde\gamma^2}{2}x^2t^2,
\]
and
\[
\sigma(\hat A^2 + \hat B^2) = \xi^2 - (\gamma + \tilde\gamma)xt\xi + \frac{\gamma^2 + \tilde\gamma^2}{2}x^2t^2.
\]

2 Since [ξ(t) , θ(\t) π(t) ] = 0, we have h | i 2 2 σ ξ(t) θ(\t) π(t) (x,ξ,θ,π)= σ ξ(t) θ(\t) π(t) (x,ξ,θ,π) d h | i h | i γ2 +γ ˜2 d  = ξd2 (γ +γ ˜)xtξ + x2t2 θ π − 2 h | i   γ +γ ˜ 1 i ξ xt (γ γ˜)xt + 2θ θ π π . − − 2 − 2 1 2 1 2     Though [ˆσ , θ θ ] =0 and [ˆσ , π[π ] = 0, we have [ξ(t), θ \(t)θ (t)]=0 and [ξ(t), π \(t)π (t)] = 3 1 2 6 3 1 2 6 1 2 1 2 0. Moreover, we get

d d2 d ξ(t) θ \(t)θ (t)(u + u θ θ )= e−i(γ−γ˜)x t/2( i∂ γxt)u θ θ , 1 2 0 1 1 2 − x − 0 1 2 2 ξ(t) π \(t)π (t)(u + u θ θ )= ei(γ−γ˜)x t/2( i∂ γx˜ t)u . d 1 2 0 1 1 2 − x − 1 Therefore, we get d \ \ σ ξ(t) θ1(t)θ2(t) (x,ξ,θ,π)= σ ξ(t) θ1(t)θ2(t) (x,ξ,θ,π)

γ +γ ˜ 2 d  = ξd xt e−i(γ−γ˜)x t/2θ θ , − 2 1 2   \ \ σ ξ(t)π1(t)π2(t) (x,ξ,θ,π)= σ ξ(t) π1(t)π2(t) (x,ξ,θ,π)

γ +γ ˜ 2 d  = ξd xt ei(γ−γ˜)x t/2π π . − 2 1 2   On the other hand, since [d\(x(t)), ξ(t)] =0 but [d\(x(t)), θ \(t)θ (t)]=0 and [ξ(t), θ \(t)θ (t)] = 6 1 2 1 2 0, we have d d 1 p (d\(x(t)), ξ(t), θ \(t)θ (t)) = (d\(x(t)) ξ(t) θ \(t)θ (t)+ ξ(t) θ \(t)θ (t) d\(x(t))), 3,s 1 2 2 1 2 1 2 that is, d d d 1 2 p (d\(x(t)), ξ(t), θ \(t)θ (t))(u +u θ θ )= e−i(γ−γ˜)x t/2 d(x)( i∂ γxt)+( i∂ γxt)d(x) u θ θ . 3,s 1 2 0 1 1 2 2 { − x− − x− } 0 1 2 \ \ Analogous holdsd for p3,s(c(x(t)) ξ(t) π1(t)π2(t)).

From these, we have d (t)(x,ξ,θ,π) P C  α +α ˜ 2 β + β˜ 2 = σ ξ(t) + σ x(t) 2 2 α α˜  2  α α˜ 2 i −dσ ξ(t) θ(\t) π(td) i − σ(x(t) θ(\t) π(t) ) − 2 h | i − 2 h | i \ \ \ \ + iσ d(dx(t)) ξ(t) θ1(t)θ2(t) + iσ c(xd(t)) ξ(t) π1(t)π2(t))(x,ξ,θ,π (9.4.5) α +α ˜ β + β˜ αγ2t2 +α ˜γ˜2t2 = ξ2 + x2d+  x2 (αγt +dα ˜γt˜ )xξ  2 2 2 − α α˜ β β˜ αγ2t2 α˜γ˜2t2 i − ξ2 + − x2 (αγt α˜γt˜ )xξ + − x2 θ π − 2 2 − − 2 h | i   γ +γ ˜ 2 γt +γt ˜ 2 + id(x) ξ xt e−i(γt−γt˜ )x /2θ θ + ic(x) ξ x ei(γt−γt˜ )x /2π π . − 2 1 2 − 2 1 2     This equals to (9.4.3) after replacing γ γt andγ ˜ γt˜ . → → 194 9. MISCELLANEOUS

Therefore, denoting x simply by x, etc, we have proved

\[
\sigma(\sharp e^{itH}Pe^{-itH}\flat)(x,\xi,\theta,\pi) = \sigma(e^{it\hat{\mathcal{H}}}\hat{\mathcal{P}}e^{-it\hat{\mathcal{H}}})(x,\xi,\theta,\pi) = \mathcal{P}\big(\mathcal{C}(t)(x,\xi,\theta,\pi)\big). \quad\square
\]

Remark 9.4.2. In the above, we calculated the product of operators and found its symbol rather directly. In the near future, we need to give a product formula for such operators, analogous to the "bosonic" case.

9.5. Functional Derivative Equations

9.5.1. Liouville equation. I mentioned, at the 7th or 8th lecture, a function with countably infinite independent variables. There, I regard a function with an odd variable $\theta$ as a function with countably infinite independent variables $\{\theta_J \in \mathbb{C}\}$, where $\theta = \sum_{J\in\mathcal{I}}\theta_J\sigma^J$. This resembles considering a functional.

As is well known, a non-linear system of ODEs may be regarded as a linear PDE; we therefore ask what occurs when we have a non-linear PDE on $\mathbb{R}^d$ instead of an ODE.

Typically, the solution of the Hamilton equation relates to the solution of the Liouville equation by the method of characteristics. For $H(q,p) \in C^\infty(\mathbb{R}^{2m} : \mathbb{R})$, the Hamilton equation is written down as
\[
\begin{cases} \dot q_j = H_{p_j}(q(t),p(t)), \\ \dot p_j = -H_{q_j}(q(t),p(t)), \end{cases}
\quad\text{with } \begin{pmatrix} q(0) \\ p(0) \end{pmatrix} = \begin{pmatrix} q \\ p \end{pmatrix},
\]
and the Liouville equation is
\[
\frac{\partial}{\partial t}u(t,q,p) = \{u, H\}(q,p) \quad\text{with } u(0,q,p) = u(q,p).
\]
Here, the Poisson bracket is defined by
\[
\{f, g\} = \sum_{j=1}^m \Big(\frac{\partial f}{\partial q_j}\frac{\partial g}{\partial p_j} - \frac{\partial f}{\partial p_j}\frac{\partial g}{\partial q_j}\Big).
\]
In general, the Hamilton equation is a non-linear ODE and the Liouville equation is a linear PDE. Even when non-linear, a PDE may be regarded, by applying the Galerkin method, as an ODE with infinitely many components in a certain function space.

If we take special initial data in the Liouville equation, for example $u(q,p)$ a measure $\delta_q(q)dq$ or $\delta_p(p)dp$ respectively, we get the solution of the Hamilton equation. Conversely, putting $u(t,q,p) = u(q(t,q,p),p(t,q,p))$ using the solution of the Hamilton equation, we get the solution $u(t,q,p)dqdp$.²
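The correspondence just described is easy to check on a concrete example. A sketch with sympy for the harmonic oscillator $H = (q^2+p^2)/2$, whose Hamilton flow is a rotation of phase space; transporting an observable along the flow solves the Liouville equation:

```python
import sympy as sp

t, q, p = sp.symbols('t q p')
H = (q**2 + p**2) / 2
# Hamilton flow (q' = H_p, p' = -H_q): rotation of the phase plane
qt = q * sp.cos(t) + p * sp.sin(t)
pt = -q * sp.sin(t) + p * sp.cos(t)
# transport a concrete initial observable u0(q, p) = q^3 p along the flow
u = qt**3 * pt
# Liouville equation: u_t = {u, H} = u_q H_p - u_p H_q
poisson = u.diff(q) * H.diff(p) - u.diff(p) * H.diff(q)
assert sp.simplify(u.diff(t) - poisson) == 0
```

The choice $u_0(q,p) = q^3p$ is arbitrary; any smooth observable works, since the computation only uses the chain rule and the Hamilton equations.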

9.5.2. Hopf equation (H). [E. Hopf [61]] As the Liouville equation corresponds to the Hamilton equation, the Hopf equation, written with functional derivatives, corresponds to the Navier-Stokes equation. More precisely:

² Here, we misuse the measure $\delta_q(q)dq$ and its density $\delta_q(q)$ w.r.t. $dq$.

9.5.2.1. Navier-Stokes equation: Let a domain $\Omega$ in $\mathbb{R}^m$ with smooth boundary $\partial\Omega$ be given. Find a vector field $u(t,x) = \sum_{j=1}^m u_j(t,x)\frac{\partial}{\partial x_j}$ and a pressure $p(t,x)$ satisfying
\[
\frac{\partial u(t,x)}{\partial t} - \nu\Delta u(t,x) + (u\cdot\nabla u)(t,x) + \nabla p = 0, \qquad
\operatorname{div} u = 0,
\]

\[
u(0,x) = u_0(x), \qquad u(t,x)\big|_{\partial\Omega} = 0.
\]
Assuming this has a solution, we denote it $(T_t u_0)(x) = u(t,x)$.
For the sake of simplicity, we take a Riemannian manifold $M$ with metric $g_{ij}(x)dx^idx^j$. Let $L^2_\sigma(M)$ be the set of $L^2$-integrable solenoidal vector fields on $M$, and let $\overset{\circ}{\Lambda}{}^1_\sigma(M)$ be the solenoidal vector fields with compact support on $M$.
Put $H = L^2_\sigma(M,g)$ and $\tilde H$ its dual. Find a functional $W(t,\eta)$ on $[0,\infty)\times\tilde H$ such that, for $(t,\eta) \in (0,\infty)\times\overset{\circ}{\Lambda}{}^1_\sigma(M)$, it satisfies
(9.5.1)
\[
\frac{\partial}{\partial t}W(t,\eta) = \int_M \Big[-i(\tilde{\mathcal{T}}\eta)^{jk}(x)\frac{\delta^2 W(t,\eta)}{\delta\eta_j(x)\delta\eta_k(x)}
+ \nu(\Delta\eta)_j(x)\frac{\delta W(t,\eta)}{\delta\eta_j(x)} + i\eta_j(x)f^j(x,t)W(t,\eta)\Big]\,d_gx,
\]
\[
\frac{1}{\sqrt{g(x)}}\frac{\partial}{\partial x_j}\Big(\sqrt{g(x)}\frac{\delta W(t,\eta)}{\delta\eta_j(x)}\Big) = 0, \qquad
W(t,0) = 1 \quad\text{and}\quad W(0,\eta) = W_0(\eta),
\]
and
\[
W_0(0) = 1, \qquad
\frac{1}{\sqrt{g(x)}}\frac{\partial}{\partial x_j}\Big(\sqrt{g(x)}\frac{\delta W_0(\eta)}{\delta\eta_j(x)}\Big) = 0, \qquad
f(x,t) = f^j(x,t)\frac{\partial}{\partial x_j} \in L^2(0,\infty : V^{-1}).
\]

9.5.2.2. Hopf-Foiaş equation (HF). [C. Foiaş [46]] Take $H = L^2_\sigma(\Omega)$ as the Hilbert space. Find a family of Borel measures $\{\mu(t,\cdot)\}_{t\in(0,\infty)}$ on $H$ such that, for a suitable class of test functionals $\Phi(t,u)$, it satisfies
(9.5.2)
\[
-\int_0^\infty\!\!\int_H \frac{\partial\Phi(t,u)}{\partial t}\,\mu(t,du)dt - \int_H \Phi(0,u)\,\mu_0(du)
= \int_0^\infty\!\!\int_H\int_M \big[(\nabla_u u)^j(x) - \nu(\Delta u)^j(x) - f^j(x,t)\big]\frac{\delta\Phi(t,u)}{\delta u^j(x)}\,d_gx\,\mu(t,du)dt.
\]
Here $\mu_0(\cdot)$ is a given Borel measure on $H$. This equation is obtained by putting, for any $\omega \in \mathcal{B}(H)$, $\mu(t,\omega) = \mu_0(T_t^{-1}\omega)$ and calculating
\[
\frac{\partial}{\partial t}\int_H \Phi(t, T_t u)\,\mu_0(du).
\]
Here, we assume $T_t u$ above is well-defined. The solution of the Hopf equation is obtained by putting $\Phi(t,u) = \rho(t)e^{i\langle u,\eta\rangle}$ into (9.5.2), and

\[
W(t,\eta) = \int_H e^{i\langle u,\eta\rangle}\,\mu(t,du) = \int_H e^{i\langle T_t u,\eta\rangle}\,\mu_0(du)
\quad\text{with}\quad
\langle u,\eta\rangle = \int_M u^j(x)\eta_j(x)\,d_gx.
\]
====== Mini Column 4 ======

What are functional derivatives, which were abruptly mentioned above? Let $\Omega$ be a domain in $\mathbb{R}^m$. We consider a functional $W : X(\Omega) \ni \eta \to W(\eta) \in \mathbb{C}$ on a suitable function space $X(\Omega)$. Taking a test function $\varphi \in C_0^\infty(\Omega) \subset X(\Omega)$, if $\frac{d}{dt}W(\eta + t\varphi)\big|_{t=0}$ exists and is represented as
\[
\frac{d}{dt}W(\eta + t\varphi)\Big|_{t=0} = {}_{\mathcal{D}'}\langle w, \varphi\rangle_{\mathcal{D}} = \int_\Omega dx\, w(x)\varphi(x), \qquad
w(x) = \frac{\delta W(\eta)}{\delta\eta(x)} \in \mathcal{D}'(\Omega),
\]
then we call $w$ the functional derivative. We also need higher-order functional derivatives of $W$. Formally we may put
\[
\frac{\delta^2 W(\eta)}{\delta\eta(x)\delta\eta(y)} \in \mathcal{D}'(\Omega\times\Omega),
\]
but what does this mean when $x = y$? See Inoue [67].
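The Gateaux-derivative definition above can also be checked numerically on a toy functional. A sketch with numpy for $W(\eta) = \int_0^1 \eta(x)^2\,dx$, whose functional derivative is $2\eta(x)$ (my own example, not from the text):

```python
import numpy as np

# W(eta) = \int eta(x)^2 dx on [0,1], discretized by a plain Riemann sum.
# Its functional derivative is (delta W / delta eta)(x) = 2*eta(x); we
# check  d/ds W(eta + s*phi)|_{s=0} = \int (deltaW/deltaeta)(x) phi(x) dx.
n = 2000
x, dx = np.linspace(0.0, 1.0, n, endpoint=False, retstep=True)
eta = np.sin(2 * np.pi * x)
phi = np.exp(-(x - 0.5)**2 / 0.02)          # arbitrary test function

def W(f):
    return np.sum(f**2) * dx

s = 1e-6
directional = (W(eta + s * phi) - W(eta - s * phi)) / (2 * s)
pairing = np.sum(2 * eta * phi) * dx
assert abs(directional - pairing) < 1e-9
```

For this quadratic functional the central difference is exact up to floating-point error, since the $s^2$ term cancels; for a general $W$ it would only agree to $O(s^2)$.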

======End of Mini Column 4 ======

• I feel the address of I.M. Gelfand [54] at the ICM in Amsterdam in 1954 suggests beautiful and important problems, and I believe, as he mentioned, that we need to develop a very new theory of differential equations to study quantum field theory or turbulence theory, for example a theory of functional derivative equations (FDE). The configuration space where a functional lives is a function space, which is infinite dimensional, and therefore carries no suitable Lebesgue-like measure. This means it is not yet possible to integrate functionals freely; there is no integration by parts, and no Fourier transformation exists. The only tool available now is the Taylor expansion, if it exists, so only a very algebraic treatment is possible. Concerning a simple model equation with a divergence ($\infty$) removable by renormalization, see Inoue [66].

9.5.2.3. FDE representing turbulence? Though the Hopf equation is related to the invariant measure w.r.t. the flow governed by the Navier-Stokes equation, I suspect that the equation related to turbulence will be a Fokker-Planck type FDE derived from the Navier-Stokes equation:

Find a measure $P(t,v)\,d_Fv$ satisfying the following:

\[
\frac{\partial}{\partial t}P(t,v) = \int_{\mathbb{R}^3} d^3x\, \frac{\delta}{\delta v^i(x)}\Big\{v^j(x)\nabla_j v^i(x) + \frac{1}{\rho}\nabla_i p(x) - \nu\Delta v^i(x) - f^i(x)\Big\}P(t,v)
+ \frac{k_B T\nu}{\rho}\int_{\mathbb{R}^3} d^3x\,\Big(\nabla_j\frac{\delta}{\delta v^i(x)}\Big)^2 P(t,v).
\]
Here
\[
p(x) = \frac{\rho}{4\pi}\int_{\mathbb{R}^3} d^3x'\, \frac{(\nabla'_i v^j(x'))(\nabla'_j v^i(x')) - \nabla'_i f^i(x')}{|x - x'|}
\]
and the functional derivatives are taken w.r.t. the transversal velocity field
\[
v^j(x) = \int_{\mathbb{R}^3}\frac{d^3\xi}{(2\pi)^3}\Big(\delta^j_k - \frac{\xi^j\xi_k}{|\xi|^2}\Big)\hat v^k(\xi)e^{i\xi x}.
\]

Problem 9.5.1. Very recently, I came to know the paper [136] by O.V. Troshkin, where he cites the "result" of W. Thomson (alias Lord Kelvin) that W.T. obtained a wave equation for an incompressible fluid by averaging Euler's equation. Troshkin claims that the formal analogy between waves of small disturbances of an inviscid, incompressible turbulent medium and electromagnetic waves is established. Prove these facts mathematically using the Reynolds equation, cf. Foiaş [46].

In the above, Thomson seemingly assumes an intrinsic fluctuation associated to the Euler flow and averages w.r.t. it. On the other hand, we [78] derive the Navier-Stokes equation from the Euler equation by adding, artificially, a white-noise (extrinsic) fluctuation to each flow line of the Euler equation.

9.5.3. Equation for QED? As a functional derivative equation for QED(=quantum electrodynamics), might the following form a base?
\[
\begin{cases}
\Box\,\dfrac{\delta Z(\eta,\bar\eta,J)}{\delta J_\mu(x)} = -iJ_\mu(x)Z(\eta,\bar\eta,J) + ie\gamma^\mu\dfrac{\delta^2 Z(\eta,\bar\eta,J)}{\delta\eta(x)\delta\bar\eta(x)}, \\[2ex]
(i\gamma^\mu\vec\partial_\mu - M)\dfrac{\delta Z(\eta,\bar\eta,J)}{\delta\bar\eta(x)} = -i\eta(x)Z(\eta,\bar\eta,J) + ie\gamma^\mu\dfrac{\delta^2 Z(\eta,\bar\eta,J)}{\delta\bar\eta(x)\delta J_\mu(x)}, \\[2ex]
\dfrac{\delta Z(\eta,\bar\eta,J)}{\delta\eta(x)}(i\gamma^\mu\vec\partial_\mu + M) = i\bar\eta(x)Z(\eta,\bar\eta,J) - ie\gamma^\mu\dfrac{\delta^2 Z(\eta,\bar\eta,J)}{\delta\eta(x)\delta J_\mu(x)}, \\[1ex]
Z(0,0,0) = 1.
\end{cases}
\]

Remark 9.5.1. This equation stems from the functional method in QFT(=quantum field theory); more precisely, adjoining external forces to each component of the Maxwell-Dirac equations, we get this equation. In the finite-dimensional case, I vaguely know the following story: for a given non-linear ODE, one adds a fluctuating external force to obtain a Langevin equation, solves it, and takes the average of the solution w.r.t. the fluctuation to get a Fokker-Planck equation. In this story, if we replace the ODE with the coupled Maxwell-Dirac equations, what facts do we get? Considering like this: what type of classical property is inherited by the solution of the quantum or statistical equation, and how is the quantum or statistical effect represented by "classical quantities"?

As Feynman's path-integral representation gives the quantum object directly from the Lagrangian, without solving the Schrödinger equation, physicists write down quantum quantities using path integrals with the Feynman measure and make no use of FDEs, etc. As was mentioned before, when explaining the "stationary phase method": how does one make their arguments rigorous?

9.6. Supersymmetric extension of the Riemannian metric $g_{jk}(q)dq^jdq^k$ on $\mathbb{R}^d$

In Witten's paper, he writes down the "classical object", or rather the "quantity before quantization", corresponding to the deformed Laplace-Beltrami operator $d_\varphi d_\varphi^* + d_\varphi^* d_\varphi$ as if it were evident. Here the function $\varphi$ is a Morse function on the manifold $M$.

Mathematically, it is not so clear what the classical object³ for a given quantum operator is. Therefore, reversely, I try to give a prescription for obtaining the supersymmetrically extended metric from a given Riemannian metric $g_{ij}(q)dq^idq^j$. To make the situation simple, we take $\mathbb{R}^d$ as the manifold⁴ and, as the given Lagrangian,
\[
L(q,\dot q) = \frac{1}{2}g_{ij}(q)\dot q^i\dot q^j + A_j(q)\dot q^j - V(q) \in C^\infty(T\mathbb{R}^d : \mathbb{R}).
\]

³ Semi-classical analysis is a study to get classical objects from a given quantum thing.
⁴ In this section, we use $d$ instead of $m$ for the dimension of the configuration space.

Using the Legendre transformation, we associate a Hamiltonian
\[
H(q,p) = \frac{1}{2}g^{ij}(q)(p_i - A_i(q))(p_j - A_j(q)) + V(q) \in C^\infty(T^*\mathbb{R}^d : \mathbb{R}).
\]
To such a Hamiltonian, via formally extending that Lagrangian, we may associate a supersymmetric extension
\[
\mathcal{H}(x,\xi,\theta,\pi) = \frac{1}{2}g^{ij}\Big(\xi_i - \frac{\sqrt{-1}}{2}(g_{ik,l} - g_{il,k})\theta^k\pi^l - A_i\Big)\Big(\xi_j - \frac{\sqrt{-1}}{2}(g_{jm,n} - g_{jn,m})\theta^m\pi^n - A_j\Big)
\]
\[
+ \frac{1}{2}R_{ikjl}\theta^j\theta^l\pi^i\pi^k + \frac{1}{2}g^{jk}W_{,j}W_{,k} - W_{;ij}\theta^i\pi^j,
\]
which belongs to $\mathcal{C}_{SS}(\mathbb{R}^{2d|2d} : \mathbb{R}_{\mathrm{ev}})$. Here, the functions $g^{ij} = g^{ij}(x)$ of $x \in \mathbb{R}^{d|0}$, etc., appearing above are Grassmann extensions of the corresponding $g^{ij} = g^{ij}(q)$ of $q \in \mathbb{R}^d$, etc.

9.6.1. A prescription for a supersymmetric extension of a given $L(q,\dot q)$. We prepare two odd variables $\rho = (\rho_1,\rho_2) \in \mathbb{R}^2_{\mathrm{od}}$. Instead of the path space $\mathcal{P}$ considered in §2, we introduce another path space $\tilde{\mathcal{P}}_0$ consisting of (super)fields $\Phi = \Phi(t,\rho) = (\Phi^1(t,\rho),\ldots,\Phi^d(t,\rho)) : (t,\rho) \in \mathbb{R}\times\mathbb{R}^2_{\mathrm{od}} \to \mathbb{R}^d_{\mathrm{ev}}$ given by the following form:
(9.6.1)
\[
\Phi^j(t,\rho) = x^j(t) + \sqrt{-1}\,\rho_\alpha\epsilon_{\alpha\beta}\psi^j_\beta(t) + \frac{\sqrt{-1}}{2}\rho_\alpha\epsilon_{\alpha\beta}\rho_\beta F^j(t),
\]
where, for a certain interval $I \subset \mathbb{R}$,
(9.6.2) $x^j(t) \in C^\infty(I : \mathbb{R}_{\mathrm{ev}})$, $\psi^j_\beta(t) \in C^\infty(I : \mathbb{R}_{\mathrm{od}})$, $F^j(t) \in C^\infty(I : \mathbb{R}_{\mathrm{ev}})$,
with $\epsilon_{\alpha\beta} = -\epsilon_{\beta\alpha}$, $\epsilon_{12} = 1$, $j = 1,2,\ldots,d$ and $\alpha,\beta = 1,2$.
Introducing operators $\mathcal{D}^-_\alpha$ as
(9.6.3) $\mathcal{D}^-_\alpha = \dfrac{\partial}{\partial\rho_\alpha} - \sqrt{-1}\,\rho_\alpha\dfrac{\partial}{\partial t}$ for $\alpha = 1,2$,
we put
(9.6.4)
\[
\tilde{\mathcal{L}}_0 = \tilde{\mathcal{L}}_0(\Phi, \mathcal{D}^-_\alpha\Phi)
\overset{\mathrm{def}}{=} -\frac{1}{4}g_{jk}\mathcal{D}^-_\alpha\Phi^j\epsilon_{\alpha\beta}\mathcal{D}^-_\beta\Phi^k
+ \frac{\sqrt{-1}}{4}g_{jk}\big(\rho_\alpha A^j\epsilon_{\alpha\beta}\mathcal{D}^-_\beta\Phi^k + \mathcal{D}^-_\alpha\Phi^j\epsilon_{\alpha\beta}\rho_\beta A^k\big) - \sqrt{-1}\,W,
\]
where the argument of $g_{ij}$, $A_j$ and $W$ is $\Phi$. Here and in what follows, for any smooth function $u$ on $\mathbb{R}^d$, we extend it (called the Grassmann continuation) to $\mathbb{R}^{m|0}$ as
(9.6.5)
\[
u(\Phi) = u(x) + \sqrt{-1}\,u_{,k}(x)\Big(\rho_\alpha\epsilon_{\alpha\beta}\psi^k_\beta + \frac{1}{2}\rho_\alpha\epsilon_{\alpha\beta}\rho_\beta F^k\Big)
- \frac{1}{2}u_{,kl}(x)\,\rho_\alpha\epsilon_{\alpha\beta}\psi^k_\beta\,\rho_\beta\epsilon_{\beta\alpha}\psi^l_\alpha
\]
for
(9.6.6) $\Phi^j = x^j + \sqrt{-1}\,\rho_\alpha\epsilon_{\alpha\beta}\psi^j_\beta + \dfrac{\sqrt{-1}}{2}\rho_\alpha\epsilon_{\alpha\beta}\rho_\beta F^j$, where $x^j \in \mathbb{R}_{\mathrm{ev}}$, $\psi^j_\alpha \in \mathbb{R}_{\mathrm{od}}$ and $F^k \in \mathbb{R}_{\mathrm{ev}}$.

Notations. We put
(9.6.7) $[jk] \overset{\mathrm{def}}{=} \psi^j_\alpha\epsilon_{\alpha\beta}\psi^k_\beta = \psi^j_1\psi^k_2 - \psi^j_2\psi^k_1 = \psi^k_\alpha\epsilon_{\alpha\beta}\psi^j_\beta = [kj]$,
(9.6.8)
\[
R_{ikjl} \overset{\mathrm{def}}{=} \frac{1}{2}(g_{ij,kl} + g_{kl,ij} - g_{jk,il} - g_{il,jk}) + \Gamma^m_{ij}\Gamma^n_{kl}g_{mn} - \Gamma^m_{il}\Gamma^n_{jk}g_{mn},
\]
and

(9.6.9) $\nabla_j W = W_{,j}$, $\nabla^j W = g^{jk}W_{,k}$, $\nabla_j\nabla_k W = \nabla_k\nabla_j W = W_{;jk} = W_{,jk} - \Gamma^m_{jk}W_{,m}$.

Formulas. The following formulas are easily obtained by renumbering and using the symmetry of indices, combined with the anticommutativity of the $\{\psi^i_\alpha\}$.

2 (9.6.10) (g + g ΓmΓn )[ij][kl]= R [ij][kl], ij,kl mn ij kl 3 ikjl i j k l i j l k j i k l j i l k gij,kl[ij][kl]= gij,kl(ψ1ψ ψ1 ψ2 + ψ1ψ ψ1ψ2 + ψ ψ2ψ1 ψ2 + ψ ψ2ψ1ψ2 ) (9.6.11) 2 2 1 1 i j k l = 4gij,klψ1ψ2ψ1 ψ2 = gkl,ij[ij][kl],

i j k l i j l k j i k l j i l k gil,kj[ij][kl]= gil,kj(ψ1ψ ψ1 ψ2 + ψ1ψ ψ1ψ2 + ψ ψ2ψ1 ψ2 + ψ ψ2ψ1ψ2 ) (9.6.12) 2 2 1 1 i j k l = 2gil,kjψ1ψ2ψ1 ψ2 = gkj,il[ij][kl]

(9.6.13) g ψi ψjψkψl = g ψi ψjψkψl , il,kj 1 2 1 2 − ij,kl 1 2 1 2 m n m n i j k l (9.6.14) Γij Γklgmn[ij][kl] =4Γij Γklgmnψ1ψ2ψ1 ψ2, (9.6.15) ΓmΓn g [ij][kl]= 2ΓmΓn g ψi ψjψkψl . il kj mn − ij kl mn 1 2 1 2 Simple but lengthy calculations yield that ′ ˙ def ˜ − 0(x, x,ψ˙ α, ψα, F ) = dρ 0(Φ(t, ρ), α Φ(t, ρ)) L 0|2 L D ZR 1 1 = g (x ˙ jx˙ k + F jF k + √ 1ψj ψ˙ k )+ g [ij][kl] 2 jk − α α 8 ij,kl (9.6.16) √ 1 √ 1 √ 1 + A x˙ j − A ψj ψk − Γk [ij]g F l + − g ψi Γj x˙ kψl j − 2 j,k α α − 2 ij kl 2 ij α kl α √ 1 + W F k − W [kl]. ,k − 2 ;kl Assuming the auxiliary field F = (F 1,...,F d) satisfies δ ′ √ 1 (9.6.17) L0 = 0, i.e. F k = − Γk [ij] gklW , δF k 2 ij − ,l we get √ ˙ 1 j k 1 j D k 1 0(x, x,˙ ψα, ψα)= gjk x˙ x˙ + − gjkψα ψα + Rikjl[ij][kl] (9.6.18) L 2 2 dt 12 √ 1 1 √ 1 + A x˙ j − A ψj ψk gjkW W − W [jk], j − 2 j,k α α − 2 ,j ,k − 2 ∇j ;jk where D def (9.6.19) ψk = ψ˙k +Γk x˙ pψl . dt α α pl α Remark 9.6.1. (a) Above derivation of (9.6.18) is essentially due to Davis et al.[29], though in their calculations the coefficient 1/8 in (9.6.16) is replaced by 1. Moreover, they didn’t mention the necessity of using the Grassmann algebra with infinite number of generators, which is necessary to define odd derivatives uniquely. (b) To eliminate the auxiliary fields F i, we assume that the ‘equation of motion’ described in (9.6.17) holds. Though there is a work, for example Cooper and Freedman [27], which asserts that F i is calculated out after integrating the partition function expressed by the Feynman measure, it seems curious to use the “quantum argument” when we are discussing the “classical objects”. (In any way, there does not exist the ‘Fubini theorem’ with respect to the ‘Feynman measure’.) 200 9. MISCELLANEOUS

9.6.2. A prescription for a supersymmetric extension of Hamiltonian H(q,p). We restart from the Lagrangian (9.6.18) ignoring the procedures itself. As the variables ψi are L0 { α} assumed to be “real” and anticommutative, we define from them the “complex” odd variables as follows: 1 1 (9.6.20) ψj = (ψj + √ 1ψj), ψ¯j = (ψj √ 1ψj), √2 1 − 2 √2 1 − − 2 that is 1 1 (9.6.21) ψj = (ψj + ψ¯j), ψj = (ψj ψ¯j). 1 √2 2 √2√ 1 − − Then, clearly we have ψ¯i, ψ¯j = ψi, ψj = ψ¯i, ψj = ψi, ψ¯j = 0 (9.6.22) { } { } { } { } [ij]= √ 1(ψiψ¯j ψ¯iψj). − − By the same calculation as before, m n i ¯j k ¯l Rikjl[ij][kl]= 6(gij,kl +Γij Γklgmn)ψ ψ ψ ψ (9.6.23) − m n i ¯j k ¯l = 6(gik,jl +ΓikΓjlgmn)ψ ψ ψ ψ . So, we get 1 √ 1 D D 1 (x, x,ψ,˙ ψ,˙ ψ,¯ ψ¯˙)= g x˙ jx˙ k + − g (ψj ψ¯k + ψ¯j ψk) R ψiψ¯jψkψ¯l L 2 jk 2 jk dt dt − 4 ijkl √ 1 1 (9.6.24) + A x˙ j − B ψjψ¯k + A Aj j − 2 jk 2 j 1 1 jW W + W (ψiψ¯j ψ¯iψj). − 2∇ ∇j 2∇i∇j − Introducing new variables by δ √ 1 ξ = L = g (x)x ˙ j + − g (x)(ψ¯j ψk + ψjψ¯k)+ A (x), i δx˙ i ij 2 ij,k i   δ √ 1 j  φi = L = − gij(x)ψ¯ , (9.6.25)  ˙i − 2  δψ  δ √ 1 j φ¯i = L = − gij(x)ψ ,  ˙ i − 2  δ ¯ψ  we get   def = (x,ξ,ψ, ψ¯) =xξ ˙ + ψφ˙ + ψ¯˙φ¯ H H −L 1 √ 1 √ 1 (9.6.26) = gij ξ − g (ψ¯kψl + ψkψ¯l) A ξ − g (ψ¯mψn + ψmψ¯n) A 2 i − 2 ik,l − i j − 2 jm,n − j 1 1 1 R ψ¯iψjψ¯kψl + gjkW W + W (ψ¯iψj ψiψ¯j).  − 2 ikjl 2 ,j ,k 2 ;ij − Now, rewriting the variables ψi, ψ¯i as θi, πi, we get finally = (x,ξ,θ,π) H H 1 √ 1 √ 1 = gij ξ − (g g )θkπl A ξ − (g g )θmπn A (9.6.27) 2 i − 2 ik,l − il,k − i j − 2 jm,n − jn,m − j 1 1 + R θjθlπiπk + gjkW W W θiπj,  2 ikjl 2 ,j ,k − ;ij 1 jk which is thought as the supersymmetric extension of H(q,p) when V = 2 g W,jW,k. j k d 9.6. SUPERSYMMETRIC EXTENSION OF THE RIEMANNIAN METRIC gjk(q)dq dq ON R 201

9.6.3. Supersymmetry and supercharges. Preparing a pair of “real” Grassmann parame- ters ε R for α = 1, 2, we introduce a one parameter group of transformations T (s R) from α ∈ od s ∈ (t, ρ , ρ ) R1|2 to (t′, ρ′ , ρ′ ) R1|2 defined by 1 2 ∈ 1 2 ∈ t′ = t is(ε ρ ε ρ ), − 1 2 − 2 1 ′ (9.6.28)  ρ1 = ρ1 sε1,  −  ρ′ = ρ sε 2 2 − 2 and also two operators   + ∂ ∂ (9.6.29) α = + iρα for α = 1, 2. D ∂ρα ∂t Clearly, the infinitesimal generator of transformations above is given by ∂ (9.6.30) v(T (t, ρ , ρ )) = (ε + ε +)v(t, ρ , ρ ) ∂s s 1 2 s=0 − 1D2 − 2D1 1 2 1|2 for any smooth function v(t, ρ1, ρ2) from R to Rev. Here, we remark that + + ∂v ∂v ∂v (ε1 1 + ε2 2 )v(t, ρ1, ρ2)= δt + δρ1 + δρ2 − D D ∂t ∂ρ1 ∂ρ2 with δt = i(ε ρ + ε ρ ), δρ = ε , δρ = ε . − 1 1 2 2 1 − 1 2 − 2

Moreover, $v(t,\rho_1,\rho_2)$ is called supersymmetric if
(9.6.31) $\delta v(t,\rho_1,\rho_2) = -(\varepsilon_1\mathcal{D}_1 + \varepsilon_2\mathcal{D}_2)v(t,\rho)$.
The above relation implies the following: if $\Phi^j(t,\rho_1,\rho_2)$ given in (9.6.1) is supersymmetric, we have

j j j i j δΦ (t, ρ1, ρ2) δx (t)+ iραǫαβ δψ (t)+ ραǫαβρβδF (t) (9.6.32) ≡ β 2 = (ε + ε )Φj(t, ρ , ρ ) − 1D1 2D2 1 2 Since R is infinite dimensional, if ω satisfies δρ1ω = 0 and ρ1ω = 0 then ω = 0. This yields that δxj = iε ǫ ψj , − α αβ β j j j (9.6.33)  δψα = εαF ǫαβεβx˙ ,  − −  j ˙ j δF = iεαψα.  Using the relation (??), we get  δxj = iε ǫ ψj , − α αβ β (9.6.34) i  δψj = ε ( Γj [kl] jW ) ǫ ε x˙ j.  α − α 2 kl −∇ − αβ β From this, we have the following quantities, called supercharges, (9.6.35) = ψi g x˙ j ǫ ψi W Qα α ij − αβ β∇i which is conserved by the flow defined by the above Lagrangian.

On the other hand, the following supersymmetric Lagrangian is introduced by physicist: 1 √ 1 D 1 (x;x, ˙ ψ ; ψ˙ )= g x˙ jx˙ k + − g ψ¯jγ0 ψk + R ψ¯iψjψ¯kψl L0 α α 2 jk 2 jk dt 12 ikjl 1 1 jW W W [ψ¯i, ψj]. − 2∇ ∇j − 2∇i∇j 202 9. MISCELLANEOUS

Here,

$\displaystyle \psi^j=\begin{pmatrix}\psi^j_1\\ \psi^j_2\end{pmatrix},\qquad \bar\psi^j=\sqrt{-1}\,\big(\psi^j_2,\,-\psi^j_1\big)={}^t\big(\psi^j\big)^*\gamma^0,\qquad \gamma^0=\begin{pmatrix}0&-\sqrt{-1}\\ \sqrt{-1}&0\end{pmatrix}.$

L. Alvarez-Gaumé [6] used the above with $W=0$ on a general manifold $M$.

9.7. Hamilton flow for Weyl equation with external electro-magnetic field

In this section, we consider the Weyl equation with external electro-magnetic field (7.0.9), whose symbol is

(9.7.1) $\displaystyle \mathcal{H}(t,x,\xi,\theta,\pi)=c\sum_{j=1}^3\sigma_j(\theta,\pi)\Big(\xi_j-\frac{e}{c}A_j(t,x)\Big)+eA_0(t,x).$

The corresponding Hamilton equations are, for $j=1,2,3$ and $l=1,2$,

(9.7.2) $\displaystyle \frac{dx_j}{dt}=\frac{\partial\mathcal{H}(t,x,\xi,\theta,\pi)}{\partial\xi_j},\qquad \frac{d\xi_j}{dt}=-\frac{\partial\mathcal{H}(t,x,\xi,\theta,\pi)}{\partial x_j},\qquad \frac{d\theta_l}{dt}=-\frac{\partial\mathcal{H}(t,x,\xi,\theta,\pi)}{\partial\pi_l},\qquad \frac{d\pi_l}{dt}=-\frac{\partial\mathcal{H}(t,x,\xi,\theta,\pi)}{\partial\theta_l}.$

We take this as a simple example showing the necessity of countably many Grassmann generators.

Proposition 9.7.1. Assume $(A_0(t,q),A_1(t,q),A_2(t,q),A_3(t,q))\in C^\infty(\mathbb{R}\times\mathbb{R}^3:\mathbb{R})$ in (0.7) of Chapter 7. Then, for any initial data $(x(0),\xi(0),\theta(0),\pi(0))=(x,\xi,\theta,\pi)\in\mathfrak{R}^{6|4}\cong T^*\mathfrak{R}^{3|2}$, (9.7.2) has a unique, global-in-time solution $(x(t),\xi(t),\theta(t),\pi(t))$.

Remark 9.7.1. We require only smoothness of $\{A_j(t,q)\}_{j=0}^3$, with no conditions on their behavior as $|q|\to\infty$.

The odd-variable part of (9.7.2) is rewritten as

(9.7.3) $\displaystyle \frac{d}{dt}\begin{pmatrix}\theta_1\\ \theta_2\\ \pi_1\\ \pi_2\end{pmatrix}=ic\hbar^{-1}\mathbb{X}(t)\begin{pmatrix}\theta_1\\ \theta_2\\ \pi_1\\ \pi_2\end{pmatrix}\quad\text{with}\quad \begin{pmatrix}\theta_1(\underline{t})\\ \theta_2(\underline{t})\\ \pi_1(\underline{t})\\ \pi_2(\underline{t})\end{pmatrix}=\begin{pmatrix}\theta_1\\ \theta_2\\ \pi_1\\ \pi_2\end{pmatrix},$

where $\underline{t}$ denotes the initial time. Here, we put

(9.7.4) $\displaystyle \eta_j(t)=\xi_j(t)-\frac{e}{c}A_j(t,x(t)),$

and

(9.7.5) $\displaystyle \mathbb{X}(t)=\begin{pmatrix}\eta_3(t)I_2 & \hbar^{-1}\big(\eta_1(t)-i\eta_2(t)\big)\sigma_2\\[2pt] \hbar\big(\eta_1(t)+i\eta_2(t)\big)\sigma_2 & -\eta_3(t)I_2\end{pmatrix}.$

Moreover, for $\sigma_j(t)=\sigma_j(\theta(t),\pi(t))$, putting

(9.7.6) $\displaystyle \mathbb{Y}(t)=\begin{pmatrix}0 & -\eta_3(t) & \eta_2(t)\\ \eta_3(t) & 0 & -\eta_1(t)\\ -\eta_2(t) & \eta_1(t) & 0\end{pmatrix},$

a simple calculation gives

(9.7.7) $\displaystyle \frac{d}{dt}\begin{pmatrix}\sigma_1\\ \sigma_2\\ \sigma_3\end{pmatrix}=2c\hbar^{-1}\mathbb{Y}(t)\begin{pmatrix}\sigma_1\\ \sigma_2\\ \sigma_3\end{pmatrix}\quad\text{with}\quad \begin{pmatrix}\sigma_1(\underline{t})\\ \sigma_2(\underline{t})\\ \sigma_3(\underline{t})\end{pmatrix}=\begin{pmatrix}\sigma_1\\ \sigma_2\\ \sigma_3\end{pmatrix}=\begin{pmatrix}\theta_1\theta_2+\hbar^{-2}\pi_1\pi_2\\ i\big(\theta_1\theta_2-\hbar^{-2}\pi_1\pi_2\big)\\ -i\hbar^{-1}\big(\theta_1\pi_1+\theta_2\pi_2\big)\end{pmatrix}.$

9.7. HAMILTON FLOW FOR WEYL EQUATION WITH VARIABLE COEFFICIENTS

Now we begin our proof. We decompose the dependent variables $(x,\xi,\theta,\pi)$ by degree:

(9.7.8) $\displaystyle x_j(t)=\sum_{\ell=0}^\infty x_j^{[2\ell]}(t),\qquad \xi_j(t)=\sum_{\ell=0}^\infty\xi_j^{[2\ell]}(t),\qquad \theta_k(t)=\sum_{\ell=0}^\infty\theta_k^{[2\ell+1]}(t),\qquad \pi_k(t)=\sum_{\ell=0}^\infty\pi_k^{[2\ell+1]}(t).$

For each given $m=0,1,2,\cdots$,

(9.7.9) $\displaystyle \frac{d}{dt}x_j^{[2m]}=c\,\sigma_j^{[2m]}\ \ (\text{where }\sigma_j^{[0]}=0),\qquad \frac{d}{dt}\xi_j^{[2m]}=e\sum_{\ell=1}^m\sum_{k=1}^3\sigma_k^{[2\ell]}\Big(\frac{\partial A_k}{\partial x_j}\Big)^{[2m-2\ell]}-e\Big(\frac{\partial A_0}{\partial x_j}\Big)^{[2m]},\quad\text{with}\quad x^{[2m]}(\underline{t})=x^{[2m]},\ \ \xi^{[2m]}(\underline{t})=\xi^{[2m]},$

(9.7.10) $\displaystyle \frac{d}{dt}\begin{pmatrix}\theta_1^{[2m+1]}\\ \theta_2^{[2m+1]}\\ \pi_1^{[2m+1]}\\ \pi_2^{[2m+1]}\end{pmatrix}=ic\hbar^{-1}\sum_{\ell=0}^m\mathbb{X}^{[2\ell]}(t)\begin{pmatrix}\theta_1^{[2m+1-2\ell]}\\ \theta_2^{[2m+1-2\ell]}\\ \pi_1^{[2m+1-2\ell]}\\ \pi_2^{[2m+1-2\ell]}\end{pmatrix}\quad\text{with}\quad \big(\theta^{[2m+1]}(\underline{t}),\pi^{[2m+1]}(\underline{t})\big)=\big(\theta^{[2m+1]},\pi^{[2m+1]}\big),$

and

(9.7.11) $\displaystyle \frac{d}{dt}\begin{pmatrix}\sigma_1^{[2m]}\\ \sigma_2^{[2m]}\\ \sigma_3^{[2m]}\end{pmatrix}=2c\hbar^{-1}\sum_{\ell=0}^{m-1}\mathbb{Y}^{[2\ell]}(t)\begin{pmatrix}\sigma_1^{[2m-2\ell]}\\ \sigma_2^{[2m-2\ell]}\\ \sigma_3^{[2m-2\ell]}\end{pmatrix}\quad\text{with}\quad \sigma^{[2m]}(\underline{t})=\sigma^{[2m]}.$

Here, $\mathbb{X}^{[2\ell]}(t)$, $\mathbb{Y}^{[2\ell]}(t)$, $\sigma^{[2\ell]}(t)$ are the degree-$2\ell$ parts of $\mathbb{X}(t)$, $\mathbb{Y}(t)$, $\sigma(t)$, respectively:

$\displaystyle \eta_k^{[2\ell]}(t)=\xi_k^{[2\ell]}(t)-\frac{e}{c}A_k^{[2\ell]}(t,x),\qquad A_k^{[2\ell]}(t,x)=\sum_{\substack{|\alpha|\le 2\ell\\ \ell_1+\ell_2+\ell_3=\ell}}\frac{1}{\alpha!}\,\partial_q^\alpha A_k\big(t,x^{[0]}\big)\cdot\big(x_1^{\alpha_1}\big)^{[2\ell_1]}\big(x_2^{\alpha_2}\big)^{[2\ell_2]}\big(x_3^{\alpha_3}\big)^{[2\ell_3]},$

$\displaystyle \Big(\frac{\partial A_0}{\partial x_j}\Big)^{[2\ell]}(t,x)=\sum_{\substack{|\alpha|\le 2\ell\\ \ell_1+\ell_2+\ell_3=\ell}}\frac{1}{\alpha!}\,\partial_q^\alpha\partial_{q_j}A_0\big(t,x^{[0]}\big)\cdot\big(x_1^{\alpha_1}\big)^{[2\ell_1]}\big(x_2^{\alpha_2}\big)^{[2\ell_2]}\big(x_3^{\alpha_3}\big)^{[2\ell_3]}.$

And

$\displaystyle \sigma_1^{[2m]}=\sum_{\ell=0}^{m-1}\big(\theta_1^{[2\ell+1]}\theta_2^{[2m-2\ell-1]}+\hbar^{-2}\pi_1^{[2\ell+1]}\pi_2^{[2m-2\ell-1]}\big),\qquad \sigma_2^{[2m]}=i\sum_{\ell=0}^{m-1}\big(\theta_1^{[2\ell+1]}\theta_2^{[2m-2\ell-1]}-\hbar^{-2}\pi_1^{[2\ell+1]}\pi_2^{[2m-2\ell-1]}\big),\qquad \sigma_3^{[2m]}=-i\hbar^{-1}\sum_{\ell=0}^{m-1}\big(\theta_1^{[2\ell+1]}\pi_1^{[2m-2\ell-1]}+\theta_2^{[2\ell+1]}\pi_2^{[2m-2\ell-1]}\big).$

[0] Putting $m=0$ in (9.7.9), for $j=1,2,3$,

$\displaystyle \frac{d}{dt}x_j^{[0]}(t)=0\quad\text{and}\quad \frac{d}{dt}\xi_j^{[0]}(t)=-e\,\frac{\partial A_0^{[0]}\big(t,x^{[0]}\big)}{\partial x_j}=-e\,\partial_{q_j}A_0^{[0]}\big(t,x^{[0]}\big).$

Therefore, for any $t\in\mathbb{R}$ and $j=1,2,3$,

$\displaystyle x_j^{[0]}(t)=x_j^{[0]}\quad\text{and}\quad \xi_j^{[0]}(t)=\xi_j^{[0]}-e\int_{\underline{t}}^{t}ds\,\partial_{q_j}A_0^{[0]}\big(s,x^{[0]}\big).$

[1] Putting these results into (9.7.10) with $m=0$, we get

(9.7.12) $\displaystyle \frac{d}{dt}\begin{pmatrix}\theta_1^{[1]}\\ \theta_2^{[1]}\\ \pi_1^{[1]}\\ \pi_2^{[1]}\end{pmatrix}=ic\hbar^{-1}\mathbb{X}^{[0]}(t)\begin{pmatrix}\theta_1^{[1]}\\ \theta_2^{[1]}\\ \pi_1^{[1]}\\ \pi_2^{[1]}\end{pmatrix}\quad\text{with}\quad \begin{pmatrix}\theta_1^{[1]}(\underline{t})\\ \theta_2^{[1]}(\underline{t})\\ \pi_1^{[1]}(\underline{t})\\ \pi_2^{[1]}(\underline{t})\end{pmatrix}=\begin{pmatrix}\theta_1^{[1]}\\ \theta_2^{[1]}\\ \pi_1^{[1]}\\ \pi_2^{[1]}\end{pmatrix}.$

Here, $\mathbb{X}^{[0]}(t)$ is a $4\times4$-matrix with complex-valued components, depending on $(t,\underline{t},x^{[0]},\xi^{[0]},\partial_q^\beta A_0,\partial_q^\alpha A;\ |\alpha|=0,\ |\beta|\le1)$. More precisely, the components of $\mathbb{X}^{[0]}(t)$ are given through

$\displaystyle \eta_j^{[0]}(t)=\xi_j^{[0]}(t)-\frac{e}{c}A_j\big(t,x^{[0]}\big)=\xi_j^{[0]}-e\int_{\underline{t}}^{t}ds\,\partial_{q_j}A_0^{[0]}\big(s,x^{[0]}\big)-\frac{e}{c}A_j^{[0]}\big(t,x^{[0]}\big).$

The ODE (9.7.12) has coefficients smooth in $t$ with values in $(\mathfrak{R}^{0|1})^4$, so it has a unique global-in-time solution depending on the data as follows. Putting $A=(A_1,A_2,A_3)$, we have

(9.7.13) $\theta_j^{[1]}(t)=\theta_j^{[1]}\big(t,x^{[0]},\xi^{[0]},\theta^{[1]},\pi^{[1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ |\alpha|=0,\ |\beta|\le1\big)$ and $\pi_j^{[1]}(t)=\pi_j^{[1]}\big(t,x^{[0]},\xi^{[0]},\theta^{[1]},\pi^{[1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ |\alpha|=0,\ |\beta|\le1\big)$, both linear w.r.t. $\theta^{[1]},\pi^{[1]}$.

[2] Putting $m=1$ in (9.7.9),

$\displaystyle \frac{d}{dt}x_j^{[2]}=c\,\sigma_j^{[2]},\qquad \frac{d}{dt}\xi_j^{[2]}=e\sum_{k=1}^3\sigma_k^{[2]}\frac{\partial A_k^{[0]}}{\partial x_j}-e\Big(\frac{\partial A_0}{\partial x_j}\Big)^{[2]}\quad\text{with}\quad x^{[2]}(\underline{t})=x^{[2]},\ \ \xi^{[2]}(\underline{t})=\xi^{[2]}.$

From $m=1$ in (9.7.11) and (9.7.13), for $j=1,2,3$,

$\displaystyle \sigma_1^{[2]}=\theta_1^{[1]}\theta_2^{[1]}+\hbar^{-2}\pi_1^{[1]}\pi_2^{[1]},\qquad \sigma_2^{[2]}=i\big(\theta_1^{[1]}\theta_2^{[1]}-\hbar^{-2}\pi_1^{[1]}\pi_2^{[1]}\big),\qquad \sigma_3^{[2]}=-i\hbar^{-1}\big(\theta_1^{[1]}\pi_1^{[1]}+\theta_2^{[1]}\pi_2^{[1]}\big),$

and

$\displaystyle A_0^{[2]}(x)=\sum_{k=1}^3\partial_{q_k}A_0\big(x^{[0]}\big)\,x_k^{[2]},\qquad \frac{\partial A_0^{[2]}}{\partial x_j}=\sum_{k=1}^3\partial_{q_kq_j}A_0\big(x^{[0]}\big)\,x_k^{[2]}.$

Therefore, for $j=1,2,3$,

$x_j^{[2]}(t)=x_j^{[2]}\big(t,x^{[2\ell]},\xi^{[0]},\theta^{[1]},\pi^{[1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le1,\ |\alpha|=0,\ |\beta|\le1\big),$
$\xi_j^{[2]}(t)=\xi_j^{[2]}\big(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[1]},\pi^{[1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le1,\ |\alpha|\le1,\ |\beta|\le2\big).$

[3] Putting $m=1$ in (9.7.10),

$\displaystyle \frac{d}{dt}\begin{pmatrix}\theta_1^{[3]}\\ \theta_2^{[3]}\\ \pi_1^{[3]}\\ \pi_2^{[3]}\end{pmatrix}=ic\hbar^{-1}\mathbb{X}^{[0]}(t)\begin{pmatrix}\theta_1^{[3]}\\ \theta_2^{[3]}\\ \pi_1^{[3]}\\ \pi_2^{[3]}\end{pmatrix}+ic\hbar^{-1}\mathbb{X}^{[2]}(t)\begin{pmatrix}\theta_1^{[1]}\\ \theta_2^{[1]}\\ \pi_1^{[1]}\\ \pi_2^{[1]}\end{pmatrix}\quad\text{with}\quad \begin{pmatrix}\theta_1^{[3]}(\underline{t})\\ \theta_2^{[3]}(\underline{t})\\ \pi_1^{[3]}(\underline{t})\\ \pi_2^{[3]}(\underline{t})\end{pmatrix}=\begin{pmatrix}\theta_1^{[3]}\\ \theta_2^{[3]}\\ \pi_1^{[3]}\\ \pi_2^{[3]}\end{pmatrix}.$

Here, the $4\times4$-matrix $\mathbb{X}^{[2]}(t)$ has components valued in $\mathfrak{C}_{\mathrm{ev}}$, depending on $(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[1]},\pi^{[1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le1,\ |\alpha|\le1,\ |\beta|\le2)$.

From these, for $k=1,2$,

$\theta_k^{[3]}(t)=\theta_k^{[3]}\big(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[2\ell+1]},\pi^{[2\ell+1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le1,\ |\alpha|\le1,\ |\beta|\le2\big),$
$\pi_k^{[3]}(t)=\pi_k^{[3]}\big(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[2\ell+1]},\pi^{[2\ell+1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le1,\ |\alpha|\le1,\ |\beta|\le2\big).$

[4] Repeating this procedure, we get

$x^{[2m]}(t)=x^{[2m]}\big(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[2\ell-1]},\pi^{[2\ell-1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le m,\ |\alpha|\le m-1,\ |\beta|\le m\big),$
$\xi^{[2m]}(t)=\xi^{[2m]}\big(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[2\ell-1]},\pi^{[2\ell-1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le m,\ |\alpha|\le m,\ |\beta|\le m+1\big),$
$\theta^{[2m+1]}(t)=\theta^{[2m+1]}\big(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[2\ell+1]},\pi^{[2\ell+1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le m,\ |\alpha|\le m,\ |\beta|\le m+1\big),$
$\pi^{[2m+1]}(t)=\pi^{[2m+1]}\big(t,x^{[2\ell]},\xi^{[2\ell]},\theta^{[2\ell+1]},\pi^{[2\ell+1]},\partial_q^\beta A_0,\partial_q^\alpha A;\ 0\le\ell\le m,\ |\alpha|\le m,\ |\beta|\le m+1\big).$

These prove the existence. Moreover, since for each degree the solution of (9.7.9) with (9.7.10) is unique, the uniqueness of the solution of (9.7.2) follows. □
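The steps above integrate the flow degree by degree. As an illustration of the algebra underlying the decomposition (9.7.8) (a sketch of our own; the class `Grassmann` and its methods `gen`, `const`, `degree_part` are hypothetical names, not from the text), the following code models elements with finitely many Grassmann generators as dictionaries mapping sets of generator indices to coefficients:

```python
def _sign(a, b):
    """Sign of merging the sorted index tuples a and b into increasing order:
    (-1) to the number of pairs (x in a, y in b) with x > y."""
    inv = sum(1 for x in a for y in b if x > y)
    return -1 if inv % 2 else 1

class Grassmann:
    """Element of a Grassmann algebra with generators theta_1, theta_2, ...
    Stored as {frozenset of generator indices: coefficient}."""
    def __init__(self, terms=None):
        self.terms = {frozenset(k): v for k, v in (terms or {}).items() if v != 0}

    @staticmethod
    def gen(i):
        """The i-th odd generator."""
        return Grassmann({frozenset([i]): 1})

    @staticmethod
    def const(c):
        """A degree-0 (body) element."""
        return Grassmann({frozenset(): c})

    def __add__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0) + v
        return Grassmann(t)

    def __mul__(self, other):
        t = {}
        for a, ca in self.terms.items():
            for b, cb in other.terms.items():
                if a & b:      # a repeated generator kills the term
                    continue
                s = _sign(tuple(sorted(a)), tuple(sorted(b)))
                t[a | b] = t.get(a | b, 0) + s * ca * cb
        return Grassmann(t)

    def degree_part(self, d):
        """The degree-d component, as in the decomposition (9.7.8)."""
        return Grassmann({k: v for k, v in self.terms.items() if len(k) == d})

th1, th2, th3 = (Grassmann.gen(i) for i in (1, 2, 3))

# Odd generators anticommute and square to zero:
assert (th1 * th2 + th2 * th1).terms == {}
assert (th1 * th1).terms == {}

# An even element and its decomposition by degree:
x = Grassmann.const(2.0) + th1 * th2 + th2 * th3
assert x.degree_part(0).terms == {frozenset(): 2.0}
assert x.degree_part(2).terms == {frozenset({1, 2}): 1, frozenset({2, 3}): 1}
```

The point exploited in the proof is visible here: multiplication can only raise the degree, so the degree-$2m$ part of any product involves only factors of degree at most $2m$, which is what makes the system (9.7.9)-(9.7.11) triangular and solvable order by order.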

Moreover, we easily get

Corollary 9.7.1. If $(x(t),\xi(t),\theta(t),\pi(t))\in C^1(\mathbb{R}:T^*\mathfrak{R}^{3|2})$ is a solution of (9.7.2), then

(9.7.14) $\displaystyle \frac{d}{dt}\mathcal{H}\big(t,x(t),\xi(t),\theta(t),\pi(t)\big)=\frac{\partial\mathcal{H}}{\partial t}\big(t,x(t),\xi(t),\theta(t),\pi(t)\big).$

Putting

$\displaystyle B_{jk}(t,x)=\frac{\partial A_k(t,x)}{\partial x_j}-\frac{\partial A_j(t,x)}{\partial x_k},$

and rewriting (9.7.2) as

(9.7.15) $\displaystyle \frac{d}{dt}x_j=c\,\sigma_j(\theta,\pi),\qquad \frac{d}{dt}\eta_j=e\sum_{k=1}^3\sigma_k(\theta,\pi)B_{jk}(t,x)-e\,\frac{\partial A_0(t,x)}{\partial x_j},$

we have

Corollary 9.7.2. For $\{A_j(t,q)\}_{j=0}^3\in C^\infty(\mathbb{R}\times\mathbb{R}^3:\mathbb{R})$, take the initial data

$\displaystyle \big(\tilde x(\underline{t}),\tilde\eta(\underline{t}),\tilde\theta(\underline{t}),\tilde\pi(\underline{t})\big)=(x,\eta,\theta,\pi)\quad\text{where}\quad \eta_j=\xi_j-\frac{e}{c}A_j(\underline{t},x)$

for (9.7.15) with (9.7.3). Then there exists a unique solution $(\tilde x(t),\tilde\eta(t),\tilde\theta(t),\tilde\pi(t))\in C^1(\mathbb{R}:T^*\mathfrak{R}^{3|2})$, related to $(x(t),\xi(t),\theta(t),\pi(t))$ by

$x_j(t,\underline{t};x,\xi,\theta,\pi)=\tilde x_j\big(t,\underline{t};x,\xi-\tfrac{e}{c}A(\underline{t},x),\theta,\pi\big),$
$\xi_j(t,\underline{t};x,\xi,\theta,\pi)=\tilde\eta_j\big(t,\underline{t};x,\xi-\tfrac{e}{c}A(\underline{t},x),\theta,\pi\big)+\tfrac{e}{c}A_j\big(t,\tilde x(t,\underline{t};x,\xi-\tfrac{e}{c}A(\underline{t},x),\theta,\pi)\big),$
$\theta_k(t,\underline{t};x,\xi,\theta,\pi)=\tilde\theta_k\big(t,\underline{t};x,\xi-\tfrac{e}{c}A(\underline{t},x),\theta,\pi\big),$
$\pi_k(t,\underline{t};x,\xi,\theta,\pi)=\tilde\pi_k\big(t,\underline{t};x,\xi-\tfrac{e}{c}A(\underline{t},x),\theta,\pi\big).$

Remark 9.7.2. For the free Weyl equation, the Hamilton flow of Proposition 9.7.1 can be solved explicitly. Such an explicit solution is not available in general; usually one only has existence abstractly. Fortunately, since the countable grading stems from the countably many Grassmann generators, the existence proof is obtained rather easily; it is, however, rather complicated to obtain estimates with respect to the initial data.
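For completeness, here is the short computation behind the $\eta$-equation in (9.7.15) (our own supplement): since $\mathcal{H}=c\sum_j\sigma_j\eta_j+eA_0$ with $\eta_j=\xi_j-\frac{e}{c}A_j(t,x)$, we have $\frac{d\xi_j}{dt}=-\partial_{x_j}\mathcal{H}=e\sum_k\sigma_k\,\partial_{x_j}A_k-e\,\partial_{x_j}A_0$ and $\dot x_k=c\sigma_k$, hence

```latex
\frac{d\eta_j}{dt}
=\frac{d\xi_j}{dt}-\frac{e}{c}\Big(\partial_tA_j+\sum_k\dot x_k\,\partial_{x_k}A_j\Big)
=e\sum_k\sigma_k\big(\partial_{x_j}A_k-\partial_{x_k}A_j\big)
 -e\,\partial_{x_j}A_0-\frac{e}{c}\,\partial_tA_j
=e\sum_k\sigma_k B_{jk}-e\,\partial_{x_j}A_0-\frac{e}{c}\,\partial_tA_j,
```

where the last term $-\frac{e}{c}\partial_tA_j$ drops out when the vector potential is independent of $t$, matching the displayed form of (9.7.15).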

Bibliography

[1] M.S. Abdalla and U.A.T. Ramjit, Quantum mechanics of the damped pulsating oscillator, J. Math. Phys. 30(1989), pp. 60-65.
[2] R. Abraham and J.E. Marsden, Foundations of Mechanics, 2nd ed., Massachusetts, Benjamin, 1980.
[3] S. Albeverio and R.J. Hoegh-Krohn, Mathematical Theory of Feynman Path Integrals, Lec. Notes in Math. 523, Heidelberg-New York, Springer-Verlag, 1976.
[4] S. Albeverio, G. Guatteri and S. Mazzucchi, Phase space Feynman path integrals, J. Math. Phys. 43(2002), pp. 2847-2857.
[5] S. Albeverio and S. Mazzucchi, Feynman path integrals for polynomially growing potentials, J. Functional Analysis 221(2005), pp. 83-121.
[6] L. Alvarez-Gaumé, Supersymmetry and the Atiyah-Singer index theorem, Commun. Math. Phys. 90(1983), pp. 161-173.
[7] J. Baik, P. Deift and K. Johansson, On the distribution of the length of the longest increasing subsequence of random permutations, J. Amer. Math. Soc. 12(1999), pp. 1119-1178.
[8] R.G. Bartle, A Modern Theory of Integration, Graduate Studies in Mathematics, vol. 32, Amer. Math. Soc., Providence, 2001.
[9] R. Beals, Exact fundamental solutions, Journées Équations aux dérivées partielles, Saint-Jean-de-Monts, 2-5 juin 1998.
[10] F.A. Berezin (ed. A.A. Kirillov), Introduction to Superanalysis, D. Reidel Publ. Company, 1987.
[11] F.A. Berezin and M.S. Marinov, Particle spin dynamics as the Grassmann variant of classical mechanics, Ann. of Physics 104(1977), pp. 336-362.
[12] M.S. Berger, Nonlinearity and Functional Analysis – Lectures on Nonlinear Problems in Mathematical Analysis, Academic Press, New York, 1977.
[13] D. Bessis, C. Itzykson and J.B. Zuber, Quantum field theory techniques in graphical enumeration, Adv. Appl. Math. 1(1980), pp. 109-157.
[14] E.K. Blum, The Euler-Poisson-Darboux equation in the exceptional cases, Proc. Amer. Math. Soc. 5(1954), pp. 511-520.
[15] D. Bollé, F. Gesztesy, H. Grosse, W. Schweiger and B. Simon, Witten index, axial anomaly, and Krein's spectral shift function in supersymmetric quantum mechanics, J. Math. Phys. 28(1987), pp. 1512-1525.
[16] C.P. Boyer and S. Gitler, The theory of G∞-supermanifolds, Trans. Amer. Math. Soc. 285(1984), pp. 241-267.
[17] E. Brézin, Grassmann variables and supersymmetry in the theory of disordered systems, in Applications of Field Theory to Statistical Mechanics (ed. L. Garrido), Lecture Notes in Physics 216(1985), pp. 115-123.
[18] E. Brézin and V.A. Kazakov, Exactly solvable field theories of closed strings, Physics Letters B 236(1990), pp. 144-150.
[19] E. Brézin and A. Zee, Universality of the correlations between eigenvalues of large random matrices, Nucl. Phys. B402(1993), pp. 613-627.
[20] Y. Choquet-Bruhat, Supergravities and Kaluza-Klein theories, in "Topological Properties and Global Structure of Space-Time" (eds. P. Bergmann and V. de Sabbata), New York, Plenum Press, 1986, pp. 31-48.
[21] P. Bryant, DeWitt supermanifolds and infinite dimensional ground rings, J. London Math. Soc. 39(1989), pp. 347-368.
[22] D. Burghelea, L. Friedlander and T. Kappeler, Witten deformation of the analytic torsion and the Reidemeister torsion, Amer. Math. Soc. Transl. 184(1998), pp. 23-39.
[23] R. Catenacci, C. Reina and P. Teofilatto, On the body of supermanifolds, J. Math. Phys. 26(1985), pp. 671-674.
[24] J. Cheeger, Analytic torsion and the heat equation, Ann. of Math. 109(1979), pp. 259-300.
[25] Hung Cheng, Canonical quantization of Yang-Mills theories, pp. 99-115, in Proceedings of the Conference on the Interface between Mathematics and Physics, Academia Sinica, Taipei, Taiwan, 1992.
[26] G.M. Constantine and T.H. Savits, A multivariate Faà di Bruno formula with applications, Transactions of AMS 348(1996), pp. 503-520.
[27] F. Cooper and B. Freedman, Aspects of supersymmetric quantum mechanics, Annals of Physics 146(1983), pp. 262-288.
[28] H.L. Cycon, R.G. Froese, W. Kirsch and B. Simon, Schrödinger Operators with Application to Quantum Mechanics and Global Geometry, New York, Springer-Verlag, 1987.


[29] A.C. Davis, A.J. Macfarlane, P.C. Popat and J.W. van Holten, The quantum mechanics of the supersymmetric nonlinear σ-model, J. Phys. A: Math. Gen. 17(1984), pp. 2945-2954.
[30] N.G. de Bruijn, Asymptotic Methods in Analysis, North-Holland Publ., 1961.
[31] P.A. Deift, Applications of a commutation formula, Duke Math. J. 45(1978), pp. 267-310.
[32] ———, Orthogonal polynomials and random matrices: Riemann-Hilbert approach, Courant Lecture Notes in Mathematics, vol. 3, New York, AMS, 2000.
[33] ———, Four lectures on Random Matrix Theory, Lecture Notes in Math. 1815(2003), Springer, pp. 21-52.
[34] B. DeWitt, Dynamical theory in curved spaces I. A review of the classical and quantum action principles, Reviews of Modern Physics 29(1957), pp. 377-397.
[35] ———, Supermanifolds, London, Cambridge Univ. Press, 1984.
[36] E. D'Hoker and L. Vinet, Supersymmetry of the Pauli equation in the presence of a magnetic monopole, Phys. Lett. 137B(1984), pp. 72-76.
[37] J.A. Dieudonné, Treatise on Analysis, Academic Press, 1960.
[38] J. Dodziuk, Finite difference approach to the Hodge theory of harmonic forms, American J. Math. 98(1976), pp. 79-104.
[39] M. Dreher and I. Witt, Energy estimates for weakly hyperbolic systems of the first order, arXiv:math/0311027, Commun. Contemp. Math. 7(2005), no. 6, pp. 809-837.
[40] F.J. Dyson, Quaternion determinants, Helvetica Physica Acta 45(1972), pp. 289-302.
[41] K.B. Efetov, Supersymmetry and theory of disordered metals, Advances in Physics 32(1983), pp. 53-127.
[42] I. Ekeland, An inverse function theorem in Fréchet spaces, Ann. I.H. Poincaré 28(2011), pp. 91-105.
[43] M.V. Fedoryuk, The stationary phase method and pseudo differential operators, Russian Math. Surveys 26(1971), pp. 65-115.
[44] R. Feynman, Space-time approach to non-relativistic quantum mechanics, Rev. Modern Phys. 20(1948), pp. 367-387.
[45] R. Feynman and A.R. Hibbs, Quantum Mechanics and Path Integrals, New York, McGraw-Hill Book Co., 1965.
[46] C. Foiaş, Statistical study of Navier-Stokes equations I, II, Rend. Sem. Mat. Padova 48, 49(1973), pp. 219-349, 9-123.
[47] P.J. Forrester, The spectrum edge of random matrix ensembles, Nucl. Phys. B402[FS](1993), pp. 709-728.
[48] D. Fujiwara, An approximate positive part of a self-adjoint pseudo-differential operator I, II, Osaka J. Math. 11(1974), pp. 265-281, 283-293.
[49] ———, A construction of the fundamental solution for the Schrödinger equation, J. D'Analyse Math. 35(1979), pp. 41-96.
[50] ———, Remarks on convergence of the Feynman path integrals, Duke Math. J. 47(1980), pp. 559-600.
[51] ———, An integration by parts formula for Feynman path integrals, J. Math. Soc. Japan 65(2013), pp. 1273-1318.
[52] Y.V. Fyodorov, Basic features of Efetov's supersymmetry approach, Les Houches, Session LXI, Physique Quantique Mésoscopique (eds. E. Akkermans, G. Montambaux, J.L. Pichand and J. Zinn-Justin), Elsevier, 1994, pp. 493-532.
[53] C. Garrod, Hamiltonian path-integral methods, Review of Modern Physics 38(1966), pp. 483-494.
[54] I.M. Gelfand, Some questions of analysis and differential equations, Translation Series II, Amer. Math. Soc. 26(1963), pp. 201-219.
[55] E. Getzler, Pseudo-differential operators on supermanifolds and the Atiyah-Singer index theorem, Commun. Math. Phys. 92(1983), pp. 163-178.
[56] C. Grosche, An introduction into the Feynman path integral, arXiv:hep-th/9302097v1.
[57] R. Hamilton, The inverse function theorem of Nash and Moser, Bulletin of AMS 7(1982), pp. 65-222.
[58] B. Helffer and J. Sjöstrand, Multiple wells in the semi-classical limit I, Comm. PDE 9(1984), pp. 337-408.
[59] ———, Puits multiples en mécanique semi-classique IV. Étude du complexe de Witten, Comm. PDE 10(1985), pp. 245-340.
[60] E. Hewitt and K. Stromberg, Real and Abstract Analysis, Springer & Kinokuniya, 1969.
[61] E. Hopf, Statistical hydrodynamics and functional calculus, J. Rat. Mech. Anal. 1(1952), pp. 87-123.
[62] L. Hörmander, On the theory of general partial differential operators, Acta Math. 94(1955), pp. 161-248.
[63] ———, Linear Partial Differential Operators, Springer, 1963.
[64] ———, The Analysis of Linear Partial Differential Operators I-IV, Springer, 1983-85.
[65] N.E. Hurt, Geometric Quantization in Action, Reidel Pub. Co., 1983.
[66] A. Inoue, Some examples exhibiting the procedures of renormalization and gauge fixing – Schwinger-Dyson equations of first order, Kōdai Math. J. 9(1986), pp. 134-160.
[67] ———, Strong and classical solutions of the Hopf equation – an example of functional derivative equation of second order, Tôhoku Math. J. 29(1987), pp. 115-144.
[68] ———, A mathematical link between the Navier-Stokes equation and the Euler equation via the Hopf equation, private memo.
[69] ———, On functional derivative equations – an invitation to functional analysis in functional spaces I, II, Sūgaku (in Japanese) 42(1990), pp. 170-177, 261-265.

[70] ———, On Hopf type functional derivative equations for □u + cu + bu² + au³ = 0 on Ω × R. I. Existence of solutions, J. Math. Anal. Appl. 152(1990), pp. 61-87.
[71] ———, On a construction of the fundamental solution for the free Weyl equation by Hamiltonian path-integral method – an exactly solvable case with "odd variable coefficients", Tôhoku J. Math. 50(1998), pp. 91-118.
[72] ———, On a construction of the fundamental solution for the free Dirac equation by Hamiltonian path-integral method – the classical counterpart of Zitterbewegung, Japanese J. Math. 24(1998), pp. 297-334.
[73] ———, On a "Hamiltonian path-integral" derivation of the Schrödinger equation, Osaka J. Math. 36(1999), pp. 111-150.
[74] ———, Introduction of Superanalysis and its Applications, book in preparation.
[75] ———, A partial solution for Feynman's problem – a new derivation of the Weyl equation, Proceedings of Mathematical Physics and Quantum Field Theory, a symposium celebrating the seventieth birthday of Eyvind H. Wichmann, University of California, Berkeley, June 11-13, 1999, Electronic J. Diff. Eqns., Conf. 04, 2000, pp. 121-145. http://ejde.math.txstate.edu/
[76] ———, Definition and characterization of supersmooth functions on superspace based on Fréchet-Grassmann algebra, http://arxiv.org/abs/0910.3831v4
[77] ———, Remarks on elementary integral calculus for supersmooth functions in R^{m|n}, http://arxiv.org/abs/1408.3874
[78] A. Inoue and T. Funaki, On a new derivation of the Navier-Stokes equation, Commun. Math. Phys. 65(1979), pp. 83-90.
[79] A. Inoue and Y. Maeda, On integral transformations associated with a certain Lagrangian – as a prototype of quantization, J. Math. Soc. Japan 37(1985), pp. 219-244.
[80] A. Inoue and Y. Maeda, Foundations of calculus on super Euclidean space R^{m|n} based on a Fréchet-Grassmann algebra, Kodai Math. J. 14(1991), pp. 72-112.
[81] ———, On a construction of a good parametrix for the Pauli equation by Hamiltonian path-integral method – an application of superanalysis, Japanese J. Math. 29(2003), pp. 27-107.
[82] A. Inoue and Y. Nomura, Some refinements of Wigner's semi-circle law for Gaussian random matrices using superanalysis, Asymptotic Analysis 23(2000), pp. 329-375.
[83] A. Intissar, A remark on the convergence of Feynman path integrals for Weyl pseudo-differential operators on R^n, Commun. in Partial Differential Equations 7(1982), pp. 1403-1437.
[84] C. Itzykson and J.B. Zuber, The planar approximation. II, J. Math. Phys. 21(1980), pp. 411-421.
[85] A. Jadczyk and K. Pilch, Superspaces and supersymmetries, Commun. Math. Phys. 78(1981), pp. 373-390.
[86] ———, Classical limit of CAR and self-duality of the infinite-dimensional Grassmann algebra, in Quantum Theory of Particles and Fields (eds. B. Jancewicz and J. Lukierski), World Sci. Publ. Co., Singapore, 1983, pp. 62-73.
[87] H.H. Keller, Differential Calculus in Locally Convex Spaces, SPLN 417, Berlin-New York-Tokyo-Heidelberg, Springer-Verlag, 1974.
[88] A. Khrennikov, Superanalysis, Mathematics and its Applications, Kluwer Acad. Publ., Dordrecht, 1999.
[89] A. Klein, L.J. Landau and J.P. Perez, Supersymmetry and the Parisi-Sourlas dimensional reduction: a rigorous proof, Commun. Math. Phys. 94(1984), pp. 459-482.
[90] G. Köthe, Topological Linear Spaces I, Berlin-New York-Tokyo-Heidelberg, Springer-Verlag, 1969.
[91] H. Kumano-go, Pseudo-Differential Operators, The MIT Press, Cambridge, 1982 (original Japanese version published by Iwanami, 1974).
[92] N. Kumano-go, A construction of the fundamental solution for Schrödinger equations, J. Math. Sci. Univ. Tokyo 2(1995), pp. 441-498.
[93] ———, Feynman path integrals as analysis on path space by time slicing approximation, Bull. Sci. Math. 128(2004), pp. 197-251.
[94] N. Kumano-go and D. Fujiwara, Phase space Feynman path integrals via piecewise bicharacteristic paths and their semiclassical approximations, Bull. Sci. Math. 132(2008), pp. 313-357.
[95] H.H. Kuo, Gaussian Measures in Banach Spaces, Lecture Notes in Mathematics 463, Heidelberg-New York, Springer-Verlag, 1975.
[96] S. Lang, Introduction to Differentiable Manifolds, Addison-Wesley, Reading, MA, 1972.
[97] D.A. Leites, Introduction to the theory of supermanifolds, Russian Math. Surveys 35(1980), pp. 1-64.
[98] J. Le Rousseau, Fourier-integral-operator approximation of solutions to first-order hyperbolic pseudodifferential equations I: convergence in Sobolev spaces, Comm. PDE 31(2006), pp. 867-906.
[99] J. Mañes and B. Zumino, WKB method, SUSY quantum mechanics and the index theorem, Nuclear Phys. B270(FS16)(1986), pp. 651-686.
[100] S. Matsumoto and K. Kakazu, A note on topology of supermanifolds, J. Math. Phys. 27(1986), pp. 2690-2692.
[101] S. Matsumoto, S. Uehara and Y. Yasui, A superparticle on the super Riemann surface, J. Math. Phys. 31(1990), pp. 476-501.
[102] A. Matytsin, On the large-N limit of the Itzykson-Zuber integral, Nuclear Phys. B411(1994), pp. 805-820.
[103] M.L. Mehta, Random Matrices, 2nd ed., Academic Press, Boston, 1991.

[104] P.A. Mello, Theory of random matrices: spectral statistics and scattering problems, Les Houches, Session LXI, Physique Quantique Mésoscopique (eds. E. Akkermans, G. Montambaux, J.L. Pichand and J. Zinn-Justin), Elsevier, 1994, pp. 435-491.
[105] Y. Miyanishi, Remarks on low-energy approximations for Feynman path integration on the sphere, Adv. Math. Appl. Anal. 9(2014), pp. 41-61 (e-print available at arXiv:1310.1631).
[106] E. Nelson, Feynman integrals and the Schrödinger equation, J. Math. Phys. 5(1964), pp. 332-343.
[107] K. Nishijima, Relativistic Quantum Mechanics (in Japanese), Baifu-kan, 1973.
[108] J. Peetre, Rectification à l'article "Une caractérisation abstraite des opérateurs différentiels", Math. Scand. 8(1960), pp. 116-120.
[109] D.V. Perepelitsa, Path integrals in quantum mechanics, http://web.mit.edu/dvp/www/Work/8.06/dvp-8.06-paper.pdf
[110] M-y. Qi, On the Cauchy problem for a class of hyperbolic equations with the initial data on the parabolic degenerating line, Acta Math. Sinica 8(1958), pp. 521-528.
[111] M. Reed and B. Simon, Methods of Modern Mathematical Physics, vol. I-IV, Academic Press, 1972-79.
[112] S. Rempel and T. Schmitt, Pseudo-differential operators and the index theorem on supermanifolds, Math. Nachr. 111(1983), pp. 153-175.
[113] A. Rogers, A global theory of supermanifolds, J. Math. Phys. 21(1980), pp. 1352-1365.
[114] ———, Integration on supermanifolds, in "Mathematical Aspects of Superspace" (eds. H.-J. Seifert et al.), D. Reidel Pub. Com., 1984, pp. 149-160.
[115] ———, Consistent superspace integration, J. Math. Phys. 26(1985), pp. 385-392.
[116] ———, On the existence of global integral forms on supermanifolds, J. Math. Phys. 26(1985), pp. 2749-2753.
[117] ———, Graded manifolds, supermanifolds and infinite-dimensional Grassmann algebra, Commun. Math. Phys. 105(1986), pp. 375-384.
[118] ———, Realizing the Berezin integral as a superspace contour integral, J. Math. Phys. 27(1986), pp. 710-717.
[119] ———, Supermanifolds – Theory and Applications, World Scientific, 2007.
[120] M.J. Rothstein, Integration on non-compact supermanifolds, Trans. Amer. Math. Soc. 299(1987), pp. 387-396.
[121] L. Schulman, A path integral for spin, Physical Review 176(1968), pp. 1558-1569.
[122] ———, Techniques and Applications of Path Integration, New York, Wiley, 1981.
[123] J.T. Schwartz, The formula for change in variables in a multiple integral, Amer. Math. Monthly 61(1954), pp. 81-85.
[124] ———, Nonlinear Functional Analysis, New York, Gordon-Breach, 1969.
[125] L. Schwartz, Théorie des Distributions, 2nd ed., Paris, Hermann, 1970.
[126] ———, Cours d'Analyse I, Hermann, Paris, 1967.
[127] O.G. Smolyanov and S.V. Fomin, Measures on linear topological spaces, Russian Math. Surveys 31(1976), pp. 1-53.
[128] R.S. Strichartz, A functional calculus for elliptic pseudo-differential operators, American J. Math. 94(1972), pp. 711-722.
[129] E.S. Swanson, A primer on functional methods and the Schwinger-Dyson equations, arxiv.org/pdf/1008.4337v2.pdf
[130] K. Taniguchi and Y. Tozaki, A hyperbolic equation with double characteristics which has a solution with branching singularity, Math. Japonica 25(1980), pp. 279-300.
[131] T. Tao, Topics in random matrix theory, http://terrytao.files.wordpress.com/2011/02/matrix-book.pdf
[132] S. Tomonaga, Spin Rotates – the period of maturity of quantum mechanics (in Japanese), Chuokoronsha, 1974.
[133] C.A. Tracy and H. Widom, Random unitary matrices, permutations and Painlevé, Commun. Math. Phys. 207(1999), pp. 665-685.
[134] ———, Distribution functions for the largest eigenvalues and their applications, ICM 2002, vol. I, pp. 587-596.
[135] F. Treves, Topological Vector Spaces, Distributions and Kernels, Academic Press, 1967.
[136] O.V. Troshkin, On wave properties of an incompressible turbulent fluid, Physica A 168(1990), pp. 881-899.
[137] J. Verbaarschot, The supersymmetric method in random matrix theory and applications to QCD, www.nuclecu.unam.mx/ biker/aip/o9verbaarschot.ps
[138] V.S. Vladimirov and I.V. Volovich, Superanalysis I. Differential calculus, Theor. Math. Phys. 59(1983), pp. 317-335.
[139] ———, Superanalysis II. Integral calculus, Theor. Math. Phys. 60(1984), pp. 743-765.
[140] F.F. Voronov, Geometric integration theory on supermanifolds, Soviet Scientific Reviews, Math. Phys., Gordon and Breach, 1991.
[141] ———, Quantization on supermanifolds and an analytic proof of the Atiyah-Singer index theorem, J. Soviet Math. 64(1993), pp. 993-1069.
[142] H. Widom, A complete symbolic calculus for pseudodifferential operators, Bull. Sc. Math. 104(1980), pp. 19-63.
[143] E. Witten, Supersymmetry and Morse theory, J. Diff. Geom. 17(1982), pp. 661-692.
[144] K. Yagi, Superdifferential calculus, Osaka J. Math. 25(1988), pp. 243-257.
[145] T. Yokonuma, Tensor Spaces and Exterior Algebras, Iwanami lecture series on fundamental mathematics, 1977.
[146] X. Zeng and X. Hou, The universality of the Tracy-Widom F2 distribution, Advances in Mathematics (China) 41(2012), pp. 513-530.

[147] M.R. Zirnbauer, Riemannian symmetric superspaces and their origin in random-matrix theory, J. Math. Phys. 37(1996), pp. 4986-5018.
[148] A. Zvonkin, Matrix integrals and map enumeration: an accessible introduction, Math. Comput. Modelling 26(1997), pp. 281-304.