
arXiv:math/0602425v2 [math.NT] 2 Jan 2008

Scattering, determinants, hyperfunctions in relation to $\frac{\Gamma(1-s)}{\Gamma(s)}$

Jean-François Burnol

Université Lille 1, UFR de Mathématiques, Cité scientifique M2, F-59655 Villeneuve d'Ascq, France

[email protected]

February 19, 2006; revised January 2, 2008

Abstract: The method of realizing certain self-reciprocal transforms as (absolute) scattering, previously presented in summarized form in the case of the Fourier cosine and sine transforms, is applied here to the self-reciprocal transform $\mathcal H(f)(y) = \int_0^\infty J_0(2\sqrt{xy})\,f(x)\,dx$, which is isometrically equivalent to the Hankel transform of order zero and is related to the functional equations of the Dedekind zeta functions of imaginary quadratic fields. This also allows us to re-prove and to extend theorems of de Branges and V. Rovnyak regarding square integrable functions which are self-or-skew reciprocal under the Hankel transform of order zero. Related integral formulae involving various Bessel functions are all established internally to the method. Fredholm determinants of the kernel $J_0(2\sqrt{xy})$ restricted to finite intervals $(0,a)$ give the coefficients of first and second order differential equations whose associated scattering is (isometrically) the transform $\mathcal H$. The remarkable distributions involved in this analysis are seen to have most natural expressions as (differences of) boundary values of analytic functions, i.e. as hyperfunctions. The present work is completely independent from the previous study by the author of the same transform, which centered around the Klein-Gordon equation and relativistic causality. In an appendix, we make a simple-minded observation regarding the resolvent of the Dirichlet kernel as a Hilbert space reproducing kernel.

keywords: Hankel transform; Scattering; Fredholm determinants.

Contents

1 Introduction (the idea of co-Poisson)

2 Hardy spaces and the de Branges-Rovnyak isometric expansion

3 Tempered distributions and their $\mathcal H$ and Mellin transforms

4 A group of distributions and related integral formulas

5 Orthogonal projections and evaluators

6 Fredholm determinants, the first order differential system, and scattering

7 The $K$-Bessel function in the theory of the $\mathcal H$ transform

8 The reproducing kernel and differential equations for the extended spaces

9 Hyperfunctions in the study of the $\mathcal H$ transform

10 Appendix: a remark on the resolvent of the Dirichlet kernel

1 Introduction (the idea of co-Poisson)

We explain the underlying framework and the general contours of this work. Throughout the paper, we have tried to formulate the theorems in such a form that one can, for most of them, read their statements without having studied the preceding material in its entirety, so that a sufficiently clear idea of the results and methods is easily accessible. Setting up here all the notations and preliminaries necessary for stating the results would have taken up too much space.

The Riemann zeta function $\zeta(s) = \frac1{1^s} + \frac1{2^s} + \frac1{3^s} + \cdots$ is a meromorphic function in the entire complex plane with a simple pole at $s = 1$, of residue $1$. Its functional equation is usually written in one of the following two forms:

$$\pi^{-\frac s2}\,\Gamma\Big(\frac s2\Big)\,\zeta(s) = \pi^{-\frac{1-s}2}\,\Gamma\Big(\frac{1-s}2\Big)\,\zeta(1-s) \qquad (1a)$$

$$\zeta(s) = \chi_0(s)\,\zeta(1-s)\,,\qquad \chi_0(s) = \pi^{s-\frac12}\,\frac{\Gamma(\frac{1-s}2)}{\Gamma(\frac s2)} \qquad (1b)$$
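As a quick sanity check (no part of the argument depends on it), the form (1b) can be verified numerically at an arbitrary sample point; a sketch assuming the `mpmath` library is available:

```python
# Numerical sanity check of (1b): zeta(s) = chi_0(s) * zeta(1-s),
# with chi_0(s) = pi^(s - 1/2) * Gamma((1-s)/2) / Gamma(s/2).
from mpmath import mp, mpc, zeta, gamma, pi

mp.dps = 30                      # working precision (decimal digits)
s = mpc('0.3', '14.1')           # arbitrary test point
chi0 = pi**(s - mp.mpf(1) / 2) * gamma((1 - s) / 2) / gamma(s / 2)
diff = abs(zeta(s) - chi0 * zeta(1 - s))
assert diff < mp.mpf('1e-20')
```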

The former is related to the expression of $\pi^{-\frac s2}\Gamma(\frac s2)\zeta(s)$ as a left Mellin transform¹ and to the Jacobi identity:

$$\pi^{-\frac s2}\,\Gamma\Big(\frac s2\Big)\,\zeta(s) = \frac12\int_0^\infty (\theta(t)-1)\,t^{\frac s2-1}\,dt \qquad (\Re(s)>1) \qquad (2a)$$

$$\phantom{\pi^{-\frac s2}\,\Gamma\Big(\frac s2\Big)\,\zeta(s)} = \frac12\int_0^\infty \Big(\theta(t)-1-\frac1{\sqrt t}\Big)\,t^{\frac s2-1}\,dt \qquad (0<\Re(s)<1) \qquad (2b)$$

$$\theta(t) = 1 + 2\sum_{n\ge1} q^{n^2}\,,\quad q = e^{-\pi t}\,,\qquad \theta(t) = \frac1{\sqrt t}\,\theta\Big(\frac1t\Big) \qquad (2c)$$

¹in the left Mellin transform the kernel is $t^{s-1}$, in the right Mellin transform it is $x^{-s}$.
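The Jacobi identity (2c) is exact, and its rapidly convergent series makes it easy to illustrate numerically; a minimal sketch (the cutoff and test point are arbitrary choices):

```python
# Numerical check of the Jacobi identity (2c): theta(t) = theta(1/t) / sqrt(t),
# truncating the rapidly convergent series; N = 50 is an arbitrary ample cutoff.
import math

def theta(t, N=50):
    # theta(t) = 1 + 2 sum_{n >= 1} exp(-pi n^2 t)
    return 1 + 2 * sum(math.exp(-math.pi * n * n * t) for n in range(1, N + 1))

t = 0.37                         # arbitrary test point
assert abs(theta(t) - theta(1 / t) / math.sqrt(t)) < 1e-10
```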

The latter form of the functional equation is related to the expression of $\zeta(s)$ as the right Mellin transform of a tempered distribution with support in $[0,+\infty)$, which is self-reciprocal under the Fourier cosine transform:²

$$\zeta(s) = \int_0^\infty \Big(\sum_{m\ge1}\delta_m(x) - 1\Big)\,x^{-s}\,dx \qquad (3a)$$

$$\int_0^\infty 2\cos(2\pi xy)\Big(\sum_{n\ge1}\delta_n(y) - 1\Big)\,dy = \sum_{m\ge1}\delta_m(x) - 1 \qquad (x>0) \qquad (3b)$$

This last identity may be written in the more familiar form:

$$\int_{\mathbb R} e^{2\pi ixy}\sum_{n\in\mathbb Z}\delta_n(y)\,dy = \sum_{m\in\mathbb Z}\delta_m(x) \qquad (4)$$

which expresses the invariance of the "Dirac comb" distribution $\sum_{m\in\mathbb Z}\delta_m(x)$ under the Fourier transform. As a linear functional on Schwartz functions $\phi$, the invariance of $\sum_{m\in\mathbb Z}\delta_m(x)$ under Fourier is expressed as the Poisson summation formula:

$$\sum_{n\in\mathbb Z}\phi(n) = \sum_{m\in\mathbb Z}\widetilde\phi(m)\,,\qquad \widetilde\phi(y) = \int_{\mathbb R} e^{2\pi ixy}\phi(x)\,dx \qquad (5)$$

The Jacobi identity is the special instance with $\phi(x) = \exp(-\pi tx^2)$, and conversely the validity of (5) for Schwartz functions (and more) may be seen as a corollary to the Jacobi identity.
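Formula (5) can also be probed directly; a small sketch with a shifted Gaussian (the shift $a$ and the cutoff $N$ are arbitrary illustrative choices), using that with the $e^{2\pi ixy}$ kernel the transform of $e^{-\pi(x-a)^2}$ is $e^{2\pi iay}e^{-\pi y^2}$:

```python
# Numerical check of Poisson summation (5) for phi(x) = exp(-pi (x - a)^2);
# with the e^{2 pi i x y} kernel of the text, phi_tilde(y) = exp(2 pi i a y) exp(-pi y^2).
import cmath, math

a, N = 0.3, 30                   # arbitrary shift and cutoff
lhs = sum(math.exp(-math.pi * (n - a) ** 2) for n in range(-N, N + 1))
rhs = sum(cmath.exp(2j * math.pi * a * m) * math.exp(-math.pi * m * m)
          for m in range(-N, N + 1))
assert abs(lhs - rhs) < 1e-12
```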

The idea of co-Poisson [4] leads to another formulation of the functional equation as an identity involving functions. The co-Poisson identity ((10) below) appeared in the work of Duffin and Weinberger [13]. In one of the approaches to this identity, we start with a function g on the positive

half-line such that both $\int_0^\infty g(t)\,dt$ and $\int_0^\infty g(t)\,t^{-1}\,dt$ are absolutely convergent. Then we consider the averaged distribution $g*D(x) = \int_0^\infty g(t)\,D(\frac xt)\,\frac{dt}t$ where $D(x) = \sum_{n\ge1}\delta_n(x) - 1_{x>0}(x)$. This gives (for $x>0$):

$$g*D(x) = \sum_{n=1}^\infty\frac{g(x/n)}n - \int_0^\infty\frac{g(1/t)}t\,dt \qquad (6)$$

If $g$ is smooth with support in $[a,A]$, $0<a<A<\infty$, then $g*D$ is smooth on $(0,+\infty)$, constant on $(0,a)$, and of Schwartz decrease at $+\infty$.

²of course, $\delta_m(x) = \delta(x-m)$.

³one observes that $\widehat{I(g)}(s) = \widehat g(1-s)$.

One may reinterpret this in a manner involving the cosine transform $\mathcal C$ acting on $L^2(0,+\infty;dx)$. The Mellin transform of a function $f(x)$ in $L^2(0,\infty;dx)$ is a function $\widehat f(s)$ on $\Re(s)=\frac12$ which is nothing else than the Fourier-Plancherel transform of $e^{\frac u2}f(e^u)$: $\widehat f(\frac12+i\gamma) = \int_0^\infty f(x)\,x^{-\frac12-i\gamma}\,dx = \int_{-\infty}^\infty f(e^u)\,e^{\frac u2}\,e^{-i\gamma u}\,du$, $\int_0^\infty|f(x)|^2\,dx = \int_{-\infty}^\infty|f(e^u)e^{\frac u2}|^2\,du = \frac1{2\pi}\int_{\Re(s)=\frac12}|\widehat f(s)|^2\,|ds|$. The unitary operator $\mathcal CI$ is scale invariant hence it is diagonalized by the Mellin transform: $\widehat{\mathcal CI(f)}(s) = \chi_0(s)\,\widehat f(s)$, $\widehat{\mathcal C(f)}(s) = \chi_0(s)\,\widehat f(1-s)$, where $\chi_0(s)$ is obtained for example using $f(x) = e^{-\pi x^2}$ and coincides with the chi-function defined in (1b). It has modulus $1$ on the critical line as $\mathcal C$ is unitary. So (8) says that the co-Poisson intertwining identity holds:

$$\mathcal C(g*D) = I(g)*D \qquad (9)$$

The co-Poisson intertwining (9), or explicitly:

$$\int_0^\infty 2\cos(2\pi xy)\Big(\sum_{m=1}^\infty\frac{g(x/m)}m - \int_0^\infty\frac{g(1/t)}t\,dt\Big)\,dx = \sum_{n=1}^\infty\frac{g(n/y)}y - \int_0^\infty g(t)\,dt \qquad (y>0) \qquad (10)$$

is, when $g$ is smooth with support in $[a,A]$, $0<a<A<\infty$, an identity between two smooth functions on $(0,+\infty)$ of Schwartz decrease at $+\infty$.
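The Mellin-Plancherel unitarity invoked above can be illustrated concretely for $f(x)=e^{-x}$, whose right Mellin transform on the critical line is $\Gamma(\frac12-i\gamma)$; a sketch assuming `mpmath` is available:

```python
# Illustration of Mellin-Plancherel unitarity for f(x) = e^{-x}:
# f_hat(1/2 + i gamma) = Gamma(1/2 - i gamma), and
# int_0^inf |f|^2 dx = (1/(2 pi)) int |f_hat(1/2 + i gamma)|^2 d gamma.
from mpmath import mp, quad, gamma, mpc, pi, inf

mp.dps = 20
lhs = quad(lambda x: mp.exp(-2 * x), [0, inf])                    # ||f||^2 = 1/2
rhs = quad(lambda g: abs(gamma(mpc(0.5, -g)))**2, [-inf, inf]) / (2 * pi)
assert abs(lhs - mp.mpf(1) / 2) < mp.mpf('1e-15')
assert abs(rhs - lhs) < mp.mpf('1e-10')
```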

If the integrable function $g$ has its support in $[a,A]$, $0<a\le A<\infty$, and satisfies $\int_0^\infty g(t)\,dt = \int_0^\infty g(t)\,t^{-1}\,dt = 0$, this produces Schwartz functions on $(0,\infty)$ vanishing on an interval $(0,\alpha)$, with cosine transform again vanishing on $(0,\alpha)$, where $\alpha>0$ is arbitrary [8]. Schwartz functions are square-integrable, so here we have made contact with the investigation of de Branges [1], V. Rovnyak [28] and J. and V. Rovnyak [29, 30] of square integrable functions on $(0,\infty)$ vanishing on $(0,a)$ and with Hankel transform of order $\nu$ vanishing on $(0,a)$. For $\nu = -\frac12$ the Hankel transform of order $\nu$ is $f(y)\mapsto\sqrt{\frac2\pi}\int_0^\infty\cos(xy)\,f(y)\,dy$ and up to a scale change this is the cosine transform considered above. The co-Poisson idea allows to attach the zeta function to, among the spaces defined by de Branges [1], the spaces associated with the cosine transform: it has allowed the definition of some novel Hilbert spaces [3] of entire functions in relation with the Riemann zeta function and Dirichlet $L$-functions (the co-Poisson idea is in [4] on the adeles of an arbitrary algebraic number field $K$; then, the study of the related Hilbert spaces was begun for $K = \mathbb Q$. Further results were obtained in [7].)

⁴both sides in fact depend only on $E(x)+E(-x)$ as a distribution on the line, which may be identically $0$, and this happens exactly when $E$ is a linear combination of odd derivatives of the delta function.

The study of the function $\chi_0(s) = \pi^{s-\frac12}\,\Gamma(\frac{1-s}2)/\Gamma(\frac s2)$, of unit modulus on the critical line, is interesting. We proposed to realize the $\chi_0$ function as a "scattering matrix". This is indeed possible and has been achieved in [6]. The distributions, functions, and differential equations involved are all related to, or expressed by, the Fredholm determinants of the finite cosine transform, which in turn are related to the Fredholm determinants of the finite Dirichlet kernels $\frac{\sin(t(x-y))}{\pi(x-y)}$ on $[-1,1]$. The study of the Dirichlet kernels is a topic with a vast literature. A minor remark will be made in an appendix.

We mentioned the Riemann zeta function and how it relates to $\chi_0(s) = \pi^{s-\frac12}\,\Gamma(\frac{1-s}2)/\Gamma(\frac s2)$ and to the cosine transform. Let us now briefly consider the Dedekind zeta function of the Gaussian number field $\mathbb Q(i)$ and how it relates to $\chi(s) = \frac{\Gamma(1-s)}{\Gamma(s)}$ and to the $\mathcal H$ transform. The $\mathcal H$ transform is

$$\mathcal H(g)(y) = \int_0^\infty J_0(2\sqrt{xy})\,g(x)\,dx\,,\qquad J_0(2\sqrt{xy}) = \sum_{n=0}^\infty(-1)^n\,\frac{x^n y^n}{n!^2} \qquad (11)$$

Up to the unitary transformation $g(x) = (2x)^{-\frac14}f(\sqrt{2x})$, $\mathcal H(g)(y) = (2y)^{-\frac14}k(\sqrt{2y})$, it becomes the Hankel transform of order zero $k(y) = \int_0^\infty\sqrt{xy}\,J_0(xy)\,f(x)\,dx$. It is a self-reciprocal, unitary, scale reversing operator ($\mathcal H(g(\lambda x))(y) = \frac1\lambda\,\mathcal H(g)(\frac y\lambda)$). We shall also extend its action to tempered distributions on $\mathbb R$ with support in $[0,+\infty)$. At the level of right Mellin transforms of elements of $L^2(0,\infty;dx)$ it acts as:

$$\widehat{\mathcal H(g)}(s) = \chi(s)\,\widehat g(1-s)\,,\qquad \chi(s) = \frac{\Gamma(1-s)}{\Gamma(s)}\,,\qquad \Re(s) = \frac12 \qquad (12)$$
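The transform (11) is easy to probe numerically; a sketch (assuming `scipy` is available) on the self-reciprocal function $e^{-x}$, for which $\mathcal H(g)$ should reproduce $g$:

```python
# Direct numerical probe of (11): H(g)(y) = int_0^inf J_0(2 sqrt(x y)) g(x) dx,
# checked on g(x) = e^{-x}, which should be self-reciprocal.
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def H_of_exp(y):
    # the factor e^{-x} damps the oscillation of J_0, so [0, inf) quadrature is safe
    val, _err = quad(lambda x: j0(2 * np.sqrt(x * y)) * np.exp(-x), 0, np.inf)
    return val

for y in (0.1, 1.0, 3.5):        # arbitrary test points
    assert abs(H_of_exp(y) - np.exp(-y)) < 1e-6
```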

It has $e^{-x}1_{x\ge0}(x)$ as one among its self-reciprocal functions, as is verified directly by series expansion: $\int_0^\infty J_0(2\sqrt{xy})\,e^{-y}\,dy = \sum_{n=0}^\infty\frac{(-1)^n}{n!^2}x^n\int_0^\infty y^n e^{-y}\,dy = e^{-x}$. The identity

$$\int_0^\infty J_0(2\sqrt t)\,t^{-s}\,dt = \chi(s) = \frac{\Gamma(1-s)}{\Gamma(s)} \qquad (13)$$

is equivalent to a special case of well-known formulas of Weber, Sonine and Schafheitlin [33, 13.24.(1)]. Here we have an absolutely convergent integral for $\frac34<\Re(s)<1$ and in that range the identity may be proven as in: $e^{-x} = \int_0^\infty J_0(2\sqrt{xy})\,e^{-y}\,dy = \int_0^\infty J_0(2\sqrt y)\,\frac1x\,e^{-\frac yx}\,dy$, $\Gamma(1-s) = \int_0^\infty J_0(2\sqrt y)\Big(\int_0^\infty x^{-s-1}\,e^{-\frac yx}\,dx\Big)dy = \Gamma(s)\int_0^\infty J_0(2\sqrt y)\,y^{-s}\,dy$. The integral is semi-convergent for $\Re(s)>\frac14$, and of course (13) still holds. In particular on the critical line, writing $t = e^u$, $s = \frac12+i\gamma$, we obtain the identities of tempered distributions $\int_{\mathbb R} e^{\frac u2}J_0(2e^{\frac u2})\,e^{-i\gamma u}\,du = \chi(\frac12+i\gamma)$, $e^{\frac u2}J_0(2e^{\frac u2}) = \frac1{2\pi}\int_{\mathbb R}\chi(\frac12+i\gamma)\,e^{+i\gamma u}\,d\gamma$.

We have $\zeta_{\mathbb Q(i)}(s) = \frac14\sum_{(n,m)\ne(0,0)}\frac1{(n^2+m^2)^s} = \frac1{1^s} + \frac1{2^s} + \frac1{4^s} + \frac1{5^s} + \frac1{8^s} + \cdots = \sum_{n\ge1}\frac{c_n}{n^s}$, and it is a meromorphic function in the entire complex plane with a simple pole at $s = 1$, residue $\frac\pi4$. Its functional equation assumes at least two convenient well-known forms:

$$(\sqrt4)^s\,(2\pi)^{-s}\,\Gamma(s)\,\zeta_{\mathbb Q(i)}(s) = (\sqrt4)^{1-s}\,(2\pi)^{-(1-s)}\,\Gamma(1-s)\,\zeta_{\mathbb Q(i)}(1-s) \qquad (14a)$$

$$\Big(\frac1\pi\Big)^s\zeta_{\mathbb Q(i)}(s) = \chi(s)\,\Big(\frac1\pi\Big)^{1-s}\zeta_{\mathbb Q(i)}(1-s)\,,\qquad \chi(s) = \frac{\Gamma(1-s)}{\Gamma(s)} \qquad (14b)$$
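As an independent cross-check of the lattice-sum definition of $\zeta_{\mathbb Q(i)}$, one may compare it at $s=2$ with the classical factorization $\zeta_{\mathbb Q(i)}(s) = \zeta(s)L(s,\chi_{-4})$ (this factorization is not used in the text; it is recalled here purely for verification, and $L(2,\chi_{-4})$ is Catalan's constant):

```python
# Truncated lattice sum for zeta_{Q(i)}(2) = (1/4) sum' (n^2+m^2)^{-2},
# compared with zeta(2) * L(2, chi_{-4}) = (pi^2/6) * Catalan.
import math

N = 300                                   # arbitrary cutoff; the tail is O(1/N^2)
lattice = sum(1.0 / (n * n + m * m) ** 2
              for n in range(-N, N + 1) for m in range(-N, N + 1)
              if (n, m) != (0, 0)) / 4
CATALAN = 0.9159655941772190              # L(2, chi_{-4})
assert abs(lattice - (math.pi ** 2 / 6) * CATALAN) < 1e-4
```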

The former is related to the expression of $\pi^{-s}\Gamma(s)\,\zeta_{\mathbb Q(i)}(s)$ as a left Mellin transform:

$$\pi^{-s}\,\Gamma(s)\,\zeta_{\mathbb Q(i)}(s) = \frac14\int_0^\infty(\theta(t)^2-1)\,t^{s-1}\,dt \qquad (\Re(s)>1) \qquad (15a)$$

$$\phantom{\pi^{-s}\,\Gamma(s)\,\zeta_{\mathbb Q(i)}(s)} = \frac14\int_0^\infty\Big(\theta(t)^2-1-\frac1t\Big)\,t^{s-1}\,dt \qquad (0<\Re(s)<1) \qquad (15b)$$

$$\theta(t)^2 = \frac1t\,\theta\Big(\frac1t\Big)^2 \qquad (15c)$$

The latter form of the functional equation is related to the expression of $(\frac1\pi)^s\zeta_{\mathbb Q(i)}(s)$ as the right Mellin transform of a tempered distribution which is supported in $[0,\infty)$ and which is self-reciprocal under the $\mathcal H$-transform:

$$\Big(\frac1\pi\Big)^s\zeta_{\mathbb Q(i)}(s) = \int_0^\infty\Big(\sum_{m\ge1}c_m\,\delta_{\pi m}(x) - \frac14\Big)\,x^{-s}\,dx \qquad (16a)$$

$$\int_0^\infty J_0(2\sqrt{xy})\Big(\sum_{n\ge1}c_n\,\delta_{\pi n}(y) - \frac14\Big)\,dy = \sum_{m\ge1}c_m\,\delta_{\pi m}(x) - \frac14\,1_{x>0}(x) = E(x) \qquad (x>0) \qquad (16b)$$

The invariance of $E$ under the $\mathcal H$-transform is equivalent to the validity of the functional equation of $(\frac1\pi)^s\zeta_{\mathbb Q(i)}(s)$ and to its having a pole with residue $\frac14$ at $s = 1$. The co-Poisson intertwining becomes the assertion:

$$y>0 \implies \int_0^\infty J_0(2\sqrt{xy})\Big(\sum_{m=1}^\infty c_m\,\frac{g(x/\pi m)}{\pi m} - \frac14\int_0^\infty g\Big(\frac1t\Big)\frac{dt}t\Big)\,dx = \sum_{n=1}^\infty c_n\,\frac{g(\pi n/y)}{y} - \frac14\int_0^\infty g(t)\,dt \qquad (17)$$

If $g$ is smooth with support in $[b,B]$, $0<b<B<\infty$, this produces functions which are constant on an interval $(0,a)$, $a>0$, with $\mathcal H$ transform again constant on $(0,a)$, and which have Schwartz decrease at $+\infty$ (the two constants being arbitrarily prescribed.)

De Branges and V. Rovnyak have obtained [1, 28] rather complete results in the study of the

Hankel transform of order zero $f(x)\mapsto g(y) = \int_0^\infty\sqrt{xy}\,J_0(xy)\,f(x)\,dx$ from the point of view of understanding the support property of being zero, and with transform again zero, in a given interval $(0,b)$. They obtained an isometric expansion (Theorem 1 of section 2) and also the detailed description of the related spaces of entire functions ([1]). The more complicated case of the Hankel transforms of non-zero integer orders was treated by J. and V. Rovnyak [29, 30]. These rather complete results are an indication that the Hankel transform of order zero or of integer order is easier to understand than the cosine or sine transforms, and that understanding it thoroughly could be useful in approaching the cosine and sine transforms.

The kernel $J_0(2\sqrt{uv})$ of the $\mathcal H$-transform satisfies the Klein-Gordon equation in the variables $x = v-u$, $t = v+u$:

$$\Big(\frac{\partial^2}{\partial u\,\partial v} + 1\Big)J_0(2\sqrt{uv}) = \Big(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} + 1\Big)J_0\big(\sqrt{t^2-x^2}\big) = 0 \qquad (18)$$

It is a noteworthy fact that the support condition, initially considered by de Branges and V. Rovnyak, and which, nowadays, is also seen to be in relation with the co-Poisson identities, has turned out to be related to the relativistic causality governing the propagation of solutions to the Klein-Gordon equation. This has been established in [9], where we obtained as an application of this idea the isometric expansion of [1, 28] in a novel manner. It was furthermore proven in [9] that the $\mathcal H$ transform is indeed an (absolute) scattering, in fact the scattering from the past boundary to the future boundary of the Rindler wedge $0<|t|<x$, for solutions of a first order, two-component ("Dirac") form of the KG equation.

In the present paper, which is completely independent from [9], we shall again study the $\mathcal H$-transform and show in particular how to recover in yet a different way the earlier results of [1, 28], and also we shall extend them. This will be based on the methods from [5, 6], and uses the techniques motivated by the study of the co-Poisson idea [8]. Our exposition will thus give a fully detailed account of the material available in summarized form in [5, 6]. Then we proceed with a development of these methods to provide the elucidation of the (two dimensions bigger) spaces of functions constant on $(0,a)$ and with $\mathcal H$-transforms constant on $(0,a)$.

The use of tempered distributions is an important point of our approach⁵; also one may envision the co-Poisson idea as asking not to completely identify a distribution with the linear functional it "is". In this regard it is of note that the distributions which arise following the method of [5] are seen, in the present case of the study of the $\mathcal H$-transform, to have a very natural formulation as differences of boundary values of analytic functions, that is, as hyperfunctions [23]. We do not use the theory of hyperfunctions as such, but we could not see how not to mention that this is what these distributions seem to be in a natural manner.

The paper contains no . And, the reader will need no prior knowledge of [2]; some familiarity with the $m$-function of Hermann Weyl [10, 21, 26] is necessary at one stage of the discussion (there is much common ground, in fact, between the properties of the $m$-function and the axioms of [2]). The reproducing kernel in any space with the axioms of [2] has a specific appearance (equation (109) below), which has been used as a guide to what we should be looking for. The validity of the formula is re-proven in the specific instance considered here⁶. Regarding the differential equations governing the deformation, with respect to the parameter $a>0$⁷, of the

⁵at the bottom of page 456 of [1] the formulas given for $A(a,z)$ and $B(a,z)$ as completed Mellin transforms are lacking terms which would correspond to Dirac distributions; possibly related to this, the isometric expansion as presented in Theorem II of [1] is lacking corresponding terms. The exact isometric expansion appears in [28] and the exact formulas for $A(a,z)$ and $B(a,z)$ as completed Mellin transforms appear, in an equivalent form, in [30, eq. (37)].

⁶the critical line here plays the rôle of the real axis in [2]; $s$ is $\frac12 - iz$ and the use of the variable $s$ is most useful in distinguishing the right Mellin transforms, which need to be completed by a Gamma factor, from the left Mellin transforms of "theta"-like functions.

⁷the $a$ here corresponds to $\frac12 a^2$ in [1].

Hilbert spaces, we depart from the general formalism of [2] and obtain them in a canonical form, as defined in [21, §3]. Interestingly this is related to the fact that the $A$ and $B$ functions (connected to the reproducing kernel, equation (109)) which are obtained by the method of [5] turn out not to be normalized according to the rule in general use in [2]. Each rule of normalization has its own advantages; here the equations are obtained in the Schrödinger and Dirac forms familiar from the spectral theory of linear second order differential equations [10, 21, 26]. This allows us to make reference to the well-known Weyl-Stone-Titchmarsh-Kodaira theory [10, 21, 26], and to understand $\mathcal H$ as a scattering. Regarding spaces with the axioms of [2], the articles of Dym [14] and Remling [27] will be useful to the reader interested in second order linear differential equations. And we refer the reader with number theoretical interests to the recent papers of Lagarias [18, 19].

The author has been confronted with a dilemma: a substantial portion of the paper (most of chapters 5, 6, 8) has a general validity for operators having a kernel of the multiplicative type $k(xy)$ possessing certain properties in common with the cosine, sine or $\mathcal H$ transforms. But on the other hand the (essentially) unique example where all quantities arising may be computed is the $\mathcal H$ transform (and transforms derived from it, or closely related to it, such as the Hankel transforms of integer orders). We have tried to give proofs whose generality is obvious, but we have also made full use of distributions, as this allows us to give very natural expressions to the quantities arising. Also we never hesitate to use arguments of analyticity, although for some topics (for example, some aspects involving certain integral equations and Fredholm determinants) this is certainly not really needed.

2 Hardy spaces and the de Branges-Rovnyak isometric expansion

Let us state the isometric expansion of [1, 28] regarding the square integrable Hankel transforms of order zero. We reformulate the theorem to express it with the $\mathcal H$ transform (11) rather than the Hankel transform of order zero.

Theorem 1 ([1], [28]). Let $k\in L^2(0,\infty;dx)$. The functions $f_1$ and $g_1$, defined as the following integrals:

$$f_1(y) = \int_y^\infty J_0\big(2\sqrt{y(x-y)}\big)\,k(x)\,dx\,, \qquad (19a)$$

$$g_1(y) = k(y) - \int_y^\infty\sqrt{\frac{y}{x-y}}\,J_1\big(2\sqrt{y(x-y)}\big)\,k(x)\,dx\,, \qquad (19b)$$

exist in $L^2$ in the sense of mean-square convergence, and they verify:

$$\int_0^\infty\big(|f_1(y)|^2 + |g_1(y)|^2\big)\,dy = \int_0^\infty|k(x)|^2\,dx\,. \qquad (19c)$$

The function $k$ is given in terms of the pair $(f_1,g_1)$ as:

$$k(x) = g_1(x) + \int_0^x J_0\big(2\sqrt{y(x-y)}\big)\,f_1(y)\,dy - \int_0^x\sqrt{\frac{y}{x-y}}\,J_1\big(2\sqrt{y(x-y)}\big)\,g_1(y)\,dy \qquad (19d)$$

The assignment $k\mapsto(f_1,g_1)$ is a unitary equivalence of $L^2(0,\infty;dx)$ with $L^2(0,\infty;dy)\oplus L^2(0,\infty;dy)$ such that the $\mathcal H$-transform acts as $(f_1,g_1)\mapsto(g_1,f_1)$. Furthermore $k$ and $\mathcal H(k)$ both identically vanish on $(0,a)$ if and only if $f_1$ and $g_1$ both identically vanish on $(0,a)$.

Let us mention the following (which follows from the proof we have given of Thm. 1 in [9]): if $f_1$, $f_1'$, $g_1$, $g_1'$ are in $L^2$ then $k$, $k'$ and $\mathcal H(k)'$ are in $L^2$. Conversely if $k$, $k'$ and $\mathcal H(k)'$ are in $L^2$ then the integrals defining $f_1(y)$ and $g_1(y)$ are convergent for each $y>0$ as improper Riemann integrals, and $f_1'$ and $g_1'$ are in $L^2$.

It will prove convenient to work with $(f(x),g(x)) = \frac12\big(g_1(\frac x2)+f_1(\frac x2)\,,\;g_1(\frac x2)-f_1(\frac x2)\big)$:

$$f(y) = \frac12\,k\Big(\frac y2\Big) + \frac12\int_{y/2}^\infty\Big(J_0\big(\sqrt{y(2x-y)}\big) - \sqrt{\frac{y}{2x-y}}\,J_1\big(\sqrt{y(2x-y)}\big)\Big)\,k(x)\,dx \qquad (20a)$$

$$g(y) = \frac12\,k\Big(\frac y2\Big) - \frac12\int_{y/2}^\infty\Big(J_0\big(\sqrt{y(2x-y)}\big) + \sqrt{\frac{y}{2x-y}}\,J_1\big(\sqrt{y(2x-y)}\big)\Big)\,k(x)\,dx \qquad (20b)$$

$$k(x) = f(2x) + \frac12\int_0^{2x}\Big(J_0\big(\sqrt{y(2x-y)}\big) - \sqrt{\frac{y}{2x-y}}\,J_1\big(\sqrt{y(2x-y)}\big)\Big)\,f(y)\,dy$$
$$\qquad\qquad + g(2x) - \frac12\int_0^{2x}\Big(J_0\big(\sqrt{y(2x-y)}\big) + \sqrt{\frac{y}{2x-y}}\,J_1\big(\sqrt{y(2x-y)}\big)\Big)\,g(y)\,dy \qquad (20c)$$

$$\int_0^\infty|k(x)|^2\,dx = \int_0^\infty\big(|f(y)|^2 + |g(y)|^2\big)\,dy \qquad (20d)$$

The $\mathcal H$ transform on $k$ acts as $(f,g)\mapsto(f,-g)$. The pair $(k,\mathcal H(k))$ identically vanishes on $(0,a)$ if and only if the pair $(f,g)$ identically vanishes on $(0,2a)$. The structure of the formulas is more apparent after observing $(x,y>0)$:

$$\frac{\partial}{\partial x}\Big(\frac12\,J_0\big(\sqrt{y(2x-y)}\big)\,1_{0<y<2x}(y)\Big) = \delta_{2x}(y) - \frac12\,\sqrt{\frac{y}{2x-y}}\,J_1\big(\sqrt{y(2x-y)}\big)\,1_{0<y<2x}(y) \qquad (21)$$

To a function $k\in L^2(0,\infty;dx)$ we associate the analytic function

$$\widetilde k(\lambda) = \int_0^\infty e^{i\lambda x}\,k(x)\,dx \qquad (\Im(\lambda)>0) \qquad (22)$$

with boundary values for $\lambda\in\mathbb R$ again written $\widetilde k(\lambda)$, which define an element of $L^2(\mathbb R,\frac{d\lambda}{2\pi})$, the assignment $k\mapsto\widetilde k$ being unitary from $L^2(0,\infty;dx)$ onto $H^2(\Im(\lambda)>0,\frac{d\lambda}{2\pi})$. Next we have the conformal equivalence and its associated unitary map from $H^2(\Im(\lambda)>0,\frac{d\lambda}{2\pi})$ to $H^2(|w|<1,\frac{d\theta}{2\pi})$:

$$w = \frac{\lambda-i}{\lambda+i}\,,\qquad K(w) = \frac1{\sqrt2}\,\frac{\lambda+i}{i}\,\widetilde k(\lambda) \qquad (23)$$

It is well known that this indeed unitarily identifies the two Hardy spaces. With $k_0(x) = e^{-x}$, $\widetilde{k_0}(\lambda) = \frac{i}{\lambda+i}$, $K_0(w) = \frac1{\sqrt2}$, and $\|k_0\|^2 = \int_0^\infty e^{-2x}\,dx = \frac12 = \|K_0\|^2$. The functions $\widetilde{k_n}(\lambda) = \big(\frac{\lambda-i}{\lambda+i}\big)^n\frac{i}{\lambda+i}$ correspond to $K_n(w) = \frac1{\sqrt2}\,w^n$. To obtain explicitly the orthogonal basis $(k_n)_{n\ge0}$, we first observe that $w = 1 - \frac{2i}{\lambda+i}$, so as a unitary operator it acts as:

$$w\cdot k(x) = k(x) - 2\int_0^x e^{-(x-y)}\,k(y)\,dy = k(x) - 2\,e^{-x}\int_0^x e^{y}\,k(y)\,dy \qquad (24)$$

Writing $k_n(x) = P_n(x)\,e^{-x}$ we thus obtain $P_{n+1}(x) = P_n(x) - 2\int_0^x P_n(y)\,dy$:

$$P_n(x) = \Big(1 - 2\int_0^x\Big)^n\,1 = \sum_{j=0}^n\binom nj\,\frac{(-2x)^j}{j!} \qquad (25)$$

So as is well-known $P_n(x) = L_n^{(0)}(2x)$ (in the notation of [31, §5]) where the Laguerre polynomials $L_n^{(0)}(x)$ are an orthonormal system for the weight $e^{-x}\,dx$ on $(0,\infty)$.

One of the most common manners to be led to the $\mathcal H$-transform is to define it from the two-dimensional Fourier transform as:

$$\mathcal H(f)\Big(\frac{r^2}2\Big) = \frac1{2\pi}\iint e^{i(x_1y_1+x_2y_2)}\,f\Big(\frac{y_1^2+y_2^2}2\Big)\,dy_1\,dy_2 = \int_0^\infty\Big(\frac1{2\pi}\int_0^{2\pi} e^{irs\cos\theta}\,d\theta\Big)f\Big(\frac{s^2}2\Big)\,s\,ds$$

$$\mathcal H(f)\Big(\frac{r^2}2\Big) = \int_0^\infty J_0(rs)\,f\Big(\frac{s^2}2\Big)\,s\,ds\,,\qquad r^2 = x_1^2+x_2^2\,,\; s^2 = y_1^2+y_2^2 \qquad (26)$$

which proves its unitarity, self-adjointness, and self-reciprocal character, and the fact that it has $e^{-x}$ as self-reciprocal function. The direct verification of $\mathcal H(k_0) = k_0$ is immediate: $\mathcal H(k_0)(x) = \int_0^\infty J_0(2\sqrt{xy})\,e^{-y}\,dy = \sum_{n=0}^\infty\frac{(-1)^n}{n!^2}\,x^n\int_0^\infty y^n e^{-y}\,dy = e^{-x}$. Then, $\mathcal H(e^{-tx}) = t^{-1}e^{-\frac xt}$ for each $t>0$. So $\int_0^\infty e^{-tx}\,\mathcal H(k)(x)\,dx = t^{-1}\int_0^\infty e^{-\frac xt}\,k(x)\,dx$, hence:

$$\forall k\in L^2(0,\infty;dx)\qquad \widetilde{\mathcal H(k)}(\lambda) = \frac i\lambda\,\widetilde k\Big(\frac{-1}\lambda\Big) \qquad (27)$$

With the notation $\mathcal H(K)$ for the function in $H^2(|w|<1)$ corresponding to $\mathcal H(k)$, we obtain from (23), (27), an extremely simple result:⁸

$$\mathcal H(K)(w) = K(-w) \qquad (28)$$

This obviously leads us to associate to $K(w) = \sum_{n=0}^\infty c_n w^n$ the functions:

$$F(w) := \sum_{n=0}^\infty c_{2n}\,w^n \qquad (29a)$$

$$G(w) := \sum_{n=0}^\infty c_{2n+1}\,w^n \qquad (29b)$$

$$K(w) = F(w^2) + w\,G(w^2) \qquad (29c)$$

and to $k$ the functions $f$ and $g$ in $L^2(0,+\infty;dx)$ corresponding to $F$ and $G$. Certainly, $\|k\|^2 = \|f\|^2 + \|g\|^2$, and the assignment of $(f,g)$ to $k$ is an isometric identification. Furthermore, certainly

⁸we also take note of the operator identity $\mathcal H\cdot w = -w\cdot\mathcal H$.

the $\mathcal H$ transform acts in this picture as $(f,g)\mapsto(f,-g)$. Let us now check the support properties. Let $\alpha(m)$ be the leftmost point of the (essential) support of a given $m\in L^2(0,\infty;dx)$. As is well-known,

$$\alpha(m) = -\limsup_{t\to+\infty}\frac1t\,\log|\widetilde m(it)|\,, \qquad (30)$$

If $w$ corresponds to $\lambda$ via (23) then $w^2$ corresponds to $\frac12(\lambda-\frac1\lambda)$, so if to a function $f$ with corresponding $F(w)$ we associate the function $\psi(f)\in L^2(0,\infty;dx)$ which corresponds to $F(w^2)$,

$$(t+1)\,\widetilde{\psi(f)}(it) = \Big(\frac{t+\frac1t}{2}+1\Big)\,\widetilde f\Big(i\,\frac{t+\frac1t}{2}\Big)\,, \qquad (31)$$

then we have the identity:

$$\alpha(\psi(f)) = \frac12\,\alpha(f) \qquad (32)$$

Returning to $F$ (resp. $f$) and $G$ (resp. $g$) associated via (29a), (29b) to $K$ (resp. $k$), we thus have $k = \psi(f) + w\cdot\psi(g)$, $\mathcal H(k) = \psi(f) - w\cdot\psi(g)$, hence if the pair $(f,g)$ vanishes on $(0,2a)$ then the pair $(k,\mathcal H(k))$ vanishes on $(0,a)$ (clearly the unitary operator of multiplication by $w = \frac{\lambda-i}{\lambda+i}$ does not affect $\alpha(m)$.) Conversely, as $\alpha(f) = 2\alpha(k+\mathcal H(k))$ and $\alpha(g) = 2\alpha(k-\mathcal H(k))$, if the pair $(k,\mathcal H(k))$ vanishes on $(0,a)$ then the pair $(f,g)$ vanishes on $(0,2a)$.
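As an aside, the orthonormality underlying the basis $(k_n)$ of (25), namely that the $k_n(x) = L_n^{(0)}(2x)e^{-x}$ are pairwise orthogonal with $\|k_n\|^2 = \frac12 = \|K_n\|^2$, is easy to confirm numerically (a sketch assuming `scipy`; the orders tested are arbitrary):

```python
# Check that k_n(x) = L_n(2x) e^{-x} are orthogonal in L^2(0, inf; dx)
# with ||k_n||^2 = 1/2, matching ||K_n||^2 = 1/2 in the Hardy space picture.
import numpy as np
from scipy.special import eval_laguerre
from scipy.integrate import quad

def inner(n, m):
    val, _err = quad(lambda x: eval_laguerre(n, 2 * x) * eval_laguerre(m, 2 * x)
                     * np.exp(-2 * x), 0, np.inf)
    return val

assert abs(inner(0, 0) - 0.5) < 1e-8
assert abs(inner(3, 3) - 0.5) < 1e-8
assert abs(inner(2, 5)) < 1e-8
```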

We have thus established the existence of an isometric expansion, its support properties, and its relation to the -transform. That there is indeed compatibility of (20a) and (20b) with (29a) and H (29b), and with (20c), will be established in the last section (9) of the paper with a direct study of (31). In the meantime equations (20a), (20b), (20c) and (20d) will have been confirmed in another manner. Yet another proof of the isometric expansion has been given in [9].

3 Tempered distributions and their $\mathcal H$ and Mellin transforms

Any distribution $D$ on $\mathbb R$ has a primitive. If the closed support of $D$ is included in $[0,+\infty)$, then it has a unique primitive which also has its support in $[0,+\infty)$; we will denote it $\int_0^x D(x)\,dx$ or, more safely, $D^{(-1)}$. The temperedness of such a $D$ is equivalent to the fact that $D^{(-N)}$ for $N\gg0$ is a continuous function with polynomial growth. With $D^{(-N)}(x) = (1+x^2)^M g_{(N,M)}(x)$, $M\gg0$, we can express $D$ as $P(x,\frac d{dx})(g)$ where $P$ is a polynomial and $g\in L^2(0,\infty;dx)$. Conversely any such expression is a tempered distribution vanishing on $(-\infty,0)$. The Fourier transforms $\widetilde D(\lambda)$ of such tempered distributions appear thus as the boundary values of $Q(\frac d{d\lambda},\lambda)\,f(\lambda)$ where the $Q$ are polynomials and the $f$'s belong to $H^2(\Im(\lambda)>0)$. As taking primitives is allowed, we know without further ado that this class of analytic functions is the same thing as the space of functions $g(\lambda) = R(\frac d{d\lambda},\lambda,\lambda^{-1})\,f(\lambda)$, $R$ a polynomial and $f\in H^2$. It is thus clearly left stable by the operation:

$$g\mapsto\mathcal H(g)(\lambda) := \frac i\lambda\,g\Big(\frac{-1}\lambda\Big) \qquad (\Im(\lambda)>0) \qquad (33)$$

which will serve to define the action of $\mathcal H$ on tempered distributions with support in $[0,+\infty)$.

Let us also use (33), where now $\lambda\in\mathbb R$, to define $\mathcal H$ as a unitary operator on $L^2(-\infty,+\infty;dx)$. It will anti-commute with $f(x)\mapsto f(-x)$, so:

$$\mathcal H(f)(x) = \int_{-\infty}^\infty\big(J_0(2\sqrt{xy})\,1_{x>0}(x)1_{y>0}(y) - J_0(2\sqrt{xy})\,1_{x<0}(x)1_{y<0}(y)\big)\,f(y)\,dy \qquad (34)$$

Useful operator identities are easily established from (33):

$$x\frac d{dx}\cdot\mathcal H = -\mathcal H\cdot\frac d{dx}x \quad\text{and}\quad \frac d{dx}x\cdot\mathcal H = -\mathcal H\cdot x\frac d{dx} \qquad (35a)$$

$$\frac d{dx}\cdot\mathcal H = \mathcal H\cdot\int_0^x \quad\text{and}\quad \int_0^x\cdot\,\mathcal H = \mathcal H\cdot\frac d{dx} \qquad (35b)$$

$$x\cdot\mathcal H = -\mathcal H\cdot\frac d{dx}x\frac d{dx} \quad\text{and}\quad \mathcal H\cdot x = -\frac d{dx}x\frac d{dx}\cdot\mathcal H \qquad (35c)$$

It is important that $\frac d{dx}$ is always taken in the distribution sense. It would actually be possible to define the action of $\mathcal H$ on distributions supported in $[0,+\infty)$ without mention of the Fourier transform, because these identities uniquely determine $\mathcal H(D)$ if $D$ is written $(\frac d{dx})^N(1+x)^M g_{N,M}(x)$ with $g_{N,M}\in L^2(0,\infty;dx)$. But the proof then needs some organizing, as it is necessary to check independence from the choice of $N$ and $M$, and also to establish afterwards all the identities above. So (33) provides the easiest road. Still, in this context, let us mention the following, which relates to the restriction of $\mathcal H(D)$ to $(0,+\infty)$:

Lemma 2. Let $k$ be smooth on $\mathbb R$ with compact support in $[0,+\infty)$. Then $\mathcal H(k)$ is the restriction to $[0,+\infty)$ of an entire function $\gamma$ which has Schwartz decrease as $x\to+\infty$. For any tempered distribution $D$ with support in $[0,+\infty)$, there holds

$$\int_0^\infty\mathcal H(D)(x)\,k(x)\,dx = \int_0^\infty D(x)\,\gamma(x)\,dx\,, \qquad (36)$$

where the right hand side means in fact $\int_{-\epsilon}^\infty D(x)\,\gamma(x)\,\theta(x)\,dx$, where the smooth function $\theta$ is $1$ for $x\ge-\frac\epsilon3$ and $0$ for $x\le-\frac\epsilon2$ and is otherwise arbitrary (as is $\epsilon$).

Let us suppose k = 0 for x>B. Defining:

$$\gamma(x) = \int_0^B J_0(2\sqrt{xy})\,k(y)\,dy \qquad (37)$$

we obtain an entire function and, according to our definitions, $\mathcal H(k)(x) = \gamma(x)\,1_{x>0}(x)$ as a distribution or a square-integrable function. Using (35c) ($\mathcal H = -\frac1x\,\mathcal H\cdot\frac d{dx}x\frac d{dx}$ for $x>0$) and bounding $J_0$ by $1$ we see (by induction) that $\gamma$ is $O(x^{-N})$ for any $N$ as $x\to+\infty$, and using (35a) ($\frac d{dx}\cdot\mathcal H = -\frac1x\,\mathcal H\cdot\frac d{dx}x$ for $x>0$) the same applies to its derivative and also to its higher derivatives. So it is of the Schwartz class for $x\to+\infty$.

Replacing $D$ by $\mathcal H(D)$ in (36), it will be more convenient to prove:

$$\int_0^\infty D(x)\,k(x)\,dx = \int_0^\infty\mathcal H(D)(x)\,\gamma(x)\,dx \qquad (38)$$

If (38) holds for $D$ (and all $k$'s) then $\langle D',k\rangle = -\langle D,k'\rangle = -\langle\mathcal H(D),\,\theta(x)\int_0^x\gamma(y)\,dy\rangle$ (observe that $\int_0^x\gamma(y)\,dy = \mathcal H(k')(x)$ vanishes at $+\infty$), so $\langle D',k\rangle = +\langle\mathcal H(D)^{(-1)},\theta\gamma\rangle = \langle\mathcal H(D'),\theta\gamma\rangle$, hence (38) holds as well for $D'$ (and all $k$'s). So we may assume $D$ to be a continuous function of polynomial growth. It is also checked using (35c) that if (38) holds for $D$ it holds for $xD$. So we may reduce to $D$ being square-integrable, and the statement then follows from the self-adjointness of $\mathcal H$ on $L^2$ (or we reduce to Fubini).

The behavior of $\mathcal H$ with respect to the translations $\tau_a: f(x)\mapsto f(x-a)$ is important. For $f\in L^2(\mathbb R;dx)$ the value of $a$ is arbitrary and we can define

$$\tau_a^{\#} := \mathcal H\,\tau_a\,\mathcal H \qquad (39a)$$

$$\widetilde{\tau_a(f)}(\lambda) = e^{ia\lambda}\,\widetilde f(\lambda) \qquad (39b)$$

$$\widetilde{\tau_a^{\#}(f)}(\lambda) = e^{-ia\frac1\lambda}\,\widetilde f(\lambda) \qquad (39c)$$

We observe the remarkable commutation relations (which would fail for the cosine or sine transforms):

$$\forall a,b\qquad \tau_a\,\tau_b^{\#} = \tau_b^{\#}\,\tau_a \qquad (40)$$

For a distribution $D$ the action of $\tau_a^{\#}$ is here defined only for $a\ge-\alpha(\mathcal H(D))$, where $\alpha(E)$ is the leftmost point of the closed support of the distribution $E$. On this topic, from the validity of (30) when $f\in L^2(0,\infty;dx)$, and from the invariance of $\alpha$ under derivation⁹, integration, and multiplication by $x$, one has:

$$\alpha(E) = -\limsup_{t\to+\infty}\frac1t\,\log|\widetilde E(it)| \qquad (41)$$

We thus have the property, not shared by the cosine or sine transforms:

$$a\ge-\alpha(\mathcal H(D)) \implies \alpha(\tau_a^{\#}(D)) = \alpha(D) \qquad (42)$$

We now consider $D$ with $\alpha(D)>0$ and $\alpha(\mathcal H(D))>0$ and prove that its Mellin transform is an entire function with trivial zeros at $0,-1,-2,\dots$, following the method of regularization by multiplicative convolution and co-Poisson intertwining from [8]. The other proof, very classical in spirit, shall be presented later. The latter method is shorter but the former provides complementary information.

In [8, §4.A] the detailed explanations relative to the notion of multiplicative convolution are given:

$$(g*D)(x)\;\text{``=''}\;\int_{\mathbb R} g(t)\,D\Big(\frac xt\Big)\,\frac{dt}{|t|}\,, \qquad (43)$$

where we will in fact always take $g$ to have compact support in $(0,+\infty)$. It is observed that

$$g*xD = x\Big(\frac gx*D\Big)\,,\qquad (g*D)' = \frac gx*D' \qquad (44)$$

⁹It is important, in order to avoid a possible confusion, to insist on the fact that $\frac d{dx}$ is always taken in the distribution sense; so for example $\frac d{dx}1_{x>0} = \delta(x)$ indeed has the leftmost point of its support not affected by $\frac d{dx}$.

The notion of right Mellin transform $\widehat D(s) = \int_0^\infty D(x)\,x^{-s}\,dx$ is developed in [8, §4.C], for $D$ with support in $[a,+\infty)$, $a>0$:

$$\widehat D(s) = s(s+1)\cdots(s+N-1)\,\widehat{D^{(-N)}}(s+N)\,, \qquad (45)$$

where $N\gg0$. The meaning of $\widehat D$ is as the maximal possible analytic continuation from a half-plane $\Re(s)>\sigma$, where $\sigma$ is as far to the left as possible. The notion is extended¹⁰ in [8, §4.F] to the case where the restriction of $D$ to $(-a,a)$ is "quasi-homogeneous" (for example, constant on $(0,a)$, as is the case for the distribution $E$ of (16b)).

Theorem 3. Let $D$ be a tempered distribution with a positive leftmost point of support and such that $\mathcal H(D)$ also has a positive leftmost point of support. Let $g$ be a smooth function with compact support in $(0,\infty)$. Then the multiplicative convolution $g*D$ belongs to the Schwartz class.

This is the analog of [8, Thm 4.29]. The function $k(t) = (Ig)(t) = \frac{g(1/t)}t$ is defined and it is written as $k = \mathcal H(\gamma\,1_{x>0})$ where $\gamma$ is the entire function, of Schwartz decrease at $+\infty$, such that $\mathcal H(k) = \gamma\cdot1_{x>0}$. Then it is observed that

$$t>0 \implies (g*D)(t) = \int_0^\infty D(x)\,\frac{k(x/t)}t\,dx = \int_0^\infty\mathcal H(D)(x)\,\gamma(tx)\,dx \qquad (47)$$

We have used Lemma 2. Then the Schwartz decrease of $\int_0^\infty\mathcal H(D)(x)\,\gamma(tx)\,dx$ as $t\to+\infty$ is established as is done at the end of the proof of [8, Thm 4.29], integrating by parts enough times to transform $\mathcal H(D)$ into a continuous function of polynomial growth, identically zero on $[0,c]$, $c>0$.

Theorem 4. Let $D$ be a tempered distribution with a positive leftmost point of support and such that $\mathcal H(D)$ also has a positive leftmost point of support. Then $\widehat D(s)$ and $\Gamma(s)\widehat D(s)$ are entire functions, and:

$$\Gamma(s)\,\widehat D(s) = \Gamma(1-s)\,\widehat{\mathcal H(D)}(1-s) \qquad (48)$$

We first establish:

Theorem 5 ("co-Poisson intertwining"). Let D be a tempered distribution supported in [0, +∞) and let g be an integrable function with compact support in (0, ∞). Then, with (Ig)(t) = g(1/t)/t:

    H(g ∗ D) = (Ig) ∗ H(D)        (49)

Footnote 10: if D is near the origin a function with an analytic character, then straightforward elementary arguments allow a complementary discussion. However, if D is just an element of L²(0, ∞; dx) then D̂ is a square-integrable function on the critical line, and nothing more nor less.
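Before the proof, the intertwining (49) lends itself to a direct numerical sanity check (my own, assuming numpy/scipy). I take D(x) = e^{−x}, which is H-self-reciprocal, and g = 1_{(1,2)}; both sides of (49) are then ordinary functions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def conv_gD(t):
    # (g*D)(t) = int_1^2 exp(-t/u) du/u, with g = indicator of (1,2), D = exp(-x)
    return quad(lambda u: np.exp(-t / u) / u, 1.0, 2.0)[0]

def H(f, x, upper=120.0):
    # (H f)(x) = int_0^inf J0(2 sqrt(xy)) f(y) dy (truncated; f decays like e^{-y/2})
    return quad(lambda y: j0(2.0 * np.sqrt(x * y)) * f(y), 0.0, upper, limit=300)[0]

x = 1.3
lhs = H(conv_gD, x)                    # H(g*D)(x)
# (Ig)(t) = g(1/t)/t is supported on (1/2,1), and H(D) = D here, so
# (Ig * H(D))(x) = int_{1/2}^1 exp(-x/t) dt/t^2
rhs = quad(lambda t: np.exp(-x / t) / t**2, 0.5, 1.0)[0]
print(lhs, rhs)
```

Both evaluations agree (and both equal (e^{−x} − e^{−2x})/x, as a change of variables shows).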

Let us first suppose that D is an L² function. In that case, we will use the Mellin–Plancherel transform f ↦ f̂(s) = ∫_0^∞ f(t) t^{−s} dt, for f square integrable and ℜ(s) = 1/2. Then \widehat{g ∗ f} is, changing variables, the Fourier transform of an additive convolution where one of the two factors has compact support, well known to be the product ĝ · f̂. We need also to understand the Mellin transform of H(f). Let us suppose f_t(x) = exp(−tx). Then H(f_t) = (1/t) f_{1/t} has Mellin transform \widehat{H(f_t)}(s) = t^{−s} Γ(1−s), and f̂_t(s) = t^{s−1} Γ(1−s), so we have the identity for such f's:

    \widehat{H(f)}(s) = (Γ(1−s)/Γ(s)) f̂(1−s)        (50)

The linear combinations of the f_t's are dense in L², so (50) holds for all f's as an identity of square integrable functions on the critical line. We are now in a position to check the intertwining:

    \widehat{H(g ∗ f)}(s) = (Γ(1−s)/Γ(s)) ĝ(1−s) f̂(1−s) = \widehat{Ig}(s) \widehat{H(f)}(s) = \widehat{Ig ∗ H(f)}(s) .

For the case of an arbitrary distribution it will then be sufficient to check that if (49) holds for D it holds for xD and for D′. This is easily done using (44). We have g ∗ D′ = ((xg) ∗ D)′, so H(g ∗ D′) = ∫_0^x H((xg) ∗ D) = ∫_0^x ((Ig)/x) ∗ H(D) = Ig ∗ ∫_0^x H(D) = Ig ∗ H(D′). A similar proof is done for xD. This completes the proof of the intertwining.

Theorem 4 is then established as is [8, Thm 4.30]. We pick an arbitrary g smooth with compact support in (0, ∞). We know by Theorem 3 that g ∗ D is a Schwartz function as x → +∞, and certainly it vanishes identically in a neighborhood of the origin, so \widehat{g ∗ D}(s) = ĝ(s) D̂(s) is an entire function. So D̂(s) is a meromorphic function in the entire complex plane, in fact an entire function as g is arbitrary. We then use the intertwining and (50) for square integrable functions. This gives ĝ(1−s) \widehat{H(D)}(s) = \widehat{Ig ∗ H(D)}(s) = \widehat{H(g ∗ D)}(s) = (Γ(1−s)/Γ(s)) ĝ(1−s) D̂(1−s).
Hence, indeed, after replacing s by 1−s:

    Γ(s) D̂(s) = Γ(1−s) \widehat{H(D)}(1−s)        (51)

The left-hand side may have poles only at 0, −1, ..., and the right-hand side only at 1, 2, .... So both sides are entire functions and D̂(s) has trivial zeros at 0, −1, −2, ....

We now give another proof of Theorem 4, which is more classical, as it is the descendant of the second of Riemann's proofs, and is the familiar one from the theory of L-series and modular functions. The existence of two complementary proofs is instructive, as it helps to better understand

the rôle of the right Mellin transform ∫_0^∞ f(x) x^{−s} dx vs. the left Mellin transform ∫_0^∞ θ(it) t^{s−1} dt.
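The Mellin-transform identity (50) from the first proof can itself be checked numerically, with H(f) computed by brute-force quadrature rather than from its closed form (a sanity check of mine, assuming numpy/scipy; f(x) = e^{−2x} and s = 0.3 are arbitrary choices with 0 < s < 1 so that both Mellin integrals converge absolutely):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, gamma

t, s = 2.0, 0.3

def Hf(x):
    # H(e^{-t y})(x) computed by quadrature (closed form would be (1/t) e^{-x/t})
    return quad(lambda y: j0(2.0 * np.sqrt(x * y)) * np.exp(-t * y), 0.0, 60.0, limit=300)[0]

lhs = quad(lambda x: Hf(x) * x**(-s), 0.0, 80.0, limit=300)[0]          # Mellin of H(f) at s
f_hat = quad(lambda x: np.exp(-t * x) * x**(s - 1.0), 0.0, 80.0, limit=300)[0]  # f^(1-s)
rhs = gamma(1.0 - s) / gamma(s) * f_hat
print(lhs, rhs)
```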

To the distribution D we associate its "theta" function (footnote 11) θ_D(λ) = ∫_0^∞ e^{iλx} D(x) dx, which is an analytic function for ℑ(λ) > 0 (footnote 12).

Footnote 11: the author hopes to be forgiven this temporary terminology in a situation where only the behavior under λ ↦ −1/λ is at work.
Footnote 12: we adopt the usual notation, and consider θ_D as a function of it rather than t.

Right from the beginning we have:

    θ_{H(D)}(it) = (1/t) θ_D(i/t)        (52)

If the leftmost point of the support of D is positive then θ_D(it) has exponential decrease as t → +∞ and ∫_1^∞ θ_D(it) t^{s−1} dt is an entire function. If also the leftmost point of support of H(D) is positive

then θ_{H(D)}(it) has exponential decrease as t → +∞ and ∫_0^1 θ_D(it) t^{s−1} dt = ∫_1^∞ θ_{H(D)}(it) t^{−s} dt is an entire function. So, under the support property considered in Theorem 4, 𝒟(s) := ∫_0^∞ θ_D(it) t^{s−1} dt is indeed an entire function, and the functional equation is

    𝒟(s) = 𝒟^*(1−s)        (53)

with 𝒟^*(s) = ∫_0^∞ θ_{H(D)}(it) t^{s−1} dt. To conclude we also need to establish:

    𝒟(s) = Γ(s) D̂(s)        (54)

We shall prove this for ℜ(s) ≫ 0 under the hypothesis that D has support in [a, +∞), a > 0 (no hypothesis on H(D)). In that case, as θ_D(it) is O(t^{−N}) for a certain N as t → 0 (t > 0), and is of exponential decrease as t → +∞, we can define 𝒟(s) = ∫_0^∞ θ_D(it) t^{s−1} dt as an analytic function for ℜ(s) ≫ 0. Let us suppose that D is a continuous function which is O(x^{−2}) as x → +∞. Then, for ℜ(s) > 0, the identity (54) holds as an application of the Fubini theorem. We then apply our usual method to check that if (54) holds for D it also holds for xD and for D′. For this, obviously, we need things such as \widehat{D′}(s) = s D̂(s+1) [8, 4.15] and \widehat{xD}(s) = D̂(s−1), the formulas θ_{D′} = −iλ θ_D, θ_{xD} = −i (∂/∂λ) θ_D, and Γ(s+1) = s Γ(s). The verifications are then straightforward.

In summary we have seen how the support property for D and H(D) is related in two complementary manners to the functional equation, one using the right Mellin transform D̂(s) of D and the idea of co-Poisson, the other using the left Mellin transform 𝒟(s) of the "theta" function θ_D associated to D as an analytic function on the upper half-plane and the behavior of θ_D(it) under t ↦ 1/t. It is possible to push further the analysis and to characterize the class of entire functions 𝒟(s) = Γ(s) D̂(s), as has been done in [8] in the case of the cosine and sine transforms. It is also explained in [8] how the discussion extends to allow finitely many poles. The proofs and statements given there are easily adapted to the case of the H transform. Only the case of poles at 1 and 0 will be needed here, and this corresponds either to the condition that D and H(D) both restrict in (−a, a) for some a > 0 to multiples of the Dirac distribution δ, or, that they are both constant in [0, a) for some a > 0. We recall that the Mellin transform D̂(s) is defined in such a manner that it is not affected by either subtracting δ or 1_{x>0} from D.
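The inversion relation (52) behind this second proof is easy to test numerically (my own check, assuming numpy/scipy; D = e^{−x} restricted to x > 1 is an arbitrary distribution with leftmost point of support 1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

t = 0.7
D = lambda y: np.exp(-y)                 # D supported on [1, +inf)

def HD(x):
    # H(D)(x) = int_1^inf J0(2 sqrt(xy)) e^{-y} dy
    return quad(lambda y: j0(2.0 * np.sqrt(x * y)) * D(y), 1.0, 60.0, limit=300)[0]

theta_HD = quad(lambda x: np.exp(-t * x) * HD(x), 0.0, 100.0, limit=300)[0]  # theta_{H(D)}(it)
theta_D = quad(lambda x: np.exp(-x / t) * D(x), 1.0, 60.0)[0]                # theta_D(i/t)
print(theta_HD, theta_D / t)
```

The two printed numbers agree, as (52) predicts.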

4 A group of distributions and related integral formulas

We now derive some integral identities which will prove central. The identities will be re-obtained later as the outcome of a less direct path. We are interested in the tempered distribution g_a(x) whose Fourier transform is exp(−ia λ^{−1}). Indeed τ_a^#(f) (equation (39a)) is the additive convolution of f with g_a: we note that g_a differs from δ(x) by a square integrable function, as 1 − exp(−ia λ^{−1}) = O_{|λ|→∞}(|λ|^{−1}); so there is a convolution formula τ_a^#(f) = f − f_a ∗ f for a certain square integrable

function f_a. For f ∈ L², the convolution f_a ∗ f, as the Fourier transform of an L¹-function, is continuous on ℝ. Starting from the identity exp(−ia λ^{−1}) = −iλ · (i/λ) exp(−ia λ^{−1}), we identify g_a for a ≥ 0 as (∂/∂x) H(δ_a). It is important that ∂/∂x is taken in the distribution sense. So we have, simply:

    g_a(x) = δ(x) − (a J_1(2√(ax)) / √(ax)) 1_{x>0}(x)    (a ≥ 0)        (55)
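Formula (55) can be sanity-checked numerically (my check, assuming numpy/scipy): evaluating the Fourier transform at λ = it, i.e. in Laplace form, (55) says that ∫_0^∞ e^{−tx} √(a/x) J_1(2√(ax)) dx should equal 1 − e^{−a/t}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

a, t = 1.0, 1.0
# f_a(x) = sqrt(a/x) J1(2 sqrt(ax)); its Laplace transform at t should be 1 - exp(-a/t),
# since the transform of g_a = delta - f_a is exp(-ia/lambda) evaluated at lambda = i t.
lhs = quad(lambda x: np.exp(-t * x) * np.sqrt(a / x) * j1(2.0 * np.sqrt(a * x)),
           0.0, 80.0, limit=300)[0]
rhs = 1.0 - np.exp(-a / t)
print(lhs, rhs)
```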

If a < 0 then g_a(x) = g_{−a}(−x), f_a(x) = f_{−a}(−x). So:

    g_{−a}(x) = δ(x) − (a J_1(2√(−ax)) / √(−ax)) 1_{x<0}(x)    (−a ≤ 0)        (56)

The group property under the additive convolution, g_a ∗ g_b = g_{a+b}, leads to remarkable integral identities f_{a+b} = f_a + f_b − f_a ∗ f_b involving the Bessel functions. The pointwise validity is guaranteed by continuity; the Plancherel identity confirms the identity, where f_a(x) = (a J_1(2√(ax))/√(ax)) 1_{x>0}(x) for a ≥ 0 and f_{−a}(x) = f_a(−x):

    f_{a+b} = f_a + f_b − f_a ∗ f_b        (57)

At x = 0 the pointwise identity is obtained by continuity from either x > 0 or x < 0. We have essentially two cases: g_a ∗ g_b for a, b ≥ 0 and g_a ∗ g_{−b} for a ≥ b ≥ 0. The following is obtained:

Proposition 6. Let a ≥ b ≥ 0 and x ≥ 0. There holds:

    (a+b) J_1(2√((a+b)x)) / √((a+b)x)
        = a J_1(2√(ax))/√(ax) + b J_1(2√(bx))/√(bx)
          − ∫_0^x (a J_1(2√(ay))/√(ay)) (b J_1(2√(b(x−y)))/√(b(x−y))) dy        (58a)

    (a−b) J_1(2√((a−b)x)) / √((a−b)x)
        = a J_1(2√(ax))/√(ax) − ∫_x^∞ (a J_1(2√(ay))/√(ay)) (b J_1(2√(b(y−x)))/√(b(y−x))) dy        (58b)

    0 = b J_1(2√(bx))/√(bx) − ∫_0^∞ (a J_1(2√(ay))/√(ay)) (b J_1(2√(b(y+x)))/√(b(y+x))) dy        (58c)

Exchanging a and b and changing variables we combine (58b) and (58c) into one single equation for x ≥ 0 and a, b ≥ 0:

    ((a−b) J_1(2√((a−b)x)) / √((a−b)x)) 1_{a−b≥0}(a−b)
        = a J_1(2√(ax))/√(ax) − ∫_0^∞ (a J_1(2√(a(y+x)))/√(a(y+x))) (b J_1(2√(by))/√(by)) dy        (59)

The formula for x = 0 in (59) is obtained by continuity. It is equivalent to

    ∫_0^∞ J_1(u) J_1(cu) du/u = (1/2) min(c, 1/c)    (c > 0)        (60)

which is a very special case of formulas of Weber, Sonine and Schafheitlin ([33, 13.42.(1)]). Another interesting special case of (59) is for a = b. The formula becomes

    J_1(2√x)/√x = ∫_0^∞ (J_1(2√y)/√y) (J_1(2√(x+y))/√(x+y)) dy        (61)

which is equivalent to a special case of a formula of Sonine ([33, 13.48.(12)]).
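These identities can be verified numerically; here is a quick check of the finite-interval convolution identity (58a), which is the most robust to quadrature (my own sanity check, assuming numpy/scipy; the values a = 2, b = 1, x = 1.5 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

def f(c, y):
    # f_c(y) = c J1(2 sqrt(cy)) / sqrt(cy) = 2c J1(u)/u with u = 2 sqrt(cy);
    # J1(u)/u -> 1/2 as u -> 0, so f_c(0) = c
    u = 2.0 * np.sqrt(c * y)
    return c if u < 1e-8 else 2.0 * c * j1(u) / u

a, b, x = 2.0, 1.0, 1.5
conv = quad(lambda y: f(a, y) * f(b, x - y), 0.0, x)[0]   # (f_a * f_b)(x)
lhs = f(a + b, x)
rhs = f(a, x) + f(b, x) - conv
print(lhs, rhs)
```

The agreement is to quadrature accuracy, as expected since (58a) is an identity of entire functions.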

We already mentioned the equation (∂²/∂u∂v) J_0(2√(uv)) = −J_0(2√(uv)). New identities are obtained from (59) or (58a) after taking either the a or the b derivative. We investigate (59) no further, as the corresponding semi-convergent integrals, in a form or another, are certainly among the formulas of [33, §13]. Let us rather focus more closely on the case a, b ≥ 0 ((58a)). We have a function which is entire in a, b, and x, and the identity holds for all complex values of a, b, and x. Let us take the derivative with respect to a:

    J_0(2√((a+b)x)) = J_0(2√(ax)) − ∫_0^x J_0(2√(ay)) (b J_1(2√(b(x−y)))/√(b(x−y))) dy        (62)

We replace b by −b and then set x = b. This gives:

    I_0(2√(b(b−a))) = J_0(2√(ba)) + ∫_0^b J_0(2√(ay)) (b I_1(2√(b(b−y)))/√(b(b−y))) dy        (63)

We take the derivative of (62) with respect to b:

    − x J_1(2√((a+b)x)) / √((a+b)x) = − ∫_0^x J_0(2√(ay)) J_0(2√(b(x−y))) dy        (64)

Then we replace b by −b and set x = b:

    b I_1(2√(b(b−a))) / √(b(b−a)) = ∫_0^b J_0(2√(ay)) I_0(2√(b(b−y))) dy        (65)

Combining (63) and (65) by addition and subtraction we discover that we have solved certain integral equations:

    φ_b^+(x) = I_0(2√(b(b−x))) − b I_1(2√(b(b−x)))/√(b(b−x)) = (1 + ∂/∂x) I_0(2√(b(b−x)))        (66a)

    φ_b^−(x) = I_0(2√(b(b−x))) + b I_1(2√(b(b−x)))/√(b(b−x)) = (1 − ∂/∂x) I_0(2√(b(b−x)))        (66b)

    φ_b^+(x) + ∫_0^b J_0(2√(xy)) φ_b^+(y) dy = J_0(2√(bx))        (66c)

    φ_b^−(x) − ∫_0^b J_0(2√(xy)) φ_b^−(y) dy = J_0(2√(bx))        (66d)

The significance will appear later in the paper and we leave the matter here. The method was devised after the importance of solving equations (66c) and (66d) had emerged, and after the solutions (66a) and (66b) had been obtained as the outcome of a more indirect path. Of course, direct verification by replacement of the Bessel functions by their series expansions is possible and easy.
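Besides the series verification, one can check numerically that the closed forms (66a), (66b) do solve the integral equations (66c), (66d) (my own sanity check, assuming numpy/scipy; b = 1 and the sample points are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, i0, i1

b = 1.0

def ratio_i1(u):
    # I1(u)/u, stable as u -> 0 (the limit is 1/2)
    return 0.5 if u < 1e-8 else i1(u) / u

def phi(y, sign):
    # sign=+1 gives phi_b^+ of (66a); sign=-1 gives phi_b^- of (66b);
    # note b*I1(u)/sqrt(b(b-y)) = 2b*I1(u)/u with u = 2 sqrt(b(b-y))
    u = 2.0 * np.sqrt(b * (b - y))
    return i0(u) - sign * 2.0 * b * ratio_i1(u)

errs = []
for x in (0.3, 0.8):
    for sign in (+1.0, -1.0):
        integral = quad(lambda y: j0(2.0 * np.sqrt(x * y)) * phi(y, sign), 0.0, b)[0]
        lhs = phi(x, sign) + sign * integral     # (66c) for +, (66d) for -
        errs.append(abs(lhs - j0(2.0 * np.sqrt(b * x))))
print(max(errs))
```

The residuals are at quadrature accuracy, confirming (66a)-(66d).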

5 Orthogonal projections and Hilbert space evaluators

Let a > 0, let P_a be the orthogonal projection onto L²(0, a; dx) and Q_a = H P_a H the orthogonal projection onto H(L²(0, a; dx)), and let K_a ⊂ L²(0, ∞; dx) be the Hilbert space of square integrable

functions f such that both f and H(f) have their supports in [a, ∞). Also we shall write H_a = P_a H P_a. Also we shall very often use D_a = H_a² = P_a H P_a H P_a. Using:

    J_0(2√(xy)) = Σ_{n=0}^∞ (−1)^n x^n y^n / n!² ,        (67)

we exhibit H_a = P_a H P_a as a limit in operator norm of finite rank operators, so P_a H P_a is a compact (self-adjoint) operator. It is not possible for a non-zero f ∈ L²(0, a; dx) to be such that ‖H_a(f)‖ = ‖f‖, as this would imply that H(f) vanishes identically for x > a, but H(f) is an entire function. So the operator norm of H_a is strictly less than one, and 1 ± H_a as well as 1 − D_a are invertible. We consider the equation

    φ = u + H(v) ,    u, v ∈ L²(0, a; dx)        (68)

Hence:

    u + H_a(v) = P_a(φ)        (69a)
    H_a(u) + v = P_a(H(φ))        (69b)
    u = (1 − D_a)^{−1} (P_a(φ) − H_a P_a(H(φ)))        (69c)
    v = (1 − D_a)^{−1} (− H_a P_a(φ) + P_a(H(φ)))        (69d)

Then if φ_n = u_n + H(v_n) is L²-convergent, (u_n) and (v_n) will be convergent, and the vector space sum L²(0, a; dx) + H(L²(0, a; dx)) is closed. Its elements are analytic functions for x > a, so certainly this is a proper subspace of L². Hence we obtain that each K_a is not reduced to {0} and

    K_a^⊥ = L²(0, a; dx) + H(L²(0, a; dx))        (70)

We also mention that ∪_{a>0} K_a is dense in but not equal to L²(0, ∞; dx), more generally that ∪_{a>b} K_a is dense in but not equal to K_b, and also obviously ∩_{a<∞} K_a = {0}. From now on a > 0 will be fixed (all defined quantities and functions will depend on a, but this will not always be explicitly indicated.) We shall be occupied with understanding the vectors X_s^a ∈ K_a such that

    ∀ f ∈ K_a :  ∫_a^∞ f(x) X_s^a(x) dx = f̂(s) = ∫_a^∞ f(x) x^{−s} dx        (71)

and in particular we are interested in computing

    X_a(s, z) = ∫_a^∞ X_s^a(x) X_z^a(x) dx        (72)

As a is fixed here, we shall drop the superscript a to lighten the notation. For the time being we shall restrict to ℜ(s) > 1/2, and we define X_s to be the orthogonal projection to K_a of 1_{x>a}(x) x^{−s}. As a preliminary to this study we need to say a few words regarding:

    g_s(x) := H(1_{x>a}(x) x^{−s}) = ∫_a^∞ J_0(2√(xy)) y^{−s} dy        (73)

The integral is absolutely convergent for ℜ(s) > 3/4, semi-convergent for ℜ(s) > 1/4, and g_s is defined by the equation as an L² function for ℜ(s) > 1/2 (it will prove to be entire in s for each x > 0). We need the following identity, which shows also that g_s(x) is analytic in x > 0:

    g_s(x) = χ(s) x^{s−1} − ∫_0^a J_0(2√(xy)) y^{−s} dy = χ(s) x^{s−1} − Σ_{n=0}^∞ (−1)^n (x^n/n!²) a^{n+1−s}/(n+1−s)        (74)

where χ(s) = Γ(1−s)/Γ(s). This is obtained first in the range 3/4 < ℜ(s) < 1: ∫_a^∞ J_0(2√(xy)) y^{−s} dy = x^{s−1} ∫_{ax}^∞ J_0(2√y) y^{−s} dy = x^{s−1} (χ(s) − ∫_0^{ax} J_0(2√y) y^{−s} dy) = χ(s) x^{s−1} − ∫_0^a J_0(2√(xy)) y^{−s} dy. The poles at s = 1, s = 2, ... are only apparent. The identity is then valid by analytic continuation in the entire half-plane ℜ(s) > 1/2. For each given x > 0 we have in fact an entire function of s ∈ ℂ. But we are here more interested in g_s as a function of x, and we indeed see that it is analytic in ℂ∖(−∞, 0] (it is an entire function of x if s ∈ −ℕ). (footnote 13)

There are unique vectors u_s, v_s in L²(0, a; dx) such that

    1_{x>a}(x) x^{−s} = X_s(x) + u_s(x) + H(v_s)(x)        (75)

and they are the solutions to the system of equations:

    u_s + H_a(v_s) = 0        (76a)

    H_a(u_s) + v_s = P_a(g_s)        (76b)

From (76a) we see that u_s is in fact the restriction to (0, a) of an entire function, and from (76b) that v_s is the restriction to (0, a) of a function which is analytic in ℂ∖(−∞, 0]. Redefining u_s and v_s to now refer to these analytic functions, their defining equations become (on (0, +∞)):

    u_s + H(P_a v_s) = 0        (77a)
    H(P_a u_s) + v_s = g_s        (77b)

and (75) becomes (we set X_s(a) = X_s(a+)):

    1_{x≥a}(x) x^{−s} = X_s(x) + 1_{0<x<a}(x) u_s(x) + H(1_{0<x<a} v_s)(x)        (78a)
    H(X_s)(x) = 1_{x≥a}(x) v_s(x)        (78b)
    X_s(x) = 1_{x≥a}(x) (x^{−s} + u_s(x))        (78c)

The key to the next steps will be the idea to investigate the distribution (x d/dx + s) X_s on the (positive) real line. Let D_s be x d/dx + s. There holds:

    H D_s = − D_{1−s} H        (79)

To compute (d/dx) P_a(v_s) we first suppose ℜ(s) > 1 (we know the behavior as x → 0 from (74); footnote 13: for some other transforms k(xy), such as the cosine transform, the argument must be slightly modified in order to accommodate the fact that ∫_0^∞ k(y) y^{−s} dy has no range of absolute convergence), so (d/dx) P_a(v_s) = P_a(v_s′) − v_s(a) δ_a(x) and x (d/dx) P_a(v_s) = P_a(x v_s′) − a v_s(a) δ_a(x). This remains true for

ℜ(s) > 1/2. Applying D_s to (77a) thus gives D_s(u_s) − H(P_a D_{1−s}(v_s) − a v_s(a) δ_a(x)) = 0. We similarly apply D_{1−s} to (77b) and obtain the following system:

    D_s(u_s)(x) − (H P_a D_{1−s} v_s)(x) = − a v_s(a) J_0(2√(ax))        (80a)
    − (H P_a D_s u_s)(x) + D_{1−s}(v_s)(x) = (D_{1−s} g_s)(x) − a u_s(a) J_0(2√(ax))        (80b)

From (73), we have D_{1−s} g_s = − H D_s(1_{x>a} x^{−s}) = − H(a^{1−s} δ_a(x)) = − a^{1−s} J_0(2√(ax)). Let us define

    J_0^a(x) = J_0(2√(ax))        (81)
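The commutation relation (79) underlying this computation is easy to test numerically on a concrete function (a sanity check of mine, not from the paper, assuming numpy/scipy; f(x) = e^{−tx} is convenient because H(f)(x) = (1/t) e^{−x/t} is known in closed form):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

t, s, x = 1.5, 0.7, 0.9

def Dsf(y):
    # (D_s f)(y) = y f'(y) + s f(y) for f(y) = exp(-t y)
    return (-t * y + s) * np.exp(-t * y)

# left side of (79) applied to f, evaluated at x: H(D_s f)(x) by quadrature
lhs = quad(lambda y: j0(2.0 * np.sqrt(x * y)) * Dsf(y), 0.0, 60.0, limit=300)[0]
# right side: -D_{1-s} H(f) with H(f)(x) = (1/t) exp(-x/t), computed by hand
rhs = (x / t**2 - (1.0 - s) / t) * np.exp(-x / t)
print(lhs, rhs)
```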

We have proven:

    + D_s u_s − H P_a D_{1−s} v_s = − a v_s(a) J_0^a        (82a)
    − H P_a D_s u_s + D_{1−s} v_s = − a (a^{−s} + u_s(a)) J_0^a        (82b)

Restricting to the interval (0, a) and solving, we find:

    P_a D_s u_s = − a (1 − D_a)^{−1} (v_s(a) J_0^a + (a^{−s} + u_s(a)) H_a J_0^a)        (83a)
    P_a D_{1−s} v_s = − a (1 − D_a)^{−1} ((a^{−s} + u_s(a)) J_0^a + v_s(a) H_a J_0^a)        (83b)

It is advantageous at this stage to define φ_a^+ and φ_a^− to be the solutions of the equations (in L²(0, a; dx)):

    φ_a^+ + H_a φ_a^+ = J_0^a        (84a)
    φ_a^− − H_a φ_a^− = J_0^a        (84b)

We already know from (66a) and (66b) exactly what φ_a^+ and φ_a^− are (in this special case of the H transform), but we shall proceed as if we didn't. We see from (84a), (84b) that φ_a^+ and φ_a^− are entire functions, and we can rewrite the system as: (footnote 14)

    φ_a^+ + H P_a φ_a^+ = J_0^a        (85a)
    φ_a^− − H P_a φ_a^− = J_0^a        (85b)

We observe the identities:

    (1 − D_a)^{−1} J_0^a = P_a ((φ_a^+ + φ_a^−)/2)        (86a)
    (1 − D_a)^{−1} H_a J_0^a = P_a ((− φ_a^+ + φ_a^−)/2)        (86b)

So (83a) and (83b) become

    D_s u_s = + a ((a^{−s} + u_s(a) − v_s(a))/2) φ_a^+ − a ((a^{−s} + u_s(a) + v_s(a))/2) φ_a^−        (87a)
    D_{1−s} v_s = − a ((a^{−s} + u_s(a) − v_s(a))/2) φ_a^+ − a ((a^{−s} + u_s(a) + v_s(a))/2) φ_a^−        (87b)

Footnote 14: in conformity with our conventions, these are identities on (0, ∞); to see them as identities on ℂ one must read ∫_0^a J_0(2√(xy)) φ_a^+(y) dy rather than (H P_a φ_a^+)(x).
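Identity (86a), together with the closed forms (66a), (66b), can be tested by discretizing the operator H_a with a Gauss–Legendre rule (my own numerical check, assuming numpy/scipy; since (φ_a^+ + φ_a^−)/2 = I_0(2√(a(a−x))) by (66a) and (66b), the solve below must reproduce that function). The same matrix also illustrates that the operator norm of H_a is strictly less than one:

```python
import numpy as np
from scipy.special import j0, i0

a, n = 1.0, 80
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * a * (nodes + 1.0)                   # quadrature nodes on (0, a)
w = 0.5 * a * weights

H = j0(2.0 * np.sqrt(np.outer(x, x))) * w     # Nystrom discretization of H_a
u = np.linalg.solve(np.eye(n) - H @ H,        # (1 - D_a)^{-1} J_0^a, D_a = H_a^2
                    j0(2.0 * np.sqrt(a * x)))
closed = i0(2.0 * np.sqrt(a * (a - x)))       # (phi_a^+ + phi_a^-)/2 from (66a), (66b)
err = np.max(np.abs(u - closed))

M = np.sqrt(np.outer(w, w)) * j0(2.0 * np.sqrt(np.outer(x, x)))  # symmetrized H_a
opnorm = np.linalg.norm(M, 2)
print(err, opnorm)
```

The residual is at discretization accuracy and the computed operator norm lies in (0, 1), as asserted in the text.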

From (87a) we compute successively (again, these are identities on (0, +∞)):

    H P_a D_s u_s = a ((a^{−s} + u_s(a) − v_s(a))/2) (J_0^a − φ_a^+) − a ((a^{−s} + u_s(a) + v_s(a))/2) (− J_0^a + φ_a^−)        (88)

    P_a D_s u_s = a ((a^{−s} + u_s(a) − v_s(a))/2) (δ_a − H(φ_a^+)) − a ((a^{−s} + u_s(a) + v_s(a))/2) (− δ_a + H(φ_a^−))        (89)

In (89), H(φ_a^+) should perhaps be more precisely written as H(φ_a^+ 1_{x>0}). From (85a) we know that φ_a^+ 1_{x>0} is tempered as a distribution. From (78c) we compute D_s X_s = 1_{x>a} D_s(u_s) + a (a^{−s} + u_s(a)) δ_a(x) = D_s u_s − P_a D_s u_s + a (a^{−s} + u_s(a)) δ_a(x). From (87a) and (89) then follows:

    D_s X_s = + a ((a^{−s} + u_s(a) − v_s(a))/2) (φ_a^+ + H(φ_a^+) − δ_a)
              − a ((a^{−s} + u_s(a) + v_s(a))/2) (φ_a^− − H(φ_a^−) + δ_a)
              + a (a^{−s} + u_s(a)) δ_a(x)        (90)

And the result of the computation is:

    D_s X_s = + a ((a^{−s} + u_s(a) − v_s(a))/2) (φ_a^+ + H(φ_a^+)) + a ((a^{−s} + u_s(a) + v_s(a))/2) (− φ_a^− + H(φ_a^−))        (91)

We then define the remarkable distributions:

    A_a = (√a/2) (φ_a^+ + H(φ_a^+))        (92a)
    − i B_a = (√a/2) (− φ_a^− + H(φ_a^−))        (92b)
    E_a = A_a − i B_a        (92c)

From (84a) we observe that A_a has its support in [a, ∞). Furthermore it is H invariant. Similarly, −i B_a, which is H anti-invariant, also has its support in [a, +∞). We recover A_a and −i B_a from E_a through taking the invariant and anti-invariant parts. We may also rewrite D_s X_s as:

    D_s X_s = √a (a^{−s} + u_s(a)) E_a − √a v_s(a) H(E_a)        (93)

Some other manners of writing A_a and −i B_a are useful: from (85a), H(φ_a^+) = δ_a − P_a φ_a^+, and from (85b), H(φ_a^−) = δ_a + P_a φ_a^−, so:

    A_a = (√a/2) (δ_a + φ_a^+ 1_{x>a})        (94a)
    − i B_a = (√a/2) (δ_a − φ_a^− 1_{x>a})        (94b)

And we take also notice of the following definitions and identities:

    j_a = √a (δ_a − φ_a^+ 1_{0<x<a}) ,    H(j_a) = √a φ_a^+ 1_{x>0} ,    A_a = (1/2)(j_a + H(j_a))        (95a)

From (85a) and (85b) we know that φ_a^+ and φ_a^− are bounded, so the right Mellin transforms are defined directly for ℜ(s) > 1 by: (footnote 15)

    Â_a(s) = (√a/2) (a^{−s} + ∫_a^∞ φ_a^+(x) x^{−s} dx)        (96a)
    − i B̂_a(s) = (√a/2) (a^{−s} − ∫_a^∞ φ_a^−(x) x^{−s} dx)        (96b)

    Ê_a(s) = √a (a^{−s} + (1/2) ∫_a^∞ (φ_a^+(x) − φ_a^−(x)) x^{−s} dx)        (96c)
    \widehat{H(E_a)}(s) = √a (1/2) ∫_a^∞ (φ_a^+(x) + φ_a^−(x)) x^{−s} dx        (96d)

From (85a) and (85b) we know that φ_a^+ − φ_a^− is square-integrable at +∞, so, using H(φ_a^+) = δ_a − P_a φ_a^+ and H(φ_a^−) = δ_a + P_a φ_a^−, we compute:

    ∫_a^∞ (φ_a^+(x) − φ_a^−(x)) x^{−s} dx = ∫_0^∞ (H(φ_a^+) − H(φ_a^−))(x) g_s(x) dx = − ∫_0^a (φ_a^+(x) + φ_a^−(x)) g_s(x) dx        (97)

Then using (86a):

    ∫_0^a ((φ_a^+(x) + φ_a^−(x))/2) g_s(x) dx = ∫_0^a ((1 − D_a)^{−1} J_0^a)(x) g_s(x) dx = ∫_0^a J_0^a(x) ((1 − D_a)^{−1} P_a g_s)(x) dx        (98)

Comparing with (76a) and (76b), the right-most term of (98) may be written as ∫_0^a J_0(2√(ax)) v_s(x) dx, which in turn we recognize from (77a) to be − u_s(a). We have thus proven the identity:

    Ê_a(s) = √a (a^{−s} + u_s(a))        (99)

In a similar manner we have: c

    ∫_a^∞ ((φ_a^+(x) + φ_a^−(x))/2) x^{−s} dx = ∫_a^∞ J_0(2√(ax)) x^{−s} dx + ∫_0^a ((− φ_a^+(x) + φ_a^−(x))/2) g_s(x) dx        (100)

    ∫_0^a ((− φ_a^+(x) + φ_a^−(x))/2) g_s(x) dx = ∫_0^a ((1 − D_a)^{−1} H_a J_0^a)(x) g_s(x) dx = − ∫_0^a J_0(2√(ax)) u_s(x) dx
        = v_s(a) − g_s(a) = v_s(a) − ∫_a^∞ J_0(2√(ax)) x^{−s} dx        (101)

    Â_a(s) + i B̂_a(s) = \widehat{H(E_a)}(s) = √a ∫_a^∞ ((φ_a^+(x) + φ_a^−(x))/2) x^{−s} dx = √a v_s(a)        (102)

Then we obtain the reformulation of (93) as:

    D_s X_s = Ê_a(s) E_a − \widehat{H(E_a)}(s) H(E_a)        (103)

Footnote 15: the integral for Ê_a(s) is certainly absolutely convergent for ℜ(s) > 1/2, as φ_a^+ − φ_a^− is square integrable on (0, ∞), and in fact it is absolutely convergent for ℜ(s) > 1/4. As we know already completely explicitly φ_a^+ and φ_a^−, we do not pause on this here. A general argument suitable to establish, in more general cases, absolute convergence for ℜ(s) > σ for some σ < 1/2 will be given later.

And, noting \widehat{D_s X_s}(z) = (s + z − 1) X̂_s(z) = (s + z − 1) ∫_a^∞ X_s(x) X_z(x) dx, we are finally led to the remarkable result:

    X_a(s, z) = ∫_a^∞ X_s^a(x) X_z^a(x) dx = (Ê_a(s) Ê_a(z) − \widehat{H(E_a)}(s) \widehat{H(E_a)}(z)) / (s + z − 1)        (104)

This equation has been proven under the assumption ℜ(s) > 1/2 and ℜ(z) > 1/2. To complete the discussion we need to know that the evaluators f ↦ f̂(s), s ∈ ℂ, are indeed continuous linear forms on K_a. For ℜ(s) > 1/2, we have f̂(s) = ∫_a^∞ f(x) x^{−s} dx. For ℜ(s) < 1/2 we have f̂(s) = (Γ(1−s)/Γ(s)) \widehat{H(f)}(1−s). For ℜ(s) = 1/2 continuity follows by the Banach–Steinhaus theorem, and of course more elementary proofs exist (as in [3] for the cosine or sine transform). So we do have unique Hilbert space vectors X_s^a ∈ K_a such that ∀ f ∈ K_a ∀ s ∈ ℂ: f̂(s) = ∫_a^∞ X_s^a(x) f(x) dx. Then (104) holds throughout ℂ × ℂ by analytic continuation.

The vectors X_s^a are zero for s ∈ −ℕ, and it is more precise to use the vectors 𝒳_s^a = Γ(s) X_s^a, which are non-zero for all s ∈ ℂ. These vectors are the evaluators (footnote 16) for f ↦ ℱ(s), ℱ(s) = Γ(s) f̂(s). We recapitulate some of the results in the following theorem, whose analog for the cosine (or sine) transform was given in [5] (up to changes of variables and notations, the first paragraph as well as equation (108) are theorems from [1]; the equations (105), (106), (107) are our contributions. In this specific case of H we shall later identify exactly φ_a^+ and φ_a^− and E_a and ℰ_a. As we shall explain, the analog of the ℰ_a-function in [1] has value 1 at s = 1/2, and is not identical with the ℰ_a here):

Theorem 7. For a given a > 0 let K_a be the Hilbert space of square integrable functions f(x) on [a, +∞) whose H-transforms ∫_0^∞ J_0(2√(xy)) f(y) dy (in the L²-sense) again vanish for 0 < x < a.

The reason is this: the Hilbert space of the entire functions ℱ(s), f ∈ K_a (the Hilbert structure is the one from K_a, i.e. (ℱ_1, ℱ_2) = (1/2π) ∫_{ℜ(s)=1/2} ℱ_1(s) \overline{ℱ_2(s)} |ds| / |Γ(s)|²) verifies the de Branges axioms [2], up to the change of variable s = 1/2 − iz. Let us recall the axioms of [2] for a (non-zero) Hilbert space of entire functions F(z):

(H1) for each z, evaluation at z is a continuous linear form,

(H2) for each F, z ↦ \overline{F(\overline{z})} belongs to the Hilbert space and has the same norm as F,

(H3) if F(w) = 0 then G(z) = ((z − \overline{w})/(z − w)) F(z) belongs to the space and has the same norm as F

Let K(z, w) be defined as the evaluator at z: ∀F, F(z) = (F, K(z, ·)). It is anti-analytic in z and analytic in w (the scalar product is complex linear in its first entry, and conjugate linear in its second entry). It is a reproducing kernel: K(z, w) = (K(z, ·), K(w, ·)). It is proven in [2] that (H1), (H2), (H3) entail the existence of an entire function E(z), with |E(z)| > |E^*(z)| for ℑ(z) > 0 (where E^*(z) = \overline{E(\overline{z})}), such that the space is exactly the set of entire functions F(z) such that both F(z)/E(z) and F^*(z)/E(z) belong to H²(ℑ(z) > 0), and the Hilbert space norm of F is ‖F‖² = (1/2π) ∫_ℝ |F(t)|²/|E(t)|² dt. (footnote 18) We have incorporated a 2π for easier comparison with our conventions. Then the reproducing kernel is expressed as:

    K(z, w) = (\overline{E(z)} E(w) − \overline{E^*(z)} E^*(w)) / (i (\overline{z} − w))        (109)

The function E is not unique; if the space has the isometric symmetry F(z) ↦ F(−z), a function E exists which is real on the imaginary axis, and writing E = A − iB, where A and B are real on the real axis, the pair (A, B) is unique up to A ↦ kA, B ↦ k^{−1}B; A is even and B is odd. If A(0) ≠ 0 (this happens exactly when the space contains at least one element not vanishing at 0) then it may be uniquely normalized so that A(0) = 1. Then E is uniquely determined.
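As a concrete sanity check of the kernel formula (109) (my own, not from the paper, assuming numpy/scipy): for a Paley–Wiener space with E(z) = e^{−iτz} and real z, t, the kernel (109) reduces to 2 sin(τ(t−z))/(t−z), and it should reproduce any element of the space under the (1/2π)∫ inner product. I test it on F(t) = sin(τt)/t:

```python
import numpy as np
from scipy.integrate import quad

tau, z = 1.0, 0.5
F = lambda t: tau * np.sinc(tau * t / np.pi)      # F(t) = sin(tau t)/t, in PW_tau

def K(z, t):
    # (109) with E(z) = exp(-i tau z): for real z, t it equals 2 sin(tau(t-z))/(t-z)
    return 2.0 * tau * np.sinc(tau * (t - z) / np.pi)

# (1/2pi) int F(t) K(z,t) dt over a large window, integrated in chunks
X, width, total, lo = 1500.0, 50.0, 0.0, -1500.0
while lo < X:
    total += quad(lambda t: F(t) * K(z, t), lo, lo + width, limit=100)[0]
    lo += width
print(total / (2.0 * np.pi), F(z))
```

The truncation error is O(1/X), so the reproduced value matches F(z) to a few parts in a thousand.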

Model examples are the Paley–Wiener spaces of entire functions F(z) of exponential type at most τ with ‖F‖² = (1/2π) ∫_ℝ |F(t)|² dt < ∞. Then E(z) = e^{−iτz} is a possible E function. The Paley–Wiener spaces are related to the study of the differential operator −d²/dx² on the positive half-line, and an important class of spaces verifying the axioms of [2] is associated with the theory of the eigenfunction expansions for Schrödinger operators −d²/dx² + V(x) ([27]). In these examples the spaces are indexed by a parameter τ (the Schrödinger operator is first studied on a finite interval (0, τ)) and they are ordered by isometric inclusions (the E-function of a bigger space may be used in the computation of the norm of an element of a smaller space). Typically indeed, de Branges spaces are studied included in one fixed space L²(ℝ, (1/2π) dν), are ordered by isometric inclusion and indexed by a parameter. (footnote 19) Obviously this theory is intimately related with the Weyl–Stone–Titchmarsh–Kodaira theory of the spectral measure.

Footnote 18: the conditions on F(z) are not formulated in [2] as Hardy space conditions, but they are exactly equivalent.
Footnote 19: the axioms allow for "jumps" in the isometric chain of inclusions, as occur in the theory of the Krein strings [15], discrete Schrödinger equations being special cases.

The articles of Dym [14] and Remling [27], and the book of Dym and McKean [15], will be useful to the interested reader. In the case of the study of H we will have

dν(γ) = |Γ(1/2 + iγ)|^{−2} dγ. It is an important flexibility of the axioms not to be limited to functions of finite exponential type, and also the spectral measures are not necessarily such that (1 + γ²)^{−1} is integrable. It has turned out in our study of the spaces associated with the H-transform that the naturally occurring E function is not the one normalized to take value 1 at z = 0. Rather the normalization will prove to be lim_{σ→+∞} −iB(iσ)/A(iσ) = 1. This has an important impact on the aspect of the differential equations which will govern the deformation of the K_a's with respect to a: they will take the form of a first order linear differential system in canonical form (as generally studied in [21, §3].)

The space of the functions ℱ(s) = Γ(s) f̂(s), f ∈ K_a, verifies (this is easy) the de Branges axioms, with s = 1/2 − iz, and these spaces were defined in [1]. (footnote 20) The spaces K_a have the real structure, which is manifest in the s variable through the isometry ℱ(s) ↦ \overline{ℱ(\overline{s})}. Rather than with the reproducing kernel K(z_1, z_2) we work mainly with X(s_1, s_2) = K(−\overline{z_1}, z_2), which is analytic in both variables. Of course then it is X(\overline{s}, s) which gives the squared norm of the evaluator at s. Writing ℰ(s) = E(z) we obtain from (109):

    X(s_1, s_2) = (ℰ(s_1) ℰ(s_2) − ℰ(1−s_1) ℰ(1−s_2)) / (s_1 + s_2 − 1)        (110)

which is indeed what has appeared on the right hand side of (108). With ℰ(s) = 𝒜(s) − i ℬ(s), 𝒜 (resp. ℬ) even (resp. odd) under s ↦ 1−s, this is also:

    X(s_1, s_2) = 2 ((−i ℬ(s_1)) 𝒜(s_2) + 𝒜(s_1) (−i ℬ(s_2))) / (s_1 + s_2 − 1)        (111)

and for ℜ(s) ≠ 1/2, 0 < X(\overline{s}, s) = 2 ℑ(ℬ(s) \overline{𝒜(s)}) / (ℜ(s) − 1/2), so both 𝒜 and ℬ have all their zeros on the critical line. (footnote 21)

The method in this chapter has been developed in [5, 8, 6] for the case of the cosine and sine transforms, and it leads to the currently only known "explicit" formulae (footnote 22) for the structural elements ℰ, 𝒜, ℬ and the reproducing kernels for the spaces for the cosine and sine transforms. So far, almost nothing very specific to H has been used apart from it being self-adjoint self-reciprocal with an entire multiplicative kernel k(xy). The next section is still of a very general validity.

As was mentioned in the Introduction, the realization of the structural elements of the spaces as right Mellin transforms of distributions is a characteristic aspect of the method; the Dirac deltas in the expressions for A_a(x) and −i B_a(x) could have been overlooked if we had only been prepared to use functions, and the whole development was based on the computation of (x d/dx + s) X_s(x) as a distribution. This aspect will be further reinforced in the concluding chapter of the paper (section 9), where it will be seen that the distributions A_a(x) and −i B_a(x) are very naturally differences of boundary values of analytic functions, so they are hyperfunctions [23] in a natural manner.

Footnote 20: in the variable z, and associated with the Hankel transform of order zero, rather than with the H transform.
Footnote 21: this is also seen from 2𝒜(s) = ℰ(s) + ℰ(1−s), as |ℰ(s)| > |ℰ(1−s)| for ℜ(s) > 1/2. As X(\overline{s}, s) = (|ℰ(s)|² − |ℰ(1−s)|²)/(2ℜ(s) − 1), this is in fact the same argument.
Footnote 22: as "explicit" as the Fredholm determinants of the finite Dirichlet kernels are "explicit".

Let us consider the behavior of Â_a(s), B̂_a(s), Ê_a(s) and \widehat{H(E_a)}(s) for ℜ(s) ≥ 1/2. Let us first look at Ê_a(s) = √a (a^{−s} + (1/2) ∫_a^∞ (φ_a^+(x) − φ_a^−(x)) x^{−s} dx). We remark that φ_a^+(x) − φ_a^−(x) is the H-transform of −(φ_a^+(x) + φ_a^−(x)) 1_{0<x<a}. Then we can state:

Lemma 8. Let k(x) be such that k_1(x) := ∫_0^x k(y) dy = O(x^A) as x → +∞, with A < 1, and let f(x) be an absolutely continuous function on [0, a]. Then ∫_0^a k(xy) f(y) dy = O(x^{A−1}) as x → +∞.

There exists C < ∞ such that ∀x > 0: |k_1(x)| ≤ C x^A. Then ∫_0^a k(xy) f(y) dy = (1/x) k_1(xa) f(a) − (1/x) ∫_0^a k_1(xy) f′(y) dy, and |∫_0^a k_1(xy) f′(y) dy| ≤ C x^A ∫_0^a y^A |f′(y)| dy. This was easy...

With k(x) = J_0(2√x), one has k_1(x) = √x J_1(2√x) = O(x^{1/4}). We have φ_a^+(x) − φ_a^−(x) = − ∫_0^a J_0(2√(xy)) (φ_a^+(y) + φ_a^−(y)) dy, and from Lemma 8 this is O(x^{−3/4}). So the integral in the expression for Ê_a(s) is absolutely convergent for ℜ(s) > 1/4. In particular Ê_a is bounded on the critical line. But then \widehat{H(E_a)}(s) = χ(s) Ê_a(1−s) is also bounded. Hence:

Proposition 9. The functions Â_a and B̂_a are bounded on the critical line.

Let us turn to the situation regarding ℜ(s) = σ → +∞.

Let f(x) be a function of class C² on [0, a] and e(x) = ∫_0^a J_0(2√(xy)) f(y) dy. It is O(x^{−3/4}). There holds (d/dx)(x e(x)) = ∫_0^a (∂/∂y)(y J_0(2√(xy))) f(y) dy = a f(a) J_0(2√(ax)) − ∫_0^a J_0(2√(xy)) y f′(y) dy. Let k(x) = ∫_0^a J_0(2√(xy)) y f′(y) dy. By the Lemma 8 it is O(x^{−3/4}). For ℜ(s) > 3/4, with absolutely convergent integrals:

    a f(a) ∫_a^∞ J_0(2√(ax)) x^{−s} dx − ∫_a^∞ k(x) x^{−s} dx = − a e(a) a^{−s} + s ∫_a^∞ e(x) x^{−s} dx        (112)

We show that the left hand side of (112) is O(a^{1−s}/s) for ℜ(s) > 5/4. We apply to k what we did for e: (d/dx)(x k(x)) = a² f′(a) J_0(2√(ax)) − ∫_0^a J_0(2√(xy)) y (y f′)′(y) dy. This is O(1) (using |J_0| ≤ 1). So for ℜ(s) > 1, we can compute ∫_a^∞ ((d/dx)(x k(x))) x^{−s} dx by integration by parts; this gives − a k(a) a^{−s} + s ∫_a^∞ k(x) x^{−s} dx. So for ℜ(s) ≥ 1 + ε we have ∫_a^∞ k(x) x^{−s} dx = O(a^{1−s}/s). Then, regarding ∫_a^∞ J_0(2√(ax)) x^{−s} dx, we note that (d/dx)(x J_0(2√(ax))) = J_0(2√(ax)) − √(ax) J_1(2√(ax)), so for ℜ(s) ≥ 5/4 + ε we can apply the same method of integration by parts, and prove that ∫_a^∞ J_0(2√(ax)) x^{−s} dx = O(a^{1−s}/s). So the left hand side of (112) is indeed O(a^{1−s}/s) for ℜ(s) ≥ 3/2 and we have:

Lemma 10. Let f(x) be a function of class C² on [0, a] and let e(x) = ∫_0^a J_0(2√(xy)) f(y) dy. One has

    ∫_a^∞ e(x) x^{−s} dx = (a^{−s}/s) (a e(a) + O(1/s))    (ℜ(s) ≥ 3/2)        (113)

Let us return to ∫_a^∞ J_0(2√(ax)) x^{−s} dx = (1/s) (J_0(2a) a^{1−s} + ∫_a^∞ (J_0(2√(ax)) − √(ax) J_1(2√(ax))) x^{−s} dx). We want to iterate, so we also need x (d/dx)(√(ax) J_1(2√(ax))) = ax J_0(2√(ax)) (using (u J_1(u))′ = u J_0(u) with u = 2√(ax)). So we can integrate by parts and obtain that the last Mellin integral is O(a^{1−s}/s) for ℜ(s) ≥ 7/4 + ε.

So, certainly:

    ∫_a^∞ J_0(2√(ax)) x^{−s} dx = (a^{−s}/s) (a J_0(2a) + O(1/s))    (ℜ(s) ≥ 5/2)        (114)

Using φ_a^+ = J_0^a − H P_a φ_a^+ and φ_a^− = J_0^a + H P_a φ_a^−, and combining (113) and (114), we obtain:

Proposition 11. One has for ℜ(s) ≥ σ_0 (here σ_0 = 5/2 for example):

    Ê_a(s) = a^{1/2−s} (1 + (a φ_a^+(a) − a φ_a^−(a))/(2s) + O(1/s²)) ,    Â_a(s) = (√a/2) a^{−s} (1 + a φ_a^+(a)/s + O(1/s²))        (115a)
    \widehat{H(E_a)}(s) = a^{1/2−s} ((a φ_a^+(a) + a φ_a^−(a))/(2s) + O(1/s²)) ,    − i B̂_a(s) = (√a/2) a^{−s} (1 − a φ_a^−(a)/s + O(1/s²))        (115b)

Theorem 12. One has

    lim_{σ→+∞} (− i B̂_a(σ)) / Â_a(σ) = 1    and    ℰ_a(1−σ)/ℰ_a(σ) ∼ (a φ_a^+(a) + a φ_a^−(a))/(2σ) ,    σ → +∞        (116)

(where ℰ_a(s) = Γ(s) Ê_a(s)). So the functions 𝒜_a and ℬ_a are not normalized as is usually done in [2], which is to impose (when possible) on the E function to have value 1 at the origin (which for us is s = 1/2; the exact value of 𝒜_a(1/2) will be obtained later.) This difference in normalization is related to the realization of the differential equations governing the deformation of the spaces K_a as a first order differential system in "canonical" form, as in the classical spectral theory of linear differential equations ([21, §10].) This allows to realize the self-reciprocal scale reversing operator H as a scattering [6].

6 Fredholm determinants, the first order differential system, and scattering

Let us return to the defining equations for the entire functions $\varphi_a^+$ and $\varphi_a^-$:

\[
\varphi_a^+ + \mathcal H P_a\varphi_a^+ = J_0^a \tag{117a}
\]
\[
\varphi_a^- - \mathcal H P_a\varphi_a^- = J_0^a \tag{117b}
\]

Either we read these equations as identities on $(0,\infty)$, or we decide that $\mathcal H P_a\varphi_a^\pm$ in fact stands for $\int_0^a J_0(2\sqrt{xy})\,\varphi_a^\pm(y)\,dy$, and the equation holds for $x\in\mathbb C$; the latter option slightly conflicts with our earlier definition of $\mathcal H$ as an operator on functions or distributions. But whatever choice is made, this has no impact on what comes next. We shall apply to the equations the operators $a\frac{\partial}{\partial a}$ and $x\frac{\partial}{\partial x}$. As $J_0^a(x)=J_0(2\sqrt{ax})$ we have $a\frac{\partial}{\partial a}J_0^a=x\frac{\partial}{\partial x}J_0^a$. We write $\delta_x=x\frac{\partial}{\partial x}+\frac12=\frac{\partial}{\partial x}x-\frac12$. First we have:
\[
a\frac{\partial}{\partial a}\varphi_a^+ + \mathcal H P_a\,a\frac{\partial}{\partial a}\varphi_a^+ = -a\varphi_a^+(a)\,J_0^a + a\frac{\partial}{\partial a}J_0^a \tag{118a}
\]
\[
a\frac{\partial}{\partial a}\varphi_a^- - \mathcal H P_a\,a\frac{\partial}{\partial a}\varphi_a^- = +a\varphi_a^-(a)\,J_0^a + a\frac{\partial}{\partial a}J_0^a \tag{118b}
\]

Then, as $x\frac{\partial}{\partial x}\mathcal H=-\mathcal H\frac{\partial}{\partial x}x$, $\delta_x\mathcal H=-\mathcal H\delta_x$, $\delta_x P_a f=P_a\delta_x f-a f(a)\,\delta_a(x)$, and $\delta_x J_0^a=a\frac{\partial}{\partial a}J_0^a+\frac12 J_0^a$:
\[
\delta_x\varphi_a^+ - \mathcal H P_a\,\delta_x\varphi_a^+ = \bigl(\tfrac12-a\varphi_a^+(a)\bigr)J_0^a + a\frac{\partial}{\partial a}J_0^a \tag{119a}
\]
\[
\delta_x\varphi_a^- + \mathcal H P_a\,\delta_x\varphi_a^- = \bigl(\tfrac12+a\varphi_a^-(a)\bigr)J_0^a + a\frac{\partial}{\partial a}J_0^a \tag{119b}
\]
Combining we obtain:

\[
a\frac{\partial}{\partial a}\varphi_a^+-\delta_x\varphi_a^- + \mathcal H P_a\Bigl(a\frac{\partial}{\partial a}\varphi_a^+-\delta_x\varphi_a^-\Bigr) = -\bigl(a\varphi_a^+(a)+a\varphi_a^-(a)+\tfrac12\bigr)J_0^a \tag{120a}
\]
\[
a\frac{\partial}{\partial a}\varphi_a^--\delta_x\varphi_a^+ - \mathcal H P_a\Bigl(a\frac{\partial}{\partial a}\varphi_a^--\delta_x\varphi_a^+\Bigr) = +\bigl(a\varphi_a^+(a)+a\varphi_a^-(a)-\tfrac12\bigr)J_0^a \tag{120b}
\]
Comparing with (117a) and (117b), and as there is uniqueness:

\[
a\frac{\partial}{\partial a}\varphi_a^+-\delta_x\varphi_a^- = -\bigl(a\varphi_a^+(a)+a\varphi_a^-(a)+\tfrac12\bigr)\varphi_a^+ \tag{121a}
\]
\[
a\frac{\partial}{\partial a}\varphi_a^--\delta_x\varphi_a^+ = +\bigl(a\varphi_a^+(a)+a\varphi_a^-(a)-\tfrac12\bigr)\varphi_a^- \tag{121b}
\]
The quantity $a\varphi_a^+(a)+a\varphi_a^-(a)$ will play a fundamental r\^ole and we shall denote it by $\mu(a)$.²³ So:

\[
\Bigl(a\frac{\partial}{\partial a}+\tfrac12+\mu(a)\Bigr)\varphi_a^+ = \delta_x\varphi_a^- \tag{122a}
\]
\[
\Bigl(a\frac{\partial}{\partial a}+\tfrac12-\mu(a)\Bigr)\varphi_a^- = \delta_x\varphi_a^+ \tag{122b}
\]
It follows easily from this that $a\frac{\partial}{\partial a}(\varphi_a^+\varphi_a^-)=-\varphi_a^+\varphi_a^-+\frac12\frac{\partial}{\partial x}\bigl(x\,((\varphi_a^+)^2+(\varphi_a^-)^2)\bigr)$. So
\[
a\frac{d}{da}\int_0^a \varphi_a^+(x)\varphi_a^-(x)\,dx = a\varphi_a^+(a)\varphi_a^-(a)-\int_0^a \varphi_a^+(x)\varphi_a^-(x)\,dx+\frac a2\bigl(\varphi_a^+(a)^2+\varphi_a^-(a)^2\bigr) \tag{123}
\]
\[
a\frac{d}{da}\Bigl(a\int_0^a \varphi_a^+(x)\varphi_a^-(x)\,dx\Bigr) = \frac{\mu(a)^2}{2} \tag{124}
\]
We then compute:

\[
\int_0^a \varphi_a^+(x)\varphi_a^-(x)\,dx=\int_0^a \bigl((1-D_a)^{-1}J_0^a\bigr)(x)\,J_0^a(x)\,dx\ ,\tag{125}
\]
where we recall $\varphi_a^+=(1+H_a)^{-1}J_0^a$, $\varphi_a^-=(1-H_a)^{-1}J_0^a$, $D_a=H_a^2$. The operator $D_a$ acts on $L^2(0,a;dx)$ with kernel $D_a(x,z)=\int_0^a J_0(2\sqrt{xy})J_0(2\sqrt{yz})\,dy$. After the change of variables $x=at$, $y=au$, $z=av$ this becomes the operator $d_a$ on $L^2(0,1;dt)$ with kernel $d_a(t,v)=\int_0^1 aJ_0(2a\sqrt{tu})\,aJ_0(2a\sqrt{uv})\,du$. We compute the derivative with respect to $a$:
\[
\frac{\partial}{\partial a}\int_0^1 aJ_0(2a\sqrt{tu})\,aJ_0(2a\sqrt{uv})\,du \tag{126a}
\]
\[
=\int_0^1 \Bigl(\bigl(2u\tfrac{\partial}{\partial u}+1\bigr)J_0(2a\sqrt{tu})\Bigr)\,aJ_0(2a\sqrt{uv})\,du+\int_0^1 aJ_0(2a\sqrt{tu})\,\Bigl(\bigl(2u\tfrac{\partial}{\partial u}+1\bigr)J_0(2a\sqrt{uv})\Bigr)\,du \tag{126b}
\]

\[
=2a\,J_0(2a\sqrt t)\,J_0(2a\sqrt v)\tag{126c}
\]

²³ Maybe it would be unfair to hide the fact that $\mu(a)=2a$ in this study of $\mathcal H$! In a later section a further mu function, associated with a variant of $\mathcal H$, will also be found explicitly, and it will be quite more complicated.

So $\frac{d}{da}d_a$ is a rank one operator, with range $\mathbb C\,J_0(2a\sqrt t)\,1_{0<t<1}$.

\[
\frac{d}{da}\log\det(1-d_a)=-\mathrm{Tr}\Bigl((1-d_a)^{-1}\frac{d}{da}d_a\Bigr)\tag{127}
\]
The rank one operator $(1-d_a)^{-1}\frac{d}{da}d_a$ has the function $(1-d_a)^{-1}\,2aJ_0(2a\sqrt t)$ as eigenvector and the eigenvalue is $\int_0^1 J_0(2a\sqrt t)\bigl((1-d_a)^{-1}2aJ_0(2a\sqrt{\,\cdot\,})\bigr)(t)\,dt$. Going back to $(0,a)$ we obtain $2\int_0^a J_0(2\sqrt{ax})\bigl((1-D_a)^{-1}J_0(2\sqrt{a\,\cdot\,})\bigr)(x)\,dx$ and in view of (125) we have proven:
\[
\frac{d}{da}\log\det(1-D_a)=-2\int_0^a\varphi_a^+(x)\varphi_a^-(x)\,dx\tag{128}
\]
Then, using (124), we have the important formula:
\[
\mu(a)^2=-a\frac{d}{da}\,a\frac{d}{da}\log\det(1-D_a)\tag{129}
\]
We shall now also relate $\varphi_a^+(a)$ and $\varphi_a^-(a)$ to Fredholm determinants. In fact the following holds:
\[
a\varphi_a^+(a)=+a\frac{d}{da}\log\det(1+H_a)\tag{130a}
\]
\[
a\varphi_a^-(a)=-a\frac{d}{da}\log\det(1-H_a)\tag{130b}
\]
This is the application of a well-known general theorem, for any continuous kernel $k(x,y)$: if $w(x)+\int_0^a k(x,y)w(y)\,dy=k(x,a)$ for $0\le x\le a$ then $w(a)=+\frac{d}{da}\log{\det}_{(0,a)}(\delta(x-y)+k(x,y))$. A proof may be given which is of a somewhat similar kind as the one given above for (128), or one may more directly use Fredholm's formulas for the determinant and the resolvent.²⁴ The theorem is proven in the book of P. Lax [20], Theorem 12 of Chapter 24 (Lax treats the case of a kernel on $(a,+\infty)$; here we have the simpler case of a finite interval $(0,a)$). This means that $\mu(a)$ has another expression in terms of Fredholm determinants:
\[
\mu(a)=a\frac{d}{da}\log\frac{\det(1+H_a)}{\det(1-H_a)}\tag{131}
\]
Combining (129) and (131) we obtain:

\[
2a\frac{d}{da}\,a\frac{d}{da}\log\det(1+H_a)=-\Bigl(a\frac{d}{da}\log\frac{\det(1+H_a)}{\det(1-H_a)}\Bigr)^2+a\frac{d}{da}\,a\frac{d}{da}\log\frac{\det(1+H_a)}{\det(1-H_a)}\tag{132a}
\]
\[
2a\frac{d}{da}\,a\frac{d}{da}\log\det(1-H_a)=-\Bigl(a\frac{d}{da}\log\frac{\det(1+H_a)}{\det(1-H_a)}\Bigr)^2-a\frac{d}{da}\,a\frac{d}{da}\log\frac{\det(1+H_a)}{\det(1-H_a)}\tag{132b}
\]
\[
2a\frac{d}{da}\,a\varphi_a^+(a)=-\mu(a)^2+a\mu'(a)\tag{132c}
\]
\[
2a\frac{d}{da}\,a\varphi_a^-(a)=+\mu(a)^2+a\mu'(a)\tag{132d}
\]
\[
a\frac{d}{da}\bigl(a(\varphi_a^-(a)-\varphi_a^+(a))\bigr)=\bigl(a(\varphi_a^+(a)+\varphi_a^-(a))\bigr)^2\tag{132e}
\]
These Fredholm determinant identities are reminiscent of certain well-known Gaudin identities [22, App. A16], which apply to the even and odd parts of an additive (Toeplitz) convolution kernel on

²⁴ Let us recall that for a continuous kernel on a finite interval, the formula of Fredholm for the determinant as a convergent series always applies, even if the operator given by the kernel is not trace class, which may happen.

an interval $(-a,a)$; here the situation is with kernels $k(xy)$ which have a multiplicative look, and reduction to the additive case would give $g(t+u)$ type kernels on semi-infinite intervals.
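As a concrete cross-check of the quantity $\mu(a)=a\varphi_a^+(a)+a\varphi_a^-(a)$ (and of the value $\mu(a)=2a$ revealed in the footnote above), here is a small numerical sketch; it is my own illustration, not part of the paper's argument. It discretizes the two integral equations (117a)-(117b) by the midpoint rule and solves the resulting linear systems; the grid size and the value $a=0.8$ are arbitrary choices.

```python
def J0_2sqrt(p):
    # J0(2*sqrt(p)) = sum_{k>=0} (-p)^k / (k!)^2, entire in p
    term, total, k = 1.0, 1.0, 0
    while abs(term) > 1e-17 and k < 300:
        k += 1
        term *= -p / (k * k)
        total += term
    return total

def solve(A, b):
    # dense Gaussian elimination with partial pivoting (pure Python)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def mu(a, N=160):
    # midpoint-rule discretization of phi +/- H_a phi = J0(2 sqrt(a x)) on (0,a)
    h = a / N
    xs = [(i + 0.5) * h for i in range(N)]
    K = [[h * J0_2sqrt(xs[i] * xs[j]) for j in range(N)] for i in range(N)]
    rhs = [J0_2sqrt(a * x) for x in xs]
    eye = lambda i, j: 1.0 if i == j else 0.0
    phi_p = solve([[eye(i, j) + K[i][j] for j in range(N)] for i in range(N)], rhs)
    phi_m = solve([[eye(i, j) - K[i][j] for j in range(N)] for i in range(N)], rhs)
    # endpoint values phi^{+/-}(a), read off from the equations themselves at x=a
    pp = J0_2sqrt(a * a) - h * sum(J0_2sqrt(a * y) * phi_p[i] for i, y in enumerate(xs))
    pm = J0_2sqrt(a * a) + h * sum(J0_2sqrt(a * y) * phi_m[i] for i, y in enumerate(xs))
    return a * pp + a * pm

m = mu(0.8)
print(m)  # should be close to 2a = 1.6
```

The discretization error here is of the midpoint-rule order $O((a/N)^2)$, so the agreement with $2a$ is already quite tight at this modest grid size.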

We have defined $A_a=\frac{\sqrt a}2(\varphi_a^++\mathcal H\varphi_a^+)$ and $-iB_a=\frac{\sqrt a}2(-\varphi_a^-+\mathcal H\varphi_a^-)$. Let us recall that here $\varphi_a^\pm$ is restricted to $[0,+\infty)$ and is then tempered as a distribution. Using the differential equations (122a) and (122b) and the commutation property $\delta_x\mathcal H=-\mathcal H\delta_x$, $\delta_x=x\frac{\partial}{\partial x}+\frac12$, we have $\delta_x A_a=\frac{\sqrt a}2(\delta_x\varphi_a^+-\mathcal H\delta_x\varphi_a^+)=\frac{\sqrt a}2\bigl(a\frac{\partial}{\partial a}+\frac12-\mu(a)\bigr)(\varphi_a^--\mathcal H\varphi_a^-)=-\bigl(a\frac{\partial}{\partial a}-\mu(a)\bigr)(-iB_a)$ and $\delta_x(-iB_a)=\frac{\sqrt a}2\bigl(-\bigl(a\frac{\partial}{\partial a}+\frac12+\mu(a)\bigr)\varphi_a^+-\bigl(a\frac{\partial}{\partial a}+\frac12+\mu(a)\bigr)\mathcal H\varphi_a^+\bigr)=-\bigl(a\frac{\partial}{\partial a}+\mu(a)\bigr)A_a$. The following first order system of differential equations therefore holds:
\[
a\frac{\partial}{\partial a}A_a=-\mu(a)\,A_a-\delta_x(-iB_a)\tag{133a}
\]
\[
a\frac{\partial}{\partial a}(-iB_a)=+\mu(a)\,(-iB_a)-\delta_x A_a\tag{133b}
\]
Then we also have the second order differential equations $\bigl(a\frac{\partial}{\partial a}-\mu(a)\bigr)\bigl(a\frac{\partial}{\partial a}+\mu(a)\bigr)A_a=+\delta_x^2A_a$ and $\bigl(a\frac{\partial}{\partial a}+\mu(a)\bigr)\bigl(a\frac{\partial}{\partial a}-\mu(a)\bigr)(-iB_a)=+\delta_x^2(-iB_a)$, or, taking the right Mellin transforms, and writing $s=\frac12+i\gamma$, $\delta_x\mapsto i\gamma$:

\[
-a\frac{\partial}{\partial a}\,a\frac{\partial}{\partial a}\widehat{A_a}+\bigl(\mu(a)^2-a\mu'(a)\bigr)\widehat{A_a}=\gamma^2\,\widehat{A_a}\tag{134a}
\]
\[
-a\frac{\partial}{\partial a}\,a\frac{\partial}{\partial a}(-i\widehat{B_a})+\bigl(\mu(a)^2+a\mu'(a)\bigr)(-i\widehat{B_a})=\gamma^2\,(-i\widehat{B_a})\tag{134b}
\]
With the new variable $u=\log(a)$ we obtain Dirac and Schrödinger equations which are associated with this study of $\mathcal H$, modeled on the study of the cosine and sine transforms summarized in [5, 6]. All quantities in the statement of the theorem will be made completely explicit later in terms of Bessel functions, but we keep the notation sufficiently general to allow, if an interesting other case arises, writing down the identical results:

Theorem 13. For each $a>0$ let $\varphi_a^+$ and $\varphi_a^-$ be the entire functions which are the solutions to:

\[
\varphi_a^+(x)+\int_0^a J_0(2\sqrt{xy})\,\varphi_a^+(y)\,dy=J_0(2\sqrt{ax})\tag{135a}
\]
\[
\varphi_a^-(x)-\int_0^a J_0(2\sqrt{xy})\,\varphi_a^-(y)\,dy=J_0(2\sqrt{ax})\tag{135b}
\]
Let $H_a$ be the integral operator on $L^2(0,a;dx)$ with kernel $J_0(2\sqrt{xy})$. There holds:

\[
\varphi_a^+(a)=+\frac{d}{da}\log\det(1+H_a)\tag{135c}
\]
\[
\varphi_a^-(a)=-\frac{d}{da}\log\det(1-H_a)\ .\tag{135d}
\]

The tempered distributions $A_a=\frac{\sqrt a}2(1+\mathcal H)(\varphi_a^+\,1_{0<x<\infty})$ and $-iB_a=\frac{\sqrt a}2(-1+\mathcal H)(\varphi_a^-\,1_{0<x<\infty})$ verify

Dirac and Schrödinger types of differential equations in the variable $u=\log(a)$.
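The determinant formulas (135c)-(135d) can be probed numerically. The following sketch is my own experiment (with the value $a=0.6$, the grid, and the finite-difference step chosen arbitrarily): it approximates $\det(1\pm H_a)$ by the Nyström determinant $\det(\delta_{ij}\pm h\,J_0(2\sqrt{x_ix_j}))$ on a midpoint grid, and compares the logarithmic derivatives in $a$ with $\varphi_a^+(a)$ and $\varphi_a^-(a)$, which the closed forms established later in the paper give as $1-a$ and $1+a$.

```python
from math import log

def J0_2sqrt(p):
    # J0(2*sqrt(p)) = sum_{k>=0} (-p)^k / (k!)^2
    term, total, k = 1.0, 1.0, 0
    while abs(term) > 1e-17 and k < 300:
        k += 1
        term *= -p / (k * k)
        total += term
    return total

def logdet(a, sign=+1.0, N=150):
    # log det(I + sign*h*K(x_i,x_j)) -> log Fredholm determinant as N -> infinity
    h = a / N
    xs = [(i + 0.5) * h for i in range(N)]
    M = [[(1.0 if i == j else 0.0) + sign * h * J0_2sqrt(xs[i] * xs[j])
          for j in range(N)] for i in range(N)]
    ld = 0.0
    for c in range(N):
        piv = max(range(c, N), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        ld += log(abs(M[c][c]))
        for r in range(c + 1, N):
            f = M[r][c] / M[c][c]
            for j in range(c, N):
                M[r][j] -= f * M[c][j]
    return ld

a, da = 0.6, 1e-3
dplus = (logdet(a + da) - logdet(a - da)) / (2 * da)            # ~ phi_a^+(a) = 1 - a
dminus = -(logdet(a + da, -1.0) - logdet(a - da, -1.0)) / (2 * da)  # ~ phi_a^-(a) = 1 + a
print(dplus, dminus)  # expected near 0.4 and 1.6 for a = 0.6
```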

Let us consider the Hilbert space of pairs $\begin{pmatrix}\alpha(u)\\\beta(u)\end{pmatrix}$ on $\mathbb R$ with squared norms $\int_{-\infty}^{+\infty}\bigl(|\alpha(u)|^2+|\beta(u)|^2\bigr)\frac{du}2$, and the two equivalent differential systems in canonical forms:
\[
\begin{pmatrix}0&1\\-1&0\end{pmatrix}\frac{d}{du}\begin{pmatrix}\alpha(u)\\\beta(u)\end{pmatrix}-\begin{pmatrix}0&\mu(e^u)\\\mu(e^u)&0\end{pmatrix}\begin{pmatrix}\alpha(u)\\\beta(u)\end{pmatrix}=\gamma\begin{pmatrix}\alpha(u)\\\beta(u)\end{pmatrix}\tag{136}
\]
\[
\begin{pmatrix}0&-1\\1&0\end{pmatrix}\frac{d}{du}\begin{pmatrix}\alpha(u)+\beta(u)\\\alpha(u)-\beta(u)\end{pmatrix}-\begin{pmatrix}\mu(e^u)&0\\0&-\mu(e^u)\end{pmatrix}\begin{pmatrix}\alpha(u)+\beta(u)\\\alpha(u)-\beta(u)\end{pmatrix}=\gamma\begin{pmatrix}\alpha(u)+\beta(u)\\\alpha(u)-\beta(u)\end{pmatrix}\tag{137}
\]
The components obey the corresponding Schrödinger equations:

\[
-\alpha''(u)+V_+(u)\,\alpha(u)=\gamma^2\alpha\qquad V_+(u)=\mu(e^u)^2-\frac{d\mu(e^u)}{du}\tag{138a}
\]
\[
-\beta''(u)+V_-(u)\,\beta(u)=\gamma^2\beta\qquad V_-(u)=\mu(e^u)^2+\frac{d\mu(e^u)}{du}\tag{138b}
\]
Regarding the behavior at $-\infty$, we are in the limit-point case for each of the Schrödinger equations (138a) and (138b) because clearly (say, from the defining integral equations for $\varphi_a^+$ and $\varphi_a^-$) one has $\varphi_a^+(a)\to_{a\to0}J_0(0)=1$, $\varphi_a^-(a)\to_{a\to0}1$, $\mu(a)\sim_{a\to0}2a$, so the potentials are exponentially vanishing as $u\to-\infty$. Perhaps we should reveal that one has exactly $\mu(a)=2a=2e^u$, so we are dealing here with quite concrete Schrödinger equations and Dirac systems whose exact solutions will later be written explicitly in terms of modified Bessel functions, but we delay using any information which would be too specific to the $\mathcal H$-transform.

For each $\gamma\in\mathbb C$,
\[
u\mapsto\begin{pmatrix}\widehat{A_{\exp(u)}}(\frac12+i\gamma)\\\widehat{B_{\exp(u)}}(\frac12+i\gamma)\end{pmatrix}\tag{139}
\]
is a (non-zero) solution of the system (136), and we now show that it is square-integrable (with respect to $du=d\log(a)$) at $+\infty$. Let us recall the equation (111): $(s+z-1)\mathcal X_a(s,z)=-2i\mathcal B_a(s)\mathcal A_a(z)-2i\mathcal A_a(s)\mathcal B_a(z)$.

From (111) we deduce
\[
a\frac{\partial}{\partial a}\mathcal X_a(s,z)=-2\,\mathcal A_a(s)\mathcal A_a(z)-2\,(i\mathcal B_a(s))(i\mathcal B_a(z))\tag{140}
\]
We have²⁵ $\|\mathcal X_s^a\|^2=\mathcal X_a(s,\bar s)$, $\mathcal A_a(\bar s)=\overline{\mathcal A_a(s)}$, $i\mathcal B_a(\bar s)=\overline{i\mathcal B_a(s)}$, so
\[
a\frac{\partial}{\partial a}\|\mathcal X_s^a\|^2=-2|\mathcal A_a(s)|^2-2|\mathcal B_a(s)|^2\tag{141}
\]
and as of course $\lim_{a\to+\infty}\|\mathcal X_s^a\|^2=0$ (we have $\|\mathcal X_s^a\|^2\le\int_a^\infty|\mathcal X_s(x)|^2\,dx$ for $a\ge1$) we obtain:
\[
\forall s\in\mathbb C\qquad \|\mathcal X_s^a\|^2=2\int_a^\infty\bigl(|\mathcal A_b(s)|^2+|\mathcal B_b(s)|^2\bigr)\frac{db}b\tag{142}
\]
This establishes the square-integrability at $+\infty$ of $\begin{pmatrix}\widehat{A_{\exp(u)}}(s)\\\widehat{B_{\exp(u)}}(s)\end{pmatrix}$, for any $s\in\mathbb C$.

The solutions of (136) with eigenvalue $\gamma=0$ are $\begin{pmatrix}\mathcal A_a(\frac12)\\0\end{pmatrix}$ and $\begin{pmatrix}0\\\mathcal A_a(\frac12)^{-1}\end{pmatrix}$. The former is square-integrable, so from $2\le t+t^{-1}$ the latter then necessarily is not. This confirms that the Dirac system (136) is in the limit point case at $+\infty$ (according to a general theorem of Levitan [21, §13, Thm 7.1] any first order differential operator $\begin{pmatrix}0&1\\-1&0\end{pmatrix}\frac{d}{du}+\begin{pmatrix}a(u)&b(u)\\c(u)&d(u)\end{pmatrix}$ with continuous coefficients is in the limit point case at infinity). So the pair (139) is in fact, for any $\gamma\in\mathbb C$, the unique solution of (136) which is square-integrable at $+\infty$. Also the Schrödinger equation (138b) is in the limit point case as not all of its solutions are square integrable at $+\infty$. Whether the limit-point case at $+\infty$ holds for equation (138a) is less evident. Let us recall from [10, §9, Thm 2.4] and [26, §X, Thm X.8] that a sufficient condition for this is the existence of a lower bound $\liminf_{u\to+\infty}V_+(u)/u^2>-\infty$. We will prove in the next chapter that $\mu(a)=2a=2e^u$ so this is certainly the case here. In the present chapter only the fact that the Dirac system is known to be in the limit-point case will be used.

We now take $u_0=\log a_0$ and apply on $(u_0,\infty)$ the Weyl-Stone-Titchmarsh-Kodaira theory ([10, §9], [21, §3]). Let $\psi(u,s)$ be the unique solution of the system (136) for the eigenvalue $\gamma$, $s=\frac12+i\gamma$, with the initial condition $\psi(u_0,s)=\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]$ and let $\phi(u,s)$ be the unique solution with the initial condition $\phi(u_0,s)=\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]$. Let $m(\gamma)\psi(u,s)+\phi(u,s)$ for $\Im\gamma>0$ be the unique solution which is square-integrable on $(u_0,+\infty)$. So $m(\gamma)=\frac{\mathcal A_{a_0}(s)}{\mathcal B_{a_0}(s)}$, $s=\frac12+i\gamma$, $\Re(s)<\frac12$. It is a fundamental general property of the $m$ function from Hermann Weyl's theory that $\Im(m(\gamma))>0$ (for $\Im(\gamma)>0$). Here, we have a case where the $m$-function is found to be meromorphic on all of $\mathbb C$; so we see that its poles and zeros on $\mathbb R$ are simple. Furthermore, the spectral measure $\nu$ is obtained via the formula $\nu(a,b)=\lim_{\epsilon\to0^+}\frac1\pi\int_a^b\Im\,m(\gamma+i\epsilon)\,d\gamma$ (under the condition $\nu\{a,b\}=0$). We obtain:
\[
d\nu(\gamma)=\sum_{\mathcal B_{a_0}(\rho)=0}\frac{\mathcal A_{a_0}(\rho)}{-i\mathcal B'_{a_0}(\rho)}\,\delta(\gamma-\rho)\tag{143}
\]
The spectrum is thus purely discrete and the general theory tells us further that the finite linear combinations $\sum_\rho c_\rho\frac{\mathcal A_{a_0}(\rho)}{-i\mathcal B'_{a_0}(\rho)}\psi(u,\rho)$ have squared norms $\sum_\rho\frac{\mathcal A_{a_0}(\rho)}{-i\mathcal B'_{a_0}(\rho)}|c_\rho|^2$ and also that they are

²⁵ Let us recall the notation $\mathcal X_s^a=\Gamma(s)X_s^a\in L^2(a,+\infty;dx)$.

dense in $L^2((u_0,\infty)\to\mathbb C^2;du)$. For $\mathcal B_{a_0}(\rho)=0$, $\psi(u,\rho)=\mathcal A_{a_0}(\rho)^{-1}\begin{pmatrix}\widehat{A_{\exp(u)}}(\rho)\\\widehat{B_{\exp(u)}}(\rho)\end{pmatrix}1_{u\ge u_0}(u)$, so the vectors $Z_\rho^{a_0}=\begin{pmatrix}2\widehat{A_{\exp(u)}}(\rho)\\2\widehat{B_{\exp(u)}}(\rho)\end{pmatrix}1_{u\ge u_0}(u)$ are an orthogonal basis of $L^2((u_0,\infty)\to\mathbb C^2;\frac12\,du)$ and they satisfy $\|Z_\rho^{a_0}\|^2=-2\,\mathcal A_{a_0}(\rho)\,i\mathcal B'_{a_0}(\rho)$. Similarly a spectral interpretation is given to the zeros of $\mathcal A_{a_0}$ if one looks at the initial condition $\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]$. The factors of $2$ and $\frac12$ have been incorporated so that the statement may be translated (taking into account results established later) into the fact that the evaluators $\mathcal K_{a_0}(\rho,z)$, for $\mathcal B_{a_0}(\rho)=0$, are an orthogonal basis of the Hilbert space of the functions $\Gamma(z)\widehat f(z)$, $f\in K_{a_0}$. This last statement is a general theorem (under a certain condition) for spaces with the de Branges axioms [2, §22].

To discuss in a self-contained manner the generalized Parseval identity which is associated with the differential system on the full line, it is convenient to make a preliminary majoration of $\|X_s^a\|^2$, $\Re(s)=\frac12$. From (108) we have, for $\Re(s)=\frac12$: $\|\mathcal X_s^a\|^2=2\Re\bigl(\overline{\mathcal E_a(s)}\,\mathcal E_a'(s)\bigr)$. And $\mathcal E_a(s)=\Gamma(s)\widehat{E_a}(s)$. And $\widehat{E_a}(s)=\sqrt a\Bigl(a^{-s}+\frac12\int_a^\infty\bigl(\varphi_a^+(x)-\varphi_a^-(x)\bigr)x^{-s}\,dx\Bigr)$. We know from the discussion of Lemma 9 that the integral in the expression for $\widehat{E_a}(s)$ is absolutely convergent for $\Re(s)>\frac14$. Hence by the Riemann-Lebesgue lemma $\widehat{E_a}(\frac12+i\gamma)\sim a^{-i\gamma}$ as $|\gamma|\to\infty$, $\gamma\in\mathbb R$. Similarly, $\widehat{E_a}'(\frac12+i\gamma)\sim-\log(a)\,a^{-i\gamma}$. So, with $\|\mathcal X_s^a\|^2=|\Gamma(s)|^2\,\|X_s^a\|^2$ and using Stirling's formula we obtain:

Lemma 14. For each given $a>0$ one has $\|X_s^a\|^2\sim2\log|s|$ as $|s|\to\infty$, $\Re(s)=\frac12$.

From (104) expressed using $A_a$ and $B_a$ we see that $\frac{\widehat{B_a}(s)}{s-\frac12}$ is square integrable, so $s^{-1}\widehat{B_a}(s)$ is square integrable on the critical line (with respect to $|ds|$). Then using again (104) we see that $s^{-1}\widehat{A_a}(s)$ is also square integrable on the critical line.²⁶ Let us pick a function $F(s)$ on the critical line which is such that $sF(s)$ is square integrable.
Then $F(s)\widehat{A_a}(s)$ and $F(s)\widehat{B_a}(s)$ are absolutely integrable on the critical line, and $\Bigl(\int_{\Re(s)=\frac12}|F(s)\widehat{A_a}(s)|\,\frac{|ds|}{2\pi}\Bigr)^2\le C\int_{\Re(s)=\frac12}\Bigl|\frac{\widehat{A_a}(s)}s\Bigr|^2\frac{|ds|}{2\pi}$, and similarly with $\widehat{B_a}$. If we define $\alpha_F(u)=2\int_{\Re(s)=\frac12}F(s)\widehat{A_a}(s)\,\frac{|ds|}{2\pi}$ and $\beta_F(u)=2\int_{\Re(s)=\frac12}F(s)\widehat{B_a}(s)\,\frac{|ds|}{2\pi}$ (with $a=e^u$) we then compute:
\[
\int_{u_0}^\infty\bigl(|\alpha_F(u)|^2+|\beta_F(u)|^2\bigr)\,du\le C\int_{\Re(s)=\frac12}\frac{2\int_{u_0}^\infty\bigl(|\widehat{A_{e^u}}(s)|^2+|\widehat{B_{e^u}}(s)|^2\bigr)\,du}{|s|^2}\,\frac{|ds|}{2\pi}=C\int_{\Re(s)=\frac12}\frac{\|X_s^{a_0}\|^2}{|s|^2}\,\frac{|ds|}{2\pi}<\infty\tag{144}
\]

So αF (u) and βF (u) are square integrable at + . More precisely the above upper bound holds as ds ∞ ds well for (s)= 1 F (s)Aa(s) |2π| and (s)= 1 F (s)Ba(s) |2π| . So the double integrals ℜ 2 | | ℜ 2 | | R R c c ds exp(u)(z) exp(u)(s) (s) | | 2 du (145a) u

²⁶ We know in fact, according to Proposition 11, that $\widehat{A_a}$ and $\widehat{B_a}$ are bounded on the critical line.

34 where z C is arbitrary, and (s) = Γ(s)F (s), are absolutely convergent and Fubini may be ∈ F employed. Using (140):

∞ (z, s)=2 ( (z) (s)+ (z) (s)) du (146) Xexp(u0) Aexp(u) Aexp(u) Bexp(u) Bexp(u) Zu0 And we obtain the following identity of absolutely convergent integrals, for any (s) = Γ(s)F (s) F 2 1 ds with s F (s) L ( (s)= ; | | ): ∈ ℜ 2 2π ds ∞ exp(u0)(z, s) (s) | | 2 = ( exp(u)(z)αF (u)+ exp(u)(z)βF (u)) du (147) (s)= 1 X F 2π Γ(s) u A B Zℜ 2 | | Z 0 2 1 ds We shall prove that this identity holds under the weaker hypothesis F (s) L ( (s) = ; | | ). ∈ ℜ 2 2π First, still with s F (s) square integrable we suppose additionally that F = f with f K 27. ∈ exp(u0) The hilbertian kernel Kexp(u )(z,s) is exp(u )(z,s) so Kexp(u )(z,s)= exp(u )(z, s). The equations 0 X 0 0 X 0b give then:

∞ (z)= ( (z)α (u)+ (z)β (u)) du (148a) F Aexp(u) F Bexp(u) F Zu0 ds αF (u)=2 (s) a(s) | | 2 (148b) (s)= 1 F A 2π Γ(s) Zℜ 2 | | ds βF (u)=2 (s) a(s) | | 2 (148c) (s)= 1 F B 2π Γ(s) Zℜ 2 | | We have worked under the hypothesis that sF (s) is square integrable. To show that the formulae extend in the L2 sense, we first examine:

2 ds1 ds2 αF (u) =4 (s1) a(s1) | | 2 (s2) a(s2) | | 2 (149) | | (s )= 1 F A 2π Γ(s1) (s )= 1 F A 2π Γ(s2) Zℜ 1 2 | | Zℜ 2 2 | |

∞ 2 2 du ds1 ds2 αF (u) + βF (u) = (s1) (s2) exp(u0)(s1, s2) | | | | (150) 1 2 2 u | | | | 2 (si)= F F X 2π Γ(s1) 2π Γ(s2) Z 0 ZZℜ 2 | | | | There was absolute convergence in the triple integral used as an intermediate. Also (s , s )= Xexp(u0) 1 2 ds1 (s2, s1) and 1 (s1) (s2, s1) | | 2 = (s2). Hence: exp(u0) (s )= exp(u0) 2π Γ(s1) X ℜ 1 2 F X | | F R ∞ 2 2 du 2 ds ∞ 2 ( αF (u) + βF (u) ) = F (s) | | = f(x) dx (151) u | | | | 2 (s)= 1 | | 2π exp(u ) | | Z 0 Zℜ 2 Z 0 So with an arbitrary f K , F = f, (s) = Γ(s)f(s), the assignment f (α ,β ) exists in ∈ a F 7→ F F 2 the sense of L convergence when one approximates f by a sequence fn in Ka such that sfn(s) is 2 1 ds b b in L ( (s) = ; | | ), and f (αF ,βF ) is linear and isometric. We check that its range is all of ℜ 2 2π 7→ c L2(u , ; du ) L2(u , ; du ). For this let us identify the functions α (u) and β (u) which will 0 ∞ 2 ⊕ 0 ∞ 2 w w correspond to (s)= (w,s) (a = exp(u ).) On one hand from (147) it must be the case that: F Xa0 0 0 ds ∞ C z a0 (z, s) a0 (w,s) | | 2 = ( exp(u)(z)αw(u)+ exp(u)(z)βw(u)) du (152) ∀ ∈ (s)= 1 X X 2π Γ(s) u A B Zℜ 2 | | Z 0 27this is certainly possible as we know that the f(x) which are smooth, vanishing on (0, a) and of Schwartz decrease as x + are dense in Ka. → ∞

35 The left hand side is a (w,z) which on the other hand is given by the formula 2 ∞( a(z) a(w) X 0 u0 A A − a(z) 2 2 du (z) (w)) du. The functions u A , z C are certainly dense in L ((u , ) C ; ) as a a a(z) 0R 2 B B 7→ B ∈ ∞ → we know in particular that the pairsh for thei ρ’s such that (ρ) = 0 give an orthogonal basis. So Ba0 we have the identification on (u , + ): 0 ∞ α (u)=2 (w) β (u)= 2 (w) (153) w Aexp(u) w − Bexp(u) This proves that the range is all of L2((u , ) C2; du ). Let us note that in this identifica- 0 ∞ → 2 tion the hilbertian evaluator K (w, ) is sent to the pair u 2 1 (u)( (w) , (w)). To a0 · 7→ u>u0 Aa Ba check if all is coherent we compute the hilbertian scalar product (K (w, ),K (z, )). We obtain a0 · a0 · du 4 ∞ a(w) a(z)+ a(w) a(z) =2 ∞ a(w) a(z) a(w) a(z) du = exp(u )(w,z), which is u0 A A B B 2 u0 A A −B B X 0 indeedR Ka0 (w,z). R

2 1 ds Let us return to the consideration of a general F (s) L ( (s)= ; | | ). Under the hypothesis ∈ ℜ 2 2π that s F (s) is square integrable we have assigned to F the functions ds ds αF (u)=2 (s) a(s) | | 2 = F (s)2Aa(s)| | (154a) (s)= 1 F A 2π Γ(s) (s)= 1 2π Zℜ 2 | | Zℜ 2 ds c ds βF (u)=2 (s) a(s) | | 2 = F (s)2 a(s) | | (154b) (s)= 1 F B 2π Γ(s) (s)= 1 B 2π Zℜ 2 | | Zℜ 2 which are square-integrable at + . From (147) there holds, for any a =c exp(u ): ∞ 0 0 ds ∞ \ \ du Xexp(u0)(z, s)F (s) | | = (2Aexp(u)(z)αF (u)+2Bexp(u)(z)βF (u)) (155) (s)= 1 2π u 2 Zℜ 2 Z 0

The function of z on the left side is the orthogonal projection Fa0 of F to the space Ka0 . So, we deduce by unicity αF (u)1u u (u) = αFu (u) and βF (u)1u u (u) = βFu (u). We then obtain ≥ 0 0 ≥ 0 0 2 2 2 du d Fa = ∞( αF (u) + βF (u) ) so αF and βF are square-integrable on ( , + ), and as k 0 k u0 | | | | 2 −∞ ∞ K is dense in L2(0, ; dx) the assignment F (α ,β ) is isometric, and also it has a dense ∪a a R ∞ 7→ F F range in L2(R C2; du ). We can then remove the hypothesis that s F (s) is square integrable and → 2 2 define the functions αF and βF to be the limit in the L sense of functions αn and βn associated with F ’s such that F F 0 and the s F are square-integrable. Summing up: n k − nk → n 2 2 1 ds 2 2 du Theorem 15. There are unitary identifications L (0, ; dx) L ( (s)= ; | | ) L (R C ; ) ∞ → ℜ 2 2π → → 2 2 1 given in the L sense by the formulas, where (s)= 2 : ℜ f f ∞ s s 1 ds F (s)= f(s)= f(x)x− dx f(x)= F (s)x − | | (156a) 0 (s)= 1 2π Z Zℜ 2 b \ ds 2 α(u) = lim Fn(s) 2Aexp(u)(s) | | (Fn L2 F ; s Fn(s) L ) (156b) n (s)= 1 2π → ∈ →∞ Zℜ 2 \ ds β(u) = lim Fn(s) 2Bexp(u)(s) | | (156c) n (s)= 1 2π →∞ Zℜ 2 ∞ \ \ du F (s) = lim (α(u)2Aexp(u)(s)+ β(u)2Bexp(u)(s)) (156d) a0 0 2 → Zlog(a0)

The orthogonal projection of $f$ to $K_{a_0}$ corresponds to the replacement of $\alpha(u)$ by $\alpha(u)1_{u>u_0}(u)$ and of $\beta(u)$ by $\beta(u)1_{u>u_0}(u)$ ($u_0=\log(a_0)$). The unitary operators $f\mapsto\mathcal H(f)$, $F(s)\mapsto\chi(s)F(1-s)$, correspond to $(\alpha,\beta)\mapsto(\alpha,-\beta)$. For $f=X_z^{a_0}$ one has $\alpha(u)=2\widehat{A_{\exp(u)}}(z)1_{u>\log(a_0)}(u)$ and $\beta(u)=-2\widehat{B_{\exp(u)}}(z)1_{u>\log(a_0)}(u)$. The self-adjoint operator $F(s)\mapsto\gamma F(s)$ ($s=\frac12+i\gamma$) corresponds to the canonical operator:
\[
H=\begin{pmatrix}0&\frac{d}{du}\\-\frac{d}{du}&0\end{pmatrix}-\begin{pmatrix}0&\mu(e^u)\\\mu(e^u)&0\end{pmatrix}\tag{156e}
\]
which, in $L^2(\mathbb R\to\mathbb C^2;\frac{du}2)$, is essentially self-adjoint when defined on the domain of the functions of class $C^1$ (or even $C^\infty$) with compact support. The unitary operator $e^{i\tau H}$ acts on $L^2(0,\infty;dx)$ as $f(x)\mapsto e^{\frac\tau2}f(e^\tau x)$.

For the statement of self-adjointness we start with $\alpha$ and $\beta$ of class $C^1$ with compact support, define $F$ by (156d) and integrate by parts to confirm that $\gamma F(s)$ corresponds to $H(\left[\begin{smallmatrix}\alpha\\\beta\end{smallmatrix}\right])$. We know by Hermann Weyl's theorem that in the limit point case the pairs $\left[\begin{smallmatrix}\alpha\\\beta\end{smallmatrix}\right]$ of class $C^1$ with compact support are a core of self-adjointness (cf. [21, §13]). On the other hand we know that multiplication by $\gamma$ on $L^2(\Re(s)=\frac12;\frac{|ds|}{2\pi})$ with maximal domain is a self-adjoint operator. So the two self-adjoint operators are the same.

Having discussed the matter from the point of view of the isometric expansion we now turn to another topic: the scattering, or rather the total reflection against the potential barrier at $+\infty$. Another pair of solutions of the first order system (136) (hence also of the second order differential equations) is known. Let us recall from equations (95a), (95b) that we defined $j_a=\sqrt a\,(\delta_a-\varphi_a^+1_{0<x<a})$, which satisfies $\mathcal H j_a=\sqrt a\,\varphi_a^+$, and $i k_a=\sqrt a\,(\delta_a+\varphi_a^-1_{0<x<a})$, which satisfies $\mathcal H(i k_a)=\sqrt a\,\varphi_a^-$. Using again (122a) and

As $P_a\varphi_a^+$ is square integrable, and also using (85a), we have on the critical line:

\[
\Gamma(s)\widehat{j_a}(s)+\Gamma(1-s)\widehat{j_a}(1-s)\tag{159a}
\]
\[
=\Gamma(s)\widehat{j_a}(s)+\Gamma(1-s)a^{s-\frac12}-\Gamma(1-s)\sqrt a\int_0^a\varphi_a^+(x)\,x^{s-1}\,dx\tag{159b}
\]
\[
=\Gamma(s)\widehat{j_a}(s)+\Gamma(1-s)a^{s-\frac12}-\Gamma(s)\sqrt a\int_0^\infty(\mathcal H P_a\varphi_a^+)(x)\,x^{-s}\,dx\tag{159c}
\]
\[
=\Gamma(s)\widehat{j_a}(s)+\Gamma(1-s)a^{s-\frac12}+\Gamma(s)\sqrt a\int_0^\infty\bigl(\varphi_a^+(x)-J_0(2\sqrt{ax})\bigr)x^{-s}\,dx\tag{159d}
\]
\[
=\Gamma(s)a^{\frac12-s}+\Gamma(1-s)a^{s-\frac12}-\Gamma(s)\sqrt a\int_0^a J_0(2\sqrt{ax})\,x^{-s}\,dx+\Gamma(s)\sqrt a\int_a^\infty\bigl(\varphi_a^+(x)-J_0(2\sqrt{ax})\bigr)x^{-s}\,dx\tag{159e}
\]
As $J_0(2\sqrt{ax})-\varphi_a^+(x)$ is square integrable both integrals are simultaneously absolutely convergent at least for $\frac12<\Re(s)<1$ (the $\frac12$ can be improved, but this does not matter). As the boundary values on the critical line coincide we have an identity of analytic functions. We recognize in $\int_a^\infty J_0(2\sqrt{ax})\,x^{-s}\,dx$, which is absolutely convergent for $\Re(s)>\frac34$, the quantity $g_s(a)$ (equation (73)). And from equation (74) we know $g_s(a)=\chi(s)\,a^{s-1}-\int_0^a J_0(2\sqrt{ax})\,x^{-s}\,dx$. So
\[
\Gamma(s)\widehat{j_a}(s)+\Gamma(1-s)\widehat{j_a}(1-s)=\Gamma(s)a^{\frac12-s}+\Gamma(s)\sqrt a\int_a^\infty\varphi_a^+(x)\,x^{-s}\,dx\ ,\tag{160}
\]
which is indeed $2\mathcal A_a(s)$. From the equation (158a) $\widehat{j_a}(s)=a^{\frac12}\bigl(a^{-s}-\int_0^a\varphi_a^+(x)x^{-s}\,dx\bigr)$ (valid as is for $\Re(s)<1$) the function $u\mapsto\widehat{j_a}(s)$, $a=e^u$, differs from $u\mapsto e^{-i\gamma u}$ by an error which is relatively exponentially smaller (we write $s=\frac12+i\gamma$, $\Im(\gamma)>-\frac12$). So $\widehat{j_a}$ is the Jost solution at $-\infty$ of the Schrödinger equation (135g). The identity relating $\mathcal B_a(s)$ and $\widehat{k_a}(s)=i\,a^{\frac12}\bigl(a^{-s}+\int_0^a\varphi_a^-(x)x^{-s}\,dx\bigr)$ is proven similarly.

Theorem 16. The unique²⁹ solution, square integrable at $u=+\infty$, of the Schrödinger equation (135g) (resp. (135h); $\gamma\neq0$) is expressed in terms of the functions $\widehat{j_a}(\frac12+i\gamma)$ (resp. $i\,\widehat{k_a}(\frac12+i\gamma)$) satisfying at $-\infty$ the Jost condition $\widehat{j_a}\sim_{u\to-\infty}e^{-i\gamma u}$ (resp. $i\widehat{k_a}\sim_{u\to-\infty}-e^{-i\gamma u}$) as:
\[
\mathcal A_a(s)=\frac12\Bigl(\Gamma(s)\widehat{j_a}(s)+\Gamma(1-s)\widehat{j_a}(1-s)\Bigr)\tag{161a}
\]
\[
\mathcal B_a(s)=\frac12\Bigl(\Gamma(s)\widehat{k_a}(s)-\Gamma(1-s)\widehat{k_a}(1-s)\Bigr)\tag{161b}
\]
Let us add a time parameter $t$ and consider the wave equation:

\[
\Bigl(\frac{\partial^2}{\partial t^2}-\frac{\partial^2}{\partial u^2}-\frac{d\mu}{du}+\mu^2\Bigr)\Phi(t,u)=0\tag{162}
\]
Then $\Phi(t,u)=e^{i\gamma t}\,\widehat{j_{\exp(u)}}(\frac12+i\gamma)$ is a solution which behaves as $e^{i\gamma(t-u)}$ as $u\to-\infty$. This wave is thus right-moving, it is an incoming wave from $u=-\infty$ at $t=-\infty$. For a given frequency $\gamma$ there

²⁹ Here we make use of the fact that (135g) is in the limit point case at $+\infty$, because it is proven in the next chapter, or known from (66a), (66b), that $\mu(a)=2a=2e^u$, in this study of the $\mathcal H$-transform.

is a unique, up to multiplicative factor, wave which respects the condition of being at each time square integrable at $u=+\infty$. This wave is $e^{i\gamma t}\,\mathcal A_{\exp(u)}(\frac12+i\gamma)$. So the equation (161a) represents the decomposition into incoming and reflected components. There is in the reflected component a phase shift $\theta_\gamma=\arg\chi(s)$, the solution behaving approximately at $u\to-\infty$ as $C(\gamma)\cos(\gamma u+\frac12\theta_\gamma)$. This is an absolute scattering, as there is nothing a priori to compare it to. We will thus declare that equation (162) has realized $\chi(s)$ as an (absolute) scattering. Similarly the Schrödinger equation (135h) realizes $-\chi(s)$ as an absolute scattering.

We have $2\mathcal A_a(\frac12)=2\Gamma(\frac12)\widehat{j_a}(\frac12)$ and $\widehat{j_a}(\frac12)=1-\sqrt a\int_0^a\varphi_a^+(x)\,x^{-\frac12}\,dx$. So $\lim_{a\to0}\mathcal A_a(\frac12)=\Gamma(\frac12)=\sqrt\pi$. On the other hand $a\frac{d}{da}\mathcal A_a(\frac12)=-\mu(a)\,\mathcal A_a(\frac12)$ and $\mu(a)=a\frac{d}{da}\log\frac{\det(1+H_a)}{\det(1-H_a)}$, so:

\[
\mathcal A_a(\tfrac12)=\sqrt\pi\,\widehat{A_a}(\tfrac12)=\sqrt\pi\,\frac{\det(1-H_a)}{\det(1+H_a)}\tag{163}
\]

We have $a\frac{d}{da}\|X_{\frac12}^a\|^2=-2\,\widehat{A_a}(\frac12)^2$. And $\mathcal X_{\frac12}^a=\Gamma(\frac12)X_{\frac12}^a$. So:

Theorem 17. The squared-norm of the evaluator $f\mapsto X_{\frac12}^a(f)=\int_a^\infty\frac{f(x)}{\sqrt x}\,dx$ on the Hilbert space $K_a$ of square integrable functions vanishing on $(0,a)$ and with $\mathcal H$-transforms again vanishing on $(0,a)$ is:
\[
\|X_{\frac12}^a\|^2=2\int_a^\infty\Bigl(\det\frac{1-H_b}{1+H_b}\Bigr)^2\,\frac{db}b\tag{164}
\]
where $H_b$ is the restriction of $\mathcal H$ to $L^2(0,b;dx)$.

It will be seen that $\det(1+H_a)=e^{a-\frac{a^2}2}$ and $\det(1-H_a)=e^{-a-\frac{a^2}2}$. Having spent a long time in the general set-up we now turn to determining explicitly what the functions $\varphi_a^+$, $\varphi_a^-$, etc. are.
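The two closed forms just announced can be illustrated numerically; the following is my own sketch (the value $a=0.7$ and the grid size are arbitrary). The Nyström determinant $\det(\delta_{ij}\pm h\,J_0(2\sqrt{x_ix_j}))$ over a midpoint grid of $(0,a)$ converges to the Fredholm determinant, so its logarithm should approach $\pm a-\frac{a^2}2$.

```python
from math import log

def J0_2sqrt(p):
    # J0(2*sqrt(p)) = sum_{k>=0} (-p)^k / (k!)^2
    term, total, k = 1.0, 1.0, 0
    while abs(term) > 1e-17 and k < 300:
        k += 1
        term *= -p / (k * k)
        total += term
    return total

def logdet(a, sign=+1.0, N=200):
    # log of the Nystrom approximation to det(1 +/- H_a)
    h = a / N
    xs = [(i + 0.5) * h for i in range(N)]
    M = [[(1.0 if i == j else 0.0) + sign * h * J0_2sqrt(xs[i] * xs[j])
          for j in range(N)] for i in range(N)]
    ld = 0.0
    for c in range(N):
        piv = max(range(c, N), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        ld += log(abs(M[c][c]))
        for r in range(c + 1, N):
            f = M[r][c] / M[c][c]
            for j in range(c, N):
                M[r][j] -= f * M[c][j]
    return ld

a = 0.7
ld_plus = logdet(a, +1.0)   # target: a - a^2/2 = 0.455
ld_minus = logdet(a, -1.0)  # target: -a - a^2/2 = -0.945
print(ld_plus, ld_minus)
```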

7 The K-Bessel function in the theory of the $\mathcal H$-transform

Let us recall that we may define the transform $\mathcal H$ on all of $L^2(\mathbb R;dx)$ through the formula $\widetilde{\mathcal H(f)}(\lambda)=\frac i\lambda\,\widetilde f\bigl(-\frac1\lambda\bigr)$. This anticommutes with $f(x)\mapsto f(-x)$, and leaves separately invariant $L^2(0,+\infty;dx)$ and $L^2(-\infty,0;dx)$. We defined the groups $\tau_a:f(x)\mapsto f(x-a)$ and $\tau_a^{\#}=\mathcal H\tau_a\mathcal H$. We observed that the two groups are mutually commuting, and that if the leftmost point of the support of $f$ is at $\alpha(f)\ge0$ then the leftmost point of the support of $\tau_b^{\#}(f)$, for any $b\ge0$, more precisely for any $b\ge-\alpha(\mathcal H(f))$, is still exactly at $\alpha(f)$. From this we obtain the exact description of $K_a$:

Lemma 18. One has $K_a=\tau_a\tau_a^{\#}\,L^2(0,+\infty;dx)$.

Let now $Q$ be the orthogonal projection $L^2(\mathbb R;dx)\to L^2(0,+\infty;dx)$. The orthogonal projection $Q_a$ from $L^2(0,\infty;dx)$ to $K_a$ is thus exactly $\tau_a\tau_a^{\#}\,Q\,\tau_{-a}^{\#}\tau_{-a}$. It will be easier to work with $R_a=Q\,\tau_{-a}^{\#}\tau_{-a}$, especially as we are interested in scalar products so we can skip the $\tau_a\tau_a^{\#}$ isometry. First, we obtain $g_t^a(x)=R_a(f_t)(x)$, for $f_t(x)=e^{-tx}$. The part of $\tau_{-a}(f_t)$ supported in $x<0$ will be sent by $\tau_{-a}^{\#}$ to a function supported again in $x<0$. We can forget about it and we have thus first

$e^{-ta}e^{-tx}1_{x>0}(x)$, whose $\mathcal H$ transform is $e^{-ta}\,\frac1t\exp(-\frac xt)$, which we translate to the left; again we cut the part in $x<0$, and we reapply $\mathcal H$; this gives $g_t^a(x)=e^{-a(t+\frac1t)}\,e^{-tx}\,1_{x>0}(x)$. In other words we have used in this computation:

\[
Q\,\tau_{-a}^{\#}\,\tau_{-a}=Q\,\mathcal H\,\tau_{-a}\,Q\,\mathcal H\,\tau_{-a}\qquad(a\ge0)\tag{165}
\]
The orthogonal projection $f_t^a:=Q_a(f_t)$ of $f_t(x)=e^{-tx}1_{x>0}(x)$ to $K_a$ is thus $\tau_a\tau_a^{\#}(g_t^a)$. We can then compute exactly the Fourier transform of $f_t^a$ as $\widetilde{f_t^a}(i\tau)=(e^{-\tau x},\tau_a\tau_a^{\#}(g_t^a))_{L^2(\mathbb R)}$ which is also $(\tau_{-a}^{\#}\tau_{-a}e^{-\tau x},g_t^a)_{L^2(\mathbb R)}=(g_\tau^a,g_t^a)=e^{-a(t+\frac1t)}e^{-a(\tau+\frac1\tau)}\frac1{t+\tau}$. Hence:

Lemma 19. The orthogonal projection $f_t^a$ to $K_a$ of $e^{-tx}1_{x>0}(x)$ has its Fourier transform $\widetilde{f_t^a}(\lambda)$ which is given as:
\[
\widetilde{f_t^a}(i\tau)=e^{-a(t+\frac1t+\tau+\frac1\tau)}\,\frac1{t+\tau}\tag{166}
\]
The Gamma completed right Mellin transform $\mathcal F_t^a(s)$ of $f_t^a$ is the left Mellin transform of $\widetilde{f_t^a}(i\tau)$:
\[
\int_a^\infty f_t^a(x)\,\mathcal X_s(x)\,dx=\mathcal F_t^a(s)=e^{-a(t+\frac1t)}\int_0^\infty e^{-a(\tau+\frac1\tau)}\,\frac{\tau^{s-1}}{t+\tau}\,d\tau\tag{167}
\]
Let us write $W_s^a$ for the element of $L^2(0,+\infty;dx)$ such that $\tau_a\tau_a^{\#}W_s^a=\mathcal X_s^a$. We have $\mathcal F_t^a(s)=(\mathcal X_s^a,f_t^a)=(W_s^a,g_t^a)=e^{-a(t+\frac1t)}\int_0^\infty W_s^a(x)e^{-tx}\,dx$. So the Laplace transform of $W_s^a(x)$ is exactly:
\[
\int_0^\infty W_s^a(x)\,e^{-tx}\,dx=\int_0^\infty e^{-a(\tau+\frac1\tau)}\,\frac{\tau^{s-1}}{t+\tau}\,d\tau\tag{168}
\]
Writing $\frac1{t+\tau}=\int_0^\infty e^{-(t+\tau)x}\,dx$, we recover $W_s^a(x)$ as:

\[
W_s^a(x)=\int_0^\infty e^{-a(\tau+\frac1\tau)}\,\tau^{s-1}\,e^{-\tau x}\,d\tau\tag{169}
\]
Then we obtain $\int_0^\infty W_s^a(x)W_z^a(x)\,dx$, which is nothing else than $\mathcal X_a(s,z)$:

Theorem 20. The (analytic) reproducing kernel associated with the space of the completed right

Mellin transforms of the elements of $K_a$ is
\[
\mathcal X_a(s,z)=\iint_{[0,+\infty)^2}e^{-a(t+\frac1t+u+\frac1u)}\,\frac{t^{s-1}u^{z-1}}{t+u}\,dt\,du\tag{170}
\]
Here is a shortened argument: the analytic reproducing kernel $\mathcal X_a(s,z)$ is the completed right Mellin transform of $\mathcal X_s^a(x)$, so this is $\int_0^\infty(\mathcal X_s^a,e^{-tx})\,t^{z-1}\,dt$. But for $\Re(s)>\frac12$, $(\mathcal X_s^a,e^{-tx})=\Gamma(s)(Q_a(x^{-s}1_{x>a}),e^{-tx})=\Gamma(s)(x^{-s}1_{x>a},f_t^a)=\mathcal F_t^a(s)$ ($Q_a$ is the orthogonal projection to $K_a$). This gives again (170).

To proceed further, we compute $(s+z-1)\mathcal X_a(s,z)$. Using integration by parts, multiplication by $s$ (resp. $z$) is converted into $-t\frac d{dt}$ (resp. $-u\frac d{du}$); there are no boundary terms:
\[
s\,\mathcal X_a(s,z)=\iint_{[0,+\infty)^2}\Bigl(a\bigl(t-\tfrac1t\bigr)+\frac t{t+u}\Bigr)e^{-a(t+\frac1t+u+\frac1u)}\,\frac{t^{s-1}u^{z-1}}{t+u}\,dt\,du\tag{171a}
\]
\[
z\,\mathcal X_a(s,z)=\iint_{[0,+\infty)^2}\Bigl(a\bigl(u-\tfrac1u\bigr)+\frac u{t+u}\Bigr)e^{-a(t+\frac1t+u+\frac1u)}\,\frac{t^{s-1}u^{z-1}}{t+u}\,dt\,du\tag{171b}
\]

\[
(s+z-1)\,\mathcal X_a(s,z)=a\iint_{[0,+\infty)^2}\Bigl(t+u-\frac1t-\frac1u\Bigr)e^{-a(t+\frac1t+u+\frac1u)}\,\frac{t^{s-1}u^{z-1}}{t+u}\,dt\,du
\]
\[
=a\int_0^\infty e^{-a(t+\frac1t)}t^{s-1}\,dt\int_0^\infty e^{-a(u+\frac1u)}u^{z-1}\,du-a\int_0^\infty e^{-a(t+\frac1t)}t^{s-2}\,dt\int_0^\infty e^{-a(u+\frac1u)}u^{z-2}\,du\tag{171c}
\]
The K-Bessel function is $K_s(x)=\frac12\int_0^\infty e^{-\frac x2(t+\frac1t)}\,t^{s-1}\,dt=\int_0^\infty e^{-x\cosh u}\cosh(su)\,du$. It is an even function of $s$. It has, for each $x>0$, all its zeros on the imaginary axis, and was used by Pólya in a famous work on functions inspired by the Riemann $\xi$-function and for which he proved the validity of the analogue of the Riemann hypothesis [24, 25]. We have obtained the formula

\[
\mathcal X_a(s,z)=\frac{E(s)E(z)-E(1-s)E(1-z)}{s+z-1}\qquad E(s)=2\sqrt a\,K_s(2a)\tag{172}
\]
To confirm $\mathcal E_a(s)=2\sqrt a\,K_s(2a)$, let us define temporarily $A(s)=\frac{\sqrt a}2\int_0^\infty e^{-a(t+\frac1t)}\bigl(1+\frac1t\bigr)t^{s-1}\,dt$ and $iB(s)=-\frac{\sqrt a}2\int_0^\infty e^{-a(t+\frac1t)}\bigl(1-\frac1t\bigr)t^{s-1}\,dt$, which are respectively even and odd under $s\mapsto1-s$ and are such that $E(z)=A(z)-iB(z)$. We have, for all $s,z\in\mathbb C$: $-iB(s)A(z)+A(s)(-iB(z))=-i\mathcal B_a(s)\mathcal A_a(z)+\mathcal A_a(s)(-i\mathcal B_a(z))$, and considering separately the even and odd parts in $z$, we find that there exists a constant $k(a)$ such that $A(s)=k(a)\,\mathcal A_a(s)$ and $B(s)=k(a)^{-1}\mathcal B_a(s)$. Let us check that $\lim_{\sigma\to\infty}\frac{-iB(\sigma)}{A(\sigma)}=1$. It is a corollary to $\lim_{\sigma\to\infty}K_\sigma(x)/K_{\sigma+1}(x)=0$, which is elementary: $\int_{-\infty}^0\exp(-x\cosh u)e^{\sigma u}\,du=O(1)$ ($\sigma\to+\infty$), and for each $T>0$, $\int_{2T}^{3T}\exp(-x\cosh u)e^{\sigma u}\,du\ge T\exp(-x\cosh 3T)\,e^{2\sigma T}$ while $\int_0^T\exp(-x\cosh u)e^{\sigma u}\,du\le Te^{\sigma T}$; combining, we get $K_\sigma(x)=(1+o(1))\,\frac12\int_T^\infty\exp(-x\cosh u)e^{\sigma u}\,du$. So $\limsup_{\sigma\to+\infty}\frac{K_\sigma(x)}{K_{\sigma+1}(x)}\le e^{-T}$ for each $T>0$. Using (116), we then conclude $k(a)=1$.
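The identification $\mathcal E_a(s)=2\sqrt a\,K_s(2a)$ can be checked directly against the two standard integral representations of the K-Bessel function quoted above. The following sketch is my own (the values $a=0.6$, $s=0.8$, the truncated integration ranges and the plain Simpson rule are arbitrary choices); it also confirms the invariance of $\mathcal A_a(s)=\sqrt a\,(K_s(2a)+K_{s-1}(2a))$ under $s\mapsto1-s$.

```python
from math import exp, cosh, sqrt

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * h) for i in range(1, n))
    return s * h / 3

def K(s, x):
    # K_s(x) = int_0^inf exp(-x cosh u) cosh(s u) du, truncated at u = 12
    return simpson(lambda u: exp(-x * cosh(u)) * cosh(s * u), 0.0, 12.0)

a, s = 0.6, 0.8
# Mellin-type representation int_0^inf exp(-a(t+1/t)) t^{s-1} dt, via t = e^v
mellin = simpson(lambda v: exp(-2.0 * a * cosh(v)) * exp(s * v), -12.0, 12.0)
diff1 = abs(sqrt(a) * mellin - 2.0 * sqrt(a) * K(s, 2.0 * a))  # E_a(s) both ways
A = lambda z: sqrt(a) * (K(z, 2.0 * a) + K(z - 1.0, 2.0 * a))
diff2 = abs(A(s) - A(1.0 - s))                                  # evenness under s -> 1-s
print(diff1, diff2)
```

Both differences should sit at quadrature/round-off level, since the two representations agree exactly and $K_s=K_{-s}$.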

Let us examine the equality $\mathcal E_a(s)=2\sqrt a\,K_s(2a)=\sqrt a\int_0^\infty\exp\bigl(-a(t+\tfrac1t)\bigr)\,t^{s-1}\,dt$. It exhibits $\mathcal E_a$ as the left Mellin transform of $\sqrt a\exp(-a(t+\frac1t))$, so the distribution $E_a$ is determined as the distribution whose Fourier transform is $\sqrt a\exp(ia(\lambda-\lambda^{-1}))$. Using $\tau_a$ and $\tau_a^{\#}$, this means exactly:
\[
E_a=\sqrt a\,\tau_a^{\#}\tau_a\,\delta=\sqrt a\,\tau_a\,\mathcal H\,\delta(x-a)\tag{173}
\]
We exploit the symmetry $K_s=K_{-s}$, which corresponds to $\widetilde{E_a}(-\lambda^{-1})=\widetilde{E_a}(\lambda)=-i\lambda\,\widetilde{\mathcal H E_a}(\lambda)$, so the unexpected identity appears:
\[
E_a=\frac{d}{dx}\,\mathcal H E_a\tag{174}
\]
From (173) we read $\mathcal H E_a=\sqrt a\,\tau_a J_0(2\sqrt{ax})=\sqrt a\,J_0\bigl(2\sqrt{a(x-a)}\bigr)1_{x>a}(x)$. Using (174), and recalling equations (96c) and (96d), we deduce:
\[
\frac{\varphi_a^+(x)+\varphi_a^-(x)}2=J_0\bigl(2\sqrt{a(x-a)}\bigr)=I_0\bigl(2\sqrt{a(a-x)}\bigr)\tag{175a}
\]
\[
\frac{\varphi_a^+(x)-\varphi_a^-(x)}2=\frac{\partial}{\partial x}J_0\bigl(2\sqrt{a(x-a)}\bigr)=\frac{\partial}{\partial x}I_0\bigl(2\sqrt{a(a-x)}\bigr)\tag{175b}
\]
We knew this already from equations (66a), (66b)! Summing up we have proven:

Theorem 21. Let $\mathcal H$ be the self-reciprocal operator with kernel $J_0(2\sqrt{xy})$ on $L^2(0,\infty;dx)$. Let $H_a$ be the restriction of $\mathcal H$ to $L^2(0,a;dx)$. The solutions to the integral equations $\varphi_a^++H_a\varphi_a^+=J_0(2\sqrt{ax})$ and $\varphi_a^--H_a\varphi_a^-=J_0(2\sqrt{ax})$ are the entire functions:
\[
\varphi_a^+(x)=I_0\bigl(2\sqrt{a(a-x)}\bigr)+\frac{\partial}{\partial x}I_0\bigl(2\sqrt{a(a-x)}\bigr)\tag{176a}
\]
\[
\varphi_a^-(x)=I_0\bigl(2\sqrt{a(a-x)}\bigr)-\frac{\partial}{\partial x}I_0\bigl(2\sqrt{a(a-x)}\bigr)\tag{176b}
\]
One has $1-a=\varphi_a^+(a)=\frac d{da}\log\det(1+H_a)$ and $1+a=\varphi_a^-(a)=-\frac d{da}\log\det(1-H_a)$, and
\[
\det(1+H_a)=e^{+a-\frac{a^2}2}\tag{176c}
\]
\[
\det(1-H_a)=e^{-a-\frac{a^2}2}\tag{176d}
\]
The tempered distributions $A_a=\frac{\sqrt a}2(1+\mathcal H)\varphi_a^+$ and $-iB_a=\frac{\sqrt a}2(-1+\mathcal H)\varphi_a^-$, respectively invariant and anti-invariant under $\mathcal H$, are also given as:
\[
A_a(x)=\frac{\sqrt a}2\Bigl(\delta_a(x)+1_{x>a}(x)\Bigl(J_0\bigl(2\sqrt{a(x-a)}\bigr)+\frac{\partial}{\partial x}J_0\bigl(2\sqrt{a(x-a)}\bigr)\Bigr)\Bigr)\tag{176e}
\]
\[
-iB_a(x)=\frac{\sqrt a}2\Bigl(\delta_a(x)-1_{x>a}(x)\Bigl(J_0\bigl(2\sqrt{a(x-a)}\bigr)-\frac{\partial}{\partial x}J_0\bigl(2\sqrt{a(x-a)}\bigr)\Bigr)\Bigr)\tag{176f}
\]
Their Fourier transforms are $\int_{\mathbb R}e^{i\lambda x}A_a(x)\,dx=\frac{\sqrt a}2\bigl(1+\frac i\lambda\bigr)\exp\bigl(ia(\lambda-\frac1\lambda)\bigr)$ and $-i\int_{\mathbb R}e^{i\lambda x}B_a(x)\,dx=\frac{\sqrt a}2\bigl(1-\frac i\lambda\bigr)\exp\bigl(ia(\lambda-\frac1\lambda)\bigr)$. The Gamma completed right Mellin transforms are:

\[
\Gamma(s)\widehat{A_a}(s)=\mathcal A_a(s)=\sqrt a\,(K_s(2a)+K_{s-1}(2a)) \tag{176g}
\]
\[
-i\Gamma(s)\widehat{B_a}(s)=-i\mathcal B_a(s)=\sqrt a\,(K_s(2a)-K_{s-1}(2a)) \tag{176h}
\]
\[
\mathcal A_a(s)-i\mathcal B_a(s)=\mathcal E_a(s)=2\sqrt a\,K_s(2a)=\sqrt a\int_0^\infty e^{-a(t+\frac1t)}\,t^{s-1}\,dt \tag{176i}
\]
They verify the first order system, where $\mu(a)=a\phi_a^+(a)+a\phi_a^-(a)=2a$:

\[
\begin{pmatrix}0&1\\1&0\end{pmatrix}a\frac{d}{da}\begin{pmatrix}\mathcal A_a(s)\\ \mathcal B_a(s)\end{pmatrix}=\left(\begin{pmatrix}0&\mu(a)\\-\mu(a)&0\end{pmatrix}-i\left(s-\frac12\right)\begin{pmatrix}1&0\\0&-1\end{pmatrix}\right)\begin{pmatrix}\mathcal A_a(s)\\ \mathcal B_a(s)\end{pmatrix} \tag{176j}
\]

The pair $\begin{pmatrix}\mathcal A_a(s)\\ \mathcal B_a(s)\end{pmatrix}$ is the unique solution of the first order system which is square-integrable with respect to $d\log(a)$ at $+\infty$. The total reflection against the exponential barriers at $\log(a)\to+\infty$ of the associated Schr\"odinger equations realizes $+\frac{\Gamma(1-s)}{\Gamma(s)}$ and $-\frac{\Gamma(1-s)}{\Gamma(s)}$ ($\Re(s)=\frac12$) as scattering matrices.
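The determinant formulas (176c), (176d) of Theorem 21 can be confirmed numerically by a Gauss-Legendre Nystrom discretization of the kernel $J_0(2\sqrt{xy})$ on $(0,a)$. This is my own sketch (the helper `fredholm_det` is not from the paper):

```python
# Nystrom approximation of det(1 +/- H_a) for the kernel J0(2 sqrt(xy)) on (0,a).
import numpy as np
from scipy.special import j0

def fredholm_det(sign, a, n=60):
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5*a*(x + 1.0)          # Gauss-Legendre nodes mapped to (0, a)
    w = 0.5*a*w                  # corresponding weights
    K = j0(2.0*np.sqrt(np.outer(x, x)))
    sw = np.sqrt(w)
    # symmetrized discretization: det(1 + sign * W^(1/2) K W^(1/2))
    return np.linalg.det(np.eye(n) + sign*(sw[:, None]*K*sw[None, :]))

a = 1.0
print(fredholm_det(+1, a), np.exp(+a - a*a/2))   # det(1+H_a) = e^{a - a^2/2}
print(fredholm_det(-1, a), np.exp(-a - a*a/2))   # det(1-H_a) = e^{-a - a^2/2}
```

Since the kernel is entire, the Nystrom determinant converges spectrally fast, so already $n=60$ matches the closed forms to near machine precision.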

From (163) we have $\mathcal A_a(\frac12)=\sqrt\pi e^{-2a}$. To normalize according to $\mathcal A_a(\frac12)=1$ we would have to make the replacements $\mathcal A_a\to\pi^{-\frac12}e^{2a}\mathcal A_a$ and $\mathcal B_a\to\pi^{\frac12}e^{-2a}\mathcal B_a$, and the expression of $\mathcal E_a$ in terms of the $K$-Bessel function would be less simple. Let us also note that according to (116) we must have $\frac{K_{\sigma-1}(2a)}{K_\sigma(2a)}\sim\frac a\sigma$, $\sigma\to+\infty$.

Regarding the isometric expansion, as given in Theorem 15, we apply it to a function $F(s)=$

$\int_0^\infty k(x)x^{-s}\,dx$ such that $x\frac d{dx}k(x)$, as a distribution on $(0,\infty)$, is in $L^2$. Using the $L^2$-function $\frac1s\widehat{A_a}(s)$, which is the Mellin transform of the function $C_a(x)=\frac1x\int_0^xA_a(t)\,dt$, and the Parseval identity, we obtain $\alpha(u)=2\int_0^\infty\left(-x\frac d{dx}k(x)\right)C_a(x)\,dx$ as an absolutely convergent integral. The square integrable function $C_a(x)$ is explicitly:
\[
C_a(x)=\frac{\sqrt a}{2x}\,\mathbf 1_{x>a}(x)\left(J_0(2\sqrt{a(x-a)})+\sqrt{\frac{x-a}a}\,J_1(2\sqrt{a(x-a)})\right) \tag{177}
\]
And under the hypothesis made on $x\frac d{dx}k(x)$ we obtain the existence of:

\[
\alpha(u)=\lim_{X\to\infty}\left(\sqrt a\,k(a)-2Xk(X)C_a(X)+\sqrt a\int_a^Xk(x)\left(1+\frac{\partial}{\partial x}\right)J_0(2\sqrt{a(x-a)})\,dx\right) \tag{178}
\]
Let us observe that $k(x)=\int_x^\infty\frac{l(y)}y\,dy$ with $l(y)\in L^2$, so $|k(x)|\le\frac C{\sqrt x}$ and $Xk(X)C_a(X)=O(X^{-\frac14})$ as $X\to\infty$. Hence:
\[
\alpha(u)=\sqrt a\,k(a)+\sqrt a\int_a^\infty k(x)\left(1+\frac{\partial}{\partial x}\right)J_0(2\sqrt{a(x-a)})\,dx \tag{179}
\]
Comparing with equation (20a) we see that the $f(y)$ defined there is related to $\alpha(u)$, $u=\log(a)$, by the formula $f(y)=\frac1{2\sqrt a}\alpha(\log(a))$, $a=\frac y2$, so $\int|f(y)|^2\,dy=\int|\alpha(\log(a))|^2\frac{2\,da}{4a}=\int|\alpha(\log(a))|^2\,\frac12\,d\log(a)$. Similarly we obtain $\beta(u)$:

\[
\beta(u)=\sqrt a\,k(a)+\sqrt a\int_a^\infty k(x)\left(-1+\frac{\partial}{\partial x}\right)J_0(2\sqrt{a(x-a)})\,dx \tag{180}
\]
and comparing with (20b), $g(y)=\frac1{2\sqrt a}\beta(\log(a))$, $\int|g(y)|^2\,dy=\int|\beta(\log(a))|^2\,\frac12\,d\log(a)$. So according to Theorem 15 we do have equation (20d):
\[
\int_0^\infty(|f(y)|^2+|g(y)|^2)\,dy=\int_0^\infty|k(x)|^2\,dx \tag{181}
\]
From Theorem 15 the assignment $k\mapsto(\alpha,\beta)$ extends to a unitary identification $L^2(\Re(s)=\frac12;\frac{|ds|}{2\pi})\to L^2(\mathbb R\to\mathbb C^2;\frac{du}2)$, which has the property $\mathcal H(k)\mapsto(\alpha,-\beta)$. In order to complete the proof of the isometric expansion, it remains to check the equation (20c) which expresses $k$ in terms of $f$ and $g$. According to Theorem 15 we recover $k(x)$ as the inverse Mellin transform of $\int_{\mathbb R}\left(\alpha(u)\,2\widehat{A_a}(s)+\beta(u)\,2(-i\widehat{B_a}(s))\right)\frac{du}2$. Expressing this in terms of $f(y)$ and $g(y)$, $y=2a$, $u=\log(a)$, this means the identity of distributions, where we suppose for simplicity that $f(y)$ and $g(y)$ have compact support in $(0,+\infty)$ (as usual, this means having support away from $0$ as well as $\infty$):
\[
k(x)=2\int_0^\infty\left(\sqrt y\,f(2y)\,A_y(x)+\sqrt y\,g(2y)\,(-iB_y(x))\right)\frac{dy}y \tag{182}
\]
Then imagining that we are integrating against a test function $\psi(x)$ and using Fubini we obtain:

\[
2\int_0^\infty\sqrt y\,f(2y)\,A_y(x)\,\frac{dy}y=\int_0^\infty f(2y)\left(\delta(x-y)+\mathbf 1_{x>y}\left(J_0(2\sqrt{y(x-y)})-\sqrt{\frac y{x-y}}\,J_1(2\sqrt{y(x-y)})\right)\right)dy \tag{183}
\]
\[
=f(2x)+\int_0^x\left(J_0(2\sqrt{y(x-y)})-\sqrt{\frac y{x-y}}\,J_1(2\sqrt{y(x-y)})\right)f(2y)\,dy
=f(2x)+\frac12\int_0^{2x}\left(J_0(\sqrt{y(2x-y)})-\sqrt{\frac y{2x-y}}\,J_1(\sqrt{y(2x-y)})\right)f(y)\,dy \tag{184}
\]
Proceeding similarly with $g(y)$ one obtains for $2\int_0^\infty\sqrt y\,g(2y)\,(-iB_y(x))\,\frac{dy}y$:
\[
g(2x)-\frac12\int_0^{2x}\left(J_0(\sqrt{y(2x-y)})+\sqrt{\frac y{2x-y}}\,J_1(\sqrt{y(2x-y)})\right)g(y)\,dy \tag{185}
\]
Combining (183) and (185) in the formula (182) for $k(x)$ we obtain equation (20c).

8  The reproducing kernel and differential equations for the extended spaces

Let $L_a\subset L^2(0,\infty;dx)$ be the Hilbert space of square integrable functions $f$ which are constant in $(0,a)$ and with their $\mathcal H$-transforms again constant in $(0,a)$. The distribution $x\frac d{dx}\frac d{dx}xf=\frac d{dx}x\,x\frac d{dx}f$ vanishes in $(0,a)$ and its $\mathcal H$ transform does too. So $s(s-1)\widehat f(s)$ is an entire function with trivial zeros at $-\mathbb N$. The Hilbert space of the functions $s(s-1)\Gamma(s)\widehat f(s)$ satisfies the axioms of [2]; we prove everything according to the methods developed in the earlier chapters. Our goal is to determine the evaluators and reproducing kernel for the spaces $L_a$.

For f La, f(s) is a meromorphic function with at most a pole at s = 1 and also f(0) ∈ a ∞ s s does not necessarily vanish. The Mellin-Plancherel transform 0 f(x)x− dx = 0 c(f)x− dx + b c(f) b ∞ s a f(x)x− dx has polar part s 1 . Let us write (f, Y1) = Rc(f) = Res(f(s),sR = 1) = s(s − − − − 1)Γ(s)f(s) . This defines an element Y L . We define also a = Γ(1)Y = Y . Then (f, a)= R |s=1 1 ∈ a Y1 1 1 Y1 a a b s(s 1)Γ(s)f(s) s=1. We also define 0 as the vector such that (f, 0 ) = s(s 1)Γ(s)f(s) s=0 = − b | Y Y − | f(0). One observes (f, (Y )) = ( (f), Y ) = s(s 1)Γ(s) [(f)(s) = s(s 1)Γ(1 s)f(1 − H 1 H 1 − H |s=1 − − − b a a a b s) s=1 = f(0) = (f, 0 ) so 0 = ( 1 ). To lighten the notation we sometimes write 1 and 0 |b − Y Y H Y Y b Y instead a and a when no confusion can arise. Y1 Y0 b 30 We will also consider the vectors X× L such that f L f(s) = (X×,f). The orthogonal s ∈ a ∀ ∈ a s projection of Xs× to Ka La is Xs. Let us look more closely at this orthogonal projection. ⊂ b First let N be the (closed) vector space sum L2(0,a; dx)+ L2(0,a; dx). Inside N we have the a H a 2 codimension two space M defined as the sum of (1 )⊥ L (0,a; dx) and of its image under a 0

44 to N = L2(0,a; dx)+ L2(0,a; dx) and as such is uniquely written as u + v . As a L we a H 1 H 1 Y1 ∈ a have constants α, β C such that: ∈ u + H v = α (186a) 1 a 1 − H u + v = β (186b) a 1 1 − where we recall that P is the restriction to (0,a) and H = P P , D = H2. From what was said a a aH a a a previously α = ( a, a) and β = ( ( a), a) = ( a, a). We have thus: Y1 Y1 H Y1 Y1 Y0 Y1 1 u = (1 D )− ( α1 + βH (1 )) (186c) 1 − a − 0

Also the function a may be obtained as the orthogonal projection to L of 1 1 . Indeed Y1 a − a 0

1 1 α(1 , (1 D )− 1 ) β(1 , (1 D )− H 1 )=1 (190a) 0

We thus have:

Proposition 22. Let $p(a)$ and $q(a)$ be defined as
\[
p(a)=\int_0^a\left((1-D_a)^{-1}\mathbf 1_{0<x<a}\right)(x)\,dx\ ,\qquad q(a)=\int_0^a\left((1-D_a)^{-1}H_a\,\mathbf 1_{0<x<a}\right)(x)\,dx
\]
Then $\alpha(a)=\dfrac{p(a)}{p(a)^2-q(a)^2}$ and $\beta(a)=\dfrac{q(a)}{p(a)^2-q(a)^2}$.

We have introduced, for $s\neq0,1$, $X_s^\times$ as the evaluator $f\mapsto\widehat f(s)$ for functions in $L_a$. We shall write $\mathcal X_s^\times=\Gamma(s)X_s^\times$ and then $\mathcal Y_s^a=s(s-1)\mathcal X_s^\times$. This is compatible with our previous definitions of $\mathcal Y_1^a$ and $\mathcal Y_0^a$. We note that the orthogonal projection of $\mathcal X_s^\times$ to $K_a$ is $\mathcal X_s$. So we may write
\[
\mathcal X_s^\times=\mathcal X_s+\lambda(s)\,\mathcal Y_1^a+\mu(s)\,\mathcal Y_0^a \tag{192}
\]
We shall also write $\mathcal Y_a(s,z)=\int_0^\infty\mathcal Y_s^a(x)\,\mathcal Y_z^a(x)\,dx=z(z-1)\Gamma(z)\widehat{\mathcal Y_s^a}(z)$. One has $\mathcal H(\mathcal Y_s^a)=\mathcal Y_{1-s}^a$ and $\mathcal Y_a(s,z)=\mathcal Y_a(1-s,1-z)=\mathcal Y_a(z,s)$. Taking the scalar products with $\mathcal Y_1^a$ and $\mathcal Y_0^a$ in (192) we obtain
\[
\frac1{s(s-1)}\,\mathcal Y_a(1,s)=\lambda(s)\alpha+\mu(s)\beta \tag{193a}
\]
\[
\frac1{s(s-1)}\,\mathcal Y_a(1,1-s)=\lambda(s)\beta+\mu(s)\alpha \tag{193b}
\]

\[
\lambda(s)=\frac1{s(s-1)}\left(p\,\mathcal Y_a(1,s)-q\,\mathcal Y_a(1,1-s)\right) \tag{193c}
\]
\[
\mu(s)=\frac1{s(s-1)}\left(-q\,\mathcal Y_a(1,s)+p\,\mathcal Y_a(1,1-s)\right)=\lambda(1-s) \tag{193d}
\]
Combining with (192), this gives:
\[
\mathcal Y_s=s(s-1)\,\mathcal X_s+\mathcal Y_a(1,s)\left(p\,\mathcal Y_1^a-q\,\mathcal Y_0^a\right)+\mathcal Y_a(1,1-s)\left(-q\,\mathcal Y_1^a+p\,\mathcal Y_0^a\right) \tag{194}
\]
Let
\[
T_a(s)=p(a)\,\mathcal Y_a(1,s)-q(a)\,\mathcal Y_a(1,1-s) \tag{195}
\]
Proposition 23. The (analytic) reproducing kernel $\mathcal Y_a(s,z)$ of the extended space $L_a$ is given by each of the following expressions:
\[
s(s-1)z(z-1)\,\mathcal X_a(s,z)+\begin{pmatrix}\mathcal Y_a(1,s)&\mathcal Y_a(1,1-s)\end{pmatrix}\begin{pmatrix}p(a)&-q(a)\\-q(a)&p(a)\end{pmatrix}\begin{pmatrix}\mathcal Y_a(1,z)\\ \mathcal Y_a(1,1-z)\end{pmatrix} \tag{196a}
\]
\[
=s(s-1)z(z-1)\,\mathcal X_a(s,z)+\begin{pmatrix}T_a(s)&T_a(1-s)\end{pmatrix}\begin{pmatrix}\alpha(a)&\beta(a)\\ \beta(a)&\alpha(a)\end{pmatrix}\begin{pmatrix}T_a(z)\\ T_a(1-z)\end{pmatrix} \tag{196b}
\]
\[
=s(s-1)z(z-1)\,\mathcal X_a(s,z)+T_a(s)\,\mathcal Y_a(1,z)+T_a(1-s)\,\mathcal Y_a(1,1-z) \tag{196c}
\]

A very important observation shall now be made, before turning to the determination of the quantities $p(a)$ and $q(a)$. Let $L$ be the unitary operator:

\[
L(f)(x)=f(x)-\frac1x\int_0^xf(y)\,dy \tag{197}
\]
It is the operator of multiplication by $\frac{s-1}s$ at the level of right Mellin transforms. Obviously it converts functions constant on $(0,a)$ into functions vanishing on $(0,a)$. Let us now consider the operator
\[
\mathcal H^\diamond=L\,\mathcal H\,L^{-1}=L^2\mathcal H=\mathcal H L^{-2} \tag{198}
\]

It is a unitary, self-adjoint, self-reciprocal, scale reversing operator whose kernel is easily computed to be
\[
k^\diamond(x,y)=J_0(2\sqrt{xy})-2\,\frac{J_1(2\sqrt{xy})}{\sqrt{xy}}+\frac{1-J_0(2\sqrt{xy})}{xy}=\sum_{n=0}^\infty(-1)^n\,\frac{n^2\,x^ny^n}{(n+1)!^2} \tag{199}
\]
It has $L(e^{-x})=(1+\frac1x)e^{-x}-\frac1x$ as self-reciprocal function; the Mellin transform is $\frac{s-1}s\Gamma(1-s)$, which, multiplied by $s(s-1)$, gives $(1-s)^2\Gamma(1-s)$, which is the Mellin transform of a more convenient invariant function for $\mathcal H^\diamond$, the function $x(x-1)e^{-x}$. This function is the analog for $\mathcal H^\diamond$ of $e^{-x}$ for $\mathcal H$. Let us now consider the space $L(L_a)$. It consists of the square integrable functions vanishing identically on $(0,a)$ and having $\mathcal H^\diamond$ transforms also identically zero on $(0,a)$. But then the entire theory applies to $\mathcal H^\diamond$ exactly as it did for $\mathcal H$, up to some minor details in the proofs where the function $J_0$ was really used, as in Lemma 9 or Proposition 11. We have for $\mathcal H^\diamond$ versions of all quantities previously considered for $\mathcal H$. To check that the proof of Proposition 11 may be adapted, we need to look at $k_1^\diamond(x)=\int_0^xk^\diamond(t)\,dt=\sqrt x\,J_1(2\sqrt x)+2(J_0(2\sqrt x)-1)+2\int_0^{2\sqrt x}\frac{1-J_0(u)}u\,du=O_{x\to+\infty}(x^{\frac14})$. So we may employ Lemma 8 as was done for $\mathcal H$. Proposition 11 and Theorem 12 thus hold. We must be careful that the operator $L^{-1}$ is always involved when comparing functions or distributions related to $\mathcal H^\diamond$ with those related to $\mathcal H$. For example, one has $X_s^\times=\frac s{s-1}L^{-1}X_s^{\diamond\times}$ and $\mathcal Y_s=s(s-1)\mathcal X_s^\times=L^{-1}\mathcal Y_s^\diamond$. The two types of Gamma completed Mellin transforms differ: for $\mathcal H$ we consider $\Gamma(s)\widehat f(s)$ while for $\mathcal H^\diamond$ we consider $s^2\Gamma(s)\widehat g(s)$. Indeed this is quite the coherent thing to do in order that:
\[
s^2\Gamma(s)\widehat g(s)=s(s-1)\Gamma(s)\widehat f(s) \tag{200}
\]
for $g=L(f)$. The bare Mellin transforms of elements of the spaces $K_a^\diamond$ are not always entire in the complex plane: they may have a pole at $s=0$.
After multiplying by $s^2\Gamma(s)$, which is the left Mellin transform of the self-invariant function $x(x-1)e^{-x}$ (just as $\Gamma(s)$ is the left Mellin transform of $e^{-x}$), we do obtain entire functions, whose trivial zeros are at $-1,-2,\dots$ ($0$ is not a trivial zero anymore). From equation (200) we see that the (analytic) reproducing kernel $\mathcal X_a^\diamond(s,z)$ exactly coincides with the function $\mathcal Y_a(s,z)$ whose initial computation has been given in Proposition 23. Also the Schr\"odinger equations will realize $\pm\left(\frac{1-s}s\right)^2\frac{\Gamma(1-s)}{\Gamma(s)}$ as scattering matrices, and there will be an isometric expansion generalizing the de Branges-Rovnyak expansion to the spaces $L_a$. We will determine exactly the functions $\mathcal A_a^\diamond(s)$, $\mathcal B_a^\diamond(s)$, $\mathcal E_a^\diamond(s)$ and especially the function $\mu^\diamond(a)$. It will be seen that this is a more complicated function than the simple-minded $\mu(a)=2a$...
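The claim that $x(x-1)e^{-x}$ plays for $\mathcal H^\diamond$ the role that $e^{-x}$ plays for $\mathcal H$ can be tested directly by quadrature. The sketch below (function names `k_diamond`, `apply_H` are mine) checks both fixed-point identities:

```python
# Numerical check: J0(2 sqrt(xy)) fixes e^{-x}, and the kernel of
# H_diamond = L H L^{-1} fixes x(x-1)e^{-x}.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def k_diamond(x, y):
    w = np.sqrt(x*y)
    return j0(2*w) - 2*j1(2*w)/w + (1 - j0(2*w))/(w*w)

def apply_H(kernel, f, x):
    val, _ = quad(lambda y: kernel(x, y)*f(y), 0, np.inf, limit=200)
    return val

f = lambda y: y*(y - 1.0)*np.exp(-y)
g = lambda y: np.exp(-y)

x = 2.0
print(apply_H(lambda u, v: j0(2*np.sqrt(u*v)), g, x), g(x))   # agree
print(apply_H(k_diamond, f, x), f(x))                          # agree
```

The series form of the kernel makes the second identity transparent: term by term, $\int_0^\infty y^n\,y(y-1)e^{-y}\,dy=(n+1)\,(n+1)!$, which reproduces the coefficients $n^2/n!$ of $x(x-1)e^{-x}$.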

The key now is to obtain the functions $p(a)$ and $q(a)$ defined in Proposition 22, and the function $\mathcal Y_a(1,s)$. It turns out that their computation also involves the following quantities (we recall that $J_0^a(x)=J_0(2\sqrt{ax})$):

\[
r(a)=1+\int_0^a\left((1-D_a)^{-1}H_a\cdot J_0^a\right)(x)\,dx \tag{201a}
\]
\[
s(a)=\int_0^a\left((1-D_a)^{-1}\cdot J_0^a\right)(x)\,dx \tag{201b}
\]
In order to compute $r$, $s$, $p$, $q$ we shall need the already defined functions $\phi_a^+$ ($=(1+H_a)^{-1}J_0^a$ on

$(0,a)$), $\phi_a^-$ ($=(1-H_a)^{-1}J_0^a$), as well as the entire functions $\psi_a^+$ and $\psi_a^-$ verifying:
\[
\psi_a^++P_a\mathcal H\psi_a^+=1 \tag{202a}
\]
\[
\psi_a^--P_a\mathcal H\psi_a^-=1 \tag{202b}
\]
We have $r(a)=1+\frac12\int_0^a(-\phi_a^+(x)+\phi_a^-(x))\,dx$, and we know $\phi_a^\pm$ explicitly. But we shall proceed in a more general manner. First we recall the differential equations (121a), (121b) which are verified by $\phi_a^\pm$ (where $\delta_x=x\frac{\partial}{\partial x}+\frac12$):

\[
a\frac{\partial}{\partial a}\phi_a^+=+\delta_x\phi_a^--\left(\mu(a)+\frac12\right)\phi_a^+ \tag{203a}
\]
\[
a\frac{\partial}{\partial a}\phi_a^-=+\delta_x\phi_a^++\left(\mu(a)-\frac12\right)\phi_a^- \tag{203b}
\]
We compute $ar'(a)$ from (203a), (203b); simplifying gives exactly $ar'(a)=\mu(a)\,\frac12\int_0^a(\phi_a^+(x)+\phi_a^-(x))\,dx=\mu(a)s(a)$. Similarly, starting with $s(a)=\frac12\int_0^a(\phi_a^+(x)+\phi_a^-(x))\,dx$, we obtain $as'(a)=\mu(a)r(a)-s(a)$. So the quantities $r$ and $s$ verify the system:

\[
ar'(a)=\mu(a)s(a) \tag{204a}
\]
\[
(as)'(a)=\mu(a)r(a) \tag{204b}
\]

Either solving the system taking into account the behavior as $a\to0$, or using the explicit formulas for $\phi_a^\pm$, we obtain in this specific instance of the study of $\mathcal H$, for which $\mu(a)=2a$, that $r(a)=I_0(2a)$ and $s(a)=I_1(2a)$.
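The route through the explicit formulas can be sketched numerically: with $\phi_a^\pm$ as in Theorem 21, $r(a)=1+\frac12\int_0^a(\phi_a^--\phi_a^+)$ and $s(a)=\frac12\int_0^a(\phi_a^++\phi_a^-)$ should reproduce $I_0(2a)$ and $I_1(2a)$. A quick quadrature confirms this (helper names are mine):

```python
# Check r(a) = I0(2a) and s(a) = I1(2a) from the explicit phi_a^{+/-}.
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, i1

def phi(a, x, sign):
    z = 2.0*np.sqrt(a*(a - x))
    # d/dx I0(2 sqrt(a(a-x))) = -sqrt(a/(a-x)) I1(2 sqrt(a(a-x)))
    return i0(z) + sign*(-np.sqrt(a/(a - x))*i1(z))

def r_of_a(a):
    val, _ = quad(lambda x: phi(a, x, -1) - phi(a, x, +1), 0, a)
    return 1.0 + 0.5*val

def s_of_a(a):
    val, _ = quad(lambda x: phi(a, x, +1) + phi(a, x, -1), 0, a)
    return 0.5*val

a = 0.8
print(r_of_a(a), i0(2*a))   # agree
print(s_of_a(a), i1(2*a))   # agree
```

Both integrands are bounded on $(0,a)$: the apparent singularity of $\sqrt{a/(a-x)}\,I_1(2\sqrt{a(a-x)})$ at $x=a$ cancels, the product tending to $a$.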

From (202a) and (202b) we obtain two types of differential equations, either involving $x\frac{\partial}{\partial x}$ or $a\frac{\partial}{\partial a}$. From $\psi_a^+(x)+\int_0^aJ_0(2\sqrt{xy})\psi_a^+(y)\,dy=1$ we obtain $(1+P_a\mathcal H)\,a\frac{\partial}{\partial a}\psi_a^+(x)=-a\psi_a^+(a)J_0^a$. We do similarly with $\psi_a^-$ and deduce:
\[
a\frac{\partial}{\partial a}\psi_a^+(x)=-a\psi_a^+(a)\phi_a^+(x) \tag{205a}
\]
\[
a\frac{\partial}{\partial a}\psi_a^-(x)=+a\psi_a^-(a)\phi_a^-(x) \tag{205b}
\]
Regarding the differential equations with $x\frac{\partial}{\partial x}$, which we shall actually not use, the computation is done using only the fact that the kernel is a function of $xy$, so that $x\frac{\partial}{\partial x}J_0(2\sqrt{xy})=y\frac{\partial}{\partial y}J_0(2\sqrt{xy})$. We only state the result:

\[
\left(x\frac{\partial}{\partial x}+\frac12\right)\psi_a^+(x)=\frac12\psi_a^-(x)-a\psi_a^+(a)\phi_a^-(x) \tag{206a}
\]
\[
\left(x\frac{\partial}{\partial x}+\frac12\right)\psi_a^-(x)=\frac12\psi_a^+(x)+a\psi_a^-(a)\phi_a^+(x) \tag{206b}
\]
Let us now turn to the quantities $p(a)$ and $q(a)$. We have $p(a)=\int_0^a\left((1-D_a)^{-1}\mathbf 1_{0<x<a}\right)(x)\,dx=$

48 + a + We remark that from the integral equations defining ψa± we have ψa (a)=1 0 J0(2√ax)ψa (x) dx = − + − a + a a ψa (a)+ψa (a) 1 0 φa (x) dx and ψa−(a)=1+ 0 J0(2√ax)ψa−(x) dx =1+ 0 φa−(x) dx.R So 2 = r(a) − − ψ+(a)+ψ (a) − a a andR 2 = s(a). Hence theR quantity p(a) verifies: R

\[
p'(a)=r(a)^2+s(a)^2 \tag{208}
\]

With exactly the same method one obtains:

\[
q'(a)=2r(a)s(a) \tag{209}
\]

Let us observe that $q(a)=\frac12\int_0^a\left((1-H_a)^{-1}-(1+H_a)^{-1}\right)(1)\,dx=O(a^2)$ and $p(a)=\frac12\int_0^a\left((1+H_a)^{-1}+(1-H_a)^{-1}\right)(1)\,dx\sim_{a\to0}a$. So $(p\pm q)\sim_{a\to0}a$. Also $r(a)\sim_{a\to0}1$ and $s(a)\sim_{a\to0}a$. The equation for $p(a)$ can be integrated:

\[
p(a)=a\left(r(a)^2-s(a)^2\right) \tag{210}
\]

Indeed this has the correct derivative. Regarding $q(a)$ the situation is different: one has $q'=2rs=\frac{2a}\mu\,rr'$, so in the special case considered here, and only in that case, we have $q(a)=\frac12(r(a)^2-1)$. Summing up:

Proposition 24. The quantities $r(a)$, $s(a)$, $p(a)$ and $q(a)$ verify the differential equations $ar'(a)=\mu(a)s(a)$, $as'(a)+s(a)=\mu(a)r(a)$, $p'(a)=r(a)^2+s(a)^2$, $q'(a)=2r(a)s(a)$, $p(a)=a(r(a)^2-s(a)^2)$. In the special case of the $\mathcal H$ transform one has:

\[
r(a)=I_0(2a) \tag{211a}
\]
\[
s(a)=I_1(2a) \tag{211b}
\]
\[
p(a)=a\left(I_0(2a)^2-I_1(2a)^2\right) \tag{211c}
\]
\[
q(a)=\frac12\left(I_0(2a)^2-1\right) \tag{211d}
\]
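All the relations collected in Proposition 24 can be verified for the closed forms (211a)-(211d) with a few lines of finite-difference checking (my sketch, not part of the paper):

```python
# Check the differential relations of Proposition 24 for r = I0(2a),
# s = I1(2a), p = a(I0^2 - I1^2), q = (I0^2 - 1)/2, with mu(a) = 2a.
import numpy as np
from scipy.special import i0, i1

r = lambda a: i0(2*a)
s = lambda a: i1(2*a)
p = lambda a: a*(i0(2*a)**2 - i1(2*a)**2)
q = lambda a: 0.5*(i0(2*a)**2 - 1.0)

def d(f, a, h=1e-6):
    # central difference approximation of f'(a)
    return (f(a + h) - f(a - h))/(2*h)

a, mu = 0.9, 2*0.9
print(a*d(r, a), mu*s(a))             # a r' = mu s
print(a*d(s, a) + s(a), mu*r(a))      # (a s)' = mu r
print(d(p, a), r(a)**2 + s(a)**2)     # p' = r^2 + s^2
print(d(q, a), 2*r(a)*s(a))           # q' = 2 r s
print(p(a), a*(r(a)**2 - s(a)**2))    # p = a (r^2 - s^2)
```

Each pair of printed values agrees to within the finite-difference error, reflecting the Bessel identities $I_0'=I_1$ and $I_1'(z)=I_0(z)-I_1(z)/z$.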

We now need to determine a(1,s) = s(s 1)Γ(s)Y1(s). There holds Y1 = u1 + v1 = Y 1−s − H a ∞ s ∞ s α10a v1. So Y1(s) = α s 1 + a ( v1)(x)x− dx. Then a ( v1)(x)x− dx = − H a − H c H ∞ 0 v1(x)gs(x) dx = 0 v1(x)gs(x) dx, where theR function gs from (73) hasR been used. Recalling c 1 from (76a), (76b) the analytic functions u , equal to (1 D )− H (g ) on (0,a), and v , equal to R R s − − a a s s 1 (1 D )− P (g ) on (0,a), and using (186d) and self-adjointness we obtain − a a s a a a v (x)g (x) dx = α u (x) dx β v (x) dx (212) 1 s − s − s Z0 Z0 Z0 ∂ Let us now recall that we computed ((83a)) (x ∂x + s)us and found it to be on the interval (0,a) 1 a s 1 a given as av (a)(1 D )− (J ) a(a− + u (a))(1 D )− H (J ). Integrating and also using − s − a 0 − s − a a 0 equations (99) and (102) we obtain

a 1 s \ √aE (s) a − + (s 1) u (x) dx = √a (E )(s)s(a) √aE (s)(r(a) 1) (213) a − − s − H a − a − Z0 c c 49 a 1 s \ a 2 − Ea(s)r(a) (Ea)(s)s(a) us(x) dx = √a − − H (214) 0 s 1 Z c − ∂ 1 a \ 1 a We have similarly ((83b)) (x +1 s)v = √aE (s)(1 D )− (J ) √a (E )(s)(1 D )− H (J ) ∂x − s − a − a 0 − H a − a a 0 a \ so integration gives avs(a) s vs(x) dx = √aEa(s)s(a) √a (Ea)(s)(r(a) 1) hence − 0 − c − H − a R \ Ea(sc)s(a)+ (Ea)(s)r(a) vs(x) dx = √a H (215) 0 s Z c Combining (214), (215) with (212), and using (1,s)= s(s 1)Γ(s)Y a(s): Ya − 1 α(a)r(a) β(a)s(a) α(a)s(a) β(a)r(a) Y (s)= √a E (s) + √a \(E )(s) c (216a) 1 a s 1 − s H a s 1 − s  −   −  ac(1,s)= √a csα(a)( a(s)r(a)+ a(1 s)s(a)) + (1 s)β(a)( a(s)s(a)+ a(1 s)r(a)) Y E E − − E E −  (216b)

Proposition 25. The functions $\mathcal Y_a(1,s)$ and $\mathcal Y_a(1,1-s)$ verify
\[
\begin{pmatrix}\mathcal Y_a(1,s)\\ \mathcal Y_a(1,1-s)\end{pmatrix}=\sqrt a\begin{pmatrix}\alpha(a)&\beta(a)\\ \beta(a)&\alpha(a)\end{pmatrix}\begin{pmatrix}s\left(\mathcal E_a(s)r(a)+\mathcal E_a(1-s)s(a)\right)\\ (1-s)\left(\mathcal E_a(s)s(a)+\mathcal E_a(1-s)r(a)\right)\end{pmatrix} \tag{217}
\]

Comparing with equation (195) we get $T_a(s)=\sqrt a\,s\left(\mathcal E_a(s)r(a)+\mathcal E_a(1-s)s(a)\right)$. So:

Theorem 26. The analytic reproducing kernel $\mathcal Y_a(s,z)$ associated with the extended spaces $L_a$ is:

\[
\mathcal Y_a(s,z)=s(s-1)z(z-1)\,\mathcal X_a(s,z)+\begin{pmatrix}T_a(s)&T_a(1-s)\end{pmatrix}\begin{pmatrix}\alpha(a)&\beta(a)\\ \beta(a)&\alpha(a)\end{pmatrix}\begin{pmatrix}T_a(z)\\ T_a(1-z)\end{pmatrix} \tag{218a}
\]
\[
\mathcal X_a(s,z)=\frac{\mathcal E_a(s)\mathcal E_a(z)-\mathcal E_a(1-s)\mathcal E_a(1-z)}{s+z-1}\ ,\qquad \mathcal E_a(s)=2\sqrt a\,K_s(2a) \tag{218b}
\]
\[
T_a(s)=\sqrt a\,s\left(\mathcal E_a(s)r(a)+\mathcal E_a(1-s)s(a)\right)\ ,\qquad r(a)=I_0(2a)\ ,\quad s(a)=I_1(2a) \tag{218c}
\]
\[
\alpha(a)=\frac{p(a)}{p(a)^2-q(a)^2}\ ,\qquad p(a)=a\left(I_0(2a)^2-I_1(2a)^2\right) \tag{218d}
\]
\[
\beta(a)=\frac{q(a)}{p(a)^2-q(a)^2}\ ,\qquad q(a)=\frac12\left(I_0(2a)^2-1\right) \tag{218e}
\]

We proceed now to the determination of ⋄, ⋄ and ⋄ = ⋄(s) i ⋄(s). The function ⋄(s) Aa Ba Ea Aa − Ba Aa is even under s 1 s and ⋄(s) is odd. We must have: → − Ba

z (1,z)=2( i ⋄(1)) ⋄(z)+2 ⋄(1)( i ⋄(z)) (219) Ya − Ba Aa Aa − Ba On the other hand from (195) we have (1,z)= αT (z)+ βT (1 z). Let us write Ya a a − zT (z)= √a(z(z 1)r(a) (z)+ zr(a) (z) a − Ea Ea + z(z 1)s(a) (1 z)+ zs(a) (1 z)) (220) − Ea − Ea − zT (1 z)= √a( z(z 1)s(a) (z) z(z 1)r(a) (1 z)) (221) a − − − Ea − − Ea −

50 z (1,z)= √a z(z 1)((αr βs) (z) + (αs βr) (1 z)) Ya − − Ea − Ea −  + zα(r (z)+ s (1 z)) (222) Ea Ea −  + Extracting the even part (z (1,z)) and the odd part (z (1,z))−: Ya Ya 1 1 (z (1,z))+ = √a z(z 1)(α β)(r + s) + (z )α(r s)( i )+ α(r + s) (223) Ya − − Aa − 2 − − Ba 2 Aa   1 1 (z (1,z))− = √a z(z 1)(α + β)(r s)( i ) + (z )α(r + s) + α(r s)( i ) (224) Ya − − − Ba − 2 Aa 2 − − Ba +  We have (z (1,z)) = 2( i ⋄(1)) ⋄(z) and (z (1,z))− = 2 ⋄(1)( i ⋄(z)). Let us define Ya − Ba Aa Ya Aa − Ba 1 1 K(a) = (2( i ⋄(1)))− and L(a)=(2 ⋄(1))− . We know that: − Ba Aa i ⋄(σ) lim − Ba =1 (225) σ + (σ) → ∞ Aa⋄ So it must be that K(a)(α β)(r + s)= L(a)(α + β)(r s) (226) − − Also, taking z = 1 in (223) we have 1 = 2√a 1 α (r (1) + s (0)) = αT (1). But referring to KL 2 Ea Ea a (195) one has T (1) = pα qβ = 1. So: a − 1 K(a)L(a)= (227) α(a) Then: 1 (α + β)(r s) α2 β2 r2 s2 1 p 1 K(a)2 = − = − − = (228) α(a) (α β)(r + s) α(a) (α β)2(r + s)2 p a (α β)2(r + s)2 − − − 1 (α β)(r + s)K(a)= a− 2 (229) − We conclude: 1 α(r s) α ⋄ = z(z 1) + (z ) − ( i )+ (230a) Aa − Aa − 2 (α β)(r + s) − Ba 2(α β)Aa − − 1 α(r + s) α i ⋄ = z(z 1)( i ) + (z ) + ( i ) (230b) − Ba − − Ba − 2 (α + β)(r s)Aa 2(α + β) − Ba − α p Let us now observe that α β = p q and further: ± ± 2 α r s p a(r s) p′ q′ d − = − = a − = a log(p q) (231a) α β r + s p q p p q da − − − − α r + s p a(r + s)2 p + q d = = a ′ ′ = a log(p + q) (231b) α + β r s p + q p p + q da −

1 p d 1 ⋄(z) = (z(z 1)+ ) (z)+ a log(p q)(z )( i (z)) (232a) Aa − 2 p q Aa da − − 2 − Ba − 1 p d 1 i ⋄(z) = (z(z 1)+ )( i (z)) + a log(p + q)(z ) (z) (232b) − Ba − 2 p + q − Ba da − 2 Aa Combining we get finally:

Theorem 27. The $E$ function associated with the entire functions $s(s-1)\Gamma(s)\widehat f(s)$, $f\in L_a$, is:
\[
\mathcal E_a^\diamond(z)=\left(z(z-1)+\frac12\,a\frac d{da}\log\!\left(p(a)^2-q(a)^2\right)\left(z-\frac12\right)+\frac12\,p(a)\alpha(a)\right)\mathcal E_a(z)
+\left(\frac12\,a\frac d{da}\log\frac{p(a)+q(a)}{p(a)-q(a)}\left(z-\frac12\right)+\frac12\,p(a)\beta(a)\right)\mathcal E_a(1-z) \tag{233}
\]
where $p(a)=a(I_0(2a)^2-I_1(2a)^2)$, $q(a)=\frac12(I_0(2a)^2-1)$, $\alpha(a)=\frac{p(a)}{p(a)^2-q(a)^2}$, $\beta(a)=\frac{q(a)}{p(a)^2-q(a)^2}$, and $\mathcal E_a(z)=2\sqrt a\,K_z(2a)$.

1 1 We shall now obtain by two methods the function µ⋄(a). First, we compute ⋄( ) = ( + Ea 2 − 4 1 1 1 p+q 1 d 1 1 2 p(α + β)) a( 2 )= 4 p q a( 2 ) and invoke a da a⋄( 2 )= µ⋄(a) a⋄( 2 ). We thus have: E − E E − E Theorem 28. The mu function for the chain of spaces L , 0

′ ′ pq +qp The asymptotic behavior is a corollary to lima 2a −p2 q2 = 2 which itself follows from →∞ − − 2 2 1 4 1 q 1 p q 4 I0(2a) 16a2 and ( p )′ + 16a3 which are easily deduced from the − ∼ x ∼ I (x)= e (1 + 1 + 9 + ... ) ([33]). Of course the o(1) is in fact an O(a 1). 0 √2πx 8x 128x2 − ⋄ ⋄ (1 σ) µ (a) Ea − The second method to obtain µ⋄(a) relies on ⋄(σ) σ + 2σ ((116)). We have: Ea ∼ → ∞ 1 d 2 2 ⋄(σ) a log(p q ) 1 1 Ea =1+ 2 da − − + O( ) (237a) σ2 (σ) σ σ2 Ea a⋄(1 σ) 1 d p + q 2 E − σ 1 (a log ) (237b) σ2 (1 σ) → →∞ − 2 da p q µ(a) Ea − − ⋄ µ (a) 1 d p+q so µ(a) = 1 µ(a) a da log p q and (234) is confirmed. We can use this method to gather more − − information. From (115a) we have, as (s) + : ℜ → ∞ + 1 s aφ (a) aφ−(a) 1 E (s)= a 2 − (1 + − + O( )) (238a) a 2s s2 + 1 s aφ⋄ (a) aφ⋄−(a) 1 Ec(s)= a 2 − (1 + − + O( )) (238b) a⋄ 2s s2

c 2 Let us be careful that (s)=Γ(s)E (s) while ⋄(s)= s Γ(s)E (s). We obtain: Ea a Ea a⋄

+ + d 2 2 aφ⋄ (a) aφ⋄−(ac)= aφ (a) aφ−(a)+ a clog(p q ) 2 (239a) − − da − − + + d p q aφ⋄ (a)= aφ (a)+ a log − (239b) da a d p + q aφ⋄−(a)= aφ−(a) a log (239c) − da a

We recall that $(p\pm q)\sim_{a\to0}a$. We integrate (239b) and (239c) using (130a), (130b) and this gives $\det(1+H_a^\diamond)=\frac{p-q}a\det(1+H_a)$ and $\det(1-H_a^\diamond)=\frac{p+q}a\det(1-H_a)$.
\[
\det(1+H_a^\diamond)=\frac{p-q}a\,\det(1+H_a)=\det(1+H_a)\,\frac1a\int_0^a(r-s)^2\,da \tag{240a}
\]
\[
\det(1-H_a^\diamond)=\frac{p+q}a\,\det(1-H_a)=\det(1-H_a)\,\frac1a\int_0^a(r+s)^2\,da \tag{240b}
\]
Theorem 29. Let $\mathcal H^\diamond=L\mathcal HL^{-1}$ be the self-reciprocal operator on $L^2(0,\infty;dx)$ with kernel:
\[
J_0(2\sqrt{xy})-2\,\frac{J_1(2\sqrt{xy})}{\sqrt{xy}}+\frac{1-J_0(2\sqrt{xy})}{xy}=\sum_{n=0}^\infty(-1)^n\,\frac{n^2\,x^ny^n}{(n+1)!^2} \tag{241a}
\]
and let $H_a^\diamond$ be the restriction to $L^2(0,a;dx)$. Then:

\[
\det(1+H_a^\diamond)=e^{+a-\frac{a^2}2}\,\frac1a\int_0^a\left(I_0(2t)-I_1(2t)\right)^2dt=e^{+a-\frac{a^2}2}\left(I_0(2a)^2-I_1(2a)^2-\frac{I_0(2a)^2-1}{2a}\right) \tag{241b}
\]
\[
\det(1-H_a^\diamond)=e^{-a-\frac{a^2}2}\,\frac1a\int_0^a\left(I_0(2t)+I_1(2t)\right)^2dt=e^{-a-\frac{a^2}2}\left(I_0(2a)^2-I_1(2a)^2+\frac{I_0(2a)^2-1}{2a}\right) \tag{241c}
\]
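The determinants of Theorem 29 can be checked by the same Nystrom discretization used for $H_a$, now applied to the kernel of $\mathcal H^\diamond$ (helper names below are mine, not the paper's):

```python
# Nystrom check of the Fredholm determinants of Theorem 29.
import numpy as np
from scipy.special import j0, j1, i0, i1

def k_diamond(x, y):
    w = np.sqrt(x*y)
    return j0(2*w) - 2*j1(2*w)/w + (1 - j0(2*w))/(w*w)

def det_diamond(sign, a, n=60):
    x, gw = np.polynomial.legendre.leggauss(n)
    x = 0.5*a*(x + 1.0)
    gw = 0.5*a*gw
    K = k_diamond(x[:, None], x[None, :])
    sw = np.sqrt(gw)
    return np.linalg.det(np.eye(n) + sign*(sw[:, None]*K*sw[None, :]))

def closed_form(sign, a):
    main = i0(2*a)**2 - i1(2*a)**2 - sign*(i0(2*a)**2 - 1)/(2*a)
    return np.exp(sign*a - a*a/2)*main

a = 1.0
print(det_diamond(+1, a), closed_form(+1, a))   # agree
print(det_diamond(-1, a), closed_form(-1, a))   # agree
```

Note that $\det(1+H_a^\diamond)<1<\det(1-H_a^\diamond)$, consistent with the kernel's trace being negative for small $a$ (the $n=0$ term of the series vanishes, and the $n=1$ term is $-xy/4$).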

2 1 2 1 2 1 1 1 From theorem 26 1 = 1 + 2(α + β)Ta( ) and Ta( ) = √a (r + s) a( ). Also, kY 2 k 16 kX 2 k 2 2 2 E 2 2 1 2 2 1 1 2 2 π 2 2 (α + β)(r + s) = p q (r + s) . Furthermore 1 = 16 Γ( 2 ) X⋄1 = 16 X⋄1 and 1 = − kY 2 k k 2 k k 2 k kX 2 k 2 1 det(1 Ha) a 2 π X 1 . And also from (163) a( 2 ) = √π det(1+−H ) and from theorem 17 one has X 1 = k 2 k E a k 2 k 2 ∞ 1 Hb db a a a 2 a det 1+−H b and the analog holds for X⋄1 . Let us observe that X⋄1 = LX×1 so X⋄1 = b 2 2 − 2 k 2 k a XR 1 ×. So  k 2 k

2 2 2 a 2 ∞ 1 Hb⋄ db ∞ 1 Hb db p′ + q′ 1 Ha X 1 × =2 det − =2 det − +8a det − k 2 k 1+ H b 1+ H b p q 1+ H a  b⋄  a  b   a  Z Z − (242)

Theorem 30. Let L be the Hilbert space of square integrable functions on f L2(0, ; dx) such a ∈ ∞ ∞ that both f and (f)= 0 J0(2√xy)f(y) dy are constant on (0,a). Then the squared norm of the H f(x) linear form f ∞ dx is given by either one of the following two expressions: 7→ 0 √x R

R 2 2 2 4b ∞ (2b + 1)I (2b) 2bI (2b) 1 e− 2 0 − 1 − db (243a) (2b 1)I2(2b) 2bI2(2b)+1 b Za  − 0 − 1  4b 2 ∞ e− 8a(I0(2a)+ I1(2a)) 4a =2 db +2 e− (243b) b (2a 1)I2(2a) 2aI2(2a)+1 Za − 0 − 1 The squared norm of the restriction of the linear form to the subspace Ka of functions vanishing on −4b (0,a) and with (f) also vanishing on (0,a) is 2 ∞ e db. H a b R One may express the wish to verify explicitely from equations (230a) and (230b), or in the

equivalent form
\[
\mathcal A_a^\diamond(z)=\left(\left(z-\frac12\right)^2+\frac14\,\frac{p+q}{p-q}\right)\mathcal A_a(z)+\left(a\frac d{da}\log(p-q)\right)\left(z-\frac12\right)(-i\mathcal B_a(z)) \tag{244a}
\]
\[
-i\mathcal B_a^\diamond(z)=\left(\left(z-\frac12\right)^2+\frac14\,\frac{p-q}{p+q}\right)(-i\mathcal B_a(z))+\left(a\frac d{da}\log(p+q)\right)\left(z-\frac12\right)\mathcal A_a(z) \tag{244b}
\]
the differential system:
\[
a\frac{\partial}{\partial a}\mathcal A_a^\diamond(z)=-\mu^\diamond(a)\,\mathcal A_a^\diamond(z)-\left(z-\frac12\right)(-i\mathcal B_a^\diamond(z)) \tag{245a}
\]
\[
a\frac{\partial}{\partial a}(-i\mathcal B_a^\diamond(z))=+\mu^\diamond(a)\,(-i\mathcal B_a^\diamond(z))-\left(z-\frac12\right)\mathcal A_a^\diamond(z) \tag{245b}
\]
and also to verify explicitly the reproducing kernel formula
\[
\mathcal Y_a(s,z)=\frac{\mathcal E_a^\diamond(s)\mathcal E_a^\diamond(z)-\mathcal E_a^\diamond(1-s)\mathcal E_a^\diamond(1-z)}{s+z-1} \tag{246}
\]
The interested reader will see that the algebra has a tendency to become slightly involved if one does not benefit from the following preliminary observations: using $p'=r^2+s^2$, $q'=2rs$, $ar'=\mu s$, $as'=\mu r-s$, $p=a(r^2-s^2)$, one first establishes $aq''+q'=2\mu p'$ and $ap''+p'-\frac pa=2\mu q'$. Using this one checks easily:
\[
\left(\frac{p'+q'}{p+q}\right)'+\left(\frac{p'+q'}{p+q}\right)^2=\frac1{a^2}\,\frac p{p+q}+\frac{2\mu-1}a\,\frac{p'+q'}{p+q} \tag{247a}
\]
\[
\left(\frac{p'-q'}{p-q}\right)'+\left(\frac{p'-q'}{p-q}\right)^2=\frac1{a^2}\,\frac p{p-q}-\frac{2\mu+1}a\,\frac{p'-q'}{p-q} \tag{247b}
\]
Also the identity
\[
\frac{p^2}{p^2-q^2}=a^2\,\frac{p'+q'}{p+q}\,\frac{p'-q'}{p-q}=p\alpha \tag{247c}
\]
is useful. The verifications may then be done.

9  Hyperfunctions in the study of the $\mathcal H$ transform

In this final section we return to the equation (31):
\[
\widetilde{\psi(f)}(it)=\frac{t+1}{2t}\,\widetilde f\!\left(i\,\frac{t+\frac1t}2\right), \tag{248}
\]
Let us recall that $f\in L^2(0,\infty;dx)$ and that $\psi:L^2(0,\infty;dx)\to L^2(0,\infty;dx)$ is the isometry which corresponds to $F(w)\mapsto F(w^2)$, where $F(w)=\sum_{n=0}^\infty c_nw^n$, $f(x)=\sum_{n=0}^\infty c_nP_n(x)e^{-x}$, $P_n(x)=L_n^{(0)}(2x)$. Let $g=\psi(f)$. Using $\lambda=it$, in the $L^2$ sense:
\[
g(x)=\frac1{2\pi}\int_{-\infty}^\infty\frac{\lambda+i}{2\lambda}\,\widetilde f\!\left(\frac{\lambda-\frac1\lambda}2\right)e^{-i\lambda x}\,d\lambda \tag{249}
\]
It is natural to consider separately $\lambda>0$ and $\lambda<0$. So let us define:
\[
G_+(x)=\frac1{2\pi}\int_{-\infty}^0\frac{\lambda+i}{2\lambda}\,\widetilde f\!\left(\frac{\lambda-\frac1\lambda}2\right)e^{-i\lambda x}\,d\lambda \tag{250a}
\]
\[
G_-(x)=-\frac1{2\pi}\int_0^\infty\frac{\lambda+i}{2\lambda}\,\widetilde f\!\left(\frac{\lambda-\frac1\lambda}2\right)e^{-i\lambda x}\,d\lambda \tag{250b}
\]
We observe that $G_+$ is in the Hardy space of $\Im(x)>0$ and $G_-$ is in the Hardy space of $\Im(x)<0$. Their boundary values must coincide on $(-\infty,0)$ as $g\in L^2(0,+\infty;dx)$. So we have a single analytic function $G(z)$ on $\mathbb C\setminus[0,+\infty)$ with $G=G_+$ for $\Im(x)>0$ and $G=G_-$ for $\Im(x)<0$. Then $g=\psi(f)=G_+-G_-$ is computed as
\[
g(x)=G(x+i0)-G(x-i0) \tag{251}
\]
In other words $g$ is most naturally seen as a hyperfunction [23], as a difference of boundary values of analytic functions. We shall now compute it explicitly, and we will also show later that this observation extends to the distributions $A_a(x)$, $-iB_a(x)$, $E_a(x)$ which are associated with the study of the $\mathcal H$ transform. The point of course is that the corresponding functions $G$ will for them have a simple natural expression.

We have, for (z) > 0: ℑ 1 1 ∞ λ i λ + G(z)= − f(− λ )e+iλz dλ (252a) 2π 2λ 2 Z0 e 1 ∞ λ i ∞ i 1 x( λ+ 1 ) +iλz G(z)= − e 2 − λ f(x) dx e dλ (252b) 2π 2λ Z0 Z0  1 1 2 λ i λ Let µ = (λ ), λ = µ + 1+ µ , − dλ = dµ, with, for 0 <λ< , <µ< . 2 − λ 2λ λ+i ∞ −∞ ∞ p 1 ∞ λ ∞ iµx +iλz G(z)= e− f(x) dx e dµ (252c) 2π λ + i 0 Z−∞ Z  For µ + , λ =2µ+ 1 +... , and for µ , λ = 1 +... , and λ i as µ . So far → ∞ 2µ → −∞ − 2µ λ+i ∼ 2µ → −∞ 2 1 the inner integral is in the L sense. We shall now suppose that f and f ′ are in L (so limx f(x)= →∞ f(0) ∞ iµx ∞ iµx x x 1 ∞ iµx 0) and write 0 e− f(x) dx = 0 e− − e f(x) dx = iµ+1 + iµ+1 0 e− (f(x)+ f ′(x)) dx.

R +Riλz +iλz R 1 ∞ λ e ∞ λ e ∞ iµx G(z)= f(0) dµ + e− (f(x)+ f ′(x)) dx dµ 2π λ + i iµ +1 λ + i iµ +1   0   Z−∞ Z−∞ Z (252d) 1 1 In this manner, with f L , f ′ L , (z) > 0, we have an absolutely convergent double integral. ∈ ∈ ℑ +iλz iµx+iλz 1 ∞ λ e ∞ ∞ λ e− G(z)= f(0) dµ + dµ (f(x)+ f ′(x)) dx 2π λ + i iµ +1 0 λ + i iµ +1  Z−∞ Z Z−∞   (252e) +iλz 1 ∞ λ e 1 ∞ λ i 1 +iλz 1 ∞ 1 +iλz Observing 2π λ+i iµ+1 dµ = 2π 0 2−λ iµ+1 e dλ = 2πi 0 λ i e dλ, we then suppose −∞ − (z) < 0, (z) > 0 (or (z) 0) so that we may rotate the contour to λ = it, 0 t< . This ℜ ℑ R ℑ ≥ R R − ≤ ∞ procedure gives thus: +iλz tz 1 ∞ λ e 1 ∞ e dµ = dt (252f) 2π λ + i iµ +1 2πi 0 1+ t Z−∞ Z Also: iµx+iλz 1 ∞ λ e− 1 ∞ 1 i x (λ 1 )+iλz dµ = e− 2 − λ dλ (252g) 2π λ + i iµ +1 2πi 0 λ i Z−∞ Z −

55 We rotate the contour to λ i[0, ), which is licit as x 0 and, for (z) < 0, x 0, we obtain: ∈ −∞ ≥ ℜ ≥ zt x (t+ 1 ) 1 ∞ e 2 t − dt (252h) 2πi 1+ t Z0 Going back this allows to write (252e), for (z) < 0, (z) > 0 as: ℜ ℑ zt zt x (t+ 1 ) 1 ∞ e ∞ ∞ e − 2 t G(z)= f(0) dt + dt (f(x)+ f ′(x)) dx (252i) 2πi 1+ t 1+ t Z0 Z0 Z0 ! ! and finally, after integrating by parts:

1 ∞ ∞ 1 1 zt 1 y(t+ 1 ) G(z)= (1 + )e − 2 t dt f(y) dy (252j) 2πi 2 t Z0 Z0  1 This last expression (still temporarily under the hypothesis f,f ′ L ) is certainly a priori absolutely ∈ convergent for (z) < 0 and gives G(z) in this half-plane. ℜ We are led to the study of:

1 ∞ 1 1 zt 1 y(t+ 1 ) a(z,y)= (1 + )e − 2 t dt (253a) 2πi 2 t Z0 We still temporarily assume (z) < 0. We even suppose z < 0 and make a change of variable: ℜ 1 y 1 ∞ 1 √y(y 2z)(u+ 1 ) 1 ∞ 1 √y(y 2z)(u+ 1 ) 1 a(z,y)= e− 2 − u du + e− 2 − u du (253b) 2πi y 2z 2 2 u r − Z0 Z0 

1 1 1 ∞ 1 √y 2z(v+y 1 ) 1 ∞ 1 √y 2z(v+y 1 ) 1 a(z,y)= e− 2 − v dv + e− 2 − v dv (253c) 2πi y 2z 2 2 v r − Z0 Z0  1 y a(z,y)= K ( y(y 2z)) + K ( y(y 2z)) (253d) 2πi y 2z 1 − 0 −   r − p p For any z C [0, + ) and any y 0 the integrals in (253c) converge absolutely and define an ∈ \ ∞ ≥ analytic function of z. Furthermore the K Bessel functions decrease exponentially as y + → ∞ in (253d). For fixed z, a(z,y) is certainly a square-integrable function of y (also at the origin), locally uniformly in z so the equation (252j) defines G as an analytic function on the entire domain C [0, + ). Then by an approximation argument (252j) applies to any f L2(0, ; dx) and any \ ∞ ∈ ∞ z C [0, + ). ∈ \ ∞ We now study the boundary values a(x+i0,y), a(x i0,y), x, y 0. We could use the expression − ≥ of the K Bessel functions in terms of the Hankel functions H(1) and H(2), go to the boundary, and then recover the Bessel functions J0 and J1. But we shall proceed in a more direct manner. Let us first examine

1 1 ∞ zt 1 y(t+ 1 ) 1 d(z,y)= e − 2 t dt 2πi 2 t Z0 (254a) 1 1 ∞ 1 √y(y 2z)(u+ 1 ) 1 1 ∞ √y(y 2z) t dt = e− 2 − u du = e− − 2 2πi 2 0 u 2πi 1 √t 1 Z Z − 1 ∞ √y(y 2z) t 1 1 ∞ √y(y 2z) t 1 1 = e− − t− dt + e− − ( ) dt (254b) 2 2πi 1 2πi 1 √t 1 − t Z Z −

56 √y(y 2z) ∞ √y(y 2z) t 2 1 e− − 1 e− − t− dt 1 ∞ √y(y 2z) t 1 1 = − + e− − ( ) dt (254c) 2 2πi y(y 2z) 2πi 1 √t 1 − t R − Z − We now look at the (distributional) boundary values z x with z = x + iǫ, ǫ 0+ or z = x iǫ p → → − and ǫ 0+. We shall take x> 0. Here the singularities at y =2x and at y = 0 are integrable and → we need only take the limit in the naive sense. We distinguish y > 2x from 0

1 ∞ √y(y 2x) t dt d(x + i0,y)= d(x i0,y)= e− − (255a) 2 − 2πi 1 √t 1 Z − In the latter case:

+i√y(2x y) ∞ +i√y(2x y) t 2 1 e − 1 e − t− dt 1 ∞ +i√y(2x y) t 1 1 d(x + i0,y)= − + e − ( ) dt 2π y(2x y) 2πi √t2 1 − t R Z1 − − (255b) p i√y(2x y) ∞ i√y(2x y) t 2 1 e− − 1 e− − t− dt 1 ∞ i√y(2x y) t 1 1 d(x i0,y)= − + e− − ( ) dt − −2π y(2x y) 2πi √t2 1 − t R Z1 − − (255c) p So d(x + i0,y) d(x i0,y) is supported in (0, 2x) and has values there − − 2 1 cos y(2x y) ∞ cos( y(2x y) t) t− dt 1 ∞ 1 1 − − 1 − + sin( y(2x y) t)( ) dt π y(2x y) π − √t2 1 − t p R p Z1 − p − (255d) p We used this method to have a clear control not only of the pointwise behavior but also of the limit as a distribution. There is no necessity now to keep working with absolutely convergent integrals and we have the simple result, using the very classical Mehler formula:31

\[
d(x+i0,y)-d(x-i0,y)=\mathbf 1_{0<y<2x}(y)\,\frac1\pi\int_1^\infty\frac{\sin(\sqrt{y(2x-y)}\,t)}{\sqrt{t^2-1}}\,dt=\frac12\,\mathbf 1_{0<y<2x}(y)\,J_0(\sqrt{y(2x-y)}) \tag{256}
\]
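The classical Mehler formula invoked here, $\int_1^\infty\sin(\lambda t)/\sqrt{t^2-1}\,dt=\frac\pi2 J_0(\lambda)$ for $\lambda>0$, can be confirmed numerically; the conditionally convergent tail needs an oscillatory rule, and the endpoint $t=1$ is tamed by $t=\cosh u$ (the helper name `mehler_lhs` is mine):

```python
# Numerical check of int_1^inf sin(lam*t)/sqrt(t^2-1) dt = (pi/2) J0(lam).
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def mehler_lhs(lam):
    # on [1,2]: substitute t = cosh(u), giving a smooth integrand
    part1, _ = quad(lambda u: np.sin(lam*np.cosh(u)), 0, np.arccosh(2.0))
    # oscillatory tail on [2, inf): QAWF rule for the sin(lam*t) weight
    part2, _ = quad(lambda t: 1.0/np.sqrt(t*t - 1.0), 2, np.inf,
                    weight='sin', wvar=lam)
    return part1 + part2

for lam in (1.7, 3.0):
    print(mehler_lhs(lam), 0.5*np.pi*j0(lam))   # agree
```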

Let us now consider the behavior of

1 1 ∞ zt 1 y(t+ 1 ) 1 y 1 ∞ 1 √y(y 2z)(u+ 1 ) e(z,y)= e − 2 t dt = e− 2 − u du (257) 2πi 2 2πi y 2z 2 Z0 r − Z0 ∂ We make the simple observation that e(z,y) = ∂z d(z,y). So we shall have (as is confirmed by a more detailed examination): ∂ 1 e(x + i0,y) e(x i0,y)= 1 (y)J ( y(2x y)) − − ∂x 2 0

57 Some pointwise regularity of f at x is necessary to fully justify the formula; in order to check if continuity of f at 2x is enough we can not avoid examining e(z,y) more closely as z x. →

y 1 ∞ √y(y 2z) t t dt e(z,y)= e− − 2 y 2z 2πi 1 √t 1 r − Z − (260) √y(y 2z) √ 2 1 e− − y 1 ∞ √y(y 2z) t t t 1 = + e− − − − dt 2 2πi y 2z y 2z 2πi 1 √t 1 − r − Z − − − e √y(y 2z) 1 The integral term on the right causes no problem at all. And writing y 2z = y 2z + − − − − e √y(y 2z) 1 1 y 2z − , again the term on the right has no problem, so there only remains y 2z , and of − − course, this is very well-known, the difference between +i0 and i0 gives the Poisson kernel, so for − non-tangential convergence, continuity of f at 2x is enough. Of course this discussion was quite superfluous if we wanted to understand k as an L2 function, here we have the information that non tangential boundary value of G(x + i0) G(x i0) does give pointwise the formula (259) if f is − − continuous at y =2x. We can also rewrite (259) as:

\[
k(x)=\left(1+\frac d{dx}\right)\frac12\int_0^{2x}J_0(\sqrt{y(2x-y)})\,f(y)\,dy \tag{261}
\]
This is exactly one half of equation (20c), where $k$ was obtained from $(f,g)$ as $\psi(f)+w\cdot\psi(g)$. Let us observe that $w=\frac{\lambda-i}{\lambda+i}$ verifies, as an operator, $\left(\frac d{dx}+1\right)\cdot w=w\cdot\left(\frac d{dx}+1\right)=\frac d{dx}-1$. So the isometry corresponding to $g(w)\mapsto w\,G(w^2)$, which is the composite $w\cdot\psi$, sends $g$ to $\left(-1+\frac d{dx}\right)\frac12\int_0^{2x}J_0(\sqrt{y(2x-y)})\,g(y)\,dy$. This is indeed the second half of equation (20c). The formulas (20a) and (20b) may be established in an exactly analogous manner (taking $k$ with compact support to simplify the discussion). But this would be a repetition of the arguments we just went through, so rather I will conclude the paper with a method allowing to go directly from $\mathcal A_a(s)$, $-i\mathcal B_a(s)$, $\mathcal E_a(s)$ to the distributions $A_a(x)$, $-iB_a(x)$, $E_a(x)$, and this will show that they are in a natural manner (differences of) boundary values of an analytic function.

From the expression $\mathcal{E}_a(s) = \Gamma(s)\,\widehat{E_a}(s) = 2\sqrt{a}\,K_s(2a) = \sqrt{a}\int_0^\infty e^{-a(t+\frac{1}{t})}\,t^{s-1}\,dt$, we shall recover $\widehat{E_a}(s)$ as a right Mellin transform with the help of the Hankel formula $\Gamma(s)^{-1} = \frac{1}{2\pi i}\int_C e^{v}\,v^{-s}\,dv$, where C is a contour coming from $-\infty$ along the lower edge of the cut along $(-\infty,0]$, turning counterclockwise around the origin, and going back to $-\infty$ along or slightly above the upper edge of the cut. Let us write the Hankel formula as

$$
\frac{t^{s-1}}{\Gamma(s)} = \frac{1}{2\pi i}\int_C e^{tv}\,v^{-s}\,dv \qquad (t>0) \qquad (262)
$$

So we have:

$$
\widehat{E_a}(s) = \sqrt{a}\,\frac{1}{2\pi i}\int_0^\infty \left(\int_C e^{tv}\,v^{-s}\,dv\right) e^{-a(t+\frac{1}{t})}\,dt \qquad (263)
$$

Let us suppose $\Re(s) > 1$. Then the contour C can be deformed into the contour $C_\epsilon$ coming from $-i\infty$ to $-i\epsilon$, then turning counterclockwise from $e^{-i\frac{\pi}{2}}\epsilon$ to $e^{i\frac{\pi}{2}}\epsilon$, then going to $+i\infty$. Also we impose

$0 < \epsilon < a$; the order of integration may then be exchanged:

$$
\widehat{E_a}(s) = \sqrt{a}\,\frac{1}{2\pi i}\int_{C_\epsilon} \left(\int_0^\infty e^{tv}\,e^{-a(t+\frac{1}{t})}\,dt\right) v^{-s}\,dv \qquad (264)
$$

and using e(z,y) from (257) this gives:

$$
\Re(s) > 1 \implies \widehat{E_a}(s) = \sqrt{a}\int_{C_\epsilon} 2\,e(v,2a)\,v^{-s}\,dv \qquad (265)
$$

We have previously studied e(z,y), which is also expressed as in (260). We see on this basis and simple estimates that we may deform $C_\epsilon$ into a contour $C_{a,\eta}$ going from $+\infty$ to $a+\eta$ along the lower border, turning clockwise around a from $a+\eta-i0$ to $a+\eta+i0$, then going from $a+\eta$ to $+\infty$ on the upper border ($\eta \ll 1$). We will have in particular from (260) a term $\frac{2}{2\pi i}\int_{a+\eta-i0}^{a+\eta+i0} \frac{v^{-s}}{2a-2v}\,dv$ which is $a^{-s}$. The final result is obtained:

$$
\Re(s) > 1 \implies \widehat{E_a}(s) = \sqrt{a}\left(a^{-s} - \int_a^\infty \sqrt{\frac{a}{x-a}}\,J_1\big(2\sqrt{a(x-a)}\big)\,x^{-s}\,dx\right) \qquad (266)
$$

This identifies $\widehat{E_a}(s)$ as the right Mellin transform of the distribution

$$
E_a(x) = \sqrt{a}\left(\delta_a(x) + \mathbf{1}_{x>a}(x)\,\frac{\partial}{\partial x}J_0\big(2\sqrt{a(x-a)}\big)\right)
       = \sqrt{a}\,\frac{\partial}{\partial x}\Big(\mathbf{1}_{x>a}(x)\,J_0\big(2\sqrt{a(x-a)}\big)\Big) \qquad (267)
$$

This proof reveals that the distribution $E_a(x)$ is expressed in a natural manner as the difference of boundary values $\sqrt{a}\,\big(2e(x+i0,2a) - 2e(x-i0,2a)\big)$, with

$$
\sqrt{a}\,2e(z,2a) = \sqrt{a}\,\frac{1}{2\pi i}\int_0^\infty e^{zt - a(t+\frac{1}{t})}\,dt
                   = \sqrt{a}\,\frac{1}{2\pi i}\,2\sqrt{\frac{a}{a-z}}\,K_1\big(2\sqrt{a(a-z)}\big) \qquad (268)
$$

Theorem 31. The distribution $A_a(x)$ is in a natural manner the difference of boundary values $\sqrt{a}\,\big(a(x+i0,2a) - a(x-i0,2a)\big)$, with

$$
\sqrt{a}\,a(z,2a) = \sqrt{a}\,\frac{1}{2\pi i}\,\frac{1}{2}\int_0^\infty \left(1+\frac{1}{t}\right) e^{zt - a(t+\frac{1}{t})}\,dt
 = \sqrt{a}\,\frac{1}{2\pi i}\left(\sqrt{\frac{a}{a-z}}\,K_1\big(2\sqrt{a(a-z)}\big) + K_0\big(2\sqrt{a(a-z)}\big)\right) \qquad (269)
$$

The distribution $-iB_a(x)$ is similarly the difference of boundary values $\sqrt{a}\,\big((-ib)(x+i0,2a) - (-ib)(x-i0,2a)\big)$, with

$$
\sqrt{a}\,\big({-ib}(z,2a)\big) = \sqrt{a}\,\frac{1}{2\pi i}\,\frac{1}{2}\int_0^\infty \left(1-\frac{1}{t}\right) e^{zt - a(t+\frac{1}{t})}\,dt
 = \sqrt{a}\,\frac{1}{2\pi i}\left(\sqrt{\frac{a}{a-z}}\,K_1\big(2\sqrt{a(a-z)}\big) - K_0\big(2\sqrt{a(a-z)}\big)\right) \qquad (270)
$$
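The Bessel evaluations in (269) and (270), i.e. $\frac{1}{2}\int_0^\infty (1\pm\frac{1}{t})\,e^{zt-a(t+\frac{1}{t})}\,dt = \sqrt{a/(a-z)}\,K_1(2\sqrt{a(a-z)}) \pm K_0(2\sqrt{a(a-z)})$, can be confirmed numerically for real $z < a$; a = 1, z = 0.3 below are arbitrary test values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

a, z = 1.0, 0.3          # arbitrary test values with z real, z < a
c = 2*np.sqrt(a*(a - z))

def half_integral(sign):
    # (1/2) int_0^inf (1 + sign/t) e^{zt - a(t + 1/t)} dt
    return 0.5*quad(lambda t: (1 + sign/t)*np.exp(z*t - a*(t + 1/t)), 0, np.inf)[0]

plus  = half_integral(+1)   # should equal sqrt(a/(a-z)) K_1(c) + K_0(c), as in (269)
minus = half_integral(-1)   # should equal sqrt(a/(a-z)) K_1(c) - K_0(c), as in (270)

k1 = np.sqrt(a/(a - z))*kv(1, c)
k0 = kv(0, c)
```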

10 Appendix: a remark on the resolvent of the Dirichlet kernel

In this paper we have studied a special transform on the positive half-line with a kernel of a multiplicative type k(xy), following the method summarized in [5, 6]. We have associated to the kernel the investigation of its Fredholm determinants on finite intervals (0,a), and have related them with first and second order differential equations leading to problems of spectral and scattering theory. There is a vast literature on kernels of the additive type $k(x-y)$, and on the related Fredholm determinants on finite intervals. The Dirichlet kernel on $L^2(-s,s;dx)$:

$$
K_s(x,y) = \frac{\sin(x-y)}{\pi(x-y)} \qquad (271)
$$

has been the subject of many works (only a few references will be mentioned here). The Fredholm determinant $\det(1-K_s)$, as a function of s (or more generally as a function of the endpoints of finitely many intervals), has many properties, and is related to the study of random matrices [22]. The Fredholm determinants of the even and odd parts

$$
K_s^{\pm}(x,y) = \frac{\sin(x-y)}{\pi(x-y)} \pm \frac{\sin(x+y)}{\pi(x+y)} \qquad (272)
$$

on $L^2(0,s;dx)$ have been studied by Dyson [16]. He used the second derivatives of their logarithms to construct potentials for Schrödinger equations on the half-line, and studied their asymptotics with the tools of scattering theory. Jimbo, Miwa, Môri, and Sato [17] related $\det(1-K_s)$ to a Painlevé equation. Widom [34] obtained the leading asymptotics using the Krein continuous analog of orthogonal polynomials. Deift, Its, and Zhou [11] justified the Dyson asymptotic expansions using tools developed for Riemann-Hilbert problems. Tracy and Widom [32] established partial differential equations for the Fredholm determinants of integral operators arising in the study of the scaling limit of the distribution functions of eigenvalues of random matrices. We refer the reader to the cited references and to [12] for recent results and we apologize for not providing any more detailed information here.
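Since $K_s$ commutes with $x \mapsto -x$, the determinant over $(-s,s)$ factorizes as $\det(1-K_s) = \det(1-K_s^+)\det(1-K_s^-)$ with the even and odd parts (272) acting on $L^2(0,s)$. This is easy to check with a Gauss-Legendre (Nyström) discretization; the value s = 1 and the 40-point rule below are arbitrary choices:

```python
import numpy as np

def sine_kernel(x, y):
    # sin(x - y) / (pi (x - y)); np.sinc handles the diagonal x = y
    return np.sinc((x - y)/np.pi)/np.pi

def fredholm_det(kernel, lo, hi, n=40):
    # symmetrized Gauss-Legendre Nystrom approximation of det(1 - K) on (lo, hi)
    x, w = np.polynomial.legendre.leggauss(n)
    x = lo + (hi - lo)*(x + 1)/2
    w = (hi - lo)*w/2
    sw = np.sqrt(w)
    A = sw[:, None]*kernel(x[:, None], x[None, :])*sw[None, :]
    return np.linalg.det(np.eye(n) - A)

s = 1.0
d_full  = fredholm_det(sine_kernel, -s, s)
# even and odd parts (272) on (0, s)
d_plus  = fredholm_det(lambda x, y: sine_kernel(x, y) + sine_kernel(x, -y), 0, s)
d_minus = fredholm_det(lambda x, y: sine_kernel(x, y) - sine_kernel(x, -y), 0, s)
```

Because the kernel is entire, the Gauss-Legendre discretization converges spectrally, and the factorization holds to essentially machine precision already at n = 40.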

We have, in the present paper, been talking a lot of scattering and determinants and one might wonder if this is not a re-wording of known things. In fact, our work is with the multiplicative kernels k(xy), and (direct) reduction to additive kernels would lead to (somewhat strange) kernels g(t+u) on semi-infinite intervals $(-\infty, \log(a)]$. So we are indeed doing something different; one may also point out that the entire functions arising in the present study are not of finite exponential type, and the scattering matrices do not at all tend to 1 as the frequency goes to infinity. In the case of the cosine and sine kernels the flow of information will presumably go from the additive to the multiplicative, as the additive situation is more flexible, and has stimulated the development of powerful tools, in relation to Painlevé equations, Riemann-Hilbert problems, and integrable systems [11].

Nevertheless, one may ask if the framework of reproducing kernels in Hilbert spaces of entire functions may also be used in the additive situation. This is indeed the case and it is very much connected to the method of Krein in inverse scattering theory, and his continuous analog of orthogonal polynomials (used by Widom in the context of the Dirichlet kernel in [34]). The Gaudin identities for convolution kernels ([22, App. A16]) play a rôle very analogous to the identities (132a), (132b) involved in the present paper in the study of multiplicative kernels. Widom in his proof [34] of the main term of the asymptotics as $s \to +\infty$ studied the Krein functions associated with the complement of the interval $(-1,+1)$ and he mentioned the interest of extremal properties. In this appendix, I shall point out that the resolvent of the Dirichlet kernel indeed does have an extremal property: it coincides exactly (up to complex conjugation in one variable) with the reproducing kernel of a certain (interesting) Hilbert space of entire functions. This could be a new observation, obviously closely related to the method of Widom [34].

The space $mPW_s$ we shall use is, as a set, the Paley-Wiener space $PW_s$, but the norm is different:

$$
mPW_s = \{\, f(z)\ \text{entire of exponential type at most } s \text{ with } \|f\| < \infty \,\}, \qquad
\|f\|^2 = \int_{\mathbb{R}\setminus(-1,1)} |f(t)|^2\,dt \qquad (273)
$$

Let $X_s(z,w)$ be the element of $mPW_s$ which is the evaluator at z: $\forall f \in mPW_s\ \ (f, X_s(z,\cdot)) = f(z)$. We shall compare $X_s(z,w)$ with the resolvent of the kernel

$$
D_s(x,y) = \frac{\sin(s(x-y))}{\pi(x-y)} \qquad (274)
$$

on $L^2(-1,1;dx)$.

Let $f \in mPW_s$. It belongs to $PW_s$ so

$$
f(z) = \int_{\mathbb{R}} f(t)\,\frac{e^{is(t-z)} - e^{-is(t-z)}}{2\pi i\,(t-z)}\,dt
     = \int_{\mathbb{R}} f(t)\,\frac{\sin(s(t-z))}{\pi(t-z)}\,dt \qquad (275)
$$

On the other hand:

$$
f(z) = \left(\int_{-\infty}^{-1} + \int_{1}^{+\infty}\right) f(t)\,\overline{X_s(z,t)}\,dt \qquad (276)
$$

As $f(z) = \big(\int_{-\infty}^{-1}+\int_{1}^{+\infty}\big)\, f(t)\,\overline{X_s(z,t)}\,dt = \big(\int_{-\infty}^{-1}+\int_{1}^{+\infty}\big)\, f(t)\,X_s(\bar z,t)\,dt$ (the space $mPW_s$ being stable under $f \mapsto \overline{f(\bar z)}$), one has $\overline{X_s(z,t)} = X_s(\bar z,t)$ for $t \in \mathbb{R}$. We have for $y_1$ and $y_2$ real

$$
X_s(y_1,y_2) = \int_{\mathbb{R}\setminus(-1,1)} X_s(y_1,t)\,\overline{X_s(y_2,t)}\,dt
             = \overline{\int_{\mathbb{R}\setminus(-1,1)} X_s(y_2,t)\,\overline{X_s(y_1,t)}\,dt}
             = \overline{X_s(y_2,y_1)} \qquad (277)
$$

so more generally $X_s(z_1,z_2) = \overline{X_s(\bar z_2,\bar z_1)}$.

We apply (276) to $f(z) = \frac{\sin(s(z-y))}{\pi(z-y)}$ for some $y \in \mathbb{C}$:

$$
\frac{\sin(s(z-y))}{\pi(z-y)} = \int_{\mathbb{R}\setminus(-1,1)} \frac{\sin(s(t-y))}{\pi(t-y)}\,X_s(z,t)\,dt \qquad (278)
$$

We apply (275) to $f(y) = X_s(z,y)$ for some $z \in \mathbb{C}$:

$$
X_s(z,y) = \int_{\mathbb{R}} X_s(z,t)\,\frac{\sin(s(t-y))}{\pi(t-y)}\,dt \qquad (279)
$$

Combining we obtain:

$$
X_s(z,y) - \frac{\sin(s(z-y))}{\pi(z-y)} = \int_{-1}^{1} X_s(z,t)\,\frac{\sin(s(t-y))}{\pi(t-y)}\,dt \qquad (280)
$$

Restricting to $y \in (-1,1)$, $z = x \in (-1,1)$, this says exactly:

$$
X_s(x,y) = R_s(x,y) \qquad (281)
$$

where $R_s(x,y)$ is the kernel of the resolvent: $1 + R_s = (1-D_s)^{-1}$, $R_s - D_s = R_s D_s$. The resolvent $R_s(x,y)$ is entire in (x,y) and the general formula is thus:

$$
\forall z,w \in \mathbb{C} \qquad R_s(z,w) = X_s(\bar z,w)\,. \qquad (282)
$$
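The identification of $X_s$ with the resolvent can be probed numerically: the sketch below builds $R_s$ by a Nyström discretization of $D_s$ on $(-1,1)$, extends it off the interval through relation (280), and checks the reproducing property $f(x_0) = \int_{\mathbb{R}\setminus(-1,1)} f(t)\,X_s(x_0,t)\,dt$ on the test element $f(t) = \sin(s(t-y_0))/(\pi(t-y_0))$ of $mPW_s$. All numerical values (s = 2, the points $x_0$, $y_0$, the 60-point rule) are arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad

s = 2.0
x0, y0 = 0.3, -0.2                  # arbitrary points of (-1, 1)

def D(x, y):
    # D_s(x, y) = sin(s(x - y)) / (pi (x - y)); np.sinc handles the diagonal
    return (s/np.pi)*np.sinc(s*(x - y)/np.pi)

# Nystrom discretization of D_s on (-1, 1) with Gauss-Legendre nodes
n = 60
u, w = np.polynomial.legendre.leggauss(n)
M = D(u[:, None], u[None, :])*w[None, :]

# r[j] ~ R_s(x0, u_j): solve the resolvent relation R = D + D R at the nodes
r = np.linalg.solve(np.eye(n) - M, D(u, x0))

# extend off (-1, 1) through relation (280): X(x0, t) = D(x0, t) + int X(x0, .) D(., t)
def X(t):
    return D(x0, t) + np.sum(w*r*D(u, t))

# reproducing property for f(t) = sin(s(t - y0)) / (pi (t - y0)):
# f(x0) should equal the integral of f(t) X(x0, t) over R \ (-1, 1)
f = lambda t: D(t, y0)
val = (quad(lambda t: f(t)*X(t), 1, np.inf, limit=400)[0]
       + quad(lambda t: f(t)*X(t), -np.inf, -1, limit=400)[0])
```

That the check succeeds follows from the resolvent relation alone (expanding $X$ by (280) and using $R_s D_s = R_s - D_s$), which is precisely the mechanism behind (281) and (282).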

References

[1] L. de Branges, Self-reciprocal functions, J. Math. Anal. Appl. 9 (1964), 433–457.
[2] L. de Branges, Hilbert spaces of entire functions, Prentice Hall Inc., Englewood Cliffs, 1968.
[3] J.-F. Burnol, Sur certains espaces de Hilbert de fonctions entières, liés à la transformation de Fourier et aux fonctions L de Dirichlet et de Riemann, C. R. Acad. Sci. Paris, Ser. I 333 (2001), 201–206.
[4] J.-F. Burnol, On Fourier and Zeta (’s), talk at University of Nice (18 Dec. 2001), Forum Mathematicum 16 (2004), 789–840.
[5] J.-F. Burnol, Sur les “espaces de Sonine” associés par de Branges à la transformation de Fourier, C. R. Acad. Sci. Paris, Ser. I 335 (2002), 689–692.
[6] J.-F. Burnol, Des équations de Dirac et de Schrödinger pour la transformation de Fourier, C. R. Acad. Sci. Paris, Ser. I 336 (2003), 919–924.
[7] J.-F. Burnol, Two complete and minimal systems associated with the zeros of the Riemann zeta function, Jour. Th. Nb. Bord. 16 (2004).
[8] J.-F. Burnol, Entrelacement de co-Poisson, Ann. Inst. Fourier 57, no. 2 (2007), 525–602.
[9] J.-F. Burnol, Spacetime causality in the study of the Hankel transform, Ann. Henri Poincaré 7 (2006), 1013–1034.
[10] E. Coddington, N. Levinson, Theory of ordinary differential equations, McGraw-Hill Book Company, Inc., New York-Toronto-London, 1955.
[11] P. Deift, A. Its, X. Zhou, A Riemann-Hilbert approach to asymptotic problems arising in the theory of random matrix models, and also in the theory of integrable statistical mechanics, Ann. Math. 146 (1997), 149–235.
[12] P. Deift, A. Its, I. Krasovsky, X. Zhou, The Widom-Dyson constant for the gap probability in random matrix theory, arXiv:math.FA/0601535.
[13] R. J. Duffin, H. F. Weinberger, Dualizing the Poisson summation formula, Proc. Natl. Acad. Sci. USA 88 (1991), 7348–7350.
[14] H. Dym, An introduction to de Branges spaces of entire functions with applications to differential equations of the Sturm-Liouville type, Advances in Math. 5 (1971), 395–471.
[15] H. Dym, H.P. McKean, Gaussian processes, function theory, and the inverse spectral problem, Probability and Mathematical Statistics, Vol. 31. Academic Press, New York-London, 1976.

[16] F. Dyson, Fredholm determinants and inverse scattering problems, Comm. Math. Phys. 47 (1976), 171–183.
[17] M. Jimbo, T. Miwa, Y. Môri, M. Sato, Density matrix of an impenetrable Bose gas and the fifth Painlevé transcendent, Physica 1D (1980), 80–158.
[18] J. C. Lagarias, Hilbert spaces of entire functions and Dirichlet L-functions, in: Frontiers in Number Theory, Physics and Geometry: On Random Matrices, Zeta Functions and Dynamical Systems (P. E. Cartier, B. Julia, P. Moussa and P. van Hove, Eds.), Springer-Verlag: Berlin 2006, to appear.
[19] J. C. Lagarias, Zero spacing distributions for differenced L-functions, Acta Arithmetica 120 (2005), no. 2, 159–184.
[20] P. Lax, Functional Analysis, Wiley, 2002.
[21] B. M. Levitan, I. S. Sargsjan, Introduction to Spectral Theory, Transl. of Math. Monographs 39, AMS, 1975.
[22] M. L. Mehta, Random Matrices, Academic Press, 2nd ed., 1991.
[23] M. Morimoto, An introduction to Sato’s Hyperfunctions, Transl. of Math. Monographs 129, AMS, 1993.
[24] G. Pólya, Bemerkung über die Integraldarstellung der Riemannschen ξ-Funktion, Acta Math. 48 (1926), 305–317.
[25] G. Pólya, Über trigonometrische Integrale mit nur reellen Nullstellen, J. Reine u. Angew. Math. 158 (1927), 6–18.
[26] M. Reed, B. Simon, Methods of modern mathematical physics. II. Fourier analysis, self-adjointness, Academic Press, New York-London, 1975.
[27] C. Remling, Schrödinger operators and de Branges spaces, J. Funct. Anal. 196 (2002), 323–394.
[28] V. Rovnyak, Self-reciprocal functions, Duke Math. J. 33 (1966), 363–378.
[29] J. Rovnyak, V. Rovnyak, Self-reciprocal functions for the Hankel transformation of integer order, Duke Math. J. 34 (1967), 771–785.
[30] J. Rovnyak, V. Rovnyak, Sonine spaces of entire functions, J. Math. Anal. Appl. 27 (1969), 68–100.
[31] G. Szegő, Orthogonal Polynomials, AMS Colloquium Publications 23, 1939.
[32] C. Tracy, H. Widom, Fredholm determinants, differential equations, and matrix models, Commun. Math. Phys. 163 (1994), 33–72.
[33] G. N. Watson, A Treatise on the Theory of Bessel Functions, 2nd ed., Cambridge University Press, 1944.
[34] H. Widom, The asymptotics of a continuous analogue of orthogonal polynomials, J. Approx. Theory 77 (1994), 51–64.
