Hankel Operators plus Orthogonal Polynomials

Yield Combinatorial Identities

E. A. Herman, Grinnell College

Abstract: A Hankel operator $H = [h_{i+j}]$ can be factored as $H = MM^*$, where $M$ maps a space of $L^2$ functions to the corresponding moment sequences. Furthermore, a necessary and sufficient condition for a sequence to be in the range of $M$ can be expressed in terms of an expansion in orthogonal polynomials. Combining these two results yields a wealth of combinatorial identities that incorporate both the entries $h_{i+j}$ and the coefficients of the orthogonal polynomials.

2000 Mathematics Subject Classification: 47B35, 33D45, 05A19.

Keywords: Hankel operators, Hausdorff moments, orthogonal polynomials, combinatorial identities, Hilbert matrices, Legendre polynomials, Fibonacci numbers.

1. Introduction

The context throughout this paper will be the Hilbert spaces $l^2$ of square-summable sequences and $L^2_\alpha(-1,1)$ of square-integrable functions on $[-1,1]$ with the distribution $d\alpha$, $\alpha$ nondecreasing. Their inner products are the usual dot product for the former and the usual integral inner product for the latter.

The Hankel operators in Section 2 are exactly as in [9]. That is, we study the infinite matrices $H = [h_{i+j}]_{i,j\ge0}$, where
$$h_j = \int_{-1}^{1} x^j\,d\alpha(x), \qquad j = 0,1,2,\ldots$$
The following result in [9] will be essential:

Widom’s Theorem: The following are equivalent:

(a) $H$ represents a bounded operator on $l^2$.

(b) $h_j = O(1/j)$ as $j\to\infty$.

(c) $\alpha(1) - \alpha(x) = O(1-x)$ as $x\to 1^-$, and $\alpha(x) - \alpha(-1) = O(1+x)$ as $x\to -1^+$.

In Theorem 1 we show that $H = MM^*$, where $M : L^2_\alpha(-1,1) \to l^2$ is the bounded operator that maps each function $f \in L^2_\alpha(-1,1)$ to its sequence of moments:
$$M(f)_k = \int_{-1}^{1} x^k f(x)\,d\alpha(x), \qquad k = 0,1,2,\ldots$$
We also show that, for $u = \langle u_k\rangle \in l^2$, we have the formula
$$M^*u(x) = \sum_{k=0}^{\infty} u_k x^k$$
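To see the factorization concretely before the proofs, here is a minimal numerical sketch (ours, not the paper's): for a measure with finitely many atoms, $M$ becomes an ordinary matrix whose columns are scaled moment vectors, and $H = MM^*$ can be checked entry by entry. The atom locations, jumps, and truncation size below are hypothetical choices, picked so that $\sqrt{J_l}$ is rational and the arithmetic stays exact.

```python
from fractions import Fraction as F

# Three atoms z_l with jumps J_l (hypothetical values chosen so that
# sqrt(J_l) is rational and all arithmetic is exact).
z = [F(-1, 2), F(1, 3), F(3, 4)]
J = [F(1, 4), F(1, 9), F(4, 9)]
sqrtJ = [F(1, 2), F(1, 3), F(2, 3)]

n = 6  # truncation size for the Hankel matrix

# Discrete analogue of M: it sends a function (represented by its
# values at the atoms) to its moment sequence; M[k][l] = z_l^k sqrt(J_l).
M = [[z[l] ** k * sqrtJ[l] for l in range(len(z))] for k in range(n)]

# Moments h_j = sum_l z_l^j J_l and the Hankel matrix H = [h_{i+j}].
h = [sum(z[l] ** j * J[l] for l in range(len(z))) for j in range(2 * n)]
H = [[h[i + j] for j in range(n)] for i in range(n)]

# The factorization H = M M^T holds exactly (M is real here, so M* = M^T).
MMt = [[sum(M[i][l] * M[j][l] for l in range(len(z))) for j in range(n)]
       for i in range(n)]
assert MMt == H
```

Each entry of $MM^T$ is $\sum_l z_l^{i+j}J_l = h_{i+j}$, which is precisely the moment computation behind the factorization.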

In Section 3, we explore the space $\mathcal{M}_\alpha$ of Hausdorff moment sequences given by the range of the operator $M$. Extending an idea in [2], we derive in Theorem 2 the following necessary and sufficient condition for a sequence to belong to $\mathcal{M}_\alpha$. If $u = \langle u_j\rangle \in l^2$ and $\langle p_k\rangle$ is an orthogonal sequence of polynomials in $L^2_\alpha(-1,1)$ with
$$p_k(x) = \sum_{j=0}^{k} a_{kj}x^j, \qquad a_{kk}\ne 0, \qquad k = 0,1,2,\ldots \qquad (1)$$
define the sequence $c = \langle c_k\rangle$ by
$$c_k = \sum_{j=0}^{k} a_{kj}u_j, \qquad k = 0,1,2,\ldots \qquad (2)$$
Then $u = \langle u_j\rangle \in \mathcal{M}_\alpha$ if and only if $\sum_{k=0}^{\infty} |c_k|^2/\|p_k\|^2 < \infty$.

Finally, in Theorem 3 we use Theorems 1 and 2 to produce combinatorial identities as follows. Suppose $v = \langle v_k\rangle \in l^2$ and $u = Hv$. If $\langle p_k\rangle$ and $\langle c_k\rangle$ are the sequences described above in (1) and (2), we obtain one type of identity by equating two ways of expressing the $c_k$'s. On the one hand,

$$c_k = \sum_{j=0}^{k} a_{kj}(Hv)_j = \sum_{j=0}^{k} a_{kj}\sum_{i=0}^{\infty} h_{j+i}v_i$$
On the other hand, the $c_k$'s are Fourier coefficients with respect to the orthogonal polynomials $\langle p_k\rangle$ for some function $f \in L^2_\alpha(-1,1)$. Since $Hv = MM^*v = Mf$, where $f(x) = M^*v(x) = \sum_k v_k x^k$, we obtain a second formula for $c_k$ in terms of the components $v_k$ of $v$. Other identities come from Parseval's equation.

In Section 4, we explore several examples.

2. A factorization of Hankel Operators

The Hankel operators we investigate are given by the infinite matrices $H = [h_{i+j}]_{i,j\ge0}$, where
$$h_j = \int_{-1}^{1} x^j\,d\alpha(x), \qquad j = 0,1,2,\ldots$$
and $\alpha$ is nondecreasing on $[-1,1]$. We assume $H$ represents a bounded operator on $l^2$, which means we have the equivalent conditions on $\langle h_j\rangle$ and $\alpha$ in Widom's theorem.

Theorem 1: For all $f \in L^2_\alpha(-1,1)$, let
$$M(f) = u^f = \langle u^f_k\rangle, \quad\text{where}\quad u^f_k = \int_{-1}^{1} x^k f(x)\,d\alpha(x), \qquad k = 0,1,2,\ldots \qquad (3)$$
For all $v = \langle v_k\rangle \in l^2$, let
$$N(v) = f_v, \quad\text{where}\quad f_v(x) = \sum_{k=0}^{\infty} v_k x^k \qquad (4)$$
Then:

(a) $N$ is a bounded operator from $l^2$ to $L^2_\alpha(-1,1)$; in particular, the series defining $f_v$ converges in $L^2_\alpha(-1,1)$.

(b) $M$ is a bounded operator from $L^2_\alpha(-1,1)$ to $l^2$.

(c) $M^* = N$.

(d) $H = MM^*$.

(e) $M$ is injective.

Proof: (a) We use the Cauchy criterion to show that $f_v \in L^2_\alpha(-1,1)$:
$$\left\|\sum_{k=m}^{n} v_k x^k\right\|^2 = \int_{-1}^{1}\left(\sum_{k=m}^{n} v_k x^k\right)\left(\sum_{j=m}^{n} v_j x^j\right)d\alpha(x) = \sum_{k=m}^{n}\sum_{j=m}^{n} v_k v_j \int_{-1}^{1} x^{k+j}\,d\alpha(x)$$
$$= \sum_{k=m}^{n}\sum_{j=m}^{n} v_k v_j h_{j+k} \le \sum_{k=m}^{n}\sum_{j=m}^{n} |v_k||v_j|\,\frac{p}{k+j}$$

for some $p > 0$, where the inequality uses the fact that $h_j = O(1/j)$. Hence, by Hilbert's inequality, $\sum_k v_k x^k$ converges in $L^2_\alpha(-1,1)$.
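The Hilbert inequality invoked here states, in its classical form, that $\sum_{j,k\ge0}|v_j||v_k|/(j+k+1)\le\pi\sum_k|v_k|^2$. A quick floating-point sanity check (the test vector is an arbitrary choice of ours, not anything from the paper):

```python
import math
import random

random.seed(0)
v = [random.uniform(-1, 1) for _ in range(200)]

# Quadratic form of the Hilbert matrix [1/(j+k+1)] against |v|.
qform = sum(abs(v[j]) * abs(v[k]) / (j + k + 1)
            for j in range(len(v)) for k in range(len(v)))

# Hilbert's inequality bounds it by pi times the squared l2 norm.
assert qform <= math.pi * sum(x * x for x in v)
```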

To show that $N$ is bounded, we use the change of variables
$$\int_{-1}^{0} g(x)\,d\alpha(x) = \int_{0}^{1} g(-y)\,d\beta(y), \quad\text{where $g$ is integrable and}\quad \beta(y) = -\alpha(-y) \qquad (5)$$
Thus
$$\|f_v\|^2 = \int_{0}^{1} |f_v(x)|^2\,d\alpha(x) + \int_{0}^{1} |f_v(-y)|^2\,d\beta(y)$$
and the matrices $A$ and $B$ defined below are bounded Hankel operators on $l^2$ by Widom's Theorem.

$$A = [a_{j+k}], \qquad B = [b_{j+k}], \quad\text{where}\quad a_j = \int_{0}^{1} x^j\,d\alpha(x), \qquad b_j = \int_{0}^{1} y^j\,d\beta(y)$$
Therefore
$$\int_{0}^{1} |f_v(x)|^2\,d\alpha(x) = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} v_k v_j \int_{0}^{1} x^{k+j}\,d\alpha(x) = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} v_k v_j\, a_{k+j} = Av\cdot v \le \|A\|\|v\|^2$$
and
$$\int_{0}^{1} |f_v(-y)|^2\,d\beta(y) = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} v_k v_j \int_{0}^{1} (-y)^{k+j}\,d\beta(y) = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} (-1)^k v_k (-1)^j v_j\, b_{k+j} \le \|B\|\|v\|^2$$
The fact that $\sum_k v_k x^k$ converges in $L^2_\alpha(-1,1)$ justifies the above interchanges of limits. Therefore $\|N(v)\|^2 \le (\|A\| + \|B\|)\|v\|^2$.

(b) Again using the change of variables (5), we have, for any $f \in L^2_\alpha(-1,1)$,
$$u^f_k = \int_{-1}^{1} x^k f(x)\,d\alpha(x) = \int_{0}^{1} x^k f(x)\,d\alpha(x) + \int_{0}^{1} (-1)^k y^k f(-y)\,d\beta(y)$$

Let $a_k$ denote the first of the two integrals on the right and $b_k$ denote the second. The following argument is an adaptation of the proof in [5, Theorem 324]. Let $v = \langle v_k\rangle \in l^2$. The interchange of limits below is again justified by the fact that $\sum_k v_k x^k$ converges in $L^2_\alpha(-1,1)$:
$$\sum_{k=0}^{\infty} v_k a_k = \int_{0}^{1}\sum_{k=0}^{\infty} v_k x^k f(x)\,d\alpha(x) = \int_{0}^{1} Nv(x)f(x)\,d\alpha(x)$$
Hence, by the Cauchy–Schwarz inequality and part (a),
$$\left|\sum_{k=0}^{\infty} v_k a_k\right|^2 \le \|N(v)\|^2\|f\|^2 \le \|N\|^2\|v\|^2\|f\|^2$$
Therefore, by the so-called converse of Hölder's inequality [5, Theorem 15], $\|\langle a_k\rangle\| \le \|N\|\|f\|$. Similarly, $\|\langle b_k\rangle\| \le \|N\|\|f\|$, and so $\|M(f)\| \le 2\|N\|\|f\|$.

(c) and (d) are straightforward verifications.

(e) Suppose $M(g) = 0$. Then, for all $v \in l^2$,
$$0 = M(g)\cdot v = \langle g, M^*v\rangle = \langle g, f_v\rangle$$
Since the functions $f_v$ include all polynomials, and since the polynomials are dense in $L^2_\alpha(-1,1)$, it follows that $g = 0$. $\Box$

Although the factorization $H = MM^*$ appears to be new, it is closely related to the well-known fact [7, Chapter 2] that $H$ is unitarily equivalent to the integral transform $T$ on $L^2_\alpha(-1,1)$ defined by
$$Tf(x) = \int_{-1}^{1}\frac{f(y)}{1-xy}\,d\alpha(y)$$
Specifically, one can show that $T = M^*M$ and hence that $H$ and $T$ are unitarily equivalent whenever $M^*$ is injective. For $M^*$ to be injective, it suffices to assume that $\alpha$ has infinitely many points of increase with a cluster point in the open interval $(-1,1)$.

3. Orthogonal polynomials and moment sequences

In this section we use a complete orthogonal sequence $\langle p_k\rangle$ of polynomials in $L^2_\alpha(-1,1)$ to characterize the elements of $\mathcal{M}_\alpha$, where $\mathcal{M}_\alpha$ denotes the space of moment sequences given by the range of $M$ (see [8]). Although these sequences may be finite or infinite, to simplify notation, the details below will be expressed as if the sequences are infinite. The conclusions are valid in all cases, and the finite case will be treated as one of the examples in Section 4.

The following theorem extends an idea in [2], where it was applied to the shifted Legendre polynomials.

Theorem 2: Let $\langle p_k\rangle$ be an orthogonal sequence of polynomials in $L^2_\alpha(-1,1)$ with
$$p_k(x) = \sum_{j=0}^{k} a_{kj}x^j, \qquad a_{kk}\ne 0, \qquad k = 0,1,2,\ldots \qquad (6)$$
Let $u = \langle u_j\rangle \in l^2$ and define $c = \langle c_k\rangle$ by
$$c_k = \sum_{j=0}^{k} a_{kj}u_j, \qquad k = 0,1,2,\ldots \qquad (7)$$

Then $u \in \mathcal{M}_\alpha$ if and only if
$$\sum_{k=0}^{\infty}\frac{|c_k|^2}{\|p_k\|^2} < \infty \qquad (8)$$
Furthermore, when (8) holds, then $u = M(f)$, where
$$f(x) = \sum_{k=0}^{\infty}\frac{c_k}{\|p_k\|^2}\,p_k(x) \qquad (9)$$
Proof: If $u = \langle u_k\rangle \in \mathcal{M}_\alpha$, then there exists $f \in L^2_\alpha(-1,1)$ such that $u = M(f)$; that is,
$$u_k = \int_{-1}^{1} x^k f(x)\,d\alpha(x), \qquad k = 0,1,2,\ldots$$
Hence, since $\langle p_k\rangle$ is a complete orthogonal sequence in $L^2_\alpha(-1,1)$ (see [8]),
$$f(x) = \sum_{k=0}^{\infty} b_k p_k(x) \quad\text{where}\quad b_k = \frac{1}{\|p_k\|^2}\int_{-1}^{1} f(x)p_k(x)\,d\alpha(x)$$
Therefore, by equations (7) and (6),

$$c_k = \sum_{j=0}^{k} a_{kj}\int_{-1}^{1} x^j f(x)\,d\alpha(x) = \int_{-1}^{1} f(x)p_k(x)\,d\alpha(x) = \|p_k\|^2 b_k$$
So, by Parseval's equation,
$$\sum_{k=0}^{\infty}\frac{|c_k|^2}{\|p_k\|^2} = \sum_{k=0}^{\infty}\|p_k\|^2|b_k|^2 = \|f\|^2 < \infty$$
Furthermore, (9) holds since $b_k = c_k/\|p_k\|^2$.

On the other hand, if $u = \langle u_k\rangle \in l^2$ and $\sum_k |c_k|^2/\|p_k\|^2 < \infty$, let
$$f(x) = \sum_{k=0}^{\infty}\frac{c_k}{\|p_k\|^2}\,p_k(x)$$
Since $\sum_k |c_k|^2/\|p_k\|^2 < \infty$, the above series converges in $L^2_\alpha(-1,1)$ and
$$c_k = \int_{-1}^{1} f(x)p_k(x)\,d\alpha(x), \qquad k = 0,1,2,\ldots$$
This equation can also be written, using (6) and (7), as
$$\sum_{j=0}^{k} a_{kj}u_j = \int_{-1}^{1} f(x)\sum_{j=0}^{k} a_{kj}x^j\,d\alpha(x), \qquad k = 0,1,2,\ldots$$
which implies, since $a_{kk}\ne 0$,
$$u_j = \int_{-1}^{1} f(x)x^j\,d\alpha(x), \qquad j = 0,1,2,\ldots$$
Therefore $u = \langle u_j\rangle = M(f) \in \mathcal{M}_\alpha$. $\Box$

Finally, we use Theorems 1 and 2 to produce combinatorial identities corresponding to each choice of $\alpha$.
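Before turning to the identities, the pivotal relation $c_k = \|p_k\|^2 b_k$ from the proof can be checked in exact arithmetic. The sketch below (our illustration, not from the paper) takes $d\alpha = dx$ on $[-1,1]$, the Legendre polynomials $P_0,\dots,P_3$, and $f(x) = x^3$, whose Legendre expansion is $x^3 = \tfrac{3}{5}P_1 + \tfrac{2}{5}P_3$:

```python
from fractions import Fraction as F

# Monomial coefficients of the Legendre polynomials P_0..P_3 (standard values).
A = [[F(1)],
     [F(0), F(1)],
     [F(-1, 2), F(0), F(3, 2)],
     [F(0), F(-3, 2), F(0), F(5, 2)]]
norm2 = [F(2, 2 * k + 1) for k in range(4)]     # ||P_k||^2 = 2/(2k+1)

# Moments of f(x) = x^3: u_j = integral of x^(j+3) over [-1,1].
def u(j):
    return F(2, j + 4) if j % 2 == 1 else F(0)

# c_k = sum_j a_kj u_j, as in equation (7).
c = [sum(A[k][j] * u(j) for j in range(k + 1)) for k in range(4)]

# Fourier coefficients of x^3: b_1 = 3/5, b_3 = 2/5, the rest vanish.
b = [F(0), F(3, 5), F(0), F(2, 5)]

assert c == [norm2[k] * b[k] for k in range(4)]
```

Here $c_1 = 2/5$ and $c_3 = 4/35$, exactly $\|p_k\|^2 b_k$ in each slot.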

Theorem 3: Suppose $\alpha$ is a nondecreasing function on $[-1,1]$ satisfying
$$\alpha(1)-\alpha(x) = O(1-x) \text{ as } x\to 1^-, \qquad \alpha(x)-\alpha(-1) = O(1+x) \text{ as } x\to -1^+$$
and let
$$h_j = \int_{-1}^{1} x^j\,d\alpha(x), \qquad j = 0,1,2,\ldots$$
Suppose also that $\langle p_k\rangle$ is an orthogonal sequence of polynomials in $L^2_\alpha(-1,1)$ with
$$p_k(x) = \sum_{j=0}^{k} a_{kj}x^j, \qquad a_{kk}\ne 0, \qquad k = 0,1,2,\ldots$$
and
$$x^k = \sum_{j=0}^{k} b_{kj}p_j(x), \qquad k = 0,1,2,\ldots$$
Then, for all non-negative integers $k$ and $m$,
$$\sum_{j=0}^{k} a_{kj}h_{j+m} = \begin{cases}\|p_k\|^2\, b_{mk} & \text{if } m\ge k\\ 0 & \text{if } m < k\end{cases} \qquad (10)$$
and
$$\sum_{i=k}^{m}\frac{a_{ik}}{\|p_i\|^2}\sum_{j=0}^{i} a_{ij}h_{j+m} = \delta_{mk}, \qquad m\ge k \qquad (11)$$
Moreover, for all $v = \langle v_k\rangle \in l^2$,
$$\sum_{k=0}^{\infty}\frac{1}{\|p_k\|^2}\left(\sum_{j=0}^{k} a_{kj}\sum_{i=0}^{\infty} h_{j+i}v_i\right)^2 = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} v_kv_j h_{k+j} \qquad (12)$$
and
$$\sum_{k=0}^{\infty}\|p_k\|^2\left(\sum_{j=k}^{\infty} v_jb_{jk}\right)^2 = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} v_kv_j h_{k+j} \qquad (13)$$

Proof: Let $H = [h_{i+j}]_{i,j\ge0}$, $v = \langle v_k\rangle \in l^2$, and $u = Hv = MM^*v$. Then $u \in \mathcal{M}_\alpha$, and we use Theorem 2 to find two expressions for the coefficients $c_k$ in equation (7). On the one hand,
$$c_k = \sum_{j=0}^{k} a_{kj}u_j = \sum_{j=0}^{k} a_{kj}(Hv)_j = \sum_{j=0}^{k} a_{kj}\sum_{i=0}^{\infty} h_{j+i}v_i \qquad (14)$$
On the other hand,
$$c_k = \int_{-1}^{1} f(x)p_k(x)\,d\alpha(x) \quad\text{where}\quad f(x) = \sum_{k=0}^{\infty}\frac{c_k}{\|p_k\|^2}\,p_k(x)$$

Using Theorem 1, we may also write the last expression for $c_k$ in terms of the components of $v$. Note that $Hv = M(f)$ and $Hv = MM^*v = M(f_v)$. Since $M$ is injective, we have $f = f_v$ a.e. and so

$$c_k = \int_{-1}^{1}\sum_{j=0}^{\infty} v_jx^j\,p_k(x)\,d\alpha(x) = \sum_{j=0}^{\infty} v_j\int_{-1}^{1}\sum_{i=0}^{j} b_{ji}p_i(x)\,p_k(x)\,d\alpha(x) = \|p_k\|^2\sum_{j=k}^{\infty} v_jb_{jk} \qquad (15)$$

Equating formulas (14) and (15) for $c_k$ and letting $v_k = \delta_{km}$, we obtain the identity (10). Then, solving for $b_{mk}$ ($m\ge k$) in (10) and substituting into $\sum_{k=i}^{m} b_{mk}a_{ki} = \delta_{mi}$ ($m\ge i$), we obtain the identity (11).

Furthermore, by Parseval's equation and $MM^* = H$, we have
$$\sum_{k=0}^{\infty}\frac{|c_k|^2}{\|p_k\|^2} = \|f_v\|^2 = \|M^*v\|^2 = \langle M^*v, M^*v\rangle = Hv\cdot v = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} v_kv_j h_{k+j}$$
Substituting formulas (14) and (15) for $c_k$ in this result yields the identities (12) and (13). $\Box$

Another perspective on Theorem 3

The analysis below shows that, with a different set of assumptions, the identities in Theorem 3 are quite natural. Thus, some instances of them are likely to be known.

Suppose, instead of a distribution $d\alpha$ as in Theorem 3, we are given the following assumptions about a matrix $H$ and a sequence of polynomials $\langle p_k\rangle$. Let $H = [h_{i+j}]_{i,j\ge0}$ be any infinite Hankel matrix with real entries; that is, do not assume $h_j = O(1/j)$ as $j\to\infty$. Also let
$$p_k(x) = \sum_{j=0}^{k} a_{kj}x^j, \qquad a_{kk}\ne 0, \qquad k = 0,1,2,\ldots$$
be a sequence of polynomials with real coefficients that are orthogonal in the sense that
$$\sum_{i=0}^{k}\sum_{j=0}^{m} a_{ki}h_{i+j}a_{mj}\ \begin{cases} = 0 & \text{if } k\ne m\\ \ne 0 & \text{if } k = m\end{cases} \qquad (16)$$
Note that, under the assumptions of Theorem 3, property (16) is the statement that $\langle p_k,p_m\rangle = 0$ if $k\ne m$ and $\langle p_k,p_k\rangle \ne 0$. So we will denote the left side of (16) by $\langle p_k,p_m\rangle$ even though the matrix $H$ does not necessarily generate a true inner product. Then (16) can be written as $AHA^* = D$, where $A$ denotes the lower-triangular matrix whose $(k,j)$ entry, for $k\ge j$, is $a_{kj}$ and $D$ denotes the diagonal matrix whose $k$th diagonal entry is $\langle p_k,p_k\rangle$. Since the diagonal entries of $A$ are nonzero, $A$ has a formal inverse $B$. Hence $AHA^* = D$ implies $AH = DB^*$, which is the identity (10). Of course, $\|p_k\|^2$ must be interpreted as $\langle p_k,p_k\rangle$, which is nonzero but possibly negative. Identity (11) follows from (10), as in the proof of Theorem 3. The left side of identity (12) can be expressed as $\|D^{-1/2}AHv\|^2$, where $D^{-1/2}$ is diagonal and the norm is the usual $l^2$ norm. However, now we must assume that the sequence $v$ has only finitely many nonzero terms. Then (12) may be derived as follows:
$$\|D^{-1/2}AHv\|^2 = (D^{-1/2}AHv)\cdot(D^{-1/2}AHv) = (D^{-1}DB^*v)\cdot(AHv) = v\cdot Hv$$
Identity (13) may be derived similarly.
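The finite-section version of $AHA^* = D$ is easy to test. The sketch below (our example, not the paper's) uses the Hilbert-type moments $h_j = 1/(j+1)$ together with the shifted Legendre coefficients $a_{kj} = (-1)^j\binom{k}{j}\binom{k+j}{j}$, for which the diagonal entries should be $\langle p_k,p_k\rangle = 1/(2k+1)$:

```python
from fractions import Fraction as F
from math import comb

n = 5
# Lower-triangular A of shifted Legendre coefficients, padded with zeros.
A = [[F((-1) ** j * comb(k, j) * comb(k + j, j)) for j in range(k + 1)]
     + [F(0)] * (n - 1 - k) for k in range(n)]
h = [F(1, j + 1) for j in range(2 * n)]

# D = A H A^T, computed entrywise from the Hankel entries h_{i+j}.
D = [[sum(A[k][i] * h[i + j] * A[m][j] for i in range(n) for j in range(n))
      for m in range(n)] for k in range(n)]

# Off-diagonal entries vanish; diagonal entries are 1/(2k+1).
for k in range(n):
    for m in range(n):
        assert D[k][m] == (F(1, 2 * k + 1) if k == m else 0)
```

Because $A$ is lower-triangular, the truncated product agrees exactly with the infinite one, so the check is exact.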

4. Examples

In the examples below, some of the instances of identity (10) have been independently confirmed using the WZ methods in [6]. On the other hand, instances of identities (12) and (13) do not yet seem to be susceptible of confirmation by such algorithmic methods.

Example 1 (Hilbert matrices and binomial coefficients): For any $r > 0$, let $\alpha(x) = x^r$ if $0\le x\le 1$ and $\alpha(x) = 0$ if $-1\le x\le 0$. Then $h_j = r/(j+r)$ and so $H = [r/(i+j+r)]_{i,j\ge0}$. In [3], Berg shows that the corresponding orthogonal polynomials may be written as

$$p_k(x) = \sum_{j=0}^{k}(-1)^j\binom{k}{j}\binom{k+j+r-1}{k}x^j$$
It follows that $\|p_k\|^2 = r/(2k+r)$ and
$$x^k = \frac{1}{(k+r)\binom{2k+r}{k}}\sum_{j=0}^{k}(-1)^j\binom{2k+r}{k-j}(2j+r)\,p_j(x)$$
Hence equations (10), (11), (12) and (13) become, respectively,

$$\sum_{j=0}^{k}(-1)^j\binom{k}{j}\binom{k+j+r-1}{k}\frac{1}{j+m+r} = \frac{(-1)^k\binom{2m+r}{m-k}}{(m+r)\binom{2m+r}{m}}, \qquad m\ge k$$
$$\sum_{i=k}^{m}\sum_{j=0}^{i}(-1)^{j+k}(2i+r)\binom{i}{j}\binom{i+j+r-1}{i}\binom{i}{k}\binom{i+k+r-1}{i}\frac{1}{j+m+r} = \delta_{mk}$$
$$\sum_{k=0}^{\infty}(2k+r)\left(\sum_{i=0}^{\infty}\sum_{j=0}^{k}(-1)^j\binom{k}{j}\binom{k+j+r-1}{k}\frac{v_i}{j+i+r}\right)^2 = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{v_kv_j}{k+j+r}$$
$$\sum_{k=0}^{\infty}(2k+r)\left(\sum_{j=k}^{\infty}\frac{\binom{2j+r}{j-k}v_j}{(j+r)\binom{2j+r}{j}}\right)^2 = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{v_kv_j}{k+j+r}$$
where $\langle v_j\rangle \in l^2$.

Example 2 (Legendre polynomials): We begin by considering all Jacobi polynomials:
$$p_k^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_k}{k!}\,{}_2F_1\bigl(-k,\ \alpha+\beta+k+1;\ \alpha+1;\ (1-x)/2\bigr)$$
where ${}_2F_1$ denotes the Gauss hypergeometric series, $(\alpha+1)_k$ denotes a shifted factorial, and $\alpha,\beta > -1$. The Jacobi polynomials are orthogonal on $[-1,1]$ relative to the weight function $w(x) = (1-x)^\alpha(1+x)^\beta$, and
$$\|p_k^{(\alpha,\beta)}\|^2 = \frac{2^{\alpha+\beta+1}\,\Gamma(\alpha+k+1)\,\Gamma(\beta+k+1)}{k!\,\Gamma(\alpha+\beta+k+1)\,(\alpha+\beta+2k+1)}$$
Using Euler's integral representation of Gauss' hypergeometric series [1, Theorem 2.2.1], we find the moments to be
$$h_j = \int_{-1}^{1} x^j(1-x)^\alpha(1+x)^\beta\,dx = j!\left(\frac{(-1)^j\,{}_2F_1(-\alpha,\ 1+j;\ 2+\beta+j;\ -1)}{(1+\beta)_{1+j}} + \frac{{}_2F_1(-\beta,\ 1+j;\ 2+\alpha+j;\ -1)}{(1+\alpha)_{1+j}}\right)$$
Rather than writing out identities (10), (11), (12), and (13) for this general case, we write the more compact versions for the special case $\alpha = \beta = 0$. Then $h_j = 2/(j+1)$ if $j$ is even, $h_j = 0$ if $j$ is odd, and $p_k = p_k^{(0,0)}$ is the $k$th Legendre polynomial; hence $\|p_k\|^2 = 2/(2k+1)$ and
$$p_k(x) = \sum_{\substack{j=0\\ j+k\ \text{even}}}^{k}\frac{(-1)^{(k-j)/2}}{2^k}\binom{k}{\frac{k-j}{2}}\binom{k+j}{k}x^j$$
$$x^k = \sum_{\substack{j=0\\ j+k\ \text{even}}}^{k}\frac{k!\,(2j+1)}{2^{(k-j)/2}\left(\frac{1}{2}(k-j)\right)!\,(j+k+1)!!}\,p_j(x)$$
Note: The double factorial $n!!$ is defined to be 1 when $n = 0$ and, for positive integers $n$,

$$n!! = n(n-2)(n-4)\cdots(1\text{ or }2)$$
where the terminal factor in this product is 1 when $n$ is odd and 2 when $n$ is even.
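Both Legendre formulas can be cross-checked exactly, for instance against the standard three-term recurrence $(k+1)P_{k+1} = (2k+1)xP_k - kP_{k-1}$ (a classical fact we bring in only for this check):

```python
from fractions import Fraction as F
from math import comb, factorial

def dfact(n):                      # double factorial, with 0!! = (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

def legendre_closed(k):            # coefficient list from the closed form above
    c = [F(0)] * (k + 1)
    for j in range(k % 2, k + 1, 2):
        c[j] = F((-1) ** ((k - j) // 2) * comb(k, (k - j) // 2) * comb(k + j, k),
                 2 ** k)
    return c

def legendre_rec(k):               # coefficients from the three-term recurrence
    if k == 0: return [F(1)]
    if k == 1: return [F(0), F(1)]
    a, b = legendre_rec(k - 2), legendre_rec(k - 1)
    c = [F(0)] + [F(2 * k - 1, k) * x for x in b]
    for j, x in enumerate(a):
        c[j] -= F(k - 1, k) * x
    return c

for k in range(7):
    assert legendre_closed(k) == legendre_rec(k)

# The monomial expansion: x^k = sum over j (j+k even) of
# k!(2j+1) / (2^((k-j)/2) ((k-j)/2)! (j+k+1)!!) p_j(x).
for k in range(7):
    acc = [F(0)] * (k + 1)
    for j in range(k % 2, k + 1, 2):
        coef = F(factorial(k) * (2 * j + 1),
                 2 ** ((k - j) // 2) * factorial((k - j) // 2) * dfact(j + k + 1))
        for i, x in enumerate(legendre_closed(j)):
            acc[i] += coef * x
    assert acc == [F(0)] * k + [F(1)]
```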

Hence equations (10), (11), (12), and (13) become, respectively,

$$\sum_{\substack{j=0\\ j+k\ \text{even}\\ j+m\ \text{even}}}^{k}\frac{(-1)^{(k-j)/2}}{2^k(j+m+1)}\binom{k}{\frac{k-j}{2}}\binom{k+j}{k} = \begin{cases}\dfrac{m!}{2^{(m-k)/2}\left(\frac{1}{2}(m-k)\right)!\,(k+m+1)!!} & \text{if } m+k \text{ even and } m\ge k\\[1mm] 0 & \text{otherwise}\end{cases}$$
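A direct exact-arithmetic check of this instance of (10), for small $k$ and $m$ (our sketch):

```python
from fractions import Fraction as F
from math import comb, factorial

def dfact(n):                      # double factorial, 0!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

def lhs(k, m):
    return sum(F((-1) ** ((k - j) // 2) * comb(k, (k - j) // 2) * comb(k + j, k),
                 2 ** k * (j + m + 1))
               for j in range(k + 1)
               if (j + k) % 2 == 0 and (j + m) % 2 == 0)

def rhs(k, m):
    if (m + k) % 2 or m < k:
        return F(0)
    return F(factorial(m),
             2 ** ((m - k) // 2) * factorial((m - k) // 2) * dfact(k + m + 1))

for k in range(6):
    for m in range(6):
        assert lhs(k, m) == rhs(k, m)
```

For example, $k = m = 2$ gives $-1/6 + 3/10 = 2/15 = 2!/5!!$ on both sides.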

$$\sum_{\substack{i=k\\ i+k\ \text{even}}}^{m}\ \sum_{\substack{j=0\\ i+j\ \text{even}\\ j+m\ \text{even}}}^{i}\frac{(-1)^{i-(j+k)/2}(2i+1)}{2^{2i}(j+m+1)}\binom{i}{\frac{i-j}{2}}\binom{i+j}{i}\binom{i}{\frac{i-k}{2}}\binom{i+k}{i} = \delta_{mk}$$


$$\sum_{k=0}^{\infty}(2k+1)\left(\sum_{\substack{j=0\\ j+k\ \text{even}}}^{k}\ \sum_{\substack{i=0\\ j+i\ \text{even}}}^{\infty}\frac{(-1)^{(k-j)/2}}{2^k(j+i+1)}\binom{k}{\frac{k-j}{2}}\binom{k+j}{k}\,v_i\right)^2 = \sum_{k=0}^{\infty}\ \sum_{\substack{j=0\\ j+k\ \text{even}}}^{\infty}\frac{v_kv_j}{k+j+1}$$

$$\sum_{k=0}^{\infty}(2k+1)\left(\sum_{\substack{j=k\\ j+k\ \text{even}}}^{\infty}\frac{j!\,v_j}{2^{(j-k)/2}\left(\frac{1}{2}(j-k)\right)!\,(k+j+1)!!}\right)^2 = \sum_{k=0}^{\infty}\ \sum_{\substack{j=0\\ j+k\ \text{even}}}^{\infty}\frac{v_kv_j}{k+j+1}$$

where $\langle v_j\rangle \in l^2$.

Example 3 (Fibonacci numbers and Fibonomial coefficients): We begin by considering all little q-Jacobi polynomials [4, Section 7.3]:
$$p_k(x; a,b; q) = \sum_{j=0}^{k}\frac{(q^{-k}; q)_j\,(abq^{k+1}; q)_j}{(q; q)_j\,(aq; q)_j}\,(xq)^j$$
where $(a; q)_j$ denotes the q-shifted factorial. We assume that $|q| < 1$, $|aq| < 1$, and $|b| \le 1$, which suffices to show that these polynomials satisfy the orthogonality property

$$\sum_{i=0}^{\infty} p_k(q^i; a,b; q)\,p_m(q^i; a,b; q)\,\frac{(bq; q)_i}{(q; q)_i}(aq)^i = \frac{\delta_{km}}{h_m(a,b;q)} \qquad (17)$$
where
$$h_m(a,b;q) = \frac{(abq; q)_m\,(1-abq^{2m+1})\,(aq; q)_m\,(aq; q)_\infty}{(q; q)_m\,(1-abq)\,(bq; q)_m\,(abq^2; q)_\infty}\,(aq)^{-m}$$
We can express the orthogonality property (17) in terms of an integral on $[-1,1]$ as follows. Let $p_k(x) = p_k(\phi x; a,b;q)$, where $\phi > 1$, and define the discrete signed measure $\mu$ by

$$\mu = \sum_{i=0}^{\infty}\frac{(bq; q)_i}{(q; q)_i}(aq)^i\,\delta_{q^i/\phi} \qquad (18)$$
Then (17) becomes
$$\int_{-1}^{1} p_k(x)p_m(x)\,d\mu(x) = \frac{\delta_{km}}{h_m(a,b;q)}$$
The corresponding moments are
$$h_j = \int_{-1}^{1} x^j\,d\mu(x) = \sum_{i=0}^{\infty}\frac{(bq; q)_i}{(q; q)_i}(aq)^i\left(\frac{q^i}{\phi}\right)^j$$
When $b = 1$, this sum is a geometric series:

$$h_j = \frac{1}{\phi^j}\sum_{i=0}^{\infty}\left(aq^{j+1}\right)^i = \frac{1}{\phi^j\,(1-aq^{j+1})}$$
The following interesting special case was found by Berg [3]. For any $\alpha \in \mathbb{N}$, let
$$b = 1, \qquad \phi = \frac{1+\sqrt5}{2}, \qquad q = \frac{1-\sqrt5}{1+\sqrt5}, \qquad a = q^{\alpha-1},$$
and
$$\mu_\alpha = (1-q^\alpha)\sum_{i=0}^{\infty} q^{i\alpha}\,\delta_{q^i/\phi}, \qquad \alpha = 1,2,3,\ldots$$
Note that $\mu_\alpha$ is the signed measure (18) multiplied by $1-q^\alpha$, so that it has total mass 1. Berg shows that $h_j = F_\alpha/F_{\alpha+j}$, $j = 0,1,2,\ldots$, and that

$$p_k^{(\alpha)}(x) = p_k(\phi x;\, q^{\alpha-1}, 1;\, q) = \sum_{j=0}^{k}(-1)^{jk-\binom{j}{2}}\binom{k}{j}_F\binom{\alpha+k+j-1}{k}_F\,x^j$$
and
$$\int_{-1}^{1} p_k^{(\alpha)}(x)\,p_m^{(\alpha)}(x)\,d\mu_\alpha(x) = \delta_{km}\,(-1)^{k\alpha}\,\frac{F_\alpha}{F_{\alpha+2k}}$$
where $F_j$ denotes the $j$th Fibonacci number (with $F_0 = 0$, $F_1 = 1$) and $\binom{j}{k}_F$ denotes the Fibonomial coefficient $\prod_{i=1}^{k}\frac{F_{j-i+1}}{F_i}$.
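Since the Fibonacci data above are rational, the orthogonality relation can be verified exactly. The helpers below encode our reading of the displayed formulas:

```python
from fractions import Fraction as F

fib = [0, 1]
for _ in range(40):
    fib.append(fib[-1] + fib[-2])

def fibonomial(n, k):                       # prod_{i=1}^k F_{n-i+1}/F_i
    num = den = 1
    for i in range(1, k + 1):
        num *= fib[n - i + 1]
        den *= fib[i]
    return F(num, den)

def a(alpha, k, j):                         # coefficient of x^j in p_k^(alpha)
    sign = (-1) ** (j * k - j * (j - 1) // 2)
    return sign * fibonomial(k, j) * fibonomial(alpha + k + j - 1, k)

def h(alpha, j):                            # moments h_j = F_alpha / F_{alpha+j}
    return F(fib[alpha], fib[alpha + j])

# <p_k, p_m> should equal delta_km (-1)^(k alpha) F_alpha / F_{alpha+2k}.
for alpha in (1, 2, 3, 4):
    for k in range(4):
        for m in range(4):
            ip = sum(a(alpha, k, i) * a(alpha, m, j) * h(alpha, i + j)
                     for i in range(k + 1) for j in range(m + 1))
            want = (F(0) if k != m
                    else (-1) ** (k * alpha) * F(fib[alpha], fib[alpha + 2 * k]))
            assert ip == want
```

For instance, with $\alpha = 2$ one gets $p_2^{(2)}(x) = 2 + 6x - 15x^2$ and $\langle p_2, p_2\rangle = F_2/F_6 = 1/8$.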

When $\alpha$ is even, $\mu_\alpha$ is a probability measure that, together with the polynomials $p_k^{(\alpha)}(x)$, satisfies the hypotheses of Theorem 3. Note that $\|p_k^{(\alpha)}\|^2 = F_\alpha/F_{\alpha+2k}$. Hence, equations (11) and (12) become

$$\sum_{i=k}^{m}\sum_{j=0}^{i}(-1)^{i(j+k)-\binom{j}{2}-\binom{k}{2}}\binom{i}{j}_F\binom{i}{k}_F\binom{\alpha+i+j-1}{i}_F\binom{\alpha+i+k-1}{i}_F\,\frac{F_{\alpha+2i}}{F_{\alpha+j+m}} = \delta_{mk}$$

$$\sum_{k=0}^{\infty} F_{\alpha+2k}\left(\sum_{i=0}^{\infty}\sum_{j=0}^{k}(-1)^{jk-\binom{j}{2}}\binom{k}{j}_F\binom{\alpha+k+j-1}{k}_F\,\frac{v_i}{F_{\alpha+j+i}}\right)^2 = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{v_kv_j}{F_{\alpha+k+j}}$$
where $\alpha = 2,4,6,\ldots$ and $\langle v_j\rangle \in l^2$. Furthermore, the analysis following Theorem 3 shows that both identities also hold for $\alpha = 1,3,5,\ldots$, provided $v_j = 0$ for all but finitely many values of $j$. For such $\alpha$, note that $\langle p_k^{(\alpha)}, p_k^{(\alpha)}\rangle < 0$ when $k$ is odd.
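The first of these identities involves only Fibonacci numbers and can be tested exactly for small $m$, $k$, and even $\alpha$ (a sketch of ours):

```python
from fractions import Fraction as F

fib = [0, 1]
for _ in range(40):
    fib.append(fib[-1] + fib[-2])

def C(n, k):                                # Fibonomial coefficient
    num = den = 1
    for i in range(1, k + 1):
        num *= fib[n - i + 1]
        den *= fib[i]
    return F(num, den)

def binom2(j):
    return j * (j - 1) // 2

# The Fibonomial instance of identity (11), for even alpha.
for alpha in (2, 4):
    for m in range(4):
        for k in range(4):
            s = sum((-1) ** (i * (j + k) - binom2(j) - binom2(k))
                    * C(i, j) * C(i, k)
                    * C(alpha + i + j - 1, i) * C(alpha + i + k - 1, i)
                    * F(fib[alpha + 2 * i], fib[alpha + j + m])
                    for i in range(k, m + 1) for j in range(i + 1))
            assert s == (1 if m == k else 0)
```

For odd $\alpha$ the displayed form needs the sign of $\langle p_i, p_i\rangle$ taken into account, which is why the test above restricts to even $\alpha$.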

Example 4 (finite dimensional spaces): Let $\alpha$ be a step function with positive jumps $J_k$ at the distinct points $z_k \in (-1,1)$, $k = 0,1,\ldots,r$. Then $L^2_\alpha(-1,1)$ has dimension $r+1$, and
$$h_j = \sum_{k=0}^{r} z_k^j J_k, \qquad j = 0,1,2,\ldots \qquad (19)$$
In what follows, we will frequently use the following submatrices of the infinite Hankel matrix $H = [h_{i+j}]_{i,j\ge0}$: Let $H_k$ denote the upper-left $(k+1)\times(k+1)$ submatrix of $H$. From equation (19), we see that
$$H_k = V_kD_rV_k^T, \qquad k = 0,1,\ldots,r \qquad (20)$$
where $V_k$ consists of rows 0 through $k$ of the Vandermonde-type matrix whose row 1 is $[z_0, z_1,\ldots,z_r]$ (its $(i,j)$ entry being $z_j^i$), and where $D_r$ is the diagonal matrix with diagonal entries $J_0,J_1,\ldots,J_r$.

In seeking an orthogonal basis of polynomials for $L^2_\alpha(-1,1)$, we choose their leading coefficients to be 1:
$$p_k(x) = x^k + \sum_{j=0}^{k-1} a_{kj}x^j, \qquad k = 0,1,\ldots,r$$
The remaining coefficients are then uniquely determined; we compute them as follows. Let $h(p,q)$ denote the vector with components $h_p,\ldots,h_q$, and let $a^{(k)}$ denote the vector with components $a_{k0},\ldots,a_{k,k-1}$. We claim that
$$H_{k-1}\,a^{(k)} = -h(k, 2k-1), \qquad k = 1,\ldots,r \qquad (21)$$
which may also be written

$$\sum_{j=0}^{k-1} h_{m+j}a_{kj} = -h_{m+k}, \qquad k = 1,\ldots,r, \quad m = 0,\ldots,k-1$$
Since $H_k$ is nonsingular when $k = 0,1,\ldots,r$, equation (21) uniquely determines the coefficients $a_{kj}$. One may confirm (21) by a straightforward calculation to show that the resulting polynomials are mutually orthogonal. By Cramer's rule, we then have the explicit formula
$$p_k(x) = x^k - \frac{1}{\det H_{k-1}}\sum_{j=0}^{k-1}\det\bigl(H_{k-1}(j,\, h(k,2k-1))\bigr)\,x^j, \qquad k = 0,1,\ldots,r \qquad (22)$$
where $M(j,w)$ denotes the matrix $M$ with its $j$th column replaced by the vector $w$. A similar calculation shows that

$$\|p_k\|^2 = \frac{\det H_k}{\det H_{k-1}}, \qquad k = 0,1,\ldots,r$$
where we assign the value 1 to $\det H_{-1}$.
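These computations are easy to carry out exactly. The sketch below (with hypothetical atoms and jumps of our choosing) solves equation (21) for the coefficient vectors $a^{(k)}$ and then confirms both the orthogonality of the resulting monic polynomials and the formula $\|p_k\|^2 = \det H_k/\det H_{k-1}$:

```python
from fractions import Fraction as F

# Step measure with atoms z_k and positive jumps J_k (hypothetical values).
z = [F(-1, 2), F(0), F(1, 2)]
J = [F(1, 3), F(1, 3), F(1, 3)]
r = len(z) - 1

h = [sum(zk ** j * Jk for zk, Jk in zip(z, J)) for j in range(4 * r + 2)]

def det(M):                                  # Laplace expansion, exact
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def Hmat(k):
    return [[h[i + j] for j in range(k + 1)] for i in range(k + 1)]

def solve(M, rhs):                           # Gauss-Jordan elimination over Q
    n = len(M)
    M = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[p] = M[p], M[c]
        for i in range(n):
            if i != c and M[i][c] != 0:
                f = M[i][c] / M[c][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Monic orthogonal polynomials from equation (21): H_{k-1} a^(k) = -h(k, 2k-1).
P = [[F(1)]]
for k in range(1, r + 1):
    a = solve(Hmat(k - 1), [-h[k + m] for m in range(k)])
    P.append(a + [F(1)])

def ip(p, q):                                # <p, q> in L^2_alpha
    return sum(p[i] * q[j] * h[i + j]
               for i in range(len(p)) for j in range(len(q)))

for k in range(r + 1):
    for m in range(k):
        assert ip(P[k], P[m]) == 0
    prev = det(Hmat(k - 1)) if k > 0 else F(1)
    assert ip(P[k], P[k]) == det(Hmat(k)) / prev
```

With these atoms one finds $p_2(x) = x^2 - 1/6$ and $\|p_2\|^2 = 1/72 = \det H_2/\det H_1$.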

Next we express each monomial $x^m$ as a linear combination of the basis polynomials $p_0, p_1,\ldots,p_r$:
$$x^m = \sum_{k=0}^{r} b_{mk}p_k(x), \qquad m = 0,1,2,\ldots$$
By equations (10) and (22),

$$b_{mk} = \frac{1}{\|p_k\|^2}\sum_{j=0}^{k} a_{kj}h_{j+m} = \frac{\det H_{k-1}}{\det H_k}\left(h_{k+m} - \frac{1}{\det H_{k-1}}\sum_{j=0}^{k-1}\det\bigl(H_{k-1}(j, h(k,2k-1))\bigr)h_{j+m}\right)$$
$$= \frac{1}{\det H_k}\left((\det H_{k-1})\,h_{k+m} - \sum_{j=0}^{k-1}\det\bigl(H_{k-1}(j, h(k,2k-1))\bigr)h_{j+m}\right) = \frac{\det\bigl(H_k(k, h(m,m+k))\bigr)}{\det H_k}$$
To confirm the last step above, evaluate $\det\bigl(H_k(k, h(m,m+k))\bigr)$ by the Laplace expansion down the last column of the matrix. For the $j$th entry of that column, $j = 0,1,\ldots,k-1$, the expansion formula assigns the sign $(-1)^{k+j}$. We must also move the vector $h(k,2k-1)$ from column $j$ to column $k-1$, and the corresponding transpositions change the sign of the determinant by the factor $(-1)^{k-1-j}$. Thus, the overall sign factor is $(-1)^{k+j+k-1-j} = -1$.

Having confirmed the above computation of $b_{mk}$, we conclude

$$x^m = \sum_{k=0}^{r}\frac{\det\bigl(H_k(k, h(m,m+k))\bigr)}{\det H_k}\,p_k(x), \qquad m = 0,1,2,\ldots$$
Hence equation (13) becomes
$$\sum_{k=0}^{r}\frac{1}{(\det H_k)(\det H_{k-1})}\left(\sum_{j=k}^{\infty} v_j\det\bigl(H_k(k, h(j,j+k))\bigr)\right)^2 = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty} v_kv_j h_{k+j}$$
where $\langle v_j\rangle \in l^2$.

Acknowledgment: I thank my colleague C. French for suggesting the analysis following Theorem 3.

References

1. G. E. Andrews, R. Askey, R. Roy, Special Functions, Cambridge University Press, 1999.

2. R. Askey, I. J. Schoenberg, A. Sharma, “Hausdorff’s moment problem and expansions in Legendre polynomials,” J. Math. Anal. Appl., v. 86 (1982), 237-245.

3. C. Berg, “Fibonacci numbers and orthogonal polynomials,” J. Comput. Appl. Math. (to appear).

4. G. Gasper and M. Rahman, Basic Hypergeometric Series, 2nd ed., Cambridge University Press, 2004.

5. G. H. Hardy, J. E. Littlewood, G. Pólya, Inequalities, Cambridge University Press, 1952.

6. M. Petkovšek, H. Wilf, D. Zeilberger, A=B, A. K. Peters, Ltd., 1996.

7. S. C. Power, Hankel Operators on Hilbert Space, Pitman Publishing, Inc., 1982.

8. G. Szegő, Orthogonal Polynomials, 4th ed., Amer. Math. Soc., 1975.

9. H. Widom, “Hankel Matrices,” Trans. Amer. Math. Soc., v. 121 (1966), 1-35.
