RODRIGUES FORMULA FOR JACOBI POLYNOMIALS ON THE UNIT CIRCLE
MASTERS THESIS
Presented in Partial Fulfillment of the Requirements for the Degree Master of
Science in the Graduate School of the Ohio State University
By
Griffin Alexander Reiner-Roth
Graduate Program in Mathematics
The Ohio State University
2013
Master’s Examination Committee:
Professor Rodica Costin, Advisor
Professor Barbara Keyfitz

© Copyright by
Griffin Alexander Reiner-Roth
2013

ABSTRACT
We begin by discussing properties of orthogonal polynomials on a Borel measurable subset of C. Then we focus on Jacobi polynomials and give a formula (analogous to that of [5]) for finding Jacobi polynomials on the unit circle. Finally, we consider some examples of Jacobi polynomials and demonstrate different methods of discovering them.
ACKNOWLEDGMENTS
I would like to thank my advisor, Dr. Rodica Costin, for all her help on this thesis: for finding a really interesting and doable project for me, for spending so much time explaining the basics of orthogonal polynomials to me, and for perfecting this paper.
I also thank my parents for making sure I never starved.
VITA
2011 ...... B.A. Mathematics, Vassar College
2011-Present ...... Graduate Teaching Associate, Department of Mathematics, The Ohio State University
FIELDS OF STUDY
Major Field: Mathematics
TABLE OF CONTENTS
Abstract ...... ii
Acknowledgments ...... iii
Vita...... iv
List of Tables ...... vii
CHAPTER PAGE
1 Introduction ...... 1
1.1 Historical Background ...... 1
1.2 Orthogonality in Hilbert Spaces ...... 4
1.3 Orthogonal Polynomials with Respect to Measures ...... 5
1.4 Approximation by Orthogonal Polynomials ...... 12
1.5 Classical Orthogonal Polynomials ...... 15
2 Jacobi Polynomials ...... 17
2.1 Jacobi Polynomials on the Real Line ...... 17
2.2 Jacobi Polynomials on the Unit Circle ...... 19
2.3 Properties of Jacobi Polynomials on the Unit Circle ...... 21
2.4 Connection between Jacobi Polynomials on the Real Line and on the Unit Circle ...... 22
2.5 Calculating the Inner Product ...... 26
2.6 Formulas for $C_n$, $D_n$ ...... 32
2.7 Conjecture for $A_n$, $B_n$ ...... 35

3 New Results ...... 37
3.1 Commutation Relations ...... 37
3.2 Rodrigues’ Formula ...... 39
4 Examples of Jacobi Polynomials on the Unit Circle ...... 41
4.1 Using the Gram-Schmidt Process ...... 41
4.2 Using Theorem 9 ...... 43
4.3 Using the New Rodrigues’ Formula ...... 44
4.4 Comparison of Formulas ...... 45
5 Codes and Computer Calculation Results ...... 47
5.1 OPRL ...... 47
5.2 OPUC ...... 48
Bibliography ...... 57
LIST OF TABLES
TABLE PAGE
5.1 Table 1: Comparison of Rodrigues’ formulas on the real line ..... 49
5.2 Table 2: Computation times using the new Rodrigues’ formula ... 54
CHAPTER 1
INTRODUCTION
1.1 Historical Background
The study of orthogonal polynomials has a rich history, which can be traced to as early as the late eighteenth century, linked to Legendre’s study of planetary motion.
About a century later, orthogonal polynomials arose (seemingly, in a more natural manner) when searching for solutions to Sturm-Liouville problems. These polynomials are referred to as the classical orthogonal polynomials (we discuss properties of these polynomials in Section 1.5). Some of the first mathematicians to study these polynomials were Chebyshev, Markov, and Stieltjes. These three names come up again, in connection with orthogonal polynomials, regarding the Markov-Stieltjes inequality, which “plays a fundamental role in the theory of moment problems” [7]. Chebyshev conjectured the inequality, and Markov and Stieltjes (independently) proved the result.
Orthogonal polynomials are defined on the real line (OPRL), or on the unit circle
(OPUC); there is far more research that has been done on orthogonal polynomials on the real line. Simon [17] conjectures one reason is that “[OPRL] examples appear in so many places that most scientists are exposed to them early in their education. The applications of OPUC are subtle and beautiful, but less concrete.” However, Szegő proved a relation between two special classes of these sets of polynomials, so an additional application of OPUC is that knowledge of these polynomials can illuminate
OPRL facts. In Section 2.4, we state and prove this relation.
In the early twentieth century, Szegő, the father of OPUC, revolutionized the study of orthogonal polynomials by considering a wide variety of problems. In many ways, OPUC are analogous to OPRL. As an example, polynomials from both sets possess the Christoffel-Darboux formula; for more information on the Christoffel-Darboux formula in both settings, see Simon [17], Szegő [19] (to whom the formula is due), and Freud [7] (whose work on the unit circle relies heavily on the minimum problem, i.e., minimizing the Christoffel function). In fact, “The study of the minimum problem is central to Szegő's initial papers and has been a recurrent theme since” [17]. Other analogues on the unit circle include Favard's Theorem (a recent proof is given in [6]), Gauss-Jacobi quadrature, and the fact that they all satisfy three-term recursion relations.
We conclude this section with a short summary of some areas of mathematics that have benefitted from the study of OPRL and, to a lesser extent, OPUC. We mentioned quadrature, an area that can be considered independently of orthogonal polynomials. Asymptotic behavior of orthogonal polynomials is a large field of study, and has applications to problems such as Riemann-Hilbert problems. The asymptotic behavior of the ultraspherical and Laguerre polynomials (examples of classical orthogonal polynomials) has been studied by Darboux and Fejér, respectively. Asymptotic properties of orthogonal polynomials have continued to be investigated, and results have been found by several mathematicians, from Szegő to Simon. Interpolation procedures, very important in applications, have been studied by, among others, Geronimus. A solution to the moment problem (or the Hamburger moment problem, named after Hamburger for his work on the problem) was obtained in no small part thanks to results on orthogonal polynomials. Approximation theory (with profound results due to Nevai), Lagrange interpolation, continued fractions, stochastic processes, and even coding theory are subjects with results relying on orthogonal polynomials.
1.2 Orthogonality in Hilbert Spaces
Orthogonal polynomials are the Rodney Dangerfield of analysis.
Barry Simon, OPUC on One Foot
Let $(H, \langle\cdot,\cdot\rangle)$ be a Hilbert space and $\|\cdot\|$ be the associated norm, defined by $\|f\| = \sqrt{\langle f, f\rangle}$.
Definition 1. Let $\mathcal F \subseteq H$ be a countable set of functions $\{f_k : k \geq 0\}$. $f_i$ is orthogonal to $f_j$ if $\langle f_i, f_j\rangle = 0$, and $\mathcal F$ is an orthogonal set if $\langle f_i, f_j\rangle = 0$ for $i \neq j$. The family $\mathcal F$ is an orthonormal set if $\mathcal F$ is an orthogonal set and, in addition, $\|f_i\| = 1$ for each $i \geq 0$.
Let $\mathcal F$ be a countable, linearly independent set of functions. Then there exists an orthogonal set $\mathcal G$ in bijection with $\mathcal F$ such that $\operatorname{span}(\mathcal F) = \operatorname{span}(\mathcal G)$. Such a $\mathcal G$ can be constructed using the Gram-Schmidt process:

Theorem 2. (The Gram-Schmidt Process) Let $n \in \mathbb N$ and $\{f_k : 0 \leq k \leq n\}$ be a linearly independent set of functions. Define $g_0 = f_0$ and, for $1 \leq k \leq n$,
$$g_k = f_k - \sum_{j=0}^{k-1} \frac{\langle g_j, f_k\rangle}{\|g_j\|^2}\, g_j.$$
Then $\{g_k : 0 \leq k \leq n\}$ is an orthogonal set and $\operatorname{span}\{f_k\} = \operatorname{span}\{g_k\}$.
(For a proof of this theorem, see, for example, Sadun [15].) The process above can be extended to a countable family of linearly independent functions. We remark that one can obtain an orthonormal set from an orthogonal set by replacing each $g_k$ with $g_k/\|g_k\|$.
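As a concrete illustration of Theorem 2 (my own sketch, not from the thesis; the helper names are invented), the following applies the Gram-Schmidt process to the monomials $1, x, x^2$ with the $L^2([-1,1], dx)$ inner product approximated by trapezoidal quadrature:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal quadrature of samples y over the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def gram_schmidt(F, inner):
    """Orthogonalize a linearly independent family F under `inner` (Theorem 2)."""
    G = []
    for f in F:
        g = f.astype(float).copy()
        for gj in G:
            # subtract the projection of f onto each earlier g_j
            g = g - (inner(gj, f) / inner(gj, gj)) * gj
        G.append(g)
    return G

# Monomials 1, x, x^2 on [-1, 1] with the L^2(dx) inner product.
x = np.linspace(-1.0, 1.0, 20001)
inner = lambda u, v: trapezoid(u * v, x)
g0, g1, g2 = gram_schmidt([np.ones_like(x), x, x**2], inner)
# g2 is (numerically) the monic Legendre polynomial x^2 - 1/3.
```

The output family is pairwise orthogonal up to quadrature error, and dividing each $g_k$ by its norm would give the orthonormal set described above.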
1.3 Orthogonal Polynomials with Respect to Measures
Let $X$ be an interval in $\mathbb R$ or the unit circle and $\mu$ a positive Borel measure on $X$. Define
$$H = L^2(X, d\mu) = \left\{ f : X \to \mathbb C : f \text{ is measurable and } \int_X |f|^2\, d\mu < \infty \right\},$$
the space of square integrable functions with respect to the measure $\mu$. The binary operation defined by
$$\langle f, g\rangle = \int_X f\,\overline{g}\, d\mu \qquad (1.3.1)$$
for $f, g \in H$ is an inner product on $H$, and it is well known that $(H, \langle\cdot,\cdot\rangle)$ becomes a Hilbert space. Notice that this inner product is finite by the Cauchy-Schwarz Inequality:
$$|\langle f, g\rangle| = \left|\int_X f\,\overline{g}\, d\mu\right| \leq \int_X |f|\cdot|g|\, d\mu \leq \left(\int_X |f|^2\, d\mu\right)^{1/2} \left(\int_X |g|^2\, d\mu\right)^{1/2} < \infty.$$
Assumptions. We assume that the moments exist, that is, $\int_X z^k\, d\mu$ is defined for all $k \geq 0$ (later, we give a more general definition of moments, but this definition will be sufficient on the unit circle). Further, assume $\mu$ is normalized so that $\int_X 1\, d\mu = 1$.
The set of polynomials $\{z^k : k \geq 0\}$ is linearly independent, so one may apply
Gram-Schmidt to this set to obtain the set of monic orthogonal polynomials of degree n ≥ 0. These are the orthogonal polynomials on X with respect to the measure µ.
Throughout, the set $\{\varphi_n\} = \{\varphi_n : n \geq 0\}$ will denote the set of monic orthogonal polynomials, where $\varphi_n(z)$ has degree $n$ (i.e., for each $n \in \mathbb N$, there exist complex numbers $a_k$ such that $\varphi_n(z) = z^n + \sum_{k=0}^{n-1} a_k z^k$). In certain situations, the set of orthonormal polynomials is desired; denote this set by $\{\Phi_n\}$. In more theoretical settings, $\{\varphi_n\}$ is usually of interest simply because monic polynomials are easier to work with. In the context of applications, $\{\Phi_n\}$ may be preferred for its use in approximation.

The first apparent difference between $\varphi_n$ and $\Phi_n$ involves the leading coefficient of $\Phi_n$, which we denote by $\kappa_n$:
$$\Phi_n(z) = \kappa_n z^n + \cdots.$$
Of course, $\Phi_n = \varphi_n/\|\varphi_n\|$; therefore, $\kappa_n = \|\varphi_n\|^{-1}$.

We mention a few observations about orthogonal polynomials. Since $\langle\varphi_m, \varphi_n\rangle = 0$ whenever $0 \leq m < n$, we have
$$\langle\varphi_n(z), z^m\rangle = 0, \qquad 0 \leq m < n.$$
Conversely, the conditions $\langle\varphi_n(z), z^m\rangle = 0$ for $0 \leq m < n$ are sufficient for $\{\varphi_k(z)\}$ to be an orthogonal set: if for each $n$, $\langle\varphi_n(z), z^m\rangle = 0$ whenever $m < n$, then by the linearity of the inner product, $\langle\varphi_n(z), p(z)\rangle = 0$ whenever $p(z)$ is a polynomial of degree less than $n$. In particular, $\langle\varphi_n(z), \varphi_k(z)\rangle = 0$ if $k < n$.
Definition 3. For $i, j \geq 0$, denote the moments of $\mu$ by $\sigma_{i,j}$, where
$$\sigma_{i,j} = \int_X z^i\, \overline{z}^j\, d\mu.$$
If we write $\varphi_n(z) = \sum_{j=0}^{n} c_j z^j$ (with $c_n = 1$ because $\varphi_n$ is monic), then for $0 \leq m < n$,
$$0 = \langle\varphi_n(z), z^m\rangle = \int_X \varphi_n(z)\,\overline{z}^m\, d\mu = \sum_{j=0}^{n} c_j \sigma_{j,m}.$$
Therefore,
$$\sum_{j=0}^{n-1} c_j \sigma_{j,m} = -\sigma_{n,m}, \qquad m = 0, 1, \ldots, n-1. \qquad (1.3.2)$$
So finding $\varphi_n(z)$ amounts to solving the above $n \times n$ system of equations for $c_0, c_1, \ldots, c_{n-1}$.
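The linear system (1.3.2) can be solved directly. The following numerical sketch (illustrative only; the function names are my own) recovers monic Legendre polynomials from the moments of the normalized Lebesgue measure $d\mu = dx/2$ on $[-1, 1]$:

```python
import numpy as np

def monic_orthogonal_poly(n, moment):
    """Solve (1.3.2): sum_{j<n} c_j sigma_{j,m} = -sigma_{n,m} for m = 0..n-1.
    Returns the coefficients c_0, ..., c_n (with c_n = 1) of the monic phi_n."""
    S = np.array([[moment(j, m) for j in range(n)] for m in range(n)])
    rhs = -np.array([moment(n, m) for m in range(n)])
    return np.append(np.linalg.solve(S, rhs), 1.0)

# Moments of the normalized Lebesgue measure dmu = dx/2 on [-1, 1]:
# sigma_{j,m} = int_{-1}^{1} x^{j+m} dx / 2, zero when j+m is odd.
sigma = lambda j, m: 1.0 / (j + m + 1) if (j + m) % 2 == 0 else 0.0
c2 = monic_orthogonal_poly(2, sigma)   # expect the monic Legendre x^2 - 1/3
c3 = monic_orthogonal_poly(3, sigma)   # expect x^3 - (3/5) x
```

The coefficient vectors come back lowest degree first, matching the indexing $c_0, \ldots, c_n$ in the text.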
Definition 4. An $n \times n$ matrix $A = (a_{i,j})$ is Hermitian if $a_{i,j} = \overline{a_{j,i}}$ for all $1 \leq i, j \leq n$. A quadratic form in $n$ variables $z_1, \ldots, z_n$ is a polynomial in these variables of homogeneous degree two.

Note that $\langle x, Ax\rangle$ is a quadratic form (generated by $A$).
The coefficients of (1.3.2) form a Hermitian matrix, which generates the quadratic form $\sum_{i,j=0}^{n} \sigma_{i,j}\, z_i \overline{z_j}$, and the determinant of this system is
$$D_n = \begin{vmatrix} \sigma_{0,0} & \sigma_{0,1} & \cdots & \sigma_{0,n} \\ \sigma_{1,0} & \sigma_{1,1} & \cdots & \sigma_{1,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n,0} & \sigma_{n,1} & \cdots & \sigma_{n,n} \end{vmatrix}.$$
If $\mu$ is supported on the real line, then $D_n$ is a Hankel determinant. A Hankel matrix is a matrix that satisfies $a_{i,j} = a_{i-1,j+1}$ for each $i > 0$ and $j < n$. If $\mu$ is supported on the unit circle, then $D_n$ is a Toeplitz determinant. A Toeplitz matrix satisfies $a_{i,j} = a_{i+1,j+1}$ for $0 \leq i, j < n$. We will see later why these are such determinants for $\mu$ on the real line and on the unit circle.
The system (1.3.2) of equations has a solution by the Sylvester criterion for positive definiteness: the quadratic form generated by a Hermitian matrix $A$ is positive definite if and only if the principal minors of $A$ are positive. The quadratic form generated here is $\sum_{i,j=0}^{n} \sigma_{i,j}\, z_i \overline{z_j}$, which is positive definite, so the principal minors (that is, $D_0, D_1, \ldots, D_{n-1}$) are positive (an argument explaining why this quadratic form is positive definite is given in Horn and Johnson [11]). This result gives the following method of constructing orthogonal polynomials:
Proposition 5. Let
$$p_n(z) = \begin{vmatrix} \sigma_{0,0} & \sigma_{0,1} & \cdots & \sigma_{0,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n-1,0} & \sigma_{n-1,1} & \cdots & \sigma_{n-1,n} \\ 1 & z & \cdots & z^n \end{vmatrix}$$
and suppose either $X \subseteq \mathbb R$ or $X = \partial\mathbb D$, the unit circle. Then for each $n \geq 1$,
$$\varphi_n(z) = \frac{1}{D_{n-1}}\, p_n(z).$$
Also, $\|\varphi_n\| = \left(\dfrac{D_n}{D_{n-1}}\right)^{1/2}$.
Proof. Fix $n \geq 1$ and $0 \leq m < n$. Recall that an equivalent condition for $\{p_k(z)\}$ to be an orthogonal set is $\langle p_n(z), z^m\rangle = 0$ for $0 \leq m < n$. We have
$$\langle p_n(z), z^m\rangle = \int_X p_n(z)\,\overline{z}^m\, d\mu = \int_X \begin{vmatrix} \sigma_{0,0} & \sigma_{0,1} & \cdots & \sigma_{0,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n-1,0} & \sigma_{n-1,1} & \cdots & \sigma_{n-1,n} \\ \overline{z}^m & z\overline{z}^m & \cdots & z^n\overline{z}^m \end{vmatrix}\, d\mu.$$
Each $\sigma_{i,j}$ is a constant, so the integral of the determinant may be applied to the last row of the matrix. Thus, we have
$$\langle p_n(z), z^m\rangle = \begin{vmatrix} \sigma_{0,0} & \sigma_{0,1} & \cdots & \sigma_{0,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n-1,0} & \sigma_{n-1,1} & \cdots & \sigma_{n-1,n} \\ \int_X \overline{z}^m\, d\mu & \int_X z\overline{z}^m\, d\mu & \cdots & \int_X z^n\overline{z}^m\, d\mu \end{vmatrix} = \begin{vmatrix} \sigma_{0,0} & \sigma_{0,1} & \cdots & \sigma_{0,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n-1,0} & \sigma_{n-1,1} & \cdots & \sigma_{n-1,n} \\ \sigma_{0,m} & \sigma_{1,m} & \cdots & \sigma_{n,m} \end{vmatrix}.$$
We claim $\sigma_{j,m} = \sigma_{m,j}$ for $0 \leq j \leq n$.
If $X \subseteq \mathbb R$, then $\overline{z}^m = z^m$ and
$$\sigma_{j,m} = \int_X z^{j+m}\, d\mu = \sigma_{m,j}.$$
(So if $X \subseteq \mathbb R$, then $D_n$ is a Hankel determinant because
$$\sigma_{i,j} = \int_X z^{i+j}\, d\mu = \int_X z^{(i-1)+(j+1)}\, d\mu = \sigma_{i-1,j+1}$$
for $i > 0$ and $j < n$.)
If $X = \partial\mathbb D$, then $\overline{z}^m = z^{-m}$. The substitutions $z = e^{it}$ (with $\nu$ the measure induced on $[-\pi, \pi]$) and $k = j - m$ yield
$$\sigma_{j,m} = \int_{\partial\mathbb D} z^k\, d\mu(z) = \int_{-\pi}^{\pi} e^{ikt}\, d\nu(t) = \int_{-\pi}^{\pi} \cos(kt) + i\sin(kt)\, d\nu = \int_{-\pi}^{\pi} \cos(kt)\, d\nu = \int_{-\pi}^{\pi} \cos(-kt)\, d\nu = \sigma_{m,j}$$
(the sine integral vanishes because $\nu$ is symmetric about $0$ for the measures considered here).
(So if $X = \partial\mathbb D$, then $D_n$ is a Toeplitz determinant because
$$\sigma_{i,j} = \int_{\partial\mathbb D} z^{i-j}\, d\mu = \int_{\partial\mathbb D} z^{(i+1)-(j+1)}\, d\mu = \sigma_{i+1,j+1}$$
for $0 \leq i, j < n$.)
This claim implies we can rewrite the last row of the matrix above as $(\sigma_{m,0}, \sigma_{m,1}, \ldots, \sigma_{m,n})$, so that
$$\langle p_n(z), z^m\rangle = \begin{vmatrix} \sigma_{0,0} & \sigma_{0,1} & \cdots & \sigma_{0,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n-1,0} & \sigma_{n-1,1} & \cdots & \sigma_{n-1,n} \\ \sigma_{m,0} & \sigma_{m,1} & \cdots & \sigma_{m,n} \end{vmatrix}.$$
Now the $m$th and $n$th rows are equal, so the determinant is zero and $\{p_n(z)\}$ forms an orthogonal set. Also, one can see that the leading coefficient of $p_n$ is $D_{n-1}$ by expansion by minors.
Next, we find the norm of $\varphi_n$. Since $\varphi_n(z)$ is orthogonal to any polynomial of degree less than $n$,
$$\|\varphi_n\|^2 = \langle\varphi_n(z), \varphi_n(z)\rangle = \langle\varphi_n(z), z^n\rangle.$$
The calculation above with $m = n$ gives
$$\langle\varphi_n(z), z^n\rangle = \frac{1}{D_{n-1}}\langle p_n(z), z^n\rangle = \frac{1}{D_{n-1}} \begin{vmatrix} \sigma_{0,0} & \sigma_{0,1} & \cdots & \sigma_{0,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n,0} & \sigma_{n,1} & \cdots & \sigma_{n,n} \end{vmatrix} = \frac{D_n}{D_{n-1}}.$$
Therefore,
$$\|\varphi_n\| = \sqrt{\frac{D_n}{D_{n-1}}}.$$
Thus, the moments give the leading coefficient of $\Phi_n$. In particular,
$$\kappa_n = \left(\frac{D_{n-1}}{D_n}\right)^{1/2}.$$
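Proposition 5 can also be checked symbolically. The sketch below (my own illustration, using SymPy, with invented helper names) builds $p_n$ as the determinant whose last row is $1, z, \ldots, z^n$ and divides by $D_{n-1}$, again for the normalized Lebesgue measure $dx/2$ on $[-1, 1]$:

```python
import sympy as sp

z = sp.symbols('z')

def sigma(i, j):
    # Moments of the normalized Lebesgue measure dx/2 on [-1, 1].
    return sp.Rational(1, i + j + 1) if (i + j) % 2 == 0 else sp.Integer(0)

def phi(n):
    """Monic orthogonal polynomial phi_n = p_n / D_{n-1} (Proposition 5)."""
    rows = [[sigma(i, j) for j in range(n + 1)] for i in range(n)]
    rows.append([z**j for j in range(n + 1)])   # last row: 1, z, ..., z^n
    p_n = sp.Matrix(rows).det()
    D_prev = sp.Matrix(n, n, lambda i, j: sigma(i, j)).det()
    return sp.expand(p_n / D_prev)

# phi(2) and phi(3) should be the monic Legendre polynomials.
```

Exact rational arithmetic makes the division by $D_{n-1}$ come out as a genuinely monic polynomial, matching the normalization in the proposition.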
1.4 Approximation by Orthogonal Polynomials
What is the definition of an approximation to a number? Any number other than that number!
Arnold Ross
Orthogonal polynomials are studied for several reasons (in both the applied and pure branches of mathematics) and are of great importance due to their applications to approximating functions; this branch of mathematics is called approximation theory.
Let $f$ be a function on $X$ that we wish to approximate by polynomials, and suppose $\{p_k(z)\}$ is a given sequence of polynomials. The goal, if possible, is to find a sequence of constants $\{c_k\}$ such that
$$f(z) = \sum_{k=0}^{\infty} c_k p_k(z)$$
as a uniformly convergent series on $X$. One of the most famous examples of such an expansion is the Taylor series expansion of $f$, but this expansion exists if and only if $f \in C^\infty(X)$, and even then, the series may not converge. This idea motivates approximation by orthogonal polynomials. One nice feature of orthogonal polynomials is that the coefficients can be discovered easily: if such an expansion $f(z) = \sum_{k=0}^{\infty} c_k\varphi_k(z)$ exists, multiply through by $\overline{\varphi_j(z)}$ and integrate to get
$$\int_X f(z)\overline{\varphi_j(z)}\, d\mu = \sum_{k=0}^{\infty} c_k \int_X \varphi_k(z)\overline{\varphi_j(z)}\, d\mu = c_j\|\varphi_j\|^2. \qquad (1.4.1)$$
Note that, in the representation (1.4.1), the series converges in $H$, that is, in square average. Even if the series is not uniformly convergent on $X$, we may still interchange the integral with the limit in (1.4.1) because the inner product is a continuous linear functional; in particular, if $\sum c_k\varphi_k(z)$ converges to $f(z)$ in $L^2$ (which is true by Parseval's identity), then for any $\varphi_j \in H$, $\left\langle\sum c_k\varphi_k, \varphi_j\right\rangle \to \langle f, \varphi_j\rangle$.
So each $c_j$ can be found with the formula
$$c_j = \frac{1}{\|\varphi_j\|^2}\int_X f(z)\overline{\varphi_j(z)}\, d\mu.$$
Not only is this sequence of constants relatively easy to obtain, it is also the optimal choice of constants for approximating $f(z)$:
Theorem 6. Let $f \in L^2(X, d\mu)$ and $n \in \mathbb N$. Then
$$\inf\left\{ \left\| f - \sum_{j=0}^{n} c_j\Phi_j \right\| : (c_0, \ldots, c_n) \in \mathbb C^{n+1} \right\}$$
is attained if and only if $c_j = \langle f(z), \Phi_j(z)\rangle$.

This is a key result of the theory of Hilbert spaces (for a proof, see Akhiezer and Glazman [2]). The theorem states that the best approximation (in the $L^2$ norm) by a polynomial of degree $n$ is determined by the orthogonal polynomials, and the coefficients are given by the projections of $f$ onto each $\Phi_j(z)$.
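To illustrate the coefficient formula and Theorem 6 numerically (an illustrative sketch of my own, not the thesis's code), the following projects $f(x) = e^x$ onto the first three monic Legendre polynomials with respect to $d\mu = dx/2$ and measures the $L^2(d\mu)$ error of the degree-two truncation:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 20001)

def inner(u, v):
    # L^2 inner product with respect to dmu = dx/2, via trapezoidal quadrature.
    y = u * v / 2.0
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

f = np.exp(x)
phis = [np.ones_like(x), x, x**2 - 1.0/3.0]     # monic Legendre phi_0, phi_1, phi_2
cs = [inner(f, p) / inner(p, p) for p in phis]   # c_j = <f, phi_j> / ||phi_j||^2
approx = sum(c * p for c, p in zip(cs, phis))
err = np.sqrt(inner(f - approx, f - approx))     # L^2(dmu) error of the truncation
```

Exact values of the first two coefficients are $c_0 = (e - e^{-1})/2$ and $c_1 = 3/e$, and the degree-two projection already approximates $e^x$ to a few percent in the $L^2(d\mu)$ norm.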
1.5 Classical Orthogonal Polynomials
Some of the first classes of orthogonal polynomials appeared as eigenfunctions of
Sturm-Liouville problems, that is, as solutions of differential equations
$$\sigma(x)y''(x) + \tau(x)y'(x) + \lambda_n y(x) = 0$$
with appropriate boundary conditions, where $\sigma(x)$ is a polynomial of degree at most two, $\tau(x)$ a linear polynomial, and $\lambda_n$ are constants that need to be identified. These polynomials are also rich in extra properties and are now called the classical orthogonal polynomials. Some of these properties include:
1. their derivatives also form an orthogonal set,
2. they all possess a Rodrigues’ type formula:
$$P_n(x) = w(x)^{-1}\, \frac{d^n}{dx^n}\left( w(x)\,\sigma(x)^n \right)$$
with $w$ the weight function and $\sigma$ some polynomial,
3. they satisfy a differential-difference relation of the form
$$\pi(x)P_n'(x) = (\alpha_n x + \beta_n)P_n(x) + \gamma_n P_{n-1}(x),$$
4. they satisfy a non-linear equation of the form
$$\sigma(x)\left(P_n(x)P_{n-1}(x)\right)' = (\alpha_n x + \beta_n)P_n(x)P_{n-1}(x) + \gamma_n P_n^2(x) + \delta_n P_{n-1}^2(x)$$
with $\sigma$ a polynomial of degree at most two.
Not only do these properties hold for all of the classical orthogonal polynomials; the classical orthogonal polynomials are the only orthogonal polynomials that satisfy these four properties (Golinskii, [8]).

The three classes of the classical orthogonal polynomials are:
1. Jacobi polynomials: Traditionally denoted by $P_n^{(\alpha,\beta)}(x)$, they are orthogonal in $L^2([-1,1], \mu)$ with $d\mu = w(x)\, dx$ and weight function $w(x) = (1-x)^\alpha(1+x)^\beta$ ($\alpha > -1$, $\beta > -1$). Note that the condition $\alpha, \beta > -1$ is needed for $P_n^{(\alpha,\beta)}$ to be integrable with respect to the measure $(1-x)^\alpha(1+x)^\beta\, dx$ on $[-1,1]$. On the other hand, many of the other properties hold for $\alpha, \beta \in \mathbb C \setminus \{-1, -2, \ldots\}$ (there exists a three-term relation, the Rodrigues formula holds, and in fact, they form a complete set with respect to the complex-valued measure $(1-x)^\alpha(1+x)^\beta\, dx$).

2. Laguerre polynomials: Orthogonal in $L^2([0,\infty), \mu)$ with $d\mu = w(x)\, dx$, $w(x) = e^{-x}x^\alpha$ ($\alpha > -1$); $L_n^{(\alpha)}$ denotes the $n$th Laguerre polynomial.

3. Hermite polynomials: These polynomials are orthogonal in $L^2((-\infty,\infty), \mu)$ with $d\mu = w(x)\, dx$, $w(x) = e^{-x^2}$; $H_n(x)$ denotes the $n$th Hermite polynomial.
Many special subclasses have proven to be of great interest. For example, the ultraspherical (or Gegenbauer) polynomials are the Jacobi polynomials with $\alpha = \beta$, the Chebyshev polynomials of the first kind are the Jacobi polynomials with $\alpha = \beta = -\frac{1}{2}$, and the Legendre polynomials satisfy $\alpha = \beta = 0$. These polynomials occur in many settings, including those mentioned in Section 1.1.
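These special cases can be spot-checked with SciPy's classical-polynomial evaluators (assuming SciPy is available; the renormalization $T_n(x) = P_n^{(-1/2,-1/2)}(x)/P_n^{(-1/2,-1/2)}(1)$ is the standard one, since Jacobi polynomials are not normalized to equal 1 at $x = 1$):

```python
import numpy as np
from scipy.special import eval_jacobi, eval_legendre, eval_chebyt

x = np.linspace(-1.0, 1.0, 201)

# Legendre: the Jacobi case alpha = beta = 0.
leg_via_jacobi = eval_jacobi(3, 0.0, 0.0, x)
leg = eval_legendre(3, x)

# Chebyshev (first kind): alpha = beta = -1/2, rescaled so that T_n(1) = 1.
cheb_via_jacobi = eval_jacobi(4, -0.5, -0.5, x) / eval_jacobi(4, -0.5, -0.5, 1.0)
cheb = eval_chebyt(4, x)
```

Both comparisons agree to machine precision, illustrating that the classical families really are special parameter choices of the Jacobi class (up to normalization).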
In the following, we focus our attention on the Jacobi polynomials.
CHAPTER 2
JACOBI POLYNOMIALS
2.1 Jacobi Polynomials on the Real Line
Jacobi polynomials have been studied extensively both on the real line and the unit circle. We begin with a summary of results on the real line, and we focus on the scalar case.
Let $Q(x) = x^2 - 1$ and $w_r(x) = (1-x)^\alpha(1+x)^\beta$, where $\alpha, \beta > -1$. Let $\mathbb N$ denote the nonnegative integers: $\mathbb N = \{0, 1, 2, \ldots\}$.
For each $n \in \mathbb N$, define $p_n(x)$ by the Rodrigues formula:
$$p_n(x) = \frac{1}{w_r(x)}\, \frac{d^n}{dx^n}\left[ Q(x)^n\, w_r(x) \right].$$
The set $\{p_n(x) : n \in \mathbb N\}$ is the set of Jacobi polynomials (up to constant multiplication), and is an orthogonal set. A relatively new result gives a nice, recursive property of the Jacobi polynomials:

Theorem 7. (Costin [5]) Define the operator $A_k$ by
$$A_k = kQ'(x) + (\alpha+\beta)x + \alpha - \beta + Q(x)\,\partial_x.$$
Then for all $n$, $p_n(x) = A_1 A_2 \cdots A_n 1$.
This result greatly reduces the difficulty of finding the $n$th Jacobi polynomial for a specified $n$. For example, the computation time for finding $p_{15}(x)$ with the Rodrigues formula is 16.718 seconds, while with Theorem 7 the computation time is only 0.522 seconds. The code used and the times for computing other Jacobi polynomials are given in Chapter 5.
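A small SymPy sketch of the comparison (sample parameter values $\alpha = 1$, $\beta = 2$; this is my own illustration, not the thesis's Chapter 5 code) implements both the Rodrigues formula and the operator recursion of Theorem 7 and confirms they agree:

```python
import sympy as sp

x = sp.symbols('x')
alpha, beta = sp.Integer(1), sp.Integer(2)   # sample parameter values
Q = x**2 - 1
w = (1 - x)**alpha * (1 + x)**beta           # Jacobi weight w_r

def p_rodrigues(n):
    # p_n = w^{-1} (d/dx)^n [Q^n w]
    return sp.expand(sp.cancel(sp.diff(Q**n * w, x, n) / w))

def p_recursive(n):
    # p_n = A_1 A_2 ... A_n 1, with A_k = k Q' + (alpha+beta) x + alpha - beta + Q d/dx
    p = sp.Integer(1)
    for k in range(n, 0, -1):   # apply A_n first, A_1 last
        p = k * sp.diff(Q, x) * p + ((alpha + beta) * x + alpha - beta) * p + Q * sp.diff(p, x)
    return sp.expand(p)
```

The recursion only ever differentiates low-degree polynomials once per step, which is one way to see why it is so much faster than taking an $n$th derivative of $Q^n w_r$.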
2.2 Jacobi Polynomials on the Unit Circle
Define the inner product of functions defined on the unit circle $\partial\mathbb D$ by
$$\langle f, g\rangle = \frac{1}{2\pi i}\int_{\partial\mathbb D} f(z)\,\overline{g(z)}\, w(z)\, \frac{dz}{z},$$
where $w$ is the complex Jacobi weight
$$w(z) = \left(1 - \frac{z+z^{-1}}{2}\right)^{\alpha} \left(1 + \frac{z+z^{-1}}{2}\right)^{\beta}$$
and $\alpha, \beta \geq 0$ (on the unit circle, integrability does not hold if $\alpha$ or $\beta$ is negative).
Since $w(\overline z) = w(z)$ (on the unit circle, $z^{-1} = \overline z$), the values of $w(z)$ are completely determined by its values for $z$ on the upper half circle (or lower). For $z$ on the upper half of the unit circle, we may write $z = x + i\sqrt{1-x^2}$ for $x \in [-1,1]$. Since $\operatorname{Re}(z) = x$, we may write $w(z) = w(x + i\sqrt{1-x^2}) = (1-x)^\alpha(1+x)^\beta$. A more useful expression for this inner product is
$$\frac{1}{2\pi i}\int_{\partial\mathbb D} f(z)\,\overline{g(z)}\, w(z)\, \frac{dz}{z} = \frac{1}{2\pi i}\int_{-\pi}^{\pi} f(e^{i\theta})\,\overline{g(e^{i\theta})}\,(1-\cos\theta)^\alpha(1+\cos\theta)^\beta\, \frac{ie^{i\theta}\, d\theta}{e^{i\theta}} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(e^{i\theta})\,\overline{g(e^{i\theta})}\,(1-\cos\theta)^\alpha(1+\cos\theta)^\beta\, d\theta.$$
This change of variables explains why we write $\frac{dz}{z}$ instead of $dz$: converting $z$ to $e^{i\theta}$ yields an additional term, $ie^{i\theta}$.
For $x \in [-1,1]$, define
$$w_1(x) = (1-x)^{\alpha-\frac12}(1+x)^{\beta-\frac12}, \qquad w_2(x) = (1-x)^{\alpha+\frac12}(1+x)^{\beta+\frac12},$$
and let $p_n(x)$, $q_n(x)$ be the Jacobi polynomials defined as
$$p_n(x) = \frac{1}{w_1(x)}\frac{d^n}{dx^n}\left[Q(x)^n w_1(x)\right], \qquad q_n(x) = \frac{1}{w_2(x)}\frac{d^n}{dx^n}\left[Q(x)^n w_2(x)\right]$$
for $n \geq 0$, and $q_{-1}(x) = 0$. (Recall that $Q(x) = x^2 - 1$.) Define $\mu_0$ on $[-1,1]$ and $\mu$ on $\partial\mathbb D$ to be the measures
$$\mu_0 = \mu_0(x) = (1-x)^\alpha(1+x)^\beta = w_r(x), \qquad \mu = \mu(z) = \mu(x+i\sqrt{1-x^2}) = \frac{(1-x)^\alpha(1+x)^\beta}{2\pi i z} = \frac{w(z)}{2\pi i z}.$$
2.3 Properties of Jacobi Polynomials on the Unit Circle
Throughout this paper, we use the following facts for polynomials on the unit circle which are orthogonal with respect to the Jacobi weight.
Useful Fact 1. For $0 \leq j < n$, $\langle\varphi_n(z), z^j\rangle = 0$ (the proof is given in Section 1.3).

Useful Fact 2. Given $n \in \mathbb N$, $\sigma_{i,j} = \sigma_{j,i}$ for all $0 \leq i, j \leq n$ (the proof of which is also in Section 1.3).

Useful Fact 3. For all $z \in \partial\mathbb D$, $\overline z = z^{-1}$, and $\overline{z^n} = \overline z^{\,n} = z^{-n}$ for $n \in \mathbb N$ (it is evident).

Useful Fact 4. $w(\overline z) = w(z)$.

Useful Fact 5. A straightforward calculation gives
$$\int_{\partial\mathbb D} f(z)\, d\mu(z) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(e^{i\theta})\, w(e^{i\theta})\, d\theta$$
for all $f \in L^2(\partial\mathbb D, d\mu)$; in particular, substituting $x = \cos\theta$ on the upper half circle,
$$\frac{1}{2\pi}\int_{0}^{\pi} f(e^{i\theta})\, w(e^{i\theta})\, d\theta = \frac{1}{2\pi}\int_{-1}^{1} \frac{f(x+i\sqrt{1-x^2})}{\sqrt{1-x^2}}\, d\mu_0(x) = \frac{1}{2\pi}\int_{-1}^{1} f(x+i\sqrt{1-x^2})\, w_1(x)\, dx.$$

Useful Fact 6. Each $\varphi_n(z)$ has real coefficients (a proof is given in Freud [7]).
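Useful Fact 5 reduces integration over $\partial\mathbb D$ to a weighted integral over $[-1,1]$. As a numerical sanity check (my own sketch, assuming the substitution $x = \cos\theta$ over the upper half circle, with sample parameters $\alpha = 1$, $\beta = 2$ and $f \equiv 1$), the two sides $\frac{1}{2\pi}\int_0^\pi w(e^{i\theta})\,d\theta$ and $\frac{1}{2\pi}\int_{-1}^1 w_1(x)\,dx$ should agree:

```python
import numpy as np

alpha, beta = 1.0, 2.0

def trapezoid(y, t):
    """Trapezoidal quadrature of samples y over the grid t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# Left side: (1/2pi) int_0^pi w(e^{i theta}) d theta, w(e^{i theta}) = (1-cos)^a (1+cos)^b.
theta = np.linspace(0.0, np.pi, 100001)
lhs = trapezoid((1 - np.cos(theta))**alpha * (1 + np.cos(theta))**beta, theta) / (2 * np.pi)

# Right side: (1/2pi) int_{-1}^{1} w1(x) dx, w1(x) = (1-x)^{a-1/2} (1+x)^{b-1/2}.
x = np.linspace(-1.0, 1.0, 100001)
w1 = (1 - x)**(alpha - 0.5) * (1 + x)**(beta - 0.5)
rhs = trapezoid(w1, x) / (2 * np.pi)
```

For these parameters both integrals evaluate to $\tfrac14$ analytically, so the two quadratures should agree to high accuracy.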
2.4 Connection between Jacobi Polynomials on the Real Line
and on the Unit Circle
In his review of Simon's Orthogonal Polynomials on the Unit Circle, Nevai noted that “Szegő discovered that all OPRL systems living on a finite interval can be mapped to OPUC systems” [13]. The following theorem describes how the systems are related for Jacobi weights.
Theorem 8. (Szegő, [19]) Let $\{p_n(x)\}$, $\{q_n(x)\}$ be sets of real polynomials orthogonal with respect to $w_1(x)$, $w_2(x)$, respectively (recall $w_1(x) = (1-x)^{\alpha-1/2}(1+x)^{\beta-1/2}$ and $w_2(x) = (1-x)^{\alpha+1/2}(1+x)^{\beta+1/2}$). Then there exist constants $a_n$, $b_n$, $c_n$, $d_n$ such that