
Dymore User's Manual: Chebyshev Polynomials

Olivier A. Bauchau August 27, 2019

Contents

1 Definition
  1.1 Zeros and extrema
  1.2 Orthogonality relationships
  1.3 Derivatives of Chebyshev polynomials
  1.4 Integral of Chebyshev polynomials
  1.5 Products of Chebyshev polynomials

2 Chebyshev approximation of functions of a single variable
  2.1 Expansion of a function in Chebyshev polynomials
  2.2 Evaluation of Chebyshev expansions: Clenshaw's recurrence
  2.3 Derivatives and integrals of Chebyshev expansions
  2.4 Products of Chebyshev expansions
  2.5 Examples
  2.6 Clenshaw-Curtis quadrature

3 Chebyshev approximation of functions of two variables
  3.1 Expansion of a function in Chebyshev polynomials
  3.2 Evaluation of Chebyshev expansions: Clenshaw's recurrence
  3.3 Derivatives of Chebyshev expansions

4 Chebychev polynomials
  4.1 Examples

1 Definition

Chebyshev polynomials [1,2] form a series of orthogonal polynomials, which play an important role in the theory of approximation. The lowest-order polynomials are

T_0(x) = 1, \quad T_1(x) = x, \quad T_2(x) = 2x^2 - 1, \quad T_3(x) = 4x^3 - 3x, \quad T_4(x) = 8x^4 - 8x^2 + 1, \ldots \quad (1)

and are depicted in fig. 1. The polynomials can be generated from the following recurrence relationship

T_{n+1} = 2x\, T_n - T_{n-1}, \quad n \ge 1. \quad (2)


Figure 1: The seven lowest order Chebyshev polynomials

It is possible to give an explicit expression of Chebyshev polynomials as

Tn(x) = cos(n arccos x). (3)

This equation can be verified by using elementary trigonometric identities. For instance, it is clear that T_2 = \cos(2 \arccos x) = 2\cos^2(\arccos x) - 1 = 2x^2 - 1, as expected from eq. (1).
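As a quick numerical illustration (a minimal Python sketch, not part of the Dymore input language; the helper name chebyshev_T is arbitrary), the recurrence relationship (2) can be checked against the closed-form expression (3):

import numpy as np

def chebyshev_T(n, x):
    # Evaluate T_n(x) with the recurrence T_{n+1} = 2 x T_n - T_{n-1}, eq. (2).
    t_prev, t = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

x = np.linspace(-1.0, 1.0, 101)
for n in range(7):                     # the seven lowest order polynomials of fig. 1
    assert np.allclose(chebyshev_T(n, x), np.cos(n * np.arccos(x)))   # eq. (3)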

1.1 Zeros and extrema

It is now easy to verify that T_n(x) possesses n zeros within the interval x ∈ [−1, +1]: T_n(x) = \cos(n \arccos x) = 0 implies n \arccos x = (2k-1)\pi/2. Hence, the zeros of Chebyshev polynomial T_n(x) are

\bar{x}_k = \cos\frac{\pi(2k-1)}{2n}, \quad k = 1, 2, 3, \ldots, n. \quad (4)

For instance, since T_3 = x(4x^2 - 3), its zeros are \sqrt{3}/2, 0, and -\sqrt{3}/2, which can be written as \cos\pi/6 = \sqrt{3}/2, \cos 3\pi/6 = 0, and \cos 5\pi/6 = -\sqrt{3}/2. The value of Chebyshev polynomial T_i(x) at the zeros of T_n(x) is easily found from eq. (3) as

T_i(\bar{x}_k) = \cos\frac{i(2k-1)\pi}{2n}, \quad i < n. \quad (5)

It is also easy to find the extrema of a Chebyshev polynomial by imposing the vanishing of its derivative, dT_n/dx = 0. This leads to n \sin(n \arccos x)/\sqrt{1-x^2} = 0, or \sin(n \arccos x) = 0. The extrema of Chebyshev polynomial T_n(x) are

\hat{x}_k = \cos\frac{k\pi}{n}, \quad k = 0, 1, 2, 3, \ldots, n. \quad (6)

For instance, dT_4/dx = 16x(2x^2 - 1) = 0 leads to extrema \cos\pi/4 = \sqrt{2}/2, \cos\pi/2 = 0, and \cos 3\pi/4 = -\sqrt{2}/2. The additional extrema, \cos 0 = 1 and \cos\pi = -1, occur at the ends of the interval. The value of Chebyshev polynomial T_i(x) at the extrema of T_n(x) is easily found from eq. (3) as

T_i(\hat{x}_k) = \cos\frac{ik\pi}{n}, \quad i < n. \quad (7)

1.2 Orthogonality relationships

Chebyshev polynomials are orthogonal within the interval x ∈ [−1, +1] with a weight of (1-x^2)^{-1/2}, i.e.,

\int_{-1}^{+1} \frac{T_i(x) T_j(x)}{\sqrt{1-x^2}}\, dx = \begin{cases} 0 & i \ne j, \\ \pi/2 & i = j \ne 0, \\ \pi & i = j = 0. \end{cases} \quad (8)

In addition to the orthogonality property defined by eq. (8), Chebyshev polynomials also enjoy the following discrete orthogonality relationship

\sum_{k=1}^{n} T_i(\bar{x}_k) T_j(\bar{x}_k) = \begin{cases} 0 & i \ne j, \\ n/2 & i = j \ne 0, \\ n & i = j = 0, \end{cases} \quad (9)

where \bar{x}_k, k = 1, 2, 3, \ldots, n, are the zeros of T_n as given by eq. (4), and i, j < n. To prove this orthogonality relationship, trigonometric identities are used:

\sum_{k=1}^{n} T_i(\bar{x}_k) T_j(\bar{x}_k) = \sum_{k=1}^{n} \cos\frac{i(2k-1)\pi}{2n} \cos\frac{j(2k-1)\pi}{2n}
= \frac{1}{2} \sum_{k=0}^{n-1} \left[ \cos\frac{(i+j)(2k+1)\pi}{2n} + \cos\frac{(i-j)(2k+1)\pi}{2n} \right]
= \frac{1}{2} \left[ \frac{\sin\left(\frac{n}{2}\frac{(i+j)\pi}{n}\right) \cos\left(\frac{(i+j)\pi}{2n} + (n-1)\frac{(i+j)\pi}{2n}\right)}{\sin\frac{(i+j)\pi}{2n}} + \frac{\sin\left(\frac{n}{2}\frac{(i-j)\pi}{n}\right) \cos\left(\frac{(i-j)\pi}{2n} + (n-1)\frac{(i-j)\pi}{2n}\right)}{\sin\frac{(i-j)\pi}{2n}} \right]
= \frac{1}{2} \left[ \frac{\sin\frac{(i+j)\pi}{2} \cos\frac{(i+j)\pi}{2}}{\sin\frac{(i+j)\pi}{2n}} + \frac{\sin\frac{(i-j)\pi}{2} \cos\frac{(i-j)\pi}{2}}{\sin\frac{(i-j)\pi}{2n}} \right] = 0.

The trigonometric identity, eq. (12), was used to eliminate the summation; the last equality results from the fact that \sin\frac{(i+j)\pi}{2} \cos\frac{(i+j)\pi}{2} = \frac{1}{2}\sin(i+j)\pi = 0, and similarly for the (i-j) term. If i = j \ne 0, or i = j = 0, similar developments yield the discrete orthogonality given by eq. (9).

Chebyshev polynomials also enjoy an additional discrete orthogonality relationship

{\sum_{k=0}^{n}}'' \; T_i(\hat{x}_k) T_j(\hat{x}_k) = \begin{cases} 0 & i \ne j, \\ n/2 & i = j \ne 0, \\ n & i = j = 0, \end{cases} \quad (10)

where \hat{x}_k, k = 0, 1, 2, \ldots, n, are the extrema of T_n as given by eq. (6), and i, j < n. The double prime after the summation sign indicates that the first and last terms of the summation must be halved. To prove this orthogonality relationship, trigonometric identities are used:

\sum_{k=0}^{n} T_i(\hat{x}_k) T_j(\hat{x}_k) = \sum_{k=0}^{n} \cos\frac{ik\pi}{n} \cos\frac{jk\pi}{n}
= \frac{1}{2} \sum_{k=0}^{n} \left[ \cos k\frac{(i+j)\pi}{n} + \cos k\frac{(i-j)\pi}{n} \right]
= \frac{1}{2} \left[ \frac{\sin\left(\frac{n+1}{2}\frac{(i+j)\pi}{n}\right) \cos\left(\frac{n}{2}\frac{(i+j)\pi}{n}\right)}{\sin\frac{(i+j)\pi}{2n}} + \frac{\sin\left(\frac{n+1}{2}\frac{(i-j)\pi}{n}\right) \cos\left(\frac{n}{2}\frac{(i-j)\pi}{n}\right)}{\sin\frac{(i-j)\pi}{2n}} \right]
= \frac{1}{4} \left[ \frac{\sin\frac{(2n+1)(i+j)\pi}{2n} + \sin\frac{(i+j)\pi}{2n}}{\sin\frac{(i+j)\pi}{2n}} + \frac{\sin\frac{(2n+1)(i-j)\pi}{2n} + \sin\frac{(i-j)\pi}{2n}}{\sin\frac{(i-j)\pi}{2n}} \right]
= \frac{1}{4} \left[ 2 + \frac{\sin\left[(i+j)\pi + \frac{(i+j)\pi}{2n}\right]}{\sin\frac{(i+j)\pi}{2n}} + \frac{\sin\left[(i-j)\pi + \frac{(i-j)\pi}{2n}\right]}{\sin\frac{(i-j)\pi}{2n}} \right]
= \frac{1}{4} \left[ 2 + \cos(i+j)\pi + \cos(i-j)\pi \right] = \frac{1}{2} \left[ 1 + \cos i\pi \cos j\pi \right].

The first term in the last bracket is the term of the sum corresponding to k = 0, whereas the second term in the last bracket is that corresponding to k = n. Bringing these two terms to the left-hand side is identical to replacing the summation sign, \sum, by {\sum}''. Here again, the trigonometric identity, eq. (12), was used to eliminate the summation. If i = j \ne 0, or i = j = 0, similar developments yield the discrete orthogonality given by eq. (10).

The following trigonometric identities were used in the derivation of the above discrete orthogonality relationships:

\sin a + \sin(a+b) + \cdots + \sin(a+nb) = \sum_{k=0}^{n} \sin(a+kb) = \frac{\sin\frac{(n+1)b}{2} \sin\left(a + \frac{nb}{2}\right)}{\sin\frac{b}{2}}, \quad (11)

\cos a + \cos(a+b) + \cdots + \cos(a+nb) = \sum_{k=0}^{n} \cos(a+kb) = \frac{\sin\frac{(n+1)b}{2} \cos\left(a + \frac{nb}{2}\right)}{\sin\frac{b}{2}}. \quad (12)

1.3 Derivatives of Chebyshev polynomials

The following expression for the derivatives of Chebyshev polynomials,

T_n' = \begin{cases} 2n \left[ T_{n-1} + T_{n-3} + \cdots + T_1 \right] & n \text{ even}, \\ 2n \left[ T_{n-1} + T_{n-3} + \cdots + T_2 \right] + n T_0 & n \text{ odd}, \end{cases} \quad (13)

where the notation (·)' indicates a derivative with respect to x, can be proved by mathematical induction. Indeed, it is verified for the lowest polynomials: T_1' = T_0, T_2' = 2 \times 2\, T_1, T_3' = 2 \times 3\, T_2 + 3 T_0, T_4' = 2 \times 4\, (T_3 + T_1), etc. It then remains to prove that if it is correct for n, it is still correct for n + 1. Taking a derivative of the basic recurrence for Chebyshev polynomials, eq. (2), leads to T_{n+1}' = 2x\, T_n' + 2 T_n - T_{n-1}'. Introducing eq. (13) into this recurrence, it is then easy to show that eq. (13) holds for n + 1.

1.4 Integral of Chebyshev polynomials

The following recurrence relationship is easy to prove,

2\, T_n(x) = \frac{T_{n+1}'}{n+1} - \frac{T_{n-1}'}{n-1}, \quad (14)

with the help of eq. (13). It then follows that

\int_{-1}^{+1} 2\, T_n(x)\, dx = \begin{cases} \dfrac{2}{n+1} - \dfrac{2}{n-1} & n \text{ even}, \\ 0 & n \text{ odd}. \end{cases} \quad (15)

These two equations are easily combined to yield

\int_{-1}^{+1} T_{2n}(x)\, dx = -\frac{2}{4n^2 - 1}. \quad (16)
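The integral formula (16) can be spot-checked numerically. The sketch below is an illustration only, assuming that a fine trapezoidal rule is accurate enough for the purpose:

import numpy as np

# Spot-check of eq. (16): the integral of T_{2n} over [-1, +1] equals -2/(4 n^2 - 1).
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
for n in range(1, 5):
    T2n = np.cos(2 * n * np.arccos(x))                     # eq. (3)
    integral = np.sum(0.5 * (T2n[1:] + T2n[:-1])) * dx     # trapezoidal rule
    assert abs(integral + 2.0 / (4.0 * n * n - 1.0)) < 1e-6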

1.5 Products of Chebyshev polynomials

The product of two Chebyshev polynomials satisfies the following relationship

2Tn(x)Tm(x) = Tn+m(x) + Tn−m(x), n ≥ m. (17)

This relationship is an identity for m = 0; indeed, since T_0 = 1, it then follows that 2 T_n T_0 = T_n + T_n. Multiplying both sides of this equation by 2x yields 2 T_n (2x T_0) = 2 (2x T_n). Applying the basic recurrence relationship for Chebyshev polynomials, eq. (2), to the two terms in parentheses then yields 2 T_n (2 T_1) = 2 (T_{n+1} + T_{n-1}), or 2 T_n T_1 = T_{n+1} + T_{n-1}, which proves eq. (17) for m = 1. Since eq. (17) is true for m = 0 and 1, it now suffices to prove that if it holds for m, it is also true for m + 1. Multiplying both sides of this equation by 2x yields 2 T_n (2x T_m) = (2x T_{n+m}) + (2x T_{n-m}). Applying the basic recurrence relationship for Chebyshev polynomials, eq. (2), to the three terms in parentheses then yields 2 T_n (T_{m+1} + T_{m-1}) = (T_{n+m+1} + T_{n+m-1}) + (T_{n-m+1} + T_{n-m-1}), and finally,

2 T_n T_{m+1} = (T_{n+m+1} + T_{n+m-1}) + (T_{n-m+1} + T_{n-m-1}) - 2 T_n T_{m-1} = (T_{n+m+1} + T_{n+m-1}) + (T_{n-m+1} + T_{n-m-1}) - (T_{n+m-1} + T_{n-m+1}) = T_{n+m+1} + T_{n-m-1}.

2 Chebyshev approximation of functions of a single variable

2.1 Expansion of a function in Chebyshev polynomials

A function f(x) can be approximated in terms of Chebyshev polynomials as

f(x) \approx \sum_{i=0}^{N-1} c_i T_i(x), \quad (18)

where c_i are the coefficients of the expansion. N is the number of coefficients in the expansion, whereas N - 1 is the order of the expansion, i.e., the highest order polynomial in the expansion. To find these coefficients given function f(x), the above relationship is expressed at x = \bar{x}_k, where \bar{x}_k are the zeros of T_N(x), as given by eq. (4). This yields f(\bar{x}_k) = \sum_{i=0}^{N-1} c_i T_i(\bar{x}_k). Multiplying both sides of this equation by T_j(\bar{x}_k) and summing the resulting equations expressed at all zeros of T_N(x) leads to

\sum_{k=1}^{N} f(\bar{x}_k) T_j(\bar{x}_k) = \sum_{i=0}^{N-1} c_i \left[ \sum_{k=1}^{N} T_i(\bar{x}_k) T_j(\bar{x}_k) \right]. \quad (19)

The discrete orthogonality relationship of Chebyshev polynomials, eq. (9), now implies

c_0 = \frac{1}{N} \sum_{k=1}^{N} f(\bar{x}_k), \quad (20a)

c_i = \frac{2}{N} \sum_{k=1}^{N} f(\bar{x}_k) T_i(\bar{x}_k), \quad (20b)

where T_i(\bar{x}_k) is given by eq. (5).

The coefficients of the Chebyshev expansion can be obtained in an alternative manner. Relationship (18) is expressed at x = \hat{x}_k, where \hat{x}_k are the extrema of T_N(x), as given by eq. (6). This yields f(\hat{x}_k) = \sum_{i=0}^{N-1} c_i T_i(\hat{x}_k). Multiplying both sides of this equation by T_j(\hat{x}_k) and summing the resulting equations expressed at all extrema of T_N(x) leads to

{\sum_{k=0}^{N}}'' \; f(\hat{x}_k) T_j(\hat{x}_k) = \sum_{i=0}^{N-1} c_i \left[ {\sum_{k=0}^{N}}'' \; T_i(\hat{x}_k) T_j(\hat{x}_k) \right]. \quad (21)

In view of the discrete orthogonality relationship of Chebyshev polynomials, eq. (10), it then follows that

c_0 = \frac{1}{N} {\sum_{k=0}^{N}}'' \; f(\hat{x}_k), \quad (22a)

c_i = \frac{2}{N} {\sum_{k=0}^{N}}'' \; f(\hat{x}_k) T_i(\hat{x}_k), \quad (22b)

where T_i(\hat{x}_k) is given by eq. (7). Note that, as required by the discrete orthogonality relationship of Chebyshev polynomials, eq. (10), the double prime after the summation sign indicates that the first and last terms of the summation must be halved.
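The sketch below illustrates eqs. (20a) and (20b) in Python; it is not Dymore's implementation, and the helper name chebyshev_fit is arbitrary. The function is assumed to be defined on [-1, +1]; for another interval the argument must first be mapped, as in eq. (50) of section 4.

import numpy as np

def chebyshev_fit(f, N):
    # Coefficients c_0 ... c_{N-1} of eq. (18), sampled at the zeros of T_N.
    k = np.arange(1, N + 1)
    xbar = np.cos(np.pi * (2.0 * k - 1.0) / (2.0 * N))          # zeros of T_N, eq. (4)
    fvals = f(xbar)
    c = np.zeros(N)
    for i in range(N):
        Ti = np.cos(i * np.pi * (2.0 * k - 1.0) / (2.0 * N))    # T_i at the zeros, eq. (5)
        c[i] = (2.0 / N) * np.sum(fvals * Ti)                   # eq. (20b)
    c[0] *= 0.5                                                 # eq. (20a) carries 1/N, not 2/N
    return c

# Mapping [-1, +1] to [0, pi/2] should reproduce, approximately, the coefficients of section 2.5.
c = chebyshev_fit(lambda x: np.sin(np.pi / 4.0 * (x + 1.0)), 12)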

2.2 Evaluation of Chebyshev expansions: Clenshaw's recurrence

On the other hand, if the coefficients of the Chebyshev expansion are known, the function can then be computed using eq. (18). However, rather than computing the polynomials and then summing all contributions, it is preferable to use the recurrence relationship, eq. (2), to find

f(x) = c_0 T_0 + c_1 T_1 + \cdots + c_{N-2} T_{N-2} + c_{N-1} T_{N-1}
     = c_0 T_0 + c_1 T_1 + \cdots + c_{N-2} T_{N-2} + y_{N-1} T_{N-1}
     = c_0 T_0 + c_1 T_1 + \cdots + (c_{N-3} - y_{N-1}) T_{N-3} + (c_{N-2} + 2x\, y_{N-1}) T_{N-2}
     = c_0 T_0 + c_1 T_1 + \cdots + (c_{N-3} - y_{N-1}) T_{N-3} + y_{N-2} T_{N-2}
     = c_0 T_0 + c_1 T_1 + \cdots + (c_{N-4} - y_{N-2}) T_{N-4} + (c_{N-3} - y_{N-1} + 2x\, y_{N-2}) T_{N-3}
     = c_0 T_0 + c_1 T_1 + \cdots + (c_{N-4} - y_{N-2}) T_{N-4} + y_{N-3} T_{N-3}
     = \cdots
     = (c_0 - y_2) T_0 + y_1 T_1.

The following quantities have been defined

y_{N+1} = 0, \quad (23a)

y_N = 0, \quad (23b)

y_{N-1} = c_{N-1} - y_{N+1} + 2x\, y_N, \quad (23c)

\quad \vdots

y_1 = c_1 - y_3 + 2x\, y_2, \quad (23d)

y_0 = c_0 - y_2 + 2x\, y_1. \quad (23e)

The value of the function now simply becomes

f(x) = (c0 − y2) + y1 x = y0 − x y1. (24)

This approach to the evaluation of functions expressed in Chebyshev series is known as Clenshaw's recurrence. It provides a numerically stable way to evaluate Chebyshev series.
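A compact Python sketch of Clenshaw's recurrence, eqs. (23a) to (24), is given below; the function name clenshaw is arbitrary and the coefficients are assumed ordered c_0, c_1, ..., c_{N-1}.

def clenshaw(c, x):
    # Evaluate f(x) = sum_i c_i T_i(x) using eqs. (23a) to (23e) and (24).
    y2 = y1 = 0.0                      # y_{N+1} = y_N = 0, eqs. (23a)-(23b)
    for ci in c[:0:-1]:                # i = N-1, ..., 1
        y2, y1 = y1, ci - y2 + 2.0 * x * y1
    return c[0] - y2 + x * y1          # eq. (24)

For instance, with the coefficient array computed in the sketch of section 2.1, clenshaw(c, 0.3) evaluates the approximation at x = 0.3.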

2.3 Derivatives and integrals of Chebyshev expansions

Consider now a function and its derivative, both expanded in Chebyshev series,

f(x) = \sum_{i=0}^{N-1} c_i T_i(x), \quad \text{and} \quad f'(x) = \sum_{i=0}^{N-2} c_i' T_i(x), \quad (25)

where the notation (·)' indicates a derivative with respect to x. What is the relationship between the coefficients of the two expansions, c_i and c_i'? Using the formula for the derivatives of Chebyshev polynomials, eq. (13), the following recurrence is found

c_N' = 0, \quad (26a)

c_{N-1}' = 0, \quad (26b)

c_{N-2}' = 2 \times (N-1)\, c_{N-1} + c_N', \quad (26c)

\quad \vdots

c_1' = 2 \times 2\, c_2 + c_3', \quad (26d)

c_0' = (2 \times 1\, c_1 + c_2')/2. \quad (26e)

Consider finally a function and its integral, both expanded in Chebyshev series

f'(x) = \sum_{i=0}^{N-1} c_i' T_i(x), \quad \text{and} \quad f(x) = \sum_{i=0}^{N} c_i T_i(x). \quad (27)

What is the relationship between the coefficients of the two expansions, c_i' and c_i? In view of the relationship established above, it is clear that

c_1 = \frac{2 c_0' - c_2'}{2}, \quad (28a)

c_i = \frac{c_{i-1}' - c_{i+1}'}{2i}, \quad i = 2, 3, \ldots, N. \quad (28b)

Of course, c0 is the integration constant that can be selected arbitrarily.
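The recurrences (26) and (28) translate directly into code. The sketch below is illustrative only (the function names are arbitrary), and the missing coefficients c'_N and c'_{N+1} of eq. (28b) are treated as zero.

import numpy as np

def chebyshev_derivative(c):
    # Coefficients c'_0 ... c'_{N-2} of the derivative expansion, eqs. (26a)-(26e).
    N = len(c)
    cp = np.zeros(N + 1)                       # cp[N] = cp[N-1] = 0, eqs. (26a)-(26b)
    for i in range(N - 1, 0, -1):
        cp[i - 1] = cp[i + 1] + 2.0 * i * c[i]
    cp[0] *= 0.5                               # eq. (26e)
    return cp[:N - 1]

def chebyshev_integral(cp, c0=0.0):
    # Coefficients c_0 ... c_N of an antiderivative, eqs. (28a)-(28b); c0 is the free constant.
    n = len(cp)
    cpp = np.concatenate([cp, [0.0, 0.0]])     # missing c'_n, c'_{n+1} treated as zero
    c = np.zeros(n + 1)
    c[0] = c0
    c[1] = (2.0 * cpp[0] - cpp[2]) / 2.0       # eq. (28a)
    for i in range(2, n + 1):
        c[i] = (cpp[i - 1] - cpp[i + 1]) / (2.0 * i)   # eq. (28b)
    return c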

2.4 Products of Chebyshev expansions

Let function h(x) be the product of two functions, f(x) and g(x), such that h(x) = f(x) g(x). It is assumed that the Chebyshev expansions of functions f(x) and g(x) are known and that function h(x) is to be expanded in Chebyshev series, i.e.,

\sum_{k=0}^{N+M-2} c_k T_k(x) = \left[ \sum_{i=0}^{N-1} a_i T_i(x) \right] \left[ \sum_{j=0}^{M-1} b_j T_j(x) \right], \quad (29)

where a_i, i = 0, 1, \ldots, N-1, and b_j, j = 0, 1, \ldots, M-1, are the known coefficients of the Chebyshev expansions of functions f(x) and g(x), respectively, and c_k, k = 0, 1, \ldots, N+M-2, the unknown coefficients of the expansion of h(x). With the help of identity (17), eq. (29) becomes

\sum_{k=0}^{N+M-2} 2 c_k T_k = \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} a_i b_j\, 2 T_i T_j = \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} a_i b_j \left[ T_{i+j} + T_{i-j} \right]
= \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} a_i b_j T_{i+j} + \sum_{i=0}^{N-1} \sum_{j=0}^{i} a_i b_j T_{i-j} + \sum_{i=0}^{N-1} \sum_{j=i+1}^{M-1} a_i b_j T_{j-i}.

Identification of the coefficients of the Chebyshev polynomials of the same order then yields the desired coefficients

2 c_0 = a_0 b_0 + \sum_{p=0}^{\min[(N-1),(M-1)]} a_p b_p, \quad (30a)

2 c_k = \sum_{p=\ell_1}^{u_1} a_p b_{k-p} + \sum_{p=k}^{u_2} a_p b_{p-k} + \sum_{p=0}^{u_3} a_p b_{p+k}, \quad k = 1, 2, \ldots, N+M-2, \quad (30b)

where the bounds on the three summations are \ell_1 = \max[0, k-(M-1)], u_1 = \min[(N-1), k], u_2 = \min[(N-1), (M-1)+k], and u_3 = \min[(N-1), (M-1)-k], respectively.
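Rather than coding the summation bounds of eqs. (30a)-(30b) directly, an equivalent sketch can accumulate the products term by term from identity (17); the function name is arbitrary and the coefficient arrays are assumed ordered from c_0 upward.

import numpy as np

def chebyshev_product(a, b):
    # Coefficients of h = f g from those of f and g, using 2 T_i T_j = T_{i+j} + T_{|i-j|}, eq. (17).
    c = np.zeros(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += 0.5 * ai * bj
            c[abs(i - j)] += 0.5 * ai * bj
    return c

For instance, chebyshev_product([0.0, 1.0], [0.0, 1.0]) returns [0.5, 0.0, 0.5], i.e., x · x = (T_0 + T_2)/2.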

2.5 Examples

To illustrate the application of Chebyshev expansions, the following function will be approximated by Chebyshev polynomials

f(x) = \sin x, \quad x \in [0, \pi/2]. \quad (31)

Using the algorithm presented in section 2.1 for N = 12, the coefficients of the Chebyshev approximation were found to be c_0 = 6.0219 \times 10^{-1}, c_1 = 5.1363 \times 10^{-1}, c_2 = -1.0355 \times 10^{-1}, c_3 = -1.3732 \times 10^{-2}, c_4 = 1.3587 \times 10^{-3}, c_5 = 1.0726 \times 10^{-4}, c_6 = -7.0463 \times 10^{-6}, c_7 = -3.9639 \times 10^{-7}, c_8 = 1.9500 \times 10^{-8}, c_9 = 8.5229 \times 10^{-10}, c_{10} = -3.3516 \times 10^{-11}, and c_{11} = -1.1990 \times 10^{-12}. Note the rapid decay in the magnitudes of the coefficients. Figure 2 shows the exact sine function, its Chebyshev approximation, and the error incurred by the approximation. Note that the error is spread over the entire range of the approximation in a nearly uniform manner. This is due to the fact that the extrema of Chebyshev polynomials are distributed over the entire range of the approximation and have alternating values of plus or minus unity. These characteristics make Chebyshev polynomials an ideal basis for approximating functions.

Next, the sine function will be approximated using N = 3, i.e., only the terms c_0 to c_2 are retained in the expansion. Figure 2 also shows the results of this crude approximation. Note that the error is nearly evenly distributed over the approximation range and that its magnitude can be estimated by looking at the magnitude of the first neglected term of the expansion: |c_3| = 1.3732 \times 10^{-2}. The results for an approximation including 5 terms, i.e., N = 5, are also presented in fig. 2. Here again, the error is nearly evenly distributed over the approximation range and its magnitude can be estimated by looking at the magnitude of the first neglected term of the expansion: |c_5| = 1.0726 \times 10^{-4}.

Finally, the algorithm presented in section 2.3 to evaluate the coefficients of the Chebyshev expansion of the derivative of the function was used to compute the coefficients of the expansion of f'(x) = \cos x. The following coefficients were found: c_0' = 6.0219 \times 10^{-1}, c_1' = -5.1363 \times 10^{-1}, c_2' = -1.0355 \times 10^{-1}, c_3' = 1.3732 \times 10^{-2}, c_4' = 1.3587 \times 10^{-3}, c_5' = -1.0726 \times 10^{-4}, c_6' = -7.0463 \times 10^{-6}, c_7' = 3.9639 \times 10^{-7}, c_8' = 1.9500 \times 10^{-8}, c_9' = -8.5349 \times 10^{-10}, and c_{10}' = -3.3586 \times 10^{-11}. Figure 3 shows the exact cosine function and its Chebyshev approximation, together with the error incurred by the approximation for N = 10. Note that the error is closely estimated by the magnitude of the first neglected term of the expansion: |c_{10}'| = 3.3586 \times 10^{-11}.


Figure 2: Chebyshev polynomial expansion of function f(x) = sin x, x ∈ [0, π/2]. Function f(x): solid line; Chebyshev expansion: circles. Figures on the left represent the function and its approximation; figures on the right show the error associated with the approximation. Top figures: N = 3; middle figures: N = 5; bottom figures: N = 12.

2.6 Clenshaw-Curtis quadrature

Consider the problem of evaluating the integral \int_a^b f(x)\, dx. To that effect, the function is first expanded in terms of Chebyshev polynomials,

\int_a^b f(x)\, dx \approx \int_a^b \sum_{i=0}^{2n} c_i T_i(x)\, dx = \sum_{i=0}^{2n} c_i \int_a^b T_i(x)\, dx = \frac{b-a}{2} \sum_{i=0}^{2n} c_i \int_{-1}^{+1} T_i(x)\, dx, \quad (32)

where the coefficients c_i are found from eqs. (20a) and (20b) or eqs. (22a) and (22b). The integrals of the Chebyshev polynomials are evaluated by eq. (16) to find

\int_a^b f(x)\, dx \approx (b-a) \left[ c_0 - \frac{c_2}{3} - \frac{c_4}{15} - \cdots - \frac{c_{2n}}{4n^2-1} \right]. \quad (33)


Figure 3: Chebyshev polynomial expansion of the derivative of function f(x) = sin x, x ∈ [0, π/2]. Function f'(x): solid line; Chebyshev expansion for N = 12: circles. Figure on the left represents the function and its approximation; figure on the right shows the error associated with the approximation.

The error, e, in the evaluation of the integral can be estimated from the last term in the series,

e \approx (b-a) \frac{|c_{2n}|}{4n^2-1}. \quad (34)

Assume that, for a given value of n, the estimate of the integral given by eq. (33) does not satisfy a desired error criterion, say e < \epsilon, where the error estimate, e, is given by eq. (34) and \epsilon is a small number. A larger number of Chebyshev polynomials must then be used to approximate the function, in an attempt to meet the accuracy criterion.

If the Chebyshev expansion of the function is performed based on the zeros of Chebyshev polynomials using eqs. (20a) and (20b), the abscissa at which the function will be evaluated are shown in fig. 4 for n = 2^k, k = 1, 2, 3, 4, 5. Note that for k = 2, the four abscissa all differ from those for k = 1. In fact, for two arbitrary but different values of k, all abscissa are different. On the other hand, if the Chebyshev expansion of the function is performed based on the extrema of Chebyshev polynomials using eqs. (22a) and (22b), the abscissa at which the function will be evaluated are shown in fig. 5 for n = 2^k + 1, k = 1, 2, 3, 4, 5. When using the extrema, the abscissa for n = 2^k + 1 are a subset of the abscissa for n = 2^\ell + 1, where \ell > k. For instance, when k = 2, the n = 2^2 + 1 = 5 abscissa are a subset of the n = 2^3 + 1 = 9 abscissa corresponding to k = 3. Those five abscissa are represented by black circles in figure 5.

The Clenshaw-Curtis quadrature scheme based on the extrema of Chebyshev polynomials is well suited for adaptive integration. At stage k of the procedure, the function to be integrated is evaluated at n = 2^k + 1 abscissa, and its Chebyshev expansion is computed using eqs. (22a) and (22b). Equations (33) and (34) then yield an estimate of the integral and of the error, respectively.


Figure 4: Abscissa of the Clenshaw-Curtis integration scheme using the zeros of Chebyshev polynomials, for n = 1, 2, 3, 4, 5.

Figure 5: Abscissa of the Clenshaw-Curtis integration scheme using the extrema of Chebyshev polynomials, for n = 1, 2, 3, 4, 5.

If the error satisfies the accuracy criterion, the process stops. If not, stage k + 1 starts and the function is evaluated at n = 2^{k+1} + 1 abscissa to evaluate its new Chebyshev expansion. Of these 2^{k+1} + 1 evaluations, however, 2^k + 1 were already performed at stage k, leaving 2^k new function evaluations to be performed for stage k + 1. At every stage of the computation, all function evaluations are used to obtain the new Chebyshev expansion of the function. The process stops when the accuracy criterion is met.
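The adaptive procedure described above can be sketched as follows. This is an illustration under stated assumptions, not Dymore's implementation: for clarity it recomputes all function values at each stage instead of reusing the 2^k + 1 values already available, and the helper names are arbitrary.

import numpy as np

def chebyshev_coefficients_extrema(f, a, b, m):
    # Coefficients c_0 ... c_m from the m+1 extrema of T_m mapped to [a, b], eqs. (22a)-(22b).
    k = np.arange(m + 1)
    xhat = np.cos(np.pi * k / m)                    # extrema of T_m, eq. (6)
    s = 0.5 * (b - a) * xhat + 0.5 * (b + a)        # map [-1, +1] to [a, b]
    fvals = f(s)
    w = np.ones(m + 1)
    w[0] = w[-1] = 0.5                              # the double prime: first/last terms halved
    c = np.array([(2.0 / m) * np.sum(w * fvals * np.cos(np.pi * i * k / m))
                  for i in range(m + 1)])
    c[0] *= 0.5                                     # eq. (22a) carries 1/N
    c[-1] *= 0.5                                    # i = m term halved (boundary case of eq. (10))
    return c

def clenshaw_curtis(f, a, b, tol=1.0e-10, kmax=16):
    # Adaptive Clenshaw-Curtis quadrature based on eqs. (33) and (34).
    for k in range(2, kmax + 1):
        m = 2 ** k                                  # m + 1 = 2^k + 1 abscissa
        c = chebyshev_coefficients_extrema(f, a, b, m)
        j = np.arange(1, m // 2 + 1)
        integral = (b - a) * (c[0] - np.sum(c[2 * j] / (4.0 * j ** 2 - 1.0)))   # eq. (33)
        error = (b - a) * abs(c[m]) / (float(m) ** 2 - 1.0)                     # eq. (34), 2n = m
        if error < tol:
            break
    return integral, error

For instance, clenshaw_curtis(np.sin, 0.0, np.pi / 2.0) should return a value close to 1.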

3 Chebyshev approximation of functions of two variables

3.1 Expansion of a function in Chebyshev polynomials

Section 2 describes the expansion of arbitrary functions of a single variable in series of Chebyshev polynomials. Clearly, functions of two variables can be similarly expanded in double series of Chebyshev polynomials

f(x, y) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} c_{ij} T_i(x) T_j(y). \quad (35)

To find these coefficients given function f(x, y), the above relationship is expressed at x = \bar{x}_k, y = \bar{y}_\ell, where \bar{x}_k and \bar{y}_\ell are the zeros of T_M(x) and T_N(y), respectively, as given by eq. (4). This yields f(\bar{x}_k, \bar{y}_\ell) \approx \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} c_{ij} T_i(\bar{x}_k) T_j(\bar{y}_\ell). Multiplying both sides of this equation by T_p(\bar{x}_k) T_q(\bar{y}_\ell) and summing the resulting equations expressed at all zeros of T_M(x) and T_N(y) leads to

\sum_{k=1}^{M} \sum_{\ell=1}^{N} f(\bar{x}_k, \bar{y}_\ell) T_p(\bar{x}_k) T_q(\bar{y}_\ell) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} c_{ij} \left[ \sum_{k=1}^{M} T_i(\bar{x}_k) T_p(\bar{x}_k) \right] \left[ \sum_{\ell=1}^{N} T_j(\bar{y}_\ell) T_q(\bar{y}_\ell) \right]. \quad (36)

In view of the discrete orthogonality relationship of Chebyshev polynomials, eq. (9), it then follows that

c_{00} = \frac{1}{MN} \sum_{k=1}^{M} \sum_{\ell=1}^{N} f(\bar{x}_k, \bar{y}_\ell), \quad (37a)

c_{i0} = \frac{2}{MN} \sum_{k=1}^{M} \sum_{\ell=1}^{N} f(\bar{x}_k, \bar{y}_\ell) T_i(\bar{x}_k), \quad (37b)

c_{0j} = \frac{2}{MN} \sum_{k=1}^{M} \sum_{\ell=1}^{N} f(\bar{x}_k, \bar{y}_\ell) T_j(\bar{y}_\ell), \quad (37c)

c_{ij} = \frac{4}{MN} \sum_{k=1}^{M} \sum_{\ell=1}^{N} f(\bar{x}_k, \bar{y}_\ell) T_i(\bar{x}_k) T_j(\bar{y}_\ell). \quad (37d)

In some cases, function f(x, y) is partially expanded in Chebyshev series. For instance, the function dependency on the y variable is in the form of a Chebyshev expansion, whereas its dependency on the x variable is not, i.e.,

f(x, y) = \sum_{j=0}^{N-1} g_j(x) T_j(y). \quad (38)

The coefficients of the complete Chebyshev expansion are found by introducing the above expression into eqs. (37a) to (37d) to find

c_{0j} = \frac{1}{M} \sum_{k=1}^{M} g_j(\bar{x}_k), \quad (39a)

c_{ij} = \frac{2}{M} \sum_{k=1}^{M} g_j(\bar{x}_k) T_i(\bar{x}_k). \quad (39b)
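A tensor-product sketch of eqs. (37a) to (37d) in Python is shown below; it is illustrative only (the function name is arbitrary), assumes f accepts array arguments with broadcasting, and assumes the function is defined on [-1, +1] x [-1, +1].

import numpy as np

def chebyshev_fit_2d(f, M, N):
    # Coefficients c_ij of eq. (35), sampled at the zeros of T_M and T_N.
    kx = np.arange(1, M + 1)
    ky = np.arange(1, N + 1)
    ax = np.pi * (2.0 * kx - 1.0) / (2.0 * M)        # angles of the zeros of T_M, eq. (4)
    ay = np.pi * (2.0 * ky - 1.0) / (2.0 * N)        # angles of the zeros of T_N, eq. (4)
    F = f(np.cos(ax)[:, None], np.cos(ay)[None, :])  # f at the grid of zeros
    Tx = np.cos(np.outer(np.arange(M), ax))          # T_i(xbar_k)
    Ty = np.cos(np.outer(np.arange(N), ay))          # T_j(ybar_l)
    c = 4.0 / (M * N) * Tx @ F @ Ty.T                # eq. (37d)
    c[0, :] *= 0.5                                   # eq. (37c)
    c[:, 0] *= 0.5                                   # eq. (37b); c_00 halved twice, eq. (37a)
    return c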

3.2 Evaluation of Chebyshev expansions: Clenshaw's recurrence

If the coefficients of the two-dimensional Chebyshev expansion are known, the function can be evaluated using eq. (35). However, here again, rather than computing the polynomials and then summing all contributions, it is preferable to use Clenshaw's recurrence defined by eq. (24). To that effect, eq. (35) is rewritten as

f(x, y) = \sum_{i=0}^{M-1} \left[ \sum_{j=0}^{N-1} c_{ij} T_j(y) \right] T_i(x) = \sum_{i=0}^{M-1} d_i(y) T_i(x). \quad (40)

Clenshaw's recurrence, eq. (24), is first used M times to compute the coefficients d_i, i = 0, 1, \ldots, M-1. Finally, one more application of Clenshaw's recurrence yields the desired value of the function. Of course, it is also possible to recast eq. (35) as

f(x, y) = \sum_{j=0}^{N-1} \left[ \sum_{i=0}^{M-1} c_{ij} T_i(x) \right] T_j(y) = \sum_{j=0}^{N-1} g_j(x) T_j(y). \quad (41)

At first, N applications of Clenshaw's recurrence yield the coefficients g_j, j = 0, 1, \ldots, N-1, and one additional step yields the desired function value. Using this second option, Clenshaw's recurrence, characterized by eqs. (23a) to (23e), is rewritten as

y_{M+1,j} = 0, \quad (42a)

y_{M,j} = 0, \quad (42b)

y_{M-1,j} = c_{M-1,j} - y_{M+1,j} + 2x\, y_{M,j}, \quad (42c)

\quad \vdots

y_{1,j} = c_{1,j} - y_{3,j} + 2x\, y_{2,j}, \quad (42d)

y_{0,j} = c_{0,j} - y_{2,j} + 2x\, y_{1,j}. \quad (42e)

The coefficients, gj(x), now simply become

g_j(x) = (c_{0,j} - y_{2,j}) + y_{1,j}\, x = y_{0,j} - x\, y_{1,j}. \quad (43)

Clenshaw’s recurrence applied to the coefficients gj(x) then yields the desired function value.
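Following the second option of eqs. (41) to (43), the two-dimensional evaluation reduces to repeated one-dimensional Clenshaw sweeps. The sketch below reuses the one-dimensional clenshaw routine of section 2.2 and assumes the coefficients are stored in an M x N array c with c[i, j] = c_ij.

import numpy as np

def clenshaw(c, x):
    # One-dimensional Clenshaw evaluation, eqs. (23a)-(24).
    y2 = y1 = 0.0
    for ci in c[:0:-1]:
        y2, y1 = y1, ci - y2 + 2.0 * x * y1
    return c[0] - y2 + x * y1

def clenshaw_2d(c, x, y):
    # Evaluate f(x, y) = sum_ij c_ij T_i(x) T_j(y), eq. (35):
    # N sweeps over i give the g_j(x) of eq. (43), one final sweep over j gives f(x, y).
    M, N = c.shape
    g = np.array([clenshaw(c[:, j], x) for j in range(N)])
    return clenshaw(g, y)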

3.3 Derivatives of Chebyshev expansions

Consider now a function of two variables and its derivative with respect to x, both expanded in Chebyshev series,

f(x, y) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} c_{ij} T_i(x) T_j(y), \quad \text{and} \quad f'(x, y) = \sum_{i=0}^{M-2} \sum_{j=0}^{N-1} c_{ij}' T_i(x) T_j(y), \quad (44)

where the notation (·)' indicates a derivative with respect to x. What is the relationship between the coefficients of the two expansions, c_{ij} and c_{ij}'? Using the formula for the derivatives of Chebyshev polynomials, eq. (13), the following recurrence is found

c_{M,j}' = 0, \quad (45a)

c_{M-1,j}' = 0, \quad (45b)

c_{M-2,j}' = 2 \times (M-1)\, c_{M-1,j} + c_{M,j}', \quad (45c)

\quad \vdots

c_{1,j}' = 2 \times 2\, c_{2,j} + c_{3,j}', \quad (45d)

c_{0,j}' = (2 \times 1\, c_{1,j} + c_{2,j}')/2. \quad (45e)

Consider next a function of two variables and its derivative with respect to y, both expanded in Chebyshev series

f(x, y) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} c_{ij} T_i(x) T_j(y), \quad \text{and} \quad f^+(x, y) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-2} c_{ij}^+ T_i(x) T_j(y), \quad (46)

where the notation (·)^+ indicates a derivative with respect to y. What is the relationship between the coefficients of the two expansions, c_{ij} and c_{ij}^+? Using the formula for the derivatives of Chebyshev polynomials, eq. (13), the following recurrence is found

c_{i,N}^+ = 0, \quad (47a)

c_{i,N-1}^+ = 0, \quad (47b)

c_{i,N-2}^+ = 2 \times (N-1)\, c_{i,N-1} + c_{i,N}^+, \quad (47c)

\quad \vdots

c_{i,1}^+ = 2 \times 2\, c_{i,2} + c_{i,3}^+, \quad (47d)

c_{i,0}^+ = (2 \times 1\, c_{i,1} + c_{i,2}^+)/2. \quad (47e)

4 Chebychev polynomials

Chebychev polynomials [1,2] are defined in the following manner

T_0(x) = 1,
T_1(x) = x,
T_2(x) = 2x^2 - 1,
T_3(x) = 4x^3 - 3x,
T_4(x) = 8x^4 - 8x^2 + 1, \quad (48)
\quad \vdots
T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x).

The variable x ∈ [−1, 1]. An explicit formula in terms of transcendental functions is also available: Tn(x) = cos(n arccos x). These polynomials are orthogonal polynomials that form an ideal basis for the approximation of functions. An arbitrary function F (s) can be approximated as

F(s) = \sum_{i=1}^{N} c_i\, T_{i-1}(x), \quad (49)

where T_i are the Chebychev polynomials, c_i the coefficients of the expansion, N the number of terms in the expansion, and x a non-dimensional variable defined as

x = \frac{2s - (s_{hi} + s_{lo})}{s_{hi} - s_{lo}}. \quad (50)

s_{lo} and s_{hi} are the lower and upper bounds defining the range over which the approximation is valid. The physical characteristics of dampers and springs are accurately and efficiently approximated by Chebychev polynomials. For linear springs or dampers, the elastic or viscous force, respectively, is approximated in terms of the stretch or stretch rate, respectively. For torsional springs and dampers, the elastic or viscous moment, respectively, is approximated in terms of the rotational stretch or rotational stretch rate, respectively.
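The mapping (50) and expansion (49) combine into a short evaluation routine. The sketch below is illustrative only (the function name chebychev_force is not a Dymore identifier) and uses Clenshaw's recurrence of section 2.2 for the evaluation.

def chebychev_force(s, coef, s_lo, s_hi):
    # Map the physical variable s to x by eq. (50), then evaluate F = sum_i c_i T_{i-1}(x), eq. (49).
    x = (2.0 * s - (s_hi + s_lo)) / (s_hi - s_lo)
    y2 = y1 = 0.0
    for ci in coef[:0:-1]:
        y2, y1 = y1, ci - y2 + 2.0 * x * y1
    return coef[0] - y2 + x * y1

# For the torsional damper of example 1 below, c = (0.0, 1.2e+04) on [-1, 1]:
# chebychev_force(0.5, [0.0, 1.2e+04], -1.0, 1.0) returns 6.0e+03.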

4.1 Examples

1. Example 1. Consider the simple example of a linear spring of stiffness constant k, i.e., defined as F = ks. At first, the approximation range is assumed to be between s_{lo} = -1 and s_{hi} = 1. It then follows from eq. (50) that x = s, and

F = ks = kx = 0 × 1 + k × x = 0 × T0(x) + k × T1(x). (51)

Hence, the representation of this spring is s_{lo} = -1, s_{hi} = 1, c_1 = 0, and c_2 = k. The input sequence given below defines a torsional damper for k = 1.2e+04. The moment versus rotational stretch rate relationship is depicted in fig. 6.

@DAMPER DEFINITION {
  @DAMPER NAME { damperRvjTeeter } {
    @DAMPER TYPE { TORSIONAL }
    @DAMPER DEFINITION TYPE { CHEBYCHEV }
    @APPROXIMATION RANGE { -1.0, 1.0 }
    @CHEBYCHEV COEFFICIENTS { 0.0, 1.2e+04 }
  }
}


Figure 6: Torsional damper with linear moment versus angular velocity characteristics.

2. Example 2. Suppose now that the approximation range of the same spring is defined as s_{lo} = -10 and s_{hi} = 10. It then follows from eq. (50) that x = s/10, and

F = ks = 10kx = 0 × 1 + 10k × x = 0 × T0(x) + 10k × T1(x). (52)

Hence, the representation of this spring is s_{lo} = -10, s_{hi} = 10, c_1 = 0, and c_2 = 10k. The spring constant is k in both cases, but the Chebychev coefficients are different due to the different approximation range. The input sequence given below defines a torsional damper for k = 1.2e+04. The resulting viscous moment versus angular velocity is depicted in fig. 7.

@DAMPER DEFINITION {
  @DAMPER NAME { damperRvjTeeter } {
    @DAMPER TYPE { TORSIONAL }
    @DAMPER DEFINITION TYPE { CHEBYCHEV }
    @APPROXIMATION RANGE { -10.0, 10.0 }
    @CHEBYCHEV COEFFICIENTS { 0.0, 1.2e+05 }
  }
}


Figure 7: Torsional damper with linear moment versus angular velocity characteristics.

3. Example 3. As a more elaborate example, consider a nonlinear spring defined as F(s) = k_1 s + k_3 s^3 with an approximation range s_{lo} = -5 and s_{hi} = 5. It then follows from eq. (50) that x = s/5, and

F = k_1 s + k_3 s^3 = 5 k_1 x + 125 k_3 x^3 = 5 k_1 T_1(x) + \frac{125}{4} k_3 \left[ T_3(x) + 3 T_1(x) \right]. \quad (53)

Collecting the coefficients of the various polynomials then yields

F = \left( 5 k_1 + \frac{3 \times 125}{4} k_3 \right) T_1(x) + \frac{125}{4} k_3\, T_3(x). \quad (54)

Hence, the representation of this spring is s_{lo} = -5, s_{hi} = 5, c_1 = 0, c_2 = 5 k_1 + 375/4\, k_3, c_3 = 0, and c_4 = 125/4\, k_3. The resulting force versus stretch is depicted in fig. 8 for k_1 = 1000 and k_3 = 80, i.e., c_2 = 1.25e+04 and c_4 = 2.5e+03.

@SPRING DEFINITION {
  @SPRING NAME { SpringTest } {
    @SPRING TYPE { LINEAR }
    @SPRING DEFINITION TYPE { CHEBYCHEV }
    @APPROXIMATION RANGE { -5.0, 5.0 }
    @CHEBYCHEV COEFFICIENTS { 0.0, 1.25e+04, 0.0, 2.5e+03 }
  }
}


Figure 8: Linear spring with nonlinear force versus stretch characteristics.

4. Example 4. Consider now a spring whose force-stretch relationship is defined by data points, typically experimental measurements. The data points are shown by symbols in fig. 9. An expansion in terms of 4 Chebychev polynomials [1] is then performed and the resulting approximation is shown in fig. 9. It is important to appropriately choose the number of Chebychev polynomials to be used in the expansion: fig. 10 shows the result of the Chebychev approximation using 48 terms in the expansion. Since the spring characteristics are linearly interpolated between the data points, the expansion produces a set of nearly linear segments between the data points.

@SPRING DEFINITION {
  @SPRING NAME { SpringNew } {
    @SPRING TYPE { TORSIONAL }
    @SPRING DEFINITION TYPE { DATA POINTS }
    @TABLE ENTRIES {
      @VARIABLE { -4.0e-03 } @FUNCTION VALUE { -5.0 }
      @VARIABLE { -3.0e-03 } @FUNCTION VALUE { -3.5 }
      @VARIABLE { -2.0e-03 } @FUNCTION VALUE {  0.0 }
      @VARIABLE { -1.0e-03 } @FUNCTION VALUE {  2.0 }
      @VARIABLE {  0.0e-03 } @FUNCTION VALUE {  2.5 }
      @VARIABLE {  1.0e-03 } @FUNCTION VALUE {  2.8 }
      @VARIABLE {  2.0e-03 } @FUNCTION VALUE {  2.0 }
      @VARIABLE {  3.0e-03 } @FUNCTION VALUE {  2.0 }
      @VARIABLE {  4.0e-03 } @FUNCTION VALUE {  2.5 }
    }
    @NUMBER OF CHEBYCHEV COEFFICIENTS { 4 }
  }
}


Figure 9: Torsional spring with nonlinear moment versus rotation characteristics. The symbols indicate the data points, the solid line the Chebychev approximation using 4 Chebychev polynomials.

Figure 10: Torsional spring with nonlinear moment versus rotation characteristics. The symbols indicate the data points, the solid line the Chebychev approximation using 48 Chebychev polynomials.

5. Example 5. Since Chebychev polynomials are continuous, they are not well suited for the approximation of functions presenting discontinuities or very sharp gradients. Consider the data points shown in fig. 11. An expansion in terms of 4 Chebychev polynomials shows poor correlation with the data points, whereas an expansion with 16 terms, see fig. 12, exhibits the Gibbs phenomenon, i.e., violent oscillations of the approximation in the vicinity of the discontinuity.

@SPRING DEFINITION {
  @SPRING NAME { SpringDiscontinuity } {
    @SPRING TYPE { LINEAR }
    @SPRING DEFINITION TYPE { DATA POINTS }
    @TABLE ENTRIES {
      @VARIABLE { -5.0  } @FUNCTION VALUE { -5.0 }
      @VARIABLE { -0.25 } @FUNCTION VALUE { -4.0 }
      @VARIABLE {  0.25 } @FUNCTION VALUE {  4.0 }
      @VARIABLE {  5.0  } @FUNCTION VALUE {  5.0 }
    }
    @NUMBER OF CHEBYCHEV COEFFICIENTS { 4 }
  }
}


Figure 11: Linear spring with nonlinear force versus stretch characteristics. The symbols indicate the data points, the solid line the Chebychev approximation using 4 Chebychev polynomials.

Figure 12: Linear spring with nonlinear force versus stretch characteristics. The symbols indicate the data points, the solid line the Chebychev approximation using 16 Chebychev polynomials.

References

[1] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, Cambridge, third edition, 2007.

[2] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions. Dover Publications, Inc., New York, 1964.
