Mathematical Development of the Elliptic Filter

Mark Kleehammer, Queen's University, August 26, 2013

The elliptic filter is a very powerful tool for signal processing; however, it requires some sophisticated mathematics to describe properly. Signal processing plays a crucial role in nearly everything electronic today, from phones to computers to music. We begin with an abstract introduction to signal processing and prove some basic results. We then apply the abstract theory to discuss the Butterworth, Chebyshev, and elliptic filters, with a primary focus on the elliptic filter. Before we can discuss the elliptic filter, however, we transition into an in-depth discussion of elliptic functions.

Contents

1 Signal Processing
  1.1 The Ideal Lowpass Filter
  1.2 The Causal Filter
  1.3 The Transfer Function

2 The Butterworth Filter
  2.1 Butterworth Polynomials
  2.2 The Butterworth Filter
  2.3 Conclusions

3 The Chebyshev Filter
  3.1 Chebyshev Polynomials
  3.2 Transfer Function H(s) for the Chebyshev Filter
  3.3 The Chebyshev Filter
  3.4 Conclusions

4 Elliptic Functions
  4.1 Elliptic Integrals
  4.2 Jacobi's Elliptic Functions
  4.3 The Addition Theorems for the Jacobi Elliptic Functions
  4.4 Transformations of Jacobi Elliptic Functions
    4.4.1 The First Degree Transformation
    4.4.2 The nth Degree Transformation
  4.5 The Jacobi Theta Functions

5 Elliptic Rational Function
  5.1 Statement of the Problems
  5.2 Solution to Problem C
  5.3 Elliptic Rational Function
    5.3.1 Connections Between Texts
  5.4 Zeros and Poles of the Elliptic Rational Function
  5.5 The Elliptic Rational Function for n = 1, 2, 3

6 The Elliptic Filter
  6.1 The Elliptic Rational Function
  6.2 The Transfer Function for the Elliptic Filter
  6.3 Transfer Function Poles for Order n
  6.4 Transfer Function Poles for Orders n = 1, 2, 3
  6.5 The Elliptic Filter
  6.6 Conclusions

7 Appendix

References

1 Signal Processing

A signal is a physical quantity or quality that conveys information [LTE01, p. 1]. For example, a person's voice is a signal, and it may cause the listener to act in a certain manner. This evaluation of the signal is called signal processing. Signals are found in all kinds of places; they could be the voltage produced in an electric circuit, computer graphics, or music. Many electronic signals travel through space, and while traveling they tend to pick up other undesirable signals such as white noise, or static. The signals we will focus on are sine waves, which we call sinusoidal.

The problem that motivates our discussion is this: given some signal, we must remove the undesirable frequencies from it. A filter takes a signal as its input, performs some operations on the signal that remove these undesirable frequencies, and outputs a "clean" signal. Algebraically, we think of a filter as a function that maps signals to signals.

Definition 1.1. A filter is a function that is used to modify or reshape the frequency spectrum of a signal according to some prescribed requirements [LTE01, p. 241].

An everyday example of this problem occurs in cell phones all the time. When you make a call (or send a text, etc.), the phone sends a signal to a satellite in space, and that satellite then redirects the signal to your friend's phone. While the signal travels through space it picks up static, so when your friend receives the signal, his/her phone filters out that static. How does the phone filter out the static? Many electrical engineers are well versed in this problem, and there is a solid amount of literature on the subject from an engineering perspective [Cau58] [Dan74] [LTE01]; however, there is little if any from mathematicians. As such, many of the results are explained with little detail and background, and it is difficult to see the bigger picture with these details omitted. My goal is to present this material in a straightforward and simple manner, with justifications given for each step. It is important that one can see why we get these results, with less of a "this is how it is" mentality.

Many of the functions we will encounter are functions of the complex frequency $s = \sigma + i\omega$ (where $i = \sqrt{-1}$). The variable $s$ comes from the Laplace transform; given some function $v(t)$ (for example a voltage dependent on time), its Laplace transform is defined as [Dan74, p. 2]
$$V(s) = \mathcal{L}[v(t)] := \int_0^\infty v(t)\,e^{-st}\,dt \tag{1.1}$$
For our purposes we will view the Laplace transform as taking a function in the time domain and translating it into the frequency domain. Typically we will investigate the filter problem in the frequency domain (since the filter requirements given to an engineer will typically be expressed in the frequency domain), and then use the inverse Laplace transform to express our results in the time domain.
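As a concrete sanity check of definition (1.1), the sketch below numerically approximates the Laplace transform of $\sin\omega t$ at a real $s > 0$ and compares it with the closed form $\mathcal{L}[\sin\omega t] = \omega/(s^2+\omega^2)$, a transform pair we will use repeatedly. The code and the helper name `laplace_numeric` are our own, not taken from the cited texts.

```python
import math

def laplace_numeric(f, s, T=40.0, n=200_000):
    """Approximate V(s) = integral_0^inf f(t) e^{-st} dt for real s > 0
    by the trapezoid rule on [0, T]; the tail beyond T is negligible
    because of the e^{-sT} decay."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

omega, s = 2.0, 1.0
numeric = laplace_numeric(lambda t: math.sin(omega * t), s)
exact = omega / (s**2 + omega**2)   # L[sin(wt)] = w / (s^2 + w^2)
print(numeric, exact)               # both ≈ 0.4
```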

Let's begin by analyzing the following example. Suppose we have a sinusoidal input signal $x_{\text{in}}(t) = \sin\omega t$, and the engineer only needs this signal for a specific range of frequencies, say $[\omega_a, \omega_b]$. The engineer will design a filter that preserves those desired frequencies and eliminates everything else. After the signal has been filtered, the output signal will also be a sine wave, but with the amplitude and phase likely altered by the filter, i.e.
$$x_{\text{out}}(t) = C\sin(\omega t + \varphi)$$
for some real constants $C, \varphi$ [Dan74, p. 3].

Definition 1.2. The transfer function $T(s)$ is defined to be the ratio of the Laplace transform of the output signal (denoted $x_{\text{out}}(t)$) to the Laplace transform of the input signal (denoted $x_{\text{in}}(t)$) [Dan74, p. 3]. Explicitly,
$$T(s) = \frac{X_{\text{out}}(s)}{X_{\text{in}}(s)} = \frac{\mathcal{L}[x_{\text{out}}(t)]}{\mathcal{L}[x_{\text{in}}(t)]} \tag{1.2}$$
Often we will consider the function $H(s) = 1/T(s)$, which is referred to as the input/output transfer function.

Theorem 1.1. Let $x_{\text{in}}(t) = A_1\sin\omega t$ represent an input signal with amplitude $A_1 > 0$, and $x_{\text{out}}(t) = A_2\sin(\omega t + \varphi)$ represent the output signal with amplitude $A_2 > 0$. Then
$$\frac{A_2}{A_1} = |T(i\omega)| \tag{1.3}$$
This means that for a sinusoidal signal, the ratio of the output amplitude to the input amplitude is equal to the magnitude of the transfer function evaluated at $s = i\omega$.

Proof. Compute the Laplace transforms of $x_{\text{in}}$ and $x_{\text{out}}$:
$$X_{\text{in}}(s) = \mathcal{L}[x_{\text{in}}(t)] = A_1\int_0^\infty \sin(\omega t)\,e^{-st}\,dt = A_1\,\frac{\omega}{s^2+\omega^2}$$
$$X_{\text{out}}(s) = \mathcal{L}[x_{\text{out}}(t)] = A_2\int_0^\infty \sin(\omega t+\varphi)\,e^{-st}\,dt = A_2\,\frac{\omega\cos\varphi + s\sin\varphi}{s^2+\omega^2}$$
Therefore
$$|T(i\omega)| = \frac{A_2}{A_1}\left|\frac{\omega\cos\varphi + i\omega\sin\varphi}{\omega}\right| = \frac{A_2}{A_1} \qquad\square$$

We should take note that, in general, the frequency $\omega$ of the input sine wave is not constant, and the amplitude $A_1$ depends on the frequency $\omega$, while the theorem above seems to implicitly assume that both are constant. So, to be more precise, we will denote the amplitude as $\mathrm{amp}(\omega)$ and reformulate the theorem we just proved (the proof still works the same). Perhaps we should also denote the signal $x(t)$ as a function of two variables $x(\omega,t)$; however, the frequency $\omega$, while non-constant, is not completely independent of time (we cannot have two different frequencies occurring at the same time). So we think of the signal $x$ as a function of time, the frequency $\omega$ of the signal is not constant, and we want to filter out some frequencies of $x$ that may occur.

Theorem 1.2. Let $x_{\text{in}}(t) = \mathrm{amp}_{\text{in}}(\omega)\sin\omega t$ represent an input signal with amplitude $\mathrm{amp}_{\text{in}}(\omega) > 0$, and $x_{\text{out}}(t) = \mathrm{amp}_{\text{out}}(\omega)\sin(\omega t + \varphi)$ represent the output of the filter with amplitude $\mathrm{amp}_{\text{out}}(\omega) > 0$. Then
$$\frac{\mathrm{amp}_{\text{out}}(\omega)}{\mathrm{amp}_{\text{in}}(\omega)} = |T(i\omega)| \tag{1.4}$$
We say that a sinusoidal signal has been attenuated if the output signal has a smaller amplitude than the input [Dan74, p. 3]. With this in mind, our problem is to create a filter that attenuates certain frequencies of the signal while leaving others invariant. Regions of low attenuation are called passbands, and we say that the filter passes such frequencies. On the other hand, regions of high attenuation are called stopbands, and we say the filter stops these frequencies. A lowpass filter passes frequencies less than some specific frequency and attenuates those higher, and a highpass filter does the opposite [Dan74, p. 4]. We will focus our discussion on the lowpass filter.

1.1 The Ideal Lowpass Filter

For the lowpass filter, we pass the frequencies $\omega$ with $|\omega| \le \omega_b$, where $\omega_b$ is some fixed frequency [Dan74, p. 8]. That is, in the passband $\{\omega : |\omega| \le \omega_b\}$ we want the input amplitude to equal the output amplitude, so their ratio is 1; hence for $|\omega| \le \omega_b$ we require
$$|T(i\omega)| = 1 \tag{1.5}$$
In the stopband, we want very high attenuation, so the output amplitude should be zero. Thus we also require, for $|\omega| > \omega_b$,
$$|T(i\omega)| = 0 \tag{1.6}$$
Equivalently, in terms of $H(s) = 1/T(s)$ we desire
$$|H(i\omega)| = \begin{cases} 1 & : |\omega| \le \omega_b \\ \infty & : |\omega| > \omega_b \end{cases} \tag{1.7}$$
Such a function is impossible to realize (with the discontinuities), and so we will examine different ways of approximating this ideal. The approximations we deal with will be polynomials (or a ratio of polynomials) of degree at most $n \in \mathbb{N}$, and we say that the filter is of degree $n$. We think of the degree of the filter as a measure of complexity, and often the approximation problem is to find the lowest degree filter that meets certain requirements [Dan74, Sec. 2.6, 3.7]. A higher degree filter may require more precise (and expensive) parts to realize. For an approximation, the transition from passband to stopband will not be instantaneous, and so we will have a region of increasing attenuation between the passband and stopband, which we call the transition region [Dan74, p. 4].

Figure 1.1: Plot of the ideal lowpass filter's transfer function $|T(i\omega)|$ with a normalized passband of $[-1,1]$.

1.2 The Causal Filter

A causal filter depends only on past and present inputs; a filter that depends on future inputs is non-causal [PTVF07]. Only causal filters are realizable (able to operate in real time), since a realizable filter cannot act on future input (it hasn't happened yet!).

Let $x_{\text{in}}(t)$ and $x_{\text{out}}(t)$ represent the input and output signal at time $t$ respectively, and let $X_{\text{in}}(s) = \mathcal{L}[x_{\text{in}}(t)]$ and $X_{\text{out}}(s) = \mathcal{L}[x_{\text{out}}(t)]$. Furthermore, suppose we know the transfer function $T(s)$ for our filter; then we know
$$X_{\text{out}}(s) = T(s)X_{\text{in}}(s) \tag{1.8}$$
By taking the inverse Laplace transform of both sides, we obtain the time domain response of the filter. That is, we express the output signal $x_{\text{out}}(t)$ as a function of the input $x_{\text{in}}(t)$.

One of the properties of the Laplace transform is that it converts convolution of functions in the time domain into multiplication of functions in the frequency domain. More explicitly, let $f(t), g(t)$ be real functions; then the convolution of $f$ with $g$ is defined as
$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)g(t-\tau)\,d\tau \tag{1.9}$$
This property says that
$$\mathcal{L}\big[(f * g)(t)\big] = \mathcal{L}[f]\,\mathcal{L}[g] \tag{1.10}$$
Denote $h(t) = \mathcal{L}^{-1}[T(s)]$; this function is called the impulse response of the filter [PTVF07].

Definition 1.3. We define a causal filter to be a filter whose impulse response $h(t)$ vanishes for negative $t$, i.e. $h(t) = 0$ for $t < 0$ [PTVF07].

Then from (1.10) and (1.8) we see the causal filter is
$$x_{\text{out}}(t) = \int_0^{\infty} h(\tau)\,x_{\text{in}}(t-\tau)\,d\tau \tag{1.11}$$
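The convolution property (1.10) can be checked numerically. In the sketch below (our own, not from the cited texts) we take $f = g = e^{-t}$ for $t \ge 0$, whose convolution is $t\,e^{-t}$; its transform $1/(s+1)^2$ is indeed $\mathcal{L}[f]\,\mathcal{L}[g] = \frac{1}{s+1}\cdot\frac{1}{s+1}$. For causal $f, g$ the integral in (1.9) reduces to one over $[0, t]$.

```python
import math

def convolve_at(f, g, t, n=20_000):
    """(f*g)(t) for causal f and g: trapezoid rule over [0, t]."""
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for i in range(1, n):
        tau = i * h
        total += f(tau) * g(t - tau)
    return total * h

f = g = lambda t: math.exp(-t)   # treated as 0 for t < 0 (causal)
t = 2.0
print(convolve_at(f, g, t), t * math.exp(-t))   # both ≈ 0.2707
```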

In general, computing the inverse Laplace transform can be difficult; however, notice that for $\Re(a) < 0$ (where $\Re(z)$ denotes the real part of the complex number $z$)
$$\mathcal{L}\big[e^{at}\big] = \int_0^\infty e^{-t(s-a)}\,dt = \frac{1}{s-a} \tag{1.12}$$
So conversely, the inverse Laplace transform of $\frac{1}{s-a}$ is $e^{at}$. Therefore, if we can express the transfer function in partial fractions of this form, then we can find the inverse Laplace transform easily. A partial fractions expansion of $T(s)$ (for an $n$th degree filter) will look like

$$T(s) = \sum_{j=1}^{n} \frac{\alpha_j}{s - s_j}, \qquad \alpha_j = \lim_{s \to s_j}\,(s - s_j)\,T(s) \tag{1.13}$$
where the $s_j$ are the poles of $T(s)$ (zeros of $H$). Since the transfer functions we work with will only have simple poles, we can find the constants $\alpha_j$ by

$$\alpha_j = \lim_{s \to s_j}\frac{s - s_j}{H(s) - H(s_j)} = \frac{1}{H'(s_j)} \tag{1.14}$$
In the next section we will learn to construct the ratio of the input amplitude to the output amplitude, that is, finding $|H(i\omega)|$ for a certain type of filter. Since $|H(i\omega)|^2 = H(i\omega)H(-i\omega)$, we let $\omega = -is$, so $|H(i\omega)|^2 = H(s)H(-s)$, and then we can find the zeros of $H(s)H(-s)$. Notice that the function $H(s)H(-s)$ has twice as many roots and poles as $H(s)$.

To find a partial fractions expansion for $T$, we need the $n$ zeros of $H$, which we find indirectly by finding the $2n$ zeros of $H(s)H(-s)$. When designing a filter, the zeros of $H$ (poles of $T$) must have negative real part in order to apply (1.12), so we associate these zeros of $H(s)H(-s)$ to $H(s)$ (and the zeros with positive real part are associated to $H(-s)$). If we let $z_j$ represent the zeros of $H(s)H(-s)$, then we need $J = \{j : \Re(z_j) < 0\}$, and so the partial fractions decomposition of $T$ is
$$T(s) = \sum_{j \in J} \frac{\alpha_j}{s - z_j}$$
From here we know the inverse Laplace transform of $T$ is (from (1.12))

$$\mathcal{L}^{-1}\big[T(s)\big] = \sum_{j \in J} \alpha_j e^{z_j t}$$
Hence the filter is
$$x_{\text{out}}(t) = \int_0^\infty \Big(\sum_{j \in J} \alpha_j e^{z_j \tau}\Big)\, x_{\text{in}}(t-\tau)\,d\tau$$
We phrase these results as a theorem for reference.

Theorem 1.3. Let $T(s) = 1/H(s)$ denote the $n$th degree transfer function of the filter, let $z_j$ be the zeros of $H(s)H(-s)$, and let $J = \{j : \Re(z_j) < 0\}$. If $x_{\text{in}}(t)$ denotes the input signal at time $t$, and $x_{\text{out}}(t)$ the output signal of the filter, then the filtered signal is
$$x_{\text{out}}(t) = \int_0^\infty \Big(\sum_{j \in J} \alpha_j e^{z_j \tau}\Big)\, x_{\text{in}}(t-\tau)\,d\tau \tag{1.15}$$
with constants $\alpha_j$ given by (1.14).
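To see Theorem 1.3 in action, here is a toy Python sketch with an assumed first-order $H(s) = s + 1$ (our own example, not one of the filters treated in this paper). Its only zero with negative real part is $z_1 = -1$ with $\alpha_1 = 1/H'(z_1) = 1$, so (1.15) becomes a single-term convolution, and by Theorem 1.1 a sinusoid at frequency $\omega$ should emerge with amplitude $|T(i\omega)| = 1/\sqrt{1+\omega^2}$.

```python
import math

def filter_output(x_in, t, tail=30.0, n=10_000):
    """Evaluate (1.15) for the toy impulse response h(tau) = e^{-tau}
    by the trapezoid rule, truncating the integral at tau = tail."""
    h = tail / n
    total = 0.5 * (x_in(t) + math.exp(-tail) * x_in(t - tail))
    for i in range(1, n):
        tau = i * h
        total += math.exp(-tau) * x_in(t - tau)
    return total * h

omega = 2.0
x_in = lambda t: math.sin(omega * t)
# steady-state output amplitude, sampled over one period of the output
amp = max(abs(filter_output(x_in, j * 2 * math.pi / (100 * omega)))
          for j in range(100))
print(amp, 1 / math.sqrt(1 + omega**2))   # both ≈ 0.447
```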

Much of what we discussed in this section is left out of [LTE01] and [Dan74], both of which are standard texts on signal processing. Most engineers prefer to work in the frequency domain [Dan74, p. 282], and Theorem 1.3 expresses results in the time domain. As such, in these texts few results are expressed in the time domain, so the results we obtain in the time domain, while not exactly unknown, are often undiscussed. Throughout the course of this paper, we will focus on expressing the causal filters in the time domain, as this way we are dealing directly with the signals themselves instead of their frequencies.

Mostly we have been concerned with the amplitude $|H(i\omega)|$, but in some applications (particularly radar and digital communications) one may be interested in the phase $\arg H(i\omega)$ [Dan74, p. 238]. This phase characteristic affects the time domain response. However, for our discussion we will omit this material and refer the reader to Daniels' text for more information [Dan74, Ch. 14].

1.3 The Transfer Function

In our previous discussion, we assumed the transfer function was known, but in general the transfer function will not be given. The types of filters that we investigate are named after the corresponding type of polynomial (rational function) used in the approximation.

Consider the normalized passband $[-1,1]$, i.e. $\omega_b = 1$, and let $\epsilon > 0$. In the passband we want $1 - \epsilon < |H(i\omega)| < 1 + \epsilon$, and in the stopband we want $|H(i\omega)|$ to be large. We then choose a polynomial (or ratio of polynomials) $P(\omega)$ with $|P(\omega)|$ close to 0 in the interval $[-1,1]$, and outside this interval we want $|P(\omega)|$ to be large. Some typical properties of $P(\omega)$ that we desire are outlined below.

1. $P(\omega)$ is a polynomial of degree $n$ or a rational function of polynomials of degree at most $n$

2. $|P(\omega)| \le 1$ in the interval $-1 \le \omega \le 1$

3. $P(\omega)$ has all of its zeros in the interval $-1 < \omega < 1$

4. $|P(\omega)| > 1$ for $|\omega| > 1$

5. $P(1) = 1$

Property 1 is chosen so that the approximation can be carried out in a finite sequence of steps (a realizable filter cannot contain an infinite process); property 2 is chosen so that the attenuation in a normalized passband of $[-1,1]$ is minimal; it is clear that we do not want any zeros for $\omega$ outside $[-1,1]$ (we want $P(\omega)$ to grow large outside this interval), which is why property 3 is needed; property 4 is chosen so that the attenuation in the stopband is large; and property 5 is just a normalization requirement.

If we want to extend our passband to $[-\omega_b, \omega_b]$, then we replace $\omega$ by $\omega/\omega_b$ in the previous equations, because then by property 2,
$$\left|P\Big(\frac{\omega}{\omega_b}\Big)\right| \le 1 \quad \text{for } |\omega| \le |\omega_b| \tag{1.16}$$

And by applying a linear fractional transformation we can extend the passband to any interval $[\omega_a, \omega_b]$. Thus once we can construct the normalized lowpass filter, we can simply transform the results to conform to any passband interval.

To construct a filter with a passband of $[\omega_a, \omega_b]$, an engineer will be given: the maximum attenuation allowed in the passband (denoted $A_{\max}$), the minimum attenuation in the stopband (denoted $A_{\min}$), and the frequency $\omega_H$ where the stopband starts (that is, $A(\omega_H) = A_{\min}$). And so this brings us to the attenuation function $A(\omega)$.

Definition 1.4. The attenuation function, which measures the attenuation (in decibels, dB) due to the filter at any frequency $\omega$, is [Dan74, p. 3]

$$A(\omega) := -20\log|T(i\omega)| = 10\log|H(i\omega)|^2 \tag{1.17}$$
Sometimes $|H(i\omega)|^2$ is called the magnitude or gain of the signal [LTE01, p. 76]. The engineer will be given the passband range $[\omega_a, \omega_b]$, the requirements $A_{\max}$ and $A_{\min}$ in decibels, and the frequency $\omega_H$ where the stopband begins; then typically the problem is to design the filter with the lowest degree $n$ that satisfies these requirements (a higher degree will take more time in computation and may require more expensive parts to realize). From these specifications the minimal filter degree $n$ is determined and the filter can be realized [Dan74, Sec. 2.6, 3.7]. However, we will not focus on determining the minimal degree for the filter, and we will assume the filter degree $n$ is known.

Consider a normalized passband of $[-1,1]$. In the passband we want the attenuation to be minimal, that is $A(\omega) \approx 0$ for $\omega \in [-1,1]$, so from (1.17)
$$|H(i\omega)|^2 \approx 1$$
Definition 1.5. Let $P(\omega)$ denote the approximating polynomial (ratio of polynomials) used for the filter with the properties listed above, and let $\epsilon > 0$. We set
$$|H(i\omega)|^2 = 1 + \big(\epsilon P(\omega)\big)^2 \tag{1.18}$$
because then for $\omega \in [-1,1]$, by property 2 we have
$$\Big|1 + \big(\epsilon P(\omega)\big)^2\Big| \le 1 + \epsilon^2 \approx 1 \tag{1.19}$$
Also, wherever the polynomial has a zero, the attenuation is also zero, i.e. when $P(\omega) = 0$ we have $A(\omega) = 0$. The constant $\epsilon$ is called the ripple factor (the name will make sense later), and it determines the maximum attenuation in the passband $A_{\max}$. From (1.19),

$$A_{\max} = 10\log\big(1 + \epsilon^2\big)$$
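This relationship between the ripple factor and the passband attenuation is easily inverted: $\epsilon = \sqrt{10^{A_{\max}/10} - 1}$. A small Python sketch (the function names are ours):

```python
import math

def a_max(eps):
    """Maximum passband attenuation in dB: 10 log10(1 + eps^2)."""
    return 10 * math.log10(1 + eps**2)

def ripple(a_max_db):
    """Ripple factor for a given passband attenuation spec in dB."""
    return math.sqrt(10**(a_max_db / 10) - 1)

eps = ripple(0.1)    # ripple factor for a 0.1 dB passband specification
print(eps)           # ≈ 0.1526
print(a_max(eps))    # recovers 0.1
```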

2 The Butterworth Filter

This type of filter is named after the British engineer Stephen Butterworth, who designed it [But30]. We shall see that the Butterworth filter has the property that the attenuation in the passband is maximally flat; this means that the frequencies in the passband are attenuated similarly (the attenuation is as uniform as possible). This is desirable since the frequencies in the passband are the ones we "like," and the Butterworth filter has very little effect on them. The Butterworth filter is very simple, so it makes for a good starting point. However, due to this simplicity, this type of approximation tends to be very impractical. Much of our discussion here is borrowed from [Dan74, Ch. 2].

2.1 Butterworth Polynomials

The type of approximating polynomial we use for this filter is aptly called a Butterworth polynomial.

Definition 2.1. The $n$th order Butterworth polynomial $B_n(x)$ satisfies the following conditions [Dan74, p. 9]:

1. $B_n(x)$ is an $n$th degree polynomial

2. $B_n(0) = 0$

3. $B_n(x)$ is maximally flat at the origin

4. $B_n(1) = 1$

From property 1, we can write

$$B_n(x) = c_0 + c_1 x + \cdots + c_n x^n$$
Property 2 requires $c_0 = 0$. When we say $B_n(x)$ is maximally flat at the origin, we mean that we need as many derivatives as possible of $B_n(x)$ to be 0 at $x = 0$. So
$$\frac{dB_n(x)}{dx} = c_1 + 2c_2 x + \cdots + n c_n x^{n-1}$$
Thus we see $c_1 = 0$. Similarly, we see that higher order derivatives are made 0 by making the corresponding coefficient zero. So we have
$$B_n(x) = c_n x^n$$
Property 4 then requires that $c_n = 1$. Therefore we have found the $n$th order Butterworth polynomial to be $B_n(x) = x^n$.

Theorem 2.1. The $n$th degree Butterworth polynomial is [Dan74, Sec. 2.4]
$$B_n(x) = x^n \tag{2.1}$$

Figure 2.1: Plots of the Butterworth polynomials of degree $n = 4$ and $5$ on the interval $[-1,1]$. Notice that near the origin the polynomial is relatively flat, which will cause the frequencies in the passband to have an almost uniform attenuation. Plots made in R.

2.2 The Butterworth Filter

We consider a filter that has a normalized passband interval of $[-1,1]$, and we let $\epsilon = 1$, since the Butterworth filter is maximally flat in the passband, whereas other filters will have a "ripple" effect in the passband where $\epsilon$ will have more meaning. However, we do not mean to say you cannot have a Butterworth transfer function with a ripple factor $\epsilon \ne 1$; setting $\epsilon = 1$ merely simplifies the results. We know the magnitude of the signal is (Definition 1.5)

$$|H(i\omega)|^2 = 1 + \big(B_n(\omega)\big)^2 = 1 + \omega^{2n}$$
and so by substituting $\omega = -is$ we obtain
$$H(s)H(-s) = 1 + \big({-s^2}\big)^n = 1 + (-1)^n s^{2n} \tag{2.2}$$
Lemma 2.1. The zeros $s_j$ of the transfer function $H(s)H(-s)$ with negative real part are $s_j = e^{i\theta_j}$, where
$$\theta_j = \frac{\pi}{2n}(2j-1) \quad \text{if } n \text{ is even}, \qquad \theta_j = \frac{\pi}{n}\,j \quad \text{if } n \text{ is odd} \tag{2.3}$$

for $j \in J$, with
$$J = \Big\{\, j : \tfrac{n}{2} + 1 \le j \le \tfrac{3n}{2} \,\Big\} \qquad n \text{ even}$$
$$J = \Big\{\, j : \tfrac{n+1}{2} \le j \le \tfrac{3n-1}{2} \,\Big\} \qquad n \text{ odd}$$
Proof. Notice that (2.3) follows immediately from (2.2) for $j = 1, \ldots, 2n$. So all that needs to be done is to verify the index set $J = \{j : \Re s_j < 0\}$. We begin by considering the case where $n$ is even:
$$\Re s_j = \cos(\theta_j) < 0 \iff \frac{\pi}{2} < \frac{\pi}{2n}(2j-1) < \frac{3\pi}{2} \iff \frac{n+1}{2} < j < \frac{3n+1}{2}$$
Since $j$ is an integer, we find our bounds on $j$ to be $\frac{n}{2}+1 \le j \le \frac{3n}{2}$. Similarly, when $n$ is odd we find that $\frac{n+1}{2} \le j \le \frac{3n-1}{2}$. $\square$

Now we can easily find the constants $\alpha_j$ for the transfer function. Let $n$ be even; then $(-1)^n = 1$ and
$$\frac{1}{H'(s)} = \frac{1}{2n s^{2n-1}}$$
Thus
$$\alpha_j = \frac{1}{2n (s_j)^{2n-1}} = \frac{1}{2n}\,e^{-i\theta_j (2n-1)} = -\frac{1}{2n}\,e^{i\theta_j}$$
The last equality is due to $\cos\theta_j(2n-1) = -\cos\theta_j$ and $\sin\theta_j(2n-1) = \sin\theta_j$, which are easily verified by applying the angle addition identities. Now consider the case when $n$ is odd. We similarly find
$$\alpha_j = \frac{-1}{2n (s_j)^{2n-1}} = \frac{-1}{2n}\,e^{-i\theta_j (2n-1)} = -\frac{1}{2n}\,e^{i\theta_j}$$
This time, since $\theta_j = \pi j / n$, we have $\cos\theta_j(2n-1) = \cos\theta_j$ and $\sin\theta_j(2n-1) = -\sin\theta_j$, again from the angle addition identities. So the only difference between $n$ being even or odd is the angle $\theta_j$ and the index set that $j$ runs along. The following theorem summarizes our results.

Theorem 2.2 (The Butterworth Filter). Let
$$J_n = \Big\{\, j : \tfrac{n}{2} + 1 \le j \le \tfrac{3n}{2} \,\Big\} \qquad n \text{ even} \tag{2.4}$$
$$J_n = \Big\{\, j : \tfrac{n+1}{2} \le j \le \tfrac{3n-1}{2} \,\Big\} \qquad n \text{ odd} \tag{2.5}$$
The poles of the transfer function for the Butterworth filter are given by $s_j = \exp(i\theta_j)$, where $\theta_j$ is given in (2.3). We can express $T(s)$ in partial fractions as

$$T(s) = -\frac{1}{2n} \sum_{j \in J_n} \frac{e^{i\theta_j}}{s - s_j} \tag{2.6}$$

If $x_{\text{in}}(t)$ and $x_{\text{out}}(t)$ represent the input and output signals at time $t$ respectively, then the Butterworth filter is given by
$$x_{\text{out}}(t) = \int_0^\infty \Big({-\frac{1}{2n}}\sum_{j \in J_n} e^{i\theta_j + s_j \tau}\Big)\, x_{\text{in}}(t-\tau)\,d\tau \tag{2.7}$$

2.3 Conclusions

Let's study the following example problem. We are given the following requirements for a lowpass Butterworth filter (with a passband of $|\omega| \le \omega_b$): $A_{\max} = 0.1$ dB, $A_{\min} = 30$ dB and $\omega_H/\omega_b = 1.3$; what degree $n$ is necessary [Dan74, p. 12]? The required degree is $n = 21$, which is quite high compared to the other filters we will have at our disposal.

The Butterworth filter is mathematically simple (compared to the others we will study), but this comes with the cost that it is not very practical. While it keeps the frequencies in the passband relatively constant, it is not very good at attenuating frequencies in the stopband, and it often requires a high degree to meet the specific requirements a filter designer needs. The other filters we discuss will be able to meet the same requirements with a significantly lower degree than the Butterworth filter. So while the Butterworth filter is not very practical, it provides a good introduction to the theory.
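The degree $n = 21$ quoted above can be reproduced with the standard Butterworth degree estimate, obtained by fixing the ripple from $A_{\max}$ at the passband edge (rather than the $\epsilon = 1$ normalization used earlier) and requiring at least $A_{\min}$ of attenuation at $\omega_H$. The formula is standard [Dan74, Sec. 2.6]; the code is our own sketch.

```python
import math

def butterworth_degree(a_max, a_min, freq_ratio):
    """Smallest n such that a Butterworth filter meeting a_max (dB) at
    the passband edge attains a_min (dB) at w_H/w_b = freq_ratio."""
    num = (10**(a_min / 10) - 1) / (10**(a_max / 10) - 1)
    return math.ceil(math.log(math.sqrt(num)) / math.log(freq_ratio))

print(butterworth_degree(0.1, 30, 1.3))   # 21, as in [Dan74, p. 12]
```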

Figure 2.2: Plot of the gain $|T(i\omega)|^2$ for the Butterworth approximation with $n = 5$, $\epsilon = 0.15$, to show how well it approximates the ideal lowpass transfer function. While the approximation isn't so good, it still may find use in applications where speed is prioritized (since the Butterworth filter is very simple compared to the others we will discuss).

3 The Chebyshev Filter

We want to approximate the ideal lowpass filter function, say $f(\omega)$, with a function, say $g(\omega)$, that is accurate in the interval $[\omega_a, \omega_b]$. With the Butterworth filter, $g$ approximates $f$ well near 0, but not so well elsewhere. An approximation $g$ is a Chebyshev approximation of $f$ if $g$ minimizes $\max|f(\omega) - g(\omega)|$ for $\omega \in [\omega_a, \omega_b]$ [Dan74, Sec. 3.1]. For this reason a Chebyshev approximation is also sometimes called a min-max approximation. We will find that the transfer function $T(s)$ for a Chebyshev approximation is equiripple in the passband, meaning that in the passband the attenuation oscillates between maximums and minimums of equal magnitude [Dan74, Ch. 3].

3.1 Chebyshev Polynomials

Definition 3.1. The $n$th order Chebyshev polynomial is [Akh70, p. 151]
$$T_n(x) = \cos\big(n \cos^{-1} x\big) \tag{3.1}$$
While it certainly is not obvious that $T_n$ is an $n$th degree polynomial, the following lemma addresses this.

Lemma 3.1. The Chebyshev polynomials satisfy the following recursion relation: $T_0(x) = 1$, $T_1(x) = x$, and
$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$$
Proof. This proof is adapted from [Dan74, p. 29]. First we check the initial conditions of the recursion relation; observe
$$T_0(x) = \cos\big(0\cdot\cos^{-1} x\big) = 1$$
$$T_1(x) = \cos\big(1\cdot\cos^{-1} x\big) = x$$
Now let $y = \cos^{-1} x$; then
$$T_{n+1}(x) = \cos\big[(n+1)y\big] = \cos(ny)\cos y - \sin(ny)\sin y$$
$$T_{n-1}(x) = \cos\big[(n-1)y\big] = \cos(ny)\cos y + \sin(ny)\sin y$$
$$\Rightarrow\quad T_{n+1} + T_{n-1} = 2\cos\big(\cos^{-1}x\big)\cos\big(n\cos^{-1}x\big) = 2x\cos\big(n\cos^{-1}x\big) = 2x\,T_n(x) \qquad\square$$
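Both the recursion of Lemma 3.1 and the boundedness $|T_n(x)| \le 1$ on $[-1,1]$ (the equiripple property) are easy to confirm numerically; a short Python sketch (the helper name `cheb` is ours):

```python
import math

def cheb(n, x):
    """T_n(x) via the recursion T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# agree with cos(n arccos x) on [-1, 1] ...
for x in (-1.0, -0.3, 0.0, 0.5, 0.9):
    assert abs(cheb(7, x) - math.cos(7 * math.acos(x))) < 1e-12
# ... and stay bounded by 1 there (the equiripple property)
assert all(abs(cheb(7, i / 500 - 1)) <= 1 + 1e-12 for i in range(1001))
print("recursion matches cos(n arccos x) on [-1, 1]")
```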

Figure 3.1: Plot of the Chebyshev polynomials for $n = 4, 5$ on the interval $[-1,1]$. Here we can see the equiripple property: how the polynomial oscillates between its maximum $1$ and minimum $-1$.

3.2 Transfer Function H(s) for the Chebyshev Filter

Similarly to the Butterworth filter, we will discuss the normalized Chebyshev filter with a passband of $[-1,1]$ and ripple factor $\epsilon > 0$. The magnitude of the input/output transfer function is $|H(i\omega)|^2 = 1 + \big(\epsilon T_n(\omega)\big)^2$, and by substituting $s = i\omega$,
$$H(s)H(-s) = 1 + \Big[\epsilon T_n\Big(\frac{s}{i}\Big)\Big]^2$$
Lemma 3.2. Let $s_j = \sigma_j + i\omega_j$. Then the $n$ roots $s_j$ of $H(s)$ (with $\Re(s_j) < 0$) are given by
$$\sigma_j = -\sin\Big(\frac{\pi}{2n}(2j-1)\Big)\sinh\Big(\frac{1}{n}\sinh^{-1}\frac{1}{\epsilon}\Big) \tag{3.2}$$
$$\omega_j = \cos\Big(\frac{\pi}{2n}(2j-1)\Big)\cosh\Big(\frac{1}{n}\sinh^{-1}\frac{1}{\epsilon}\Big) \tag{3.3}$$
for $j = 1, 2, \ldots, n$.

Proof. This proof is adapted from [Dan74, Sec. 3.10]. We first find the roots of $H(s)H(-s)$ and then consider those with $\sigma_j < 0$. The roots of $H(s)H(-s)$ are given by
$$T_n\Big(\frac{s_j}{i}\Big) = \cos\Big(n\cos^{-1}\frac{s_j}{i}\Big) = \pm\frac{i}{\epsilon} \tag{3.4}$$
Let $n\cos^{-1}\frac{s_j}{i} = x + iy$, so that
$$\pm\frac{i}{\epsilon} = \cos(x+iy) = \cos x\cos iy - \sin x\sin iy = \cos x\cosh y - i\sin x\sinh y$$

Then by equating the real and imaginary parts we see
$$\cos x\cosh y = 0, \qquad \sin x\sinh y = \mp\frac{1}{\epsilon} \tag{3.5}$$
Since $\cosh y > 0$, we must have $\cos x = 0$, hence
$$x = \frac{\pi}{2}(2j-1)$$
for $j = 1, 2, \ldots, 2n$. So then $y = \mp\sinh^{-1}(1/\epsilon)$. Therefore,
$$n\cos^{-1}\frac{s_j}{i} = \frac{\pi}{2}(2j-1) \mp i\sinh^{-1}\frac{1}{\epsilon}$$
Therefore,
$$\frac{s_j}{i} = \omega_j - i\sigma_j = \cos\Big(\frac{\pi}{2n}(2j-1) \mp \frac{i}{n}\sinh^{-1}\frac{1}{\epsilon}\Big)$$
Hence, by using the angle addition identity for cosine and equating real and imaginary parts, we have
$$\sigma_j = \pm\sin\Big(\frac{\pi}{2n}(2j-1)\Big)\sinh\Big(\frac{1}{n}\sinh^{-1}\frac{1}{\epsilon}\Big)$$
$$\omega_j = \cos\Big(\frac{\pi}{2n}(2j-1)\Big)\cosh\Big(\frac{1}{n}\sinh^{-1}\frac{1}{\epsilon}\Big)$$
The roots of $H(s)$ are those for which $\sigma_j < 0$, i.e.
$$\sigma_j = -\sin\Big(\frac{\pi}{2n}(2j-1)\Big)\sinh\Big(\frac{1}{n}\sinh^{-1}\frac{1}{\epsilon}\Big)$$
$$\omega_j = \cos\Big(\frac{\pi}{2n}(2j-1)\Big)\cosh\Big(\frac{1}{n}\sinh^{-1}\frac{1}{\epsilon}\Big)$$
for $j = 1, 2, \ldots, n$. $\square$

3.3 The Chebyshev Filter

Now that we have the zeros of $H$, we are able to express $T(s) = 1/H(s)$ in partial fractions:
$$T(s) = \sum_{j=1}^{n} \frac{\alpha_j}{s - s_j} \quad\text{with}\quad \alpha_j = \frac{1}{\frac{d}{ds}H(s)\big|_{s=s_j}} \tag{3.6}$$
We need to find the $\alpha_j$. Since the derivative of $\arccos x$ is $-(1-x^2)^{-1/2}$, we have
$$\frac{d}{ds}H(s)H(-s) = 2\epsilon^2\cos\Big(n\cos^{-1}\frac{s}{i}\Big)\sin\Big(n\cos^{-1}\frac{s}{i}\Big)\,\frac{-n}{i\sqrt{1-\big(\frac{s}{i}\big)^2}} = \frac{in\epsilon^2}{\sqrt{1+s^2}}\,\sin\Big(2n\cos^{-1}\frac{s}{i}\Big)$$
Therefore
$$\alpha_j = \frac{\sqrt{1+s_j^2}}{in\epsilon^2\,\sin\big(2n\cos^{-1}\frac{s_j}{i}\big)} \tag{3.7}$$
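Lemma 3.2 can be sanity-checked numerically: every pole from (3.2)-(3.3) should have negative real part and should make $H(s)H(-s) = 1 + [\epsilon T_n(s/i)]^2$ vanish. A Python sketch (the helper names are ours; the recursion of Lemma 3.1 evaluates $T_n$ at complex arguments):

```python
import math

def cheb_poles(n, eps):
    """Zeros of H(s) from Lemma 3.2 (the negative-real-part roots)."""
    a = math.asinh(1 / eps) / n
    poles = []
    for j in range(1, n + 1):
        theta = math.pi * (2 * j - 1) / (2 * n)
        poles.append(complex(-math.sin(theta) * math.sinh(a),
                             math.cos(theta) * math.cosh(a)))
    return poles

def cheb_complex(n, z):
    """T_n(z) for complex z, via the recursion of Lemma 3.1."""
    t_prev, t = 1.0 + 0j, z
    for _ in range(n - 1):
        t_prev, t = t, 2 * z * t - t_prev
    return t_prev if n == 0 else t

n, eps = 5, 0.15
for s in cheb_poles(n, eps):
    assert s.real < 0                                 # causal side
    val = 1 + (eps * cheb_complex(n, s / 1j)) ** 2    # H(s)H(-s) at s
    assert abs(val) < 1e-9
print("all", n, "poles satisfy H(s)H(-s) = 0")
```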

Theorem 3.1 (The Chebyshev Filter). Let $s_j = \sigma_j + i\omega_j$ be as in Lemma 3.2, and $\alpha_j$ as in equation (3.7). If $x_{\text{in}}(t)$ and $x_{\text{out}}(t)$ represent the input and output signals at time $t$ respectively, then the Chebyshev filter is given by
$$x_{\text{out}}(t) = \int_0^\infty \Big(\sum_{j=1}^{n} \alpha_j e^{s_j\tau}\Big)\,x_{\text{in}}(t-\tau)\,d\tau \tag{3.8}$$

3.4 Conclusions

Let's revisit the example filter specifications given at the end of Chapter 2, but now using the Chebyshev transfer function. We have the following lowpass Chebyshev filter requirements: $A_{\max} = 0.1$ dB, $A_{\min} = 30$ dB and $\omega_H/\omega_b = 1.3$; what degree $n$ is necessary [Dan74, p. 34]? For the Chebyshev filter, the minimal degree is $n = 8$, which is significantly smaller than the required degree for the Butterworth filter (21). Now, let's study the following theorem from [Dan74, p. 36].

Theorem 3.2. Suppose $H(s)$ is an $n$th degree Chebyshev transfer function, and $Q(s)$ is some other $n$th degree transfer function. If $\max|Q(\omega)| < \max|H(\omega)|$ for $\omega \in [\omega_a, \omega_b]$ (the passband interval), then $|Q(\omega)| < |H(\omega)|$ for $\omega \in (-\infty, \omega_a) \cup (\omega_b, \infty)$ (the stopband interval).

Essentially this theorem tells us that if we have some other transfer function $Q(s)$ with less attenuation in the passband than $H(s)$, then $Q(s)$ also has less attenuation in the stopband than $H(s)$. Hence the Chebyshev filter has the greatest stopband performance out of all the filters of a fixed degree $n$. However, since the Chebyshev filter is equiripple in the passband, some desired frequencies will be attenuated differently than others; whereas with the Butterworth filter the attenuation in the passband is maximally flat, so the desired frequencies are all attenuated similarly.
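As with the Butterworth example, the minimal Chebyshev degree $n = 8$ from the example above can be reproduced with the standard Chebyshev degree estimate; the $\cosh^{-1}$ replacing the logarithm reflects how fast $T_n$ grows outside $[-1,1]$. The formula is standard [Dan74, Sec. 3.7]; the code is our own sketch.

```python
import math

def chebyshev_degree(a_max, a_min, freq_ratio):
    """Smallest n such that a Chebyshev filter meeting a_max (dB) in
    the passband attains a_min (dB) at w_H/w_b = freq_ratio."""
    num = (10**(a_min / 10) - 1) / (10**(a_max / 10) - 1)
    return math.ceil(math.acosh(math.sqrt(num)) / math.acosh(freq_ratio))

print(chebyshev_degree(0.1, 30, 1.3))   # 8, as in [Dan74, p. 34]
```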

Figure 3.2: Plot of the gain $|T(i\omega)|^2$ for the Chebyshev approximation with $n = 5$, $\epsilon = 0.15$, to show how well it approximates the ideal lowpass transfer function. We can see the equiripple property of the transfer function in the passband, and the effect of the ripple factor $\epsilon$ (the larger $\epsilon$ is, the larger the oscillations will be).

4 Elliptic Functions

Before we can discuss the elliptic filter, some background knowledge of elliptic functions is necessary. In this chapter we will introduce and gain familiarity with the Jacobi elliptic functions. To start, we go over some basic terminology used in the theory of elliptic functions.

We say that a function $f(x)$ is periodic if there is a nonzero constant $\Omega$ such that $f(x + \Omega) = f(x)$ [Akh70, p. 1]. The constant $\Omega$ is called a period of $f$; clearly any integer multiple of $\Omega$ is also a period, and so we call $\Omega$ a primitive period of $f$ if any other period of $f$ is an integer multiple of $\Omega$. For example, $f(x) = \sin x$ is a periodic function with primitive period $\Omega = 2\pi$.

A function of a complex variable is called meromorphic in an open set $D$ if it is differentiable everywhere in $D$ except on a set of isolated points where the function has poles. For example, $g(z) = 1/z$ is a meromorphic function in $\mathbb{C}$ with a pole at $z = 0$. Meromorphic functions with two distinct primitive periods are called elliptic functions [Akh70, p. 6].

Recall
$$\int_0^x \frac{dt}{\sqrt{1-t^2}} = \sin^{-1}x \tag{4.1}$$
We could define $\sin x$ by inverting the integral in (4.1); we generalize this idea to a different kind of integral to define Jacobi's elliptic functions. Let $P(x)$ be a polynomial, and consider
$$\int \frac{dx}{\sqrt{P(x)}} \tag{4.2}$$
If $P(x)$ has degree 2, then (4.2) is called a trigonometric integral, and it will be some inverse trig function. If $P(x)$ is of degree 3 or 4, then (4.2) is called an elliptic integral [Akh70, Sec. 17].

4.1 Elliptic Integrals

Historically, elliptic integrals arose from the study of the arc length of an ellipse. There are three kinds of elliptic integrals, but we will only be interested in the first kind; we refer an interested reader to Akhiezer's text [Akh70, Sec. 17, 29-31] for a more in-depth coverage of the other kinds.

Definition 4.1. The elliptic integral of the first kind [Akh70, Sec. 24]

\[ u(\varphi,k) = \int_0^{\sin\varphi} \frac{dt}{\sqrt{(1-t^2)(1-k^2t^2)}} \tag{4.3} \]
The parameter k (typically 0 ≤ k < 1) is called the modulus of the elliptic integral (some texts use m = k² as the modulus), and ϕ is called the amplitude. This is the Jacobi form of the elliptic integral. If we let t = sin θ, then dt = cos θ dθ = √(1 − t²) dθ, and we obtain the Legendre form
\[ u(\varphi,k) = \int_0^{\varphi} \frac{d\theta}{\sqrt{1-k^2\sin^2\theta}} \tag{4.4} \]
The final form that we will make use of is the Riemann form, which we get from (4.3) by letting sin²ϕ = z and t² = x, so dx = 2t dt = 2√x dt and
\[ u(z,k) = \int_0^{z} \frac{dx}{2\sqrt{x(1-x)(1-k^2x)}} \tag{4.5} \]
The integrals we have defined are sometimes referred to as incomplete elliptic integrals; the complete elliptic integral is obtained by setting ϕ = π/2.

Definition 4.2. The complete elliptic integral [Dan74, Sec. 5.5]

\[ K := u\left(\frac{\pi}{2},k\right) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1-k^2\sin^2\theta}} = \int_0^{1} \frac{dt}{\sqrt{(1-t^2)(1-k^2t^2)}} \tag{4.6} \]
We conclude this summary of the basics of elliptic integrals by introducing the complementary forms.

Definition 4.3. We define the complementary modulus k′ so that (k′)² + k² = 1, that is

\[ k' = \sqrt{1-k^2} \tag{4.7} \]

Similarly we define the complementary complete elliptic integral K′ as [Dan74, Sec. 5.5]

\[ K' := u\left(\frac{\pi}{2},k'\right) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1-(1-k^2)\sin^2\theta}} = \int_0^{1} \frac{dt}{\sqrt{(1-t^2)\left(1-(1-k^2)t^2\right)}} \tag{4.8} \]
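The quantities K and K′ are easy to evaluate numerically. The sketch below is an illustration (not part of the text's own development) using SciPy, whose `ellipk` takes the parameter m = k² rather than the modulus k; this is exactly the convention difference noted after (4.3).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk  # NOTE: scipy takes the parameter m = k^2

k = 0.8                     # modulus
kp = np.sqrt(1 - k**2)      # complementary modulus k', eq. (4.7)

K = ellipk(k**2)            # K  = u(pi/2, k),  eq. (4.6)
Kp = ellipk(kp**2)          # K' = u(pi/2, k'), eq. (4.8)

# Cross-check K by direct quadrature of the Legendre form (4.4)
K_quad, _ = quad(lambda th: 1.0 / np.sqrt(1 - k**2 * np.sin(th)**2), 0.0, np.pi / 2)
print(K, Kp, abs(K - K_quad))
```

For k → 0 both integrals reduce to π/2, and K grows without bound as k → 1, which is the behaviour shown in Figure 4.1.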

Figure 4.1: Plot of the complete elliptic integral of the first kind K, and the complementary integral K′.

4.2 Jacobi’s Elliptic Functions

By inverting the elliptic integral (4.4), we obtain what are called the Jacobi elliptic functions [Dan74, Sec. 5.6].

Definition 4.4.

\[ \text{elliptic sine:}\quad \operatorname{sn}(u;k) = \sin\varphi \tag{4.9} \]
\[ \text{elliptic cosine:}\quad \operatorname{cn}(u;k) = \cos\varphi \tag{4.10} \]
and the difference function
\[ \operatorname{dn}(u;k) = \frac{d\varphi}{du} \tag{4.11} \]
Since sin²ϕ + cos²ϕ = 1 we have sn²(u;k) + cn²(u;k) = 1, so
\[ \operatorname{cn}(u;k) = \sqrt{1-\operatorname{sn}^2(u;k)} \tag{4.12} \]
Also
\[ \operatorname{dn}(u;k) = \frac{d\varphi}{du} = \sqrt{1-k^2\sin^2\varphi} = \sqrt{1-k^2\operatorname{sn}^2(u;k)} \tag{4.13} \]
So the three basic Jacobi functions can all be expressed in terms of the elliptic sine. There are 9 other Jacobi elliptic functions, all defined as quotients of these three; however, we will only need
\[ \operatorname{cd}(u;k) = \frac{\operatorname{cn}(u;k)}{\operatorname{dn}(u;k)} \tag{4.14} \]

Often we write sn(u) = sn(u;k) when the modulus k is clear from context, and similarly for the other Jacobi functions. Some useful properties that follow directly from these definitions are outlined in the following lemma. A more in-depth table of values, and also plots of the Jacobi functions, are included in the appendix.

Lemma 4.1. Basic properties of the Jacobi functions

I. sn(0;k) = 0,  sn(K;k) = 1
II. cn(0;k) = 1,  cn(K;k) = 0
III. dn(0;k) = 1,  dn(K;k) = k′

Proof. We verify the identities for sn(u;k), and the rest follows from equations (4.12) and (4.13). When u = 0, the elliptic integral in (4.4) is 0, that is
\[ \int_0^{\varphi} \frac{d\theta}{\sqrt{1-k^2\sin^2\theta}} = 0 \]
This forces ϕ = 0, therefore sn(0;k) = sin 0 = 0. Similarly if u = K, then ϕ = π/2, and so sn(K;k) = sin π/2 = 1.

We like to think of the Jacobi elliptic functions as generalizations of the trigonometric functions like sin x. In fact, when k = 0, the Jacobi elliptic functions degenerate into trig functions. Notice in (4.4) that u(ϕ,0) = ϕ, hence
\[ \operatorname{sn}(u;0) = \sin(u), \qquad \operatorname{cn}(u;0) = \cos(u), \qquad \operatorname{dn}(u;0) = 1 \]
We called the Jacobi functions elliptic, so they must be meromorphic and doubly periodic. It turns out that sn(u) has primitive periods 4K and 2iK′, and simple poles at u = iK′ and u = 2K + iK′ [Akh70, Sec. 25]. We formulate this fact as a theorem for reference.

Theorem 4.1. The Jacobi elliptic function sn(u) has primitive periods 4K and 2iK′, and simple poles at u = iK′ and u = 2K + iK′.

For now we conclude this section by noting that sn(u) is an odd function and both cn(u) and dn(u) are even. To show sn(u) is odd, we apply the change of variable t → −t to the elliptic integral:
\[ -u = -\int_0^{\varphi} \frac{dt}{\sqrt{1-k^2\sin^2 t}} = \int_0^{-\varphi} \frac{dt}{\sqrt{1-k^2\sin^2 t}} \]
Since sine is odd, sn(−u) = sin(−ϕ) = −sin(ϕ) = −sn(u). Evenness of cn(u) and dn(u) follows immediately from (4.12) and (4.13), since
\[ \operatorname{cn}(-u) = \sqrt{1-\operatorname{sn}^2(-u)} = \operatorname{cn}(u) \]
and similarly for dn(u).
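The values in Lemma 4.1 and the degeneration at k = 0 can be confirmed numerically. The following sketch is my own check, using SciPy's `ellipj`, which (like `ellipk`) takes the parameter m = k²:

```python
import numpy as np
from scipy.special import ellipj, ellipk  # both take the parameter m = k^2

k = 0.6
m = k**2
K = ellipk(m)
kp = np.sqrt(1 - m)                  # complementary modulus k'

# ellipj(u, m) returns (sn, cn, dn, phi), phi being the amplitude
sn0, cn0, dn0, _ = ellipj(0.0, m)    # Lemma 4.1: values at u = 0
snK, cnK, dnK, _ = ellipj(K, m)      # Lemma 4.1: values at u = K
print(sn0, cn0, dn0)                 # expect 0, 1, 1
print(snK, cnK, dnK, kp)             # expect 1, 0, k'

# Degeneration k = 0: sn(u;0) = sin u, cn(u;0) = cos u, dn(u;0) = 1
u = 1.234
snu, cnu, dnu, _ = ellipj(u, 0.0)
print(snu - np.sin(u), cnu - np.cos(u), dnu - 1.0)
```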

4.3 The Addition Theorems for the Jacobi Elliptic Functions

Recall the familiar trig identity

\[ \sin(\alpha+\beta) = \sin\alpha\cos\beta + \sin\beta\cos\alpha \]
This identity allows us to express a trig function of a sum of two arguments α, β in terms of trig functions of the individual arguments. Indeed, such an identity proves to be quite useful in the study and applications of trigonometric functions. In this section we will prove an analogous identity for elliptic functions, as well as go over a couple of interesting examples.

Theorem 4.2. The addition theorem for elliptic sine.
\[ \operatorname{sn}(u+v) = \frac{\operatorname{sn}u\operatorname{cn}v\operatorname{dn}v + \operatorname{sn}v\operatorname{cn}u\operatorname{dn}u}{1 - k^2\operatorname{sn}^2u\operatorname{sn}^2v} \tag{4.15} \]
Proof. The following method of proof is due to Darboux and Akhiezer [Akh70, Sec. 28]. Consider the equation
\[ \frac{dx}{\sqrt{(1-x^2)(1-k^2x^2)}} + \frac{dy}{\sqrt{(1-y^2)(1-k^2y^2)}} = 0 \tag{4.16} \]
It has a transcendental integral
\[ \int_0^x \frac{dx}{\sqrt{(1-x^2)(1-k^2x^2)}} + \int_0^y \frac{dy}{\sqrt{(1-y^2)(1-k^2y^2)}} = A \tag{4.17} \]
where A is an arbitrary constant. Our strategy is to find an algebraic integral for (4.16) and compare it to the transcendental integral in (4.17). Proceeding, we let
\[ u = \int_0^x \frac{dx}{\sqrt{(1-x^2)(1-k^2x^2)}} \tag{4.18} \]
\[ v = \int_0^y \frac{dy}{\sqrt{(1-y^2)(1-k^2y^2)}} \tag{4.19} \]
Notice that by inverting these integrals we have x = sn(u) and y = sn(v). Also we have by (4.17)
\[ u + v = A \]
We consider a system of equations equivalent to (4.16), by letting the first term equal dt and the second equal −dt, so then
\[ \frac{dx}{dt} = \sqrt{(1-x^2)(1-k^2x^2)}, \qquad \frac{dy}{dt} = -\sqrt{(1-y^2)(1-k^2y^2)} \tag{4.20} \]
Square both sides:
\[ \left(\frac{dx}{dt}\right)^2 = (1-x^2)(1-k^2x^2), \qquad \left(\frac{dy}{dt}\right)^2 = (1-y^2)(1-k^2y^2) \tag{4.21} \]

Now we differentiate (dx/dt)²

\[ 2\frac{dx}{dt}\frac{d^2x}{dt^2} = -2x(1-k^2x^2)\frac{dx}{dt} - 2k^2x(1-x^2)\frac{dx}{dt} \]
\[ \Rightarrow\quad \frac{d^2x}{dt^2} = x\left(2k^2x^2 - 1 - k^2\right) \]
\[ \Rightarrow\quad \frac{d^2y}{dt^2} = y\left(2k^2y^2 - 1 - k^2\right) \]
From here we have
\[ y\frac{d^2x}{dt^2} - x\frac{d^2y}{dt^2} = 2k^2xy\left(x^2-y^2\right) \]
We recognize the left hand side as a derivative, which gets us to
\[ \frac{d}{dt}\left(y\frac{dx}{dt} - x\frac{dy}{dt}\right) = 2k^2xy\left(x^2-y^2\right) \tag{4.22} \]

Using equations (4.21), we obtain after some labor

\[ \left(y\frac{dx}{dt} + x\frac{dy}{dt}\right)\left(y\frac{dx}{dt} - x\frac{dy}{dt}\right) = y^2\left(\frac{dx}{dt}\right)^2 - x^2\left(\frac{dy}{dt}\right)^2 = y^2 - x^2 + k^2x^4y^2 - k^2x^2y^4 = \left(y^2-x^2\right)\left(1-k^2x^2y^2\right) \tag{4.23} \]
So by dividing (4.22) by (4.23) we have
\[ \frac{\frac{d}{dt}\left(y\frac{dx}{dt} - x\frac{dy}{dt}\right)}{y\frac{dx}{dt} - x\frac{dy}{dt}} = \frac{2k^2xy\left(y\frac{dx}{dt} + x\frac{dy}{dt}\right)}{k^2x^2y^2 - 1} \]
We recognize that both sides are logarithmic derivatives, that is
\[ \frac{d}{dt}\ln\left(y\frac{dx}{dt} - x\frac{dy}{dt}\right) = \frac{d}{dt}\ln\left(k^2x^2y^2 - 1\right) \]

Integrating gives us
\[ y\frac{dx}{dt} - x\frac{dy}{dt} = C\left(1-k^2x^2y^2\right) \]
for some constant C. With (4.20) in mind we obtain the algebraic form of the integral in (4.16):
\[ \frac{y\sqrt{(1-x^2)(1-k^2x^2)} + x\sqrt{(1-y^2)(1-k^2y^2)}}{1-k^2x^2y^2} = C \tag{4.24} \]
From here we can now establish the addition formula for elliptic sine by recalling that the integrals in (4.18) and (4.19) give us x = sn u and y = sn v respectively. Therefore from (4.24)
\[ \frac{\operatorname{sn}v\operatorname{cn}u\operatorname{dn}u + \operatorname{sn}u\operatorname{cn}v\operatorname{dn}v}{1-k^2\operatorname{sn}^2u\operatorname{sn}^2v} = C \tag{4.25} \]

We compare the algebraic integral obtained in (4.24) to the transcendental integral (4.17). Since (4.25) is a consequence of (4.17), (4.18), (4.19), this constant C must be some function of the other constant A, that is C = f(A). Since A = u + v, we have
\[ \frac{\operatorname{sn}u\operatorname{cn}v\operatorname{dn}v + \operatorname{sn}v\operatorname{cn}u\operatorname{dn}u}{1-k^2\operatorname{sn}^2u\operatorname{sn}^2v} = f(u+v) \]
Now we just need to determine this function f. Let v = 0; then sn v = 0 and cn v = dn v = 1, hence we conclude that f(u) = sn u and the result follows.

The addition formulas for the other functions can be derived from Theorem 4.2 by using the simple relations we already discussed; however, we will not go through the lengthy computations.

Corollary 4.1. The addition theorems for cn(u+v), dn(u+v), and cd(u+v):
\[ \operatorname{cn}(u+v) = \frac{\operatorname{cn}u\operatorname{cn}v - \operatorname{sn}u\operatorname{dn}u\operatorname{sn}v\operatorname{dn}v}{1-k^2\operatorname{sn}^2u\operatorname{sn}^2v} \]
\[ \operatorname{dn}(u+v) = \frac{\operatorname{dn}u\operatorname{dn}v - k^2\operatorname{sn}u\operatorname{cn}u\operatorname{sn}v\operatorname{cn}v}{1-k^2\operatorname{sn}^2u\operatorname{sn}^2v} \]
\[ \operatorname{cd}(u+v) = \frac{\operatorname{cd}u\operatorname{cd}v - \operatorname{sn}u\operatorname{sn}v}{1-k^2\operatorname{sn}u\operatorname{sn}v\operatorname{cd}u\operatorname{cd}v} \]
Recall the familiar identity
\[ \sin\left(x+\frac{\pi}{2}\right) = \cos(x) \]
By adding the quarter period π/2 to the argument of the trigonometric sine, we obtain another trigonometric function (cosine) of the same argument. We like to think of the elliptic functions as generalizations of trig functions, and so we investigate what happens when we add the quarter period K to the argument of the elliptic sine. With the addition theorem we see

\[ \operatorname{sn}(u+K) = \frac{\operatorname{sn}u\operatorname{cn}K\operatorname{dn}K + \operatorname{sn}K\operatorname{cn}u\operatorname{dn}u}{1-k^2\operatorname{sn}^2u\operatorname{sn}^2K} = \frac{\operatorname{cn}u\operatorname{dn}u}{1-k^2\operatorname{sn}^2u} = \frac{\operatorname{cn}u}{\operatorname{dn}u} = \operatorname{cd}u \tag{4.26} \]
Because of this identity, some people prefer to call the function cd(u) the elliptic cosine instead of cn(u). Also from the addition theorem we can see sn(u+2K) = −sn(u), similar to sin(x+π) = −sin(x).
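The addition theorem (4.15) and the shift identity (4.26) are easy to confirm numerically. The following sketch is my own check, with arbitrarily chosen u, v and k, again leaning on SciPy's `ellipj`:

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.7
m = k**2
K = ellipk(m)
u, v = 0.4, 0.9

snu, cnu, dnu, _ = ellipj(u, m)
snv, cnv, dnv, _ = ellipj(v, m)

# Addition theorem (4.15)
lhs = ellipj(u + v, m)[0]
rhs = (snu*cnv*dnv + snv*cnu*dnu) / (1 - m*snu**2*snv**2)
print(abs(lhs - rhs))

# Quarter-period shift (4.26): sn(u + K) = cn(u)/dn(u) = cd(u)
print(abs(ellipj(u + K, m)[0] - cnu/dnu))

# Half-period shift: sn(u + 2K) = -sn(u)
print(abs(ellipj(u + 2*K, m)[0] + snu))
```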

Figure 4.2: Plot of cd(x;k) and sn(x;k) showing that cd(x) = sn(x+K).

4.4 Transformations of Jacobi Elliptic Functions

Our goal for this section is to develop the relations that transform an elliptic function of the form y = sn²(u/M;λ), with some constant M, to x = sn²(u;k). More specifically, we investigate what happens when the periods of y are linearly related to the periods of x. Here we borrow some of the discussion from [Akh70, Sec. 35]. First consider
\[ \frac{dx}{\sqrt{4x(1-x)(1-k^2x)}} = M\,\frac{dy}{\sqrt{4y(1-y)(1-\lambda^2 y)}} \tag{4.27} \]
where M is a constant. We are trying to find an algebraic dependence between x and y that transforms the first elliptic differential into the second. To simplify the problem, we confine ourselves to determining the integral of (4.27) that assigns x = 0 to y = 0. That is, we want to determine the algebraic dependence between x and y that follows from the relation

\[ \int_0^x \frac{dt}{\sqrt{4t(1-t)(1-k^2t)}} = M\int_0^y \frac{dt}{\sqrt{4t(1-t)(1-\lambda^2 t)}} \tag{4.28} \]
Now we set
\[ u = \int_0^x \frac{dt}{\sqrt{4t(1-t)(1-k^2t)}} \]
We now replace (4.28) with the parametric equations

\[ x = \operatorname{sn}^2(u;k), \qquad y = \operatorname{sn}^2(u/M;\lambda) \tag{4.29} \]
since with the Riemann form of the elliptic integral (4.5), the upper limit x = sin²ϕ = sn²(u;k).

The problem now is to determine the conditions under which the elliptic functions x and y are connected by an algebraic relation. We will not go too deep into the details of this problem, but for the interested reader they can be readily found in Akhiezer's text [Akh70, Ch. 6].

4.4.1 The First Degree Transformation

There are actually two first degree transformations; however, we will only discuss one, since it is far more useful for our purposes than the other. This analysis has been adapted from [Akh70, Sec. 37].
We know the periods of x = sn²(u;k) are 2K and 2iK′; similarly we denote by 2L and 2iL′ the periods of sn²(v;λ). Then y = sn²(u/M;λ) has periods 2ML and 2iML′. This transformation results from setting
\[ ML = iK', \qquad iML' = -K \]
So going from x to y we essentially interchange the roles of K and K′.
Consider the ratio y/x, which is a rational function of sn²(u;k). At u = iK′, we have y = sn²(L;λ) = 1, and x = sn²(iK′;k) has a second order pole; hence at u = iK′, y/x has a second order zero. Similarly, when u = K, y/x has a second order pole. Thus
\[ \frac{y}{x} = \frac{A}{\operatorname{sn}^2(u;k) - 1} \]
for some constant A (because both sides have matching poles, zeros and periods). If we let u tend to zero, on the right we have −A, and on the left after applying L'Hospital's rule twice we obtain 1/M² (see the appendix for differentiation formulas). Explicitly, we found that 1/M² = −A. Continuing,
\[ y = \frac{A\operatorname{sn}^2(u;k)}{\operatorname{sn}^2(u;k) - 1} \]
Let u = iK′, and we see that 1 = A, hence M = ±i. Now set u = −K + iK′, and we have
\[ \frac{\operatorname{sn}^2(-K+iK';k)}{\operatorname{sn}^2(-K+iK';k) - 1} = \frac{\operatorname{sn}^2(K+iK';k)}{\operatorname{sn}^2(K+iK';k) - 1} = \operatorname{sn}^2(L+iL';\lambda) \]
by periodicity. Now since sn(K+iK′;k) = 1/k we obtain
\[ \frac{1}{\lambda^2} = \frac{1/k^2}{1/k^2 - 1} \]
Solving for λ yields λ = k′. Let M = −i, so we have
\[ \operatorname{sn}^2(iu;k') = -\frac{\operatorname{sn}^2(u;k)}{1-\operatorname{sn}^2(u;k)} \]
And we have obtained the following theorem.

Theorem 4.3. The imaginary transformation of elliptic sine:

\[ \operatorname{sn}(iu;k') = i\,\frac{\operatorname{sn}(u;k)}{\operatorname{cn}(u;k)} \tag{4.30} \]

The formulas for cn(iu), dn(iu) and cd(iu) follow from (4.30), and they are

\[ \operatorname{cn}(iu;k') = \frac{1}{\operatorname{cn}(u;k)}, \qquad \operatorname{dn}(iu;k') = \frac{\operatorname{dn}(u;k)}{\operatorname{cn}(u;k)}, \qquad \operatorname{cd}(iu;k') = \frac{1}{\operatorname{dn}(u;k)} \]

These transformations show us how to deal with purely imaginary arguments of the elliptic functions in terms of real variables. So this transformation, combined with the addition theorems, allows us to express the Jacobi elliptic functions of any complex variable in terms of real variables.
The other first degree transformation results from letting iML′ = K + iK′ and ML = K; we refer the interested reader to [Akh70, Sec. 36] for the details of this transformation.
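SciPy's `ellipj` only accepts real arguments, but Theorem 4.3 concerns an imaginary one. mpmath's `ellipfun` (assumed available here; it also takes the parameter m = k²) does accept complex u, so the transformation and its companion formulas can be checked directly:

```python
from mpmath import ellipfun, mpc, mpf, sqrt, fabs

k = mpf('0.6')
kp = sqrt(1 - k**2)          # complementary modulus k'
u = mpf('0.8')

# mpmath: ellipfun(kind, u, m) with parameter m = k^2; u may be complex
lhs = ellipfun('sn', mpc(0, u), kp**2)                        # sn(iu; k')
rhs = mpc(0, 1) * ellipfun('sn', u, k**2) / ellipfun('cn', u, k**2)
sn_diff = fabs(lhs - rhs)                                     # Theorem 4.3

# cn(iu;k') = 1/cn(u;k)  and  dn(iu;k') = dn(u;k)/cn(u;k)
cn_diff = fabs(ellipfun('cn', mpc(0, u), kp**2) - 1/ellipfun('cn', u, k**2))
dn_diff = fabs(ellipfun('dn', mpc(0, u), kp**2)
               - ellipfun('dn', u, k**2)/ellipfun('cn', u, k**2))
print(sn_diff, cn_diff, dn_diff)
```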

4.4.2 The nth Degree Transformation

Here we will skip most of the details, and instead provide the framework for the second principal transformation so that the results have some meaning. The second principal nth degree transformation entails division of one of the periods by an integer n. We consider the following relations:
\[ x = \operatorname{sn}(u;k), \qquad y = \operatorname{sn}\left(\frac{u}{M};\lambda\right), \qquad L = \frac{K}{M}, \qquad L' = \frac{K'}{nM} \]
Again we consider the ratio y/x, which is an even function of u. Also we find that the periods of y/x are 2K and 2iK′, since
\[ \operatorname{sn}\left(u+2K+2iK';k\right) = -\operatorname{sn}(u;k) = -x \]
\[ \operatorname{sn}\left(\frac{u+2K+2iK'}{M};\lambda\right) = -\operatorname{sn}\left(\frac{u}{M};\lambda\right) = -y \]

Then since y/x has the same periods as sn2(u;k), it must be a rational function of sn2(u;k). From here we will skip ahead to the results, and refer the more interested reader to Akhiezer’s text [Akh70, Sec. 40].

Theorem 4.4 (Second Principal nth degree transformation). By subjecting the periods to the following transformations

\[ L = \frac{K}{M}, \qquad L' = \frac{K'}{nM} \]
We have
\[ \operatorname{sn}\left(\frac{u}{M};\lambda\right) = \frac{\operatorname{sn}(u;k)}{M}\prod_{r=1}^{\lfloor n/2\rfloor} \frac{1 + \operatorname{sn}^2(u;k)/c_{2r}}{1 + \operatorname{sn}^2(u;k)/c_{2r-1}} \tag{4.31} \]

29 with

\[ M = \prod_{r=1}^{\lfloor n/2\rfloor} \frac{\operatorname{sn}^2\left(\frac{2r-1}{n}K';k'\right)}{\operatorname{sn}^2\left(\frac{2r}{n}K';k'\right)} \tag{4.32} \]
\[ \lambda = \prod_{r=1}^{n} \frac{\theta_4^2\left(\frac{rK'}{nK'};k\right)}{\theta_4^2\left(\frac{(2r-1)K'}{2nK'};k\right)} \tag{4.33} \]
\[ c_r = \frac{\operatorname{sn}^2\left(\frac{r}{n}K';k'\right)}{\operatorname{cn}^2\left(\frac{r}{n}K';k'\right)} \tag{4.34} \]

where ⌊x⌋ denotes the floor function, and θ₄ is one of the Jacobi theta functions, which we will discuss in the next section. The first principal nth degree transformation theorem is included below, as it will be useful later.

Theorem 4.5 (First Principal nth degree transformation). By subjecting the periods to the following relations

\[ L = \frac{K}{nM}, \qquad L' = \frac{K'}{M} \]
We have

\[ \lambda = k^n \prod_{r=1}^{\lfloor n/2\rfloor} c_{2r-1}^2 \tag{4.35} \]
\[ M = \prod_{r=1}^{\lfloor n/2\rfloor} \frac{c_{2r-1}}{c_{2r}} \tag{4.36} \]
\[ c_r = \operatorname{sn}^2\left(\frac{r}{n}K;k\right) \tag{4.37} \]
If n is odd,
\[ \operatorname{sn}\left(\frac{u}{M};\lambda\right) = \frac{\operatorname{sn}(u;k)}{M}\prod_{r=1}^{(n-1)/2} \frac{1 - \operatorname{sn}^2(u;k)/c_{2r}}{1 - k^2 c_{2r}\operatorname{sn}^2(u;k)} \tag{4.38} \]
If n is even,
\[ \operatorname{sn}\left(\frac{u}{M}+L;\lambda\right) = \prod_{r=1}^{n/2} \frac{1 - \operatorname{sn}^2(u;k)/c_{2r-1}}{1 - k^2 c_{2r-1}\operatorname{sn}^2(u;k)} \tag{4.39} \]

4.5 The Jacobi Theta Functions

The Jacobi theta functions are periodic, entire functions that can be defined by rapidly converging Fourier series (about 4 terms should suffice for most calculations). They depend on a parameter q with |q| < 1; we define [BE55, Sec 13.19]

Definition 4.5.

\[ \theta_1(v,q) = 2q^{1/4}\sum_{j=0}^{\infty}(-1)^j q^{j(j+1)}\sin\left((2j+1)\pi v\right) \tag{4.40} \]
\[ \theta_2(v,q) = 2q^{1/4}\sum_{j=0}^{\infty} q^{j(j+1)}\cos\left((2j+1)\pi v\right) \tag{4.41} \]
\[ \theta_3(v,q) = 1 + 2\sum_{j=1}^{\infty} q^{j^2}\cos\left(2j\pi v\right) \tag{4.42} \]
\[ \theta_4(v,q) = 1 + 2\sum_{j=1}^{\infty}(-1)^j q^{j^2}\cos\left(2j\pi v\right) \tag{4.43} \]
Sometimes the notation θ₀ = θ₄ is used.

q l 2l 5 15l 9 150l 13 (4.46) = + + + + ··· For the nth degree transformation, we used the following notation in the computation of λ, θ4(w;k) (a semicolon instead of a comma). This was deliberate to emphasize that we are given k and we must compute q from (4.45) and (4.46) before we can evaluate the theta func- tion. Another practical benefit of the Jacobi theta functions is that they provide a means for efficient computation of the Jacobi elliptic functions. Let x v = 2K then by [BE55, Sec. 13.20] 1 θ1(v;k) sn(x;k) (4.47) = pk θ4(v;k)

Also k can be uniquely determined by q with the following equations:

\[ \sqrt{k} = \frac{\theta_2(0,q)}{\theta_3(0,q)}, \qquad \sqrt{k'} = \frac{\theta_4(0,q)}{\theta_3(0,q)} \]

The other Jacobi functions can be expressed similarly:
\[ \operatorname{cn}(x;k) = \sqrt{\frac{k'}{k}}\,\frac{\theta_2(v;k)}{\theta_4(v;k)}, \qquad \operatorname{dn}(x;k) = \sqrt{k'}\,\frac{\theta_3(v;k)}{\theta_4(v;k)} \]

Moreover, from [BE55, Sec. 13.20] we have
\[ K = \frac{\pi}{2}\,\theta_3^2(0;k), \qquad K' = \frac{K}{\pi}\log\frac{1}{q} \]
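The truncated series (4.40)-(4.43), the nome relation (4.44), the series (4.46), and the sn representation (4.47) fit together in a few lines of code. The sketch below is illustrative (the function names are mine), with SciPy used only as an independent reference:

```python
import numpy as np
from scipy.special import ellipk, ellipj

def theta1(v, q, terms=12):
    # eq. (4.40), truncated
    j = np.arange(terms)
    return 2*q**0.25 * np.sum((-1)**j * q**(j*(j+1)) * np.sin((2*j+1)*np.pi*v))

def theta4(v, q, terms=12):
    # eq. (4.43), truncated
    j = np.arange(1, terms)
    return 1 + 2*np.sum((-1)**j * q**(j*j) * np.cos(2*j*np.pi*v))

k = 0.5
K, Kp = ellipk(k**2), ellipk(1 - k**2)
q = np.exp(-np.pi * Kp / K)                     # nome, eq. (4.44)

# eqs. (4.45)-(4.46): series for q in terms of l
l = 0.5 * (1 - np.sqrt(np.sqrt(1 - k**2))) / (1 + np.sqrt(np.sqrt(1 - k**2)))
q_series = l + 2*l**5 + 15*l**9 + 150*l**13
print(abs(q - q_series))

# eq. (4.47): sn(x;k) = theta1(v;k) / (sqrt(k) theta4(v;k)),  v = x/(2K)
x = 0.9
sn_theta = theta1(x/(2*K), q) / (np.sqrt(k) * theta4(x/(2*K), q))
print(abs(sn_theta - ellipj(x, k**2)[0]))
```

Both differences come out near machine precision, which is the "about 4 terms should suffice" remark made quantitative.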

5 Elliptic Rational Function

The elliptic rational function is the approximating function used for the elliptic filter, and the key to understanding the elliptic filter lies with this function. There are many equivalent ways of defining and formulating the elliptic rational function, all of which require use of the Jacobi elliptic functions. It is also known as the Chebyshev rational function [Dan74], or the Chebyshev–Blaschke product [NT13]. The mathematics behind this function date back to Zolotarev, a student of Chebyshev, in 1877 [Zol77]; it was Cauer who later used Zolotarev's ideas to design the elliptic filter for signal processing [Cau58]. As such, the elliptic filter is sometimes called the Zolotarev–Cauer filter.
In fact these various sources all provide different definitions and derivations of the elliptic rational function, which do not appear to be equivalent. This caused a lot of confusion, as the functions they derived were used for the same purposes and had the same properties, but beyond that, it appeared as if there were no other connections. For example, in Lutovac's text [LTE01, Sec. 12.6] the elliptic rational function is defined to be
\[ R_n(k,x) = \operatorname{cd}\left(\frac{u}{M};\lambda\right), \qquad x = \operatorname{cd}(u;k) \tag{5.1} \]
where M is a scaling factor given in Theorem 4.5. However, in Daniels' text [Dan74, Sec. 5.12], the elliptic rational function is derived to be

\[ R_n(k,x) = r_1\, x \prod_{r=1}^{(n-1)/2} \frac{x^2 - \operatorname{sn}^2\left(\frac{2r}{n}K;k\right)}{x^2 - \left[k\operatorname{sn}\left(\frac{2r}{n}K;k\right)\right]^{-2}} \quad \text{if } n \text{ odd} \tag{5.2} \]
\[ R_n(k,x) = r_2 \prod_{r=1}^{n/2} \frac{x^2 - \operatorname{sn}^2\left(\frac{2r-1}{n}K;k\right)}{x^2 - \left[k\operatorname{sn}\left(\frac{2r-1}{n}K;k\right)\right]^{-2}} \quad \text{if } n \text{ even} \tag{5.3} \]
where r₁ and r₂ are normalizing constants chosen so that Rₙ(k,1) = 1.
The two functions do not seem to be equivalent, and even after applying the nth degree transformation to Lutovac's form, the resulting rational function looks similar to Daniels' function, but is not quite algebraically identical. These are standard textbooks that an engineer would use to learn about signal processing (or to use as a reference), and it can be frustrating to see such apparent discrepancies with no clear resolution. So where do these differences come from, and what exactly is the elliptic rational function? Our goal for this chapter will be to define and construct the elliptic rational function, and establish the connections between the various results given by different authors. We will begin by examining some of the problems Zolotarev proposed and solved, and we will define the elliptic rational function from the solution to one of his problems.

5.1 Statement of the Problems

We define the deviation of a function g(x) from a function f(x) on some interval I as
\[ \sup_{I}\left|f(x) - g(x)\right| \]

Consider the following problems adapted from [Akh70, Sec. 50].

Problem A: Find the rational function y(t) = ϕ(t)/ψ(t) (with ϕ and ψ polynomials of degree at most n) that deviates the least from the function
\[ \operatorname{sgn} t = \begin{cases} -1 & : t < 0 \\ \phantom{-}1 & : t > 0 \end{cases} \]
on the intervals [−1/κ, −1] ∪ [1, 1/κ] (with 0 < κ < 1).

Problem B: Consider rational functions z(x) = f(x)/g(x) (with f and g polynomials of degree at most n) that satisfy |z(x)| ≥ 1 on the intervals (−∞, −1/k] and [1/k, ∞) (with 0 < k < 1). Find the one that deviates the least from 0 on [−1,1].

The last problem we will consider is

Problem C: Of all real rational functions
\[ Y(X) = \sqrt{X}\,\frac{\Psi(X)}{\Phi(X)} \]
where Ψ and Φ are polynomials of degree r, find the one that deviates the least from 1 on [1, 1/κ²] with 0 < κ < 1.

We now show that problems A, B and C are actually equivalent: given a solution to one, we can construct a solution to the others. The following discussion is adapted from [Akh70, Sec. 50].
First suppose that z(x) = f₀(x)/g₀(x) is a solution to Problem B. We can see that f₀(x) must be a polynomial of degree n, because if its degree were less than n, we could consider z̃ = kx f₀(x)/g₀(x). Then z̃ is also a rational function with the degree of the numerator and denominator at most n, and we also have the inequality |z̃(x)| ≥ 1 on the intervals (−∞, −1/k] and [1/k, ∞). However
\[ \max_{[-1,1]}|\tilde z(x)| = \max_{[-1,1]}\left|kx\,\frac{f_0(x)}{g_0(x)}\right| \le k \max_{[-1,1]}\left|\frac{f_0(x)}{g_0(x)}\right| < \max_{[-1,1]}|z(x)| \]
contradicting our assumption that z(x) is a solution to problem B.
From the statement of the problem we see that on the intervals (−∞, −1/k] and [1/k, ∞), min |z| = 1. Now let
\[ \max_{[-1,1]}|z(x)| = m \tag{5.4} \]
We see that m < 1, shown by the function z(x) = kx which solves problem B in the case n = 1. Let
\[ y(t) = \frac{(1-m)\left(z(t)-\sqrt m\right)}{(1+m)\left(z(t)+\sqrt m\right)} \tag{5.5} \]
\[ t = \frac{\left(1+\sqrt k\right)\left(x\sqrt k - 1\right)}{\left(1-\sqrt k\right)\left(x\sqrt k + 1\right)} \tag{5.6} \]
\[ \kappa = \left(\frac{1-\sqrt k}{1+\sqrt k}\right)^2 \tag{5.7} \]

We show that y(t) solves problem A. If −1 ≤ x ≤ 1 then it is easy to check that t ranges from −1/κ to −1; similarly, if |x| ≥ 1/k then 1 ≤ t ≤ 1/κ. Now we have for the first interval
\[ m = \max_{x\in[-1,1]}|z(x)| = \max_{t\in[-1/\kappa,-1]}|z(t)| \]
And on this interval we have
\[ y - \operatorname{sgn} t = y + 1 = \frac{2\left(z+m\sqrt m\right)}{(1+m)\left(z+\sqrt m\right)} \]
We note that y + 1 is an increasing function of z since
\[ \frac{dy}{dz} = \frac{2\sqrt m\,(1-m)}{(1+m)\left(z+\sqrt m\right)^2} > 0 \]
therefore
\[ \max_{[-1/\kappa,-1]}|y+1| = \frac{2\left(m+m\sqrt m\right)}{(1+m)\left(m+\sqrt m\right)} = \frac{2\sqrt m}{1+m} \tag{5.8} \]
Similarly we have in the second interval

\[ 1 = \min_{|x|\ge 1/k}|z(x)| = \min_{t\in[1,1/\kappa]}|z(t)| \]
On this interval
\[ y - 1 = -\frac{2\left(mz+\sqrt m\right)}{(1+m)\left(z+\sqrt m\right)} \]
By the same reasoning we see that y − 1 is a decreasing function, so its maximum modulus on this interval is
\[ \max_{[1,1/\kappa]}\left|y-\operatorname{sgn}t\right| = \max_{[1,1/\kappa]}\left|y-1\right| = \frac{2\sqrt m}{1+m} \tag{5.9} \]

So we found that the deviation of y(t) from sgn t on the intervals required in problem A is
\[ \mu = \frac{2\sqrt m}{1+m} \tag{5.10} \]
Also we see that μ is an increasing function of m on (0,1) since
\[ \frac{d\mu}{dm} = \frac{1-m}{\sqrt m\,(1+m)^2} > 0 \]
So since m was the smallest deviation for problem B, we see that μ is the smallest deviation for problem A, and therefore y(t) is a solution to problem A. Conversely, given a solution to problem A, we can apply the inverse transformations in (5.5), (5.6), (5.7) to find a solution to problem B.

We now show that problem A and problem C are equivalent. Suppose Y(X) = √X Ψ(X)/Φ(X) is a solution to problem C. Let X = t²; then the interval X ∈ [1, 1/κ²] becomes t ∈ [−1/κ, −1] ∪ [1, 1/κ], and our function transforms to
\[ Y = t\,\frac{\Psi\left(t^2\right)}{\Phi\left(t^2\right)} \]

And so we have
\[ \max_{X\in[1,1/\kappa^2]}\left|1-\sqrt X\,\frac{\Psi(X)}{\Phi(X)}\right| = \max_{t\in[-1/\kappa,-1]\cup[1,1/\kappa]}\left|\operatorname{sgn}t - t\,\frac{\Psi\left(t^2\right)}{\Phi\left(t^2\right)}\right| \]
And we have arrived at the solution to problem A with n = 2r + 1 (the numerator ψ(t) has degree 2r + 1 and the denominator ϕ(t) has degree 2r). Similarly we can show the converse; that is, given a solution to problem A we can apply the inverse transformation to obtain a solution to problem C. So now everything rests upon the solution to problem C.

5.2 Solution to Problem C

We will apply the following theorem due to Chebyshev [Akh70, Sec. 51]

Theorem 5.1 (Chebyshev). Let [a,b] be a finite closed interval, and let f(x) and s(x) be continuous functions on this interval, with s(x) ≠ 0. We consider expressions of the form
\[ W(x) = s(x)\,\frac{q_0 + q_1 x + \cdots + q_n x^n}{p_0 + p_1 x + \cdots + p_m x^m} \]
with m, n given. Of these functions W(x) there exists one deviating the least from f(x) on [a,b]. In particular, if this function has the form
\[ Q(x) = s(x)\,\frac{B(x)}{A(x)} = s(x)\,\frac{b_0 + b_1 x + \cdots + b_{n-\nu}x^{n-\nu}}{a_0 + a_1 x + \cdots + a_{m-\mu}x^{m-\mu}} \]
with 0 ≤ μ ≤ m, 0 ≤ ν ≤ n, a_{m−μ} ≠ 0 and B(x)/A(x) irreducible, then the number of points of [a,b] at which |f(x) − Q(x)| takes its maximal value is not less than m + n + 2 − min{μ,ν}. This property completely characterizes the function Q(x).

The most important part of the theorem for us is its assertion that |f(x) − Q(x)| achieves its maximum value at least m + n + 2 − d times in the interval [a,b], where d = min{μ,ν}. In fact the converse holds too: if we find a function Q(x) such that |f(x) − Q(x)| achieves its maximum deviation m + n + 2 − d times in the interval [a,b], then Q(x) deviates the least from f(x) on [a,b]. This fact is what we will need to verify our solution.
Now we apply this to problem C, where the interval is [1, 1/k²], f(X) = 1, s(X) = √X and m = n = r. We present the solution in parametric form and show that this indeed solves problem C.

\[ X = \operatorname{sn}^2(u;k), \qquad Y = \frac{2\lambda}{1+\lambda}\,\operatorname{sn}\left(\frac{u}{M};\lambda\right) \tag{5.11} \]
with M given in Theorem 4.4.

Proof. Adapted from [Akh70, Sec. 51]. Let 4L and 2iL′ be the periods of sn(v;λ), and 4K, 2iK′ denote the periods of sn(u;k); we require L = K/M and L′ = K′/((2r+1)M). By applying the

nth degree transformation (Theorem 4.4) we see that Y is indeed a rational function of the required form:
\[ Y = \frac{2\lambda}{1+\lambda}\,\frac{\operatorname{sn}(u;k)}{M}\prod_{\alpha=1}^{r} \frac{1+\operatorname{sn}^2(u;k)/c_{2\alpha}}{1+\operatorname{sn}^2(u;k)/c_{2\alpha-1}} \]
with c_α given by (4.34). By substituting in X = sn²(u;k) we see Y is of the required form:
\[ Y = \frac{2\lambda}{1+\lambda}\,\frac{\sqrt X}{M}\prod_{\alpha=1}^{r} \frac{1+X/c_{2\alpha}}{1+X/c_{2\alpha-1}} \]
Now consider the difference ∆(X) = 1 − Y as X runs along the interval [1, 1/k²]. To do this we will discover the values of u that keep X in this interval, and then examine ∆(X) for these values. Let u = K + iv, so
\[ X = \operatorname{sn}^2(K+iv;k) = \frac{\operatorname{cn}^2(iv;k)}{\operatorname{dn}^2(iv;k)} = \frac{1}{\operatorname{dn}^2(v;k')} \]
Let v increase from 0 to K′. Then dn²(v;k′) decreases from 1 to dn²(K′;k′) = 1 − (k′)² = k², therefore X increases from 1 to 1/k² as desired. Now we check what happens to ∆(X) on this interval:

\[ \Delta(X) = 1 - \frac{2\lambda}{1+\lambda}\,\operatorname{sn}\left(\frac{K+iv}{M};\lambda\right) = 1 - \frac{2\lambda}{1+\lambda}\,\frac{1}{\operatorname{dn}(v/M;\lambda')} \]

Now when v increases from 0 to K′, w = v/M increases from 0 to (2r+1)L′. We know that dn(w;λ′) always lies between λ and 1; it attains its maximum and minimum when w = 0, L′, 2L′, ..., 2rL′, (2r+1)L′, where respectively dn(w;λ′) = 1, λ, 1, λ, ..., 1, λ. The values of ∆(X) at these points are
\[ \frac{1-\lambda}{1+\lambda},\ -\frac{1-\lambda}{1+\lambda},\ \frac{1-\lambda}{1+\lambda},\ \ldots,\ -\frac{1-\lambda}{1+\lambda} \]
So 1 − Y takes on its maximum modulus on [1, 1/k²] with alternating signs 2r + 2 successive times. Therefore Chebyshev's Theorem tells us (note μ = ν = 0) that Y deviates the least from 1 on the interval [1, 1/k²].

Plots of the solutions to these problems can be found in the Appendix.

5.3 Elliptic Rational Function

Recall the properties of the approximating function P(ω) given in Section 1.3: |P(ω)| ≤ 1 in [−1,1], and outside this interval we want P(ω) to grow large. Notice the similarities to the solution z(x) of problem B; in [−1,1] we have |z(x)| ≤ m, where
\[ m = \max_{[-1,1]}|z(x)| \]

and on the intervals (−∞, −1/k], [1/k, ∞) (for 0 < k < 1) we have |z| ≥ 1. If we simply multiply z(x) by 1/m, we have |z(x)/m| ≤ 1 in [−1,1], and |z/m| ≥ 1/m on (−∞, −1/k] ∪ [1/k, ∞). Since m is very small, 1/m is very large, and we have exactly what we need.

Definition 5.1. Let zₙ(k,x) be the solution to problem B with the degree of the numerator and denominator at most n, the parameter 0 < k < 1 indicating the intervals (−∞, −1/k] ∪ [1/k, ∞) where |zₙ(k,x)| ≥ 1, and with m = max |zₙ(k,x)| on [−1,1]. The elliptic rational function is:

\[ R_n(k,x) = \frac{z_n(k,x)}{m} \tag{5.12} \]
The way we define the elliptic rational function is very similar to one way of defining the Chebyshev polynomials. Consider the following problem:

Of all polynomials p(x) of degree n with leading coefficient 1, we desire the one which deviates the least from 0 on [−1,1].

The solution to this problem is
\[ p(x) = 2^{1-n}\cos(n\arccos x) \tag{5.13} \]
where the maximum deviation from 0 is ν = 2^{1−n} [Akh70, Sec. 52]. We then define the Chebyshev polynomial as Tₙ(x) = p(x)/ν = cos(n arccos x). We defined the elliptic rational function in a similar fashion based on the solution to problem B. There is, however, one slight problem with this definition. From the previous section we can construct the solution to problem B from problems A and C, but only for odd degree n. It is easy to show that the solution to problem A must be an odd function, and so the degree of the numerator and denominator is at most n, which is odd. Thus when we construct the solution to problem B based on the solution to A, we only obtain a solution for odd n. So we seek a construction of the solution to problem B which does not depend on the parity of n.

Lemma 5.1. The solution to problem B can be represented parametrically as
\[ z_n(k,x) = \lambda\operatorname{cd}\left(\frac{u}{M};\lambda\right), \qquad x = \operatorname{cd}(u;k) \tag{5.14} \]
where L = K/(nM) and L′ = K′/M as in the first principal nth degree transformation (Theorem 4.5). The maximum deviation of z from 0 on [−1,1] is m = λ [NT13, Sec. 3.2.5].

Proof. First we prove that the maximum deviation from 0 on [−1,1] is m = λ. Let u range from 0 to 2K. Then
\[ x = \operatorname{cd}(u;k) = \operatorname{sn}(u+K;k) \tag{5.15} \]
At u = 0, x = sn K = 1. As u increases to 2K, sn(u+K) decreases to −1, and hence x ∈ [−1,1]. Also
\[ z_n(k,x) = \lambda\operatorname{cd}\left(\frac{u}{M};\lambda\right) = \lambda\operatorname{sn}\left(\frac{u}{M}+L;\lambda\right) \tag{5.16} \]

Consider u = K; then z = λ sn((n+1)L;λ) = λ, 0, −λ, 0 if n ≡ 0, 1, 2, 3 mod 4 respectively. Since the elliptic sine is absolutely bounded by 1 for real arguments, we see that max |z| = λ on [−1,1].
Also we can easily see that |zₙ(k,x)| ≥ 1 on the intervals (−∞, −1/k] ∪ [1/k, ∞). Let u = iv and have v range from K′ to K′ + iK, so
\[ x = \operatorname{cd}(iv;k) = \frac{1}{\operatorname{dn}(v;k')} \tag{5.17} \]
At v = K′, dn(v;k′) = k, so x = 1/k. As v runs to K′ + iK, dn(v;k′) decreases until dn(K′+iK;k′) = 0; hence x increases from 1/k to +∞. For z we have
\[ z = \lambda\operatorname{cd}\left(\frac{iv}{M};\lambda\right) = \frac{\lambda}{\operatorname{dn}(v/M;\lambda')} \tag{5.18} \]

At v = K′ we have
\[ z = \frac{\lambda}{\operatorname{dn}\left(\frac{K'}{M};\lambda'\right)} = \frac{\lambda}{\operatorname{dn}(L';\lambda')} = 1 \tag{5.19} \]

And as v runs to K′ + iK, dn(v/M;λ′) oscillates between λ and −λ, ending at v = K′ + iK, which gives
\[ \operatorname{dn}\left(\frac{K'+iK}{M};\lambda'\right) = \operatorname{dn}\left(L'+inL;\lambda'\right) = \lambda,\ 0,\ -\lambda,\ 0 \]
if n mod 4 = 0, 1, 2, 3 respectively. Hence on [1/k, ∞), |z| ≥ 1, and a similar argument will show the same for the interval (−∞, −1/k] (let v range from K′ + 2iK to K′ + 3iK).
Now to show that zₙ(k,x) solves problem B, we will instead show that yₙ(κ,t) solves problem A, with

\[ y_n(\kappa,t) = \frac{(1-\lambda)\left(z-\sqrt\lambda\right)}{(1+\lambda)\left(z+\sqrt\lambda\right)} \tag{5.20} \]
\[ t = \frac{\left(1+\sqrt k\right)\left(x\sqrt k - 1\right)}{\left(1-\sqrt k\right)\left(x\sqrt k + 1\right)} \tag{5.21} \]
\[ \kappa = \left(\frac{1-\sqrt k}{1+\sqrt k}\right)^2 \tag{5.22} \]
These are the same relations discussed in Section 5.1 (with m = λ), and by that previous discussion, if we show y solves problem A, then z solves problem B. We want to show that |y − sgn t| is minimax on [−1/κ, −1] ∪ [1, 1/κ]. Let x ∈ [−1,1]; we found earlier that this implies t ∈ [−1/κ, −1] (see Section 5.1). Now consider the difference y − sgn t = y + 1 on this interval:
\[ y + 1 = \frac{2\left(z+\lambda\sqrt\lambda\right)}{(1+\lambda)\left(z+\sqrt\lambda\right)} = \frac{2\left(\lambda\operatorname{cd}\left(\frac{u}{M};\lambda\right)+\lambda\sqrt\lambda\right)}{(1+\lambda)\left(\lambda\operatorname{cd}\left(\frac{u}{M};\lambda\right)+\sqrt\lambda\right)} \]

Since x ∈ [−1,1], we have u ∈ [0,2K], and so u/M ∈ [0,2nL]. The maximum deviation here is
\[ \mu = \frac{2\sqrt\lambda}{1+\lambda} \]
and we reach this maximum whenever cd(u/M;λ) = ±1. This happens when u/M is an even multiple of L, i.e. u/M = 2jL where j is an integer. Since LM = K/n, we see
\[ u = \frac{2j}{n}K \]
for j = 0, ..., n. Therefore |y − sgn t| attains its maximum n + 1 times on the interval [−1/κ, −1]. A similar argument will show that |y − sgn t| attains its maximum n + 1 times on the other interval [1, 1/κ] as well (let |x| ≥ 1/k). Therefore by Chebyshev's Theorem, yₙ(κ,t) deviates the least from sgn t on the required intervals.

From Lemma 5.1 and Definition 5.1 we have the following theorem.

Theorem 5.2 (The Elliptic Rational Function). Denote the periods of sn(v;λ) by 4L and 2iL′, with L = K/(nM) and L′ = K′/M (as in Theorem 4.5). Then
\[ R_n(k,x) = \operatorname{cd}\left(\frac{u}{M};\lambda\right), \qquad x = \operatorname{cd}(u;k) \tag{5.23} \]
Often the notation ξ = 1/k is used, and this parameter ξ is called the selectivity factor. Since λ is determined by k and n, the notation Lₙ(ξ) = 1/λ is often used, and we call Lₙ(ξ) the discrimination factor [LTE01, Sec. 12.6]. We will avoid this notation, since we have been denoting the complete elliptic integral with modulus λ by L, and this could cause confusion.
We can show that Rₙ(k,x) is a rational function of polynomials. Indeed, suppose n is even, as the case where n is odd is nearly identical. From the nth degree transformation we can compute the constants M and λ (see Theorem 4.5), as well as express Rₙ(k,x) as a rational function of the required form. Proceeding,
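The relations L = K/(nM), L′ = K′/M pin λ down through the ratio L′/L = nK′/K; in terms of the nome (4.44) this reads q_λ = qⁿ, after which λ is recovered from √λ = θ₂(0,q_λ)/θ₃(0,q_λ). The sketch below is my own distillation of that route (the function names are mine, and the truncation lengths are a pragmatic choice), not a procedure spelled out in the text:

```python
import numpy as np
from scipy.special import ellipk

def theta2_0(q, terms=12):
    # theta_2(0, q) from eq. (4.41)
    j = np.arange(terms)
    return 2*q**0.25 * np.sum(q**(j*(j+1)))

def theta3_0(q, terms=12):
    # theta_3(0, q) from eq. (4.42)
    j = np.arange(1, terms)
    return 1 + 2*np.sum(q**(j*j))

def lam(n, k):
    """Modulus lambda for degree n: the nome of lambda is the nth power of the nome of k."""
    K, Kp = ellipk(k**2), ellipk(1 - k**2)
    q = np.exp(-np.pi * Kp / K)       # nome of k, eq. (4.44)
    ql = q**n                         # from L'/L = n K'/K
    return (theta2_0(ql) / theta3_0(ql))**2   # sqrt(k) = theta2/theta3 at v = 0

k = 0.5
print(lam(1, k))       # n = 1 gives lambda = k
print(1 / lam(3, k))   # the discrimination factor 1/lambda for n = 3
```

For n = 1 the construction returns λ = k, and λ shrinks rapidly with n, which is why 1/λ (the discrimination factor) grows so quickly.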

\[ R_n(k,x) = \operatorname{cd}\left(\frac{u}{M};\lambda\right) = \operatorname{sn}\left(\frac{u}{M}+L;\lambda\right) = \prod_{r=1}^{n/2} \frac{1-\operatorname{sn}^2(u;k)/c_{2r-1}}{1-k^2c_{2r-1}\operatorname{sn}^2(u;k)} \tag{5.24} \]
where
\[ c_r = \operatorname{sn}^2\left(\frac{rK}{n};k\right) \tag{5.25} \]

We want to express R_n(k,x) as a rational function of x, but what we have is a rational function of sn(u;k). The trick is to express x = cd(u;k) in terms of sn. Observe

x² = cd²(u;k) = (1 − sn²(u;k)) / (1 − k² sn²(u;k))

Solving for sn²(u;k),

sn²(u;k) = (x² − 1) / (x²k² − 1)

Substituting back into (5.24),

R_n(k,x) = ∏_{r=1}^{n/2} [1 − (x² − 1)/(c²_{2r−1}(x²k² − 1))] / [1 − k²c²_{2r−1}(x² − 1)/(x²k² − 1)]   (5.26)

         = ∏_{r=1}^{n/2} [x²k²c²_{2r−1} − x² + 1 − c²_{2r−1}] / [x²k²c²_{2r−1} − x²k²c⁴_{2r−1} − c²_{2r−1} + k²c⁴_{2r−1}]   (5.27)

         = ∏_{r=1}^{n/2} [x²(k²c²_{2r−1} − 1) + (1 − c²_{2r−1})] / [c²_{2r−1}(k²c²_{2r−1} − 1) + x²k²c²_{2r−1}(1 − c²_{2r−1})]   (5.28)

Note that in the solution to problem C we used the second principal nth degree transformation (Theorem 4.4), while in the solution to problem B given above we used the first principal nth degree transformation (Theorem 4.5). Since the solution to problem B follows from the solution to problem C, where does the solution change from one nth degree transformation to the other? I suspect the change happens with the mapping from problem A to problem B (from Sec. 5.1). Now, if we take the solution given for problem C and use the mappings we defined to obtain the solution to problem B, the resulting expression is significantly different from the solution in Lemma 5.1; I do not know whether it can be shown algebraically that these two expressions are equal.

The most common definition of the elliptic rational function coincides with Theorem 5.2 [LTE01], and it is also much simpler than the expression obtained by mapping the solution to problem C to problem B and then dividing by the constant m = max|z| on [−1,1]. For these reasons we use the solution from Lemma 5.1 to define the elliptic rational function. Some plots of the elliptic rational function for n = 4, 5 are included on the next page.
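The algebra above can be spot-checked numerically. The sketch below is our own (not part of the original derivation): it evaluates the single factor of (5.28) for n = 2 with SciPy's Jacobi functions, using c₁ = sn(K/2;k) and, for n = 2, the transformed modulus λ = k²c₁⁴ (Theorem 4.5), and confirms the normalization R₂(k,1) = 1 and the band-edge value R₂(k,1/k) = 1/λ. Note that SciPy's `ellipj`/`ellipk` take the parameter m = k², not the modulus k.

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.8
K = ellipk(k * k)             # complete elliptic integral for modulus k
c1 = ellipj(K / 2, k * k)[0]  # c_1 = sn(K/2; k)
lam = k**2 * c1**4            # transformed modulus lambda, n = 2 case

def R2(x):
    """Single factor (r = 1) of the product (5.28) for n = 2."""
    c2 = c1 * c1
    num = x * x * (k * k * c2 - 1) + (1 - c2)
    den = c2 * (k * k * c2 - 1) + x * x * k * k * c2 * (1 - c2)
    return num / den

print(R2(1.0), R2(1 / k) * lam)  # both should be 1 (up to rounding)
```

The second printed value exercises the defining property 1/λ = R_n(k, 1/k) from Section 6.1.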

Figure 5.1: Plot of R₄(k,x) with k = 0.7 on the intervals [−1,1] and [1,8], with horizontal lines at ±1/λ.

Figure 5.2: Plot of R₅(k,x) with k = 0.7 on the intervals [−1,1] and [1,6], with horizontal lines at ±1/λ.

5.3.1 Connections Between Texts

Here we aim to understand the differences between the elliptic rational function as given in Daniels' text [Dan74], in Lutovac's text [LTE01], and in the paper by Tuen Wai Ng and Chiu

Yin Tsang [NT13]. We have already seen how Lutovac's solution follows from the solution of Zolotarev's problems, but we have not yet seen any connection to Daniels' results. Compare the rational function we derived in (5.28) to (5.3): they are similar but not equivalent. I believe the discrepancy lies in the way the elliptic rational function was derived in Daniels' text [Dan74, Sec. 5.4, 5.8]; in fact, this approach seems to originate with Cauer [Cau58, pp. 738–758]. Here Daniels sets up the following differential equations

du = M dR_n / √((1 − R_n²)(1 − λ²R_n²))   (5.29)

du = dx / √((1 − x²)(1 − k²x²))   (5.30)

and the solution is

x = sn(u + C₁; k)   (5.31)

R_n(k,x) = sn(u/M + C₂; λ)   (5.32)

with arbitrary constants C₁, C₂. Daniels arbitrarily sets the constant C₁ = 0, and this leads to the rational functions (5.2), (5.3) (Cauer has the same results in different notation). However, if we set C₁ = K, then x = cd(u;k), agreeing with our earlier results. Also, when u = 0 we have x = 1, and since we require R_n(k,1) = 1 [Dan74, p. 53],

1 = sn(C₂; λ)

hence C₂ = L, and we have R_n(k,x) = cd(u/M;λ), as before.

As for the paper by Tuen Wai Ng and Chiu Yin Tsang, they define the Chebyshev–Blaschke product parametrically [NT13]

f_{n,k}(x) = √λ cd(nLw; λ),   x = √k cd(Kw; k)   (5.33)

Then they show that z̃ = √λ f_{n,k}(x) solves a modified version of problem B: the same problem, except that |z̃| ≥ 1 on (−∞, −1/√k] ∪ [1/√k, ∞), and z̃ deviates the least from 0 on the interval [−√k, √k].

Substituting w = u/K and M = K/(nL) returns the same notation we have been using for the elliptic rational function, and we see this is really the same idea, just scaled to suit their purposes.

5.4 Zeros and Poles of the Elliptic Rational Function

We can express the elliptic rational function in terms of its zeros and poles, since it is a ratio of polynomials. Let x_j be the jth zero and x_{p,j} the jth pole; then

R_n(k,x) = r₀ ∏_{j=1}^{n} (x − x_j)/(x − x_{p,j})   (5.34)

where r₀ is a normalizing constant chosen so that R_n(k,1) = 1 [Dan74, Sec. 5.3]. Before we derive the zeros and poles, we will make use of the following lemma.

Lemma 5.2.

cd(u + iK′; k) = 1/(k cd(u;k))   (5.35)

Proof. Observe

cd(u + iK′) = sn(u + K + iK′)
 = [sn u cn(K + iK′) dn(K + iK′) + sn(K + iK′) cn u dn u] / [1 − k² sn²u sn²(K + iK′)]
 = (1/k) · cn u dn u / (1 − sn²u)
 = (1/k) · dn u / cn u
 = 1/(k cd u)

The zeros of the elliptic rational function occur when

R_n(k,x) = cd(u/M; λ) = 0

Since cd(w;λ) = 0 when w = L, 3L, ..., (2j−1)L, we have u_j = (2j−1)K/n for j = 1,...,n. Therefore the zeros of the elliptic rational function are at

x_j = cd((2j−1)K/n; k)   (5.36)

Now the poles are the solutions to

0 = 1/R_n(k,x) = 1/cd(u/M; λ)

This equation only has solutions for complex u, so let u = v + iK′. Then

1/cd(u/M;λ) = 1/cd(v/M + iL′;λ) = λ cd(v/M;λ) = 0

We just solved this equation and found v_j = (2j−1)K/n. Hence the poles are at

x_{p,j} = cd(v_j + iK′; k) = 1/(k cd((2j−1)K/n; k)) = 1/(k x_j)   (5.37)

So we see how the zeros and poles are related to one another. They also come in pairs with equal magnitudes but opposite signs [LTE01, pp. 532–533], that is, x_j = −x_{n−j+1}. Hence we can write

R_n(k,x) = r₁ ∏_{j=1}^{n/2} (x² − x_j²)/(x² − 1/(k²x_j²))   if n is even   (5.38)

R_n(k,x) = r₂ x ∏_{j=1}^{(n−1)/2} (x² − x_j²)/(x² − 1/(k²x_j²))   if n is odd   (5.39)

with r₁, r₂ normalizing constants so that R_n(k,1) = 1. Explicitly,

r₁ = ∏_{j=1}^{n/2} (1 − 1/(k²x_j²))/(1 − x_j²),   r₂ = ∏_{j=1}^{(n−1)/2} (1 − 1/(k²x_j²))/(1 − x_j²)

This gives us another way of expressing the elliptic rational function as a ratio of polynomials, equivalent to applying the nth degree transformation. This form is useful since it makes the zeros and poles of the function easy to identify.
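These relations are easy to exercise numerically. The sketch below is ours (not from the text): it computes the zeros (5.36) with SciPy, forms the poles via x_{p,j} = 1/(k x_j), and evaluates the even-n product form (5.38). SciPy's `ellipk`/`ellipj` expect the parameter m = k².

```python
import numpy as np
from scipy.special import ellipj, ellipk

def cd(u, k):
    sn, cn, dn, _ = ellipj(u, k * k)
    return cn / dn

k, n = 0.7, 4
K = ellipk(k * k)

# Zeros (5.36) and poles (5.37): x_{p,j} = 1/(k x_j)
zeros = np.array([cd((2 * j - 1) * K / n, k) for j in range(1, n + 1)])
poles = 1.0 / (k * zeros)

# Product form (5.38) for even n, using the positive zeros
xj = zeros[: n // 2]
r1 = np.prod((1 - 1 / (k**2 * xj**2)) / (1 - xj**2))

def Rn(x):
    return r1 * np.prod((x**2 - xj**2) / (x**2 - 1 / (k**2 * xj**2)))

print(Rn(1.0))       # ≈ 1  (normalization)
print(Rn(zeros[0]))  # ≈ 0  (vanishes at a zero)
```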

5.5 The Elliptic Rational Function for n = 1, 2, 3

Here we will derive explicit formulas for the elliptic rational function that avoid the Jacobi elliptic functions. With these formulas and the nesting property, we can obtain expressions for any order n = 2^i 3^j. Algorithms that exploit the nesting property of the elliptic rational function for orders n = 2^i 3^j perform significantly faster than the more traditional means of computation (i.e. using Theorem 5.2 or (5.28)) [LT05, pp. 606–607].

The nesting property of the elliptic rational function is as follows (see [LTE01, Sec. 12.7.1]). Denote the selectivity factor ξ = 1/k and the discrimination factor L_n(ξ) = 1/λ; we use this notation here to differentiate between the discrimination factors of R_n and R_m.

Theorem 5.3 (Nesting Property).

R_{mn}(ξ,x) = R_m(L_n(ξ), R_n(ξ,x))   (5.40)

So R_{mn} can be expressed as an mth degree elliptic rational function whose selectivity factor is the nth degree discrimination factor, and whose independent variable is R_n(ξ,x).

The case n = 1 is very simple: the nth degree transformation forces M = 1 and λ = k. Hence

R₁(k,x) = cd(u;k) = x   (5.41)

The form of the 2nd degree elliptic rational function is

R₂(k,x) = [(1 − 1/(k²x_j²))/(1 − x_j²)] · (x² − x_j²)/(x² − 1/(k²x_j²))

where the two zeros are (see the Appendix for values of the Jacobi functions at K/2)

x_j = ±cd(K/2; k) = ±cn(K/2;k)/dn(K/2;k) = ±1/√(1 + k′)

Hence we have

R₂(k,x) = [(1 − (1+k′)/k²)(x² − 1/(1+k′))] / [(1 − 1/(1+k′))(x² − (1+k′)/k²)]

Lutovac derives the simplified expression [LTE01, Sec. 13.2]

R₂(k,x) = ((k′ + 1)x² − 1) / ((k′ − 1)x² + 1)   (5.42)

Deriving a formula for the 3rd degree elliptic rational function requires a far lengthier (and duller) discussion, so we will skip to the results and refer the reader to Lutovac's text for the details [LTE01, Sec. 13.4].
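The closed form (5.42) is easy to test against known facts. The sketch below is our own: it checks that the degree-2 discrimination factor 1/λ = R₂(k, 1/k) reproduces the Landen-type modulus λ = (1 − k′)/(1 + k′) from the n = 2 transformation, and then uses the nesting property (Theorem 5.3), written in terms of moduli rather than selectivity factors, to build R₄ from R₂ alone.

```python
import math

def R2(k, x):
    # Lutovac's simplified 2nd degree form (5.42)
    kp = math.sqrt(1 - k * k)  # complementary modulus k'
    return ((kp + 1) * x * x - 1) / ((kp - 1) * x * x + 1)

k = 0.7
kp = math.sqrt(1 - k * k)

# 1/lambda = R2(k, 1/k), so lambda should equal (1 - k')/(1 + k')
lam2 = 1 / R2(k, 1 / k)
print(abs(lam2 - (1 - kp) / (1 + kp)) < 1e-12)  # True

# Nesting (Theorem 5.3): R4 is R2 with modulus lambda_2 applied to R2(k, x)
def R4(x):
    return R2(lam2, R2(k, x))

print(R4(1.0))  # ≈ 1 (the normalization survives nesting)
```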

Theorem 5.4. Let ξ = 1/k. Then

R₃(ξ,x) = x (x² − x_z²)(1 − x_p²) / ((x² − x_p²)(1 − x_z²))   (5.43)

where the pole x_p is a function of ξ,

x_p² = 2ξ²√G / (√(8ξ²(ξ² + 1) + 12Gξ² − G³) − √(G³))   (5.44)

with the auxiliary parameter G,

G = √(4ξ² + (4ξ²(ξ² − 1))^{2/3})   (5.45)

and the zero x_z can be found as a function of the pole by

x_z² = x_p²(3 − 2x_p²) + 2x_p √((x_p² − 1)³)   (5.46)
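Theorem 5.4 can be turned directly into code. The sketch below (ours; the variable names are not from the text) checks two consequences: the normalization R₃(ξ,1) = 1, and the zero–pole reciprocity x_z x_p = ξ that follows from (5.37) with x_{p} = 1/(k x_z).

```python
import math

def r3_pole_zero(xi):
    # Auxiliary parameter (5.45) and pole (5.44)
    G = math.sqrt(4 * xi**2 + (4 * xi**2 * (xi**2 - 1)) ** (2 / 3))
    xp2 = (2 * xi**2 * math.sqrt(G)) / (
        math.sqrt(8 * xi**2 * (xi**2 + 1) + 12 * G * xi**2 - G**3)
        - math.sqrt(G**3))
    xp = math.sqrt(xp2)
    # Zero from the pole, (5.46)
    xz2 = xp2 * (3 - 2 * xp2) + 2 * xp * (xp2 - 1) ** 1.5
    return math.sqrt(xz2), xp

def R3(xi, x):
    xz, xp = r3_pole_zero(xi)
    return x * ((x * x - xz * xz) * (1 - xp * xp)) / ((x * x - xp * xp) * (1 - xz * xz))

xi = 2.0
xz, xp = r3_pole_zero(xi)
print(R3(xi, 1.0))  # 1.0  (normalization)
print(xz * xp)      # ≈ 2  (zero-pole reciprocity x_p = 1/(k x_z) = xi/x_z)
```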

6 The Elliptic Filter

Elliptic filters are equiripple in both the pass- and stopbands, meaning the filter has equal loss maxima in the passband and equal loss minima in the stopband. These filters can be expressed in terms of Jacobi's elliptic functions, hence the name elliptic filter. This filter is sometimes called the Zolotarev or Cauer filter, as Zolotarev pioneered the underlying mathematics and Cauer was the engineer who first applied Zolotarev's theory to signal processing [Cau58] (see also [HW67]).

6.1 The Elliptic Rational Function

Similar to how polynomials were used to approximate the ideal lowpass transfer function for the Butterworth and Chebyshev filters, for the Zolotarev–Cauer filter we use a ratio of polynomials as the approximating function. The elliptic rational function R_n(k,x) is a ratio of polynomials of degree at most n; the parameter 0 < k < 1 is called the selectivity factor, and it determines the intervals (−∞, −1/k] ∪ [1/k, ∞) where |R_n(k,x)| ≥ 1/λ, with 1/λ = R_n(k, 1/k).

Similar to the Chebyshev polynomial, we express the elliptic rational function parametrically. First denote the periods of sn(u;k) by 4K, 2iK′ and the periods of sn(u;λ) by 4L, 2iL′, with L = K/(nM) and L′ = K′/M. That is, K is the complete elliptic integral with modulus k, L is the complete elliptic integral with modulus λ, and the constant M can be computed from Theorem 4.5. From Theorem 5.2,

x = cd(u;k),   R_n(k,x) = cd(u/M; λ)   (6.1)

Often engineers use the notation 1/k = ξ and 1/λ = L_n(ξ) (since λ is determined by n and ξ) [LTE01, Sec. 12.6]. From the first principal nth degree transformation (Theorem 4.5) we can express R_n(k,x) as a rational function of x (see Sec. 5.3); we also immediately obtain what engineers call the degree equation [Dan74, p. 72] [LTE01, pp. 527–528]:

n = KL′ / (K′L)   (6.2)

Given the proper filter specifications, the elliptic integrals K ,K 0,L,L0 are determined, and from this equation the degree can be calculated.

6.2 The Transfer Function for the Elliptic Filter

The magnitude of the transfer function for the elliptic filter with a passband of [−1,1] is |H(iω)|² = 1 + (εR_n(k,ω))², where ε is again the ripple factor constant. We extend this to

H(s)H(−s) = 1 + (εR_n(ξ, s/i))²   (6.3)

We need to find the zeros with negative real part; in the next section we derive a general formula for these zeros for an nth degree filter (adapted from [LTE01, Sec. 12.8]). Following that, we discuss the cases n = 1, 2, 3, where the poles can be expressed without using the Jacobi elliptic functions [LTE01, Sec. 12.8.1].

6.3 Transfer Function Poles for Order n

Lemma 6.1. Let

x_r = cd((2r−1)K/n; k),   v = sn⁻¹(1/√(1 + ε²); λ′)   (6.4)

The appropriate zeros of H(s) (i.e. those with ℜ(s_r) < 0) for the normalized elliptic filter transfer function are

s_r = −[sn(Mv;k′) cn(Mv;k′) √(1 − x_r²) √(1 − k²x_r²) + i x_r dn(Mv;k′)] / [cn²(Mv;k′) + k²x_r² sn²(Mv;k′)]   (6.5)

with r = 1,...,n [LTE01, Sec. 12.8].

Proof. This proof is adapted from [LTE01, Sec. 12.8]. From (6.3) we must solve

R_n(k, −is_r) = ±i/ε
⟺ cd(cd⁻¹(−is_r;k)/M; λ) = ±i/ε

Let

cd⁻¹(−is_r;k)/M = u_r + iv_r   (6.6)

Using the addition theorem, we have

cd(u_r + iv_r;λ) = sn(u_r + L + iv_r;λ)
 = [sn(u_r + L) cn(iv_r) dn(iv_r) + sn(iv_r) cn(u_r + L) dn(u_r + L)] / [1 − λ² sn²(u_r + L) sn²(iv_r)]

We dropped the modulus λ to reduce the clutter in the above equation. Now we apply the imaginary transformations, where again we drop the moduli with the understanding that functions with an argument of u have modulus λ, and functions with an argument of v have modulus λ′.

cd(u_r + iv_r;λ) = [sn(u_r + L) dn(v_r) + i sn(v_r) cn(v_r) cn(u_r + L) dn(u_r + L)] / [cn²(v_r) + λ² sn²(u_r + L) sn²(v_r)] = ±i/ε   (6.7)

Equating the real parts of (6.7), we see (recall dn(t) > 0 for all t ∈ ℝ)

sn(u_r + L;λ) = 0   (6.8)
⟹ cn(u_r + L;λ) = dn(u_r + L;λ) = 1

Equating the imaginary parts of (6.7) and simplifying with the aid of (6.8), we have

sn(v_r)/cn(v_r) = ±1/ε

Squaring both sides,

sn²(v_r) / (1 − sn²(v_r)) = 1/ε²
⟹ sn(v_r; λ′) = 1/√(1 + ε²)   (6.9)

Notice that in (6.9), v_r depends only on n, k, ε, and not at all on the subscript. Hence v₁ = v₂ = ··· = v_n = v, and we have obtained (6.4). Proceeding from (6.8), we have

0 = sn(u_r + L;λ) = cd(u_r;λ)

Since cd(w;λ) = 0 at odd multiples of L, we have that

u_r = (2r − 1)L   (6.10)

Proceeding from (6.6), we have

−is_r = cd(M(u_r + iv); k) = cd((2r−1)K/n + iMv; k)

since M = K/(nL).

Again we use the addition theorem to separate the real and imaginary parts (we want the roots with negative real part), and again we omit the modulus with the understanding that functions with an argument of u have modulus k and those with v have modulus k′. We obtain

−is_r = [sn((2r−1)K/n + K) dn(Mv) + i sn(Mv) cn(Mv) cn((2r−1)K/n + K) dn((2r−1)K/n + K)] / [cn²(Mv) + k² sn²((2r−1)K/n + K) sn²(Mv)]   (6.11)

Recall that by (5.36) the zeros of the elliptic rational function are

x_r = cd((2r−1)K/n; k) = sn((2r−1)K/n + K; k),   r = 1,...,n

Substituting xr into (6.11) yields the result (6.5).

6.4 Transfer Function Poles for Orders n = 1, 2, 3

The formulas for the transfer function poles found in the previous section are valid for any n, but they require extensive use of the Jacobi elliptic functions. In this section we derive formulas for the transfer function poles that avoid the Jacobi elliptic functions, for degrees n = 1, 2, 3. Currently, algebraic formulas (without elliptic functions) are known for the elliptic rational function of degree n = 1, 2, 3. We will avoid deriving the case n = 3, as the formulas are much more complicated and not very enlightening; the details can be found in [LTE01, Sec. 13.4] for those interested.

For n = 1 we have R₁(k,x) = x, with one zero at x₁ = 0. The transfer function poles are the solutions to

R₁(k, −is) = ±i/ε

That is,

−is = i/ε ⟹ s₁ = −1/ε

The transfer function is thus

T(s) = α₁/(s − s₁)   (6.12)

where

α₁ = 1/H′(s₁) = 1/(2ε²R₁(k,s₁)) = −1/(2ε)

Hence

T(s) = −1 / (2ε(s + 1/ε))   (6.13)

For n = 2, denote ζ = sn(Mv;k′), and rewrite (6.5) as

s_r = −[ζ√(1 − ζ²) √(1 − x_r²) √(1 − k²x_r²) + i x_r √(1 − (1 − k²)ζ²)] / [1 − ζ²(1 − k²x_r²)]   (6.14)

In Sec. 5.5 we derived the following formula for the second order elliptic rational function:

R₂(k,x) = ((k′ + 1)x² − 1) / ((k′ − 1)x² + 1)   (6.15)

R₂(k, −is) = ±i/ε

Focusing on the positive root, we have

i/ε = ((k′+1)(−is)² − 1) / ((k′−1)(−is)² + 1)
⟹ s² = (−ε + i) / (ε(1 + k′) − i(1 − k′))

Then the squared magnitude of the transfer function poles is

|s_p|² = √(1 + ε²) / √(ε²(1 + k′)² + (1 − k′)²)   (6.16)

Now we will use (6.14) to derive another expression for the squared magnitude of the transfer function poles, and compare the two. The zeros of the second order elliptic rational function are

x_{1,2} = ±cn(K/2;k)/dn(K/2;k) = ±1/√(1 + k′) = ±√(1 − k′)/k   (6.17)

Focusing on the positive root x₁, to use the formula (6.14) we evaluate

1 − x₁² = 1 − 1/(1 + k′) = k′/(1 + k′)

1 − k²x₁² = 1 − (1 − k′) = k′

So

s₁ = −[k′ζ√(1 − ζ²) + i√(1 − (1 − k²)ζ²)] / [(1 − k′ζ²)√(1 + k′)]

and the squared magnitude is

|s₁|² = (1 + k′ζ²) / ((1 − k′ζ²)(1 + k′))   (6.18)

Now set (6.16) equal to (6.18):

(1 + k′ζ²) / ((1 − k′ζ²)(1 + k′)) = √(1 + ε²) / √(ε²(1 + k′)² + (1 − k′)²)

Solving for ζ,

ζ = 2 / ((1 + k′)√(1 + ε²) + √((1 − k′)² + ε²(1 + k′)²))   (6.19)

Now, with (6.14) and (6.19), one can compute the transfer function poles for n = 2 without using the Jacobi functions.
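The closed form (6.19) can be checked against a direct elliptic-function computation of ζ. The sketch below is our own; it assumes ζ = sn(Mv;k′) with the n = 2 transformation constants λ = (1 − k′)/(1 + k′) and M = K/(2L) (from L = K/(nM)), and computes the inverse sn via the incomplete integral F(φ|m):

```python
import numpy as np
from scipy.special import ellipj, ellipk, ellipkinc

k, eps = 0.7, 0.5
kp = np.sqrt(1 - k * k)                      # complementary modulus k'
lam = (1 - kp) / (1 + kp)                    # modulus lambda for n = 2
M = ellipk(k * k) / (2 * ellipk(lam * lam))  # from L = K/(nM) with n = 2

# v = sn^{-1}(1/sqrt(1+eps^2); lambda'), via F(phi | m) with sin(phi) = w
v = ellipkinc(np.arcsin(1 / np.sqrt(1 + eps**2)), 1 - lam**2)

zeta_elliptic = ellipj(M * v, 1 - k * k)[0]  # sn(Mv; k'), m = k'^2 = 1 - k^2
zeta_closed = 2 / ((1 + kp) * np.sqrt(1 + eps**2)
                   + np.sqrt((1 - kp)**2 + eps**2 * (1 + kp)**2))

print(zeta_elliptic, zeta_closed)            # the two values should agree
```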

For n = 3 we will just give the results on how to compute the quantity ζ in (6.14). Let

b = ∛(1 − (1 − 2k²)²)

c = √(1 + b + b²)

d = ½ [(1 − 2k²)/c + √(2 + b + 2c) − 1]

b₁ = 9ε²(1 + d + d²) + (1 − d)³

a₁ = 3ε √(3(1 + ε²)[ε²(1 + 2d)³ + (1 + d)(1 − d)³])

E = 3ε(1 + d) / (∛(a₁ + b₁) − ∛(a₁ − b₁) + 1 − d)

Then

ζ = 1/√(1 + E²)   (6.20)

6.5 The Elliptic Filter

Now that we have the proper roots, we are able to express the transfer function T(s) = 1/H(s) in partial fractions as

T(s) = Σ_{r=1}^{n} α_r/(s − s_r)

To find the constants α_r we need to differentiate

H(s)H(−s) = 1 + ε² cd²(cd⁻¹(−is;k)/M; λ)

Using the formulas [Fun]

d/du cd(u;k) = (k² − 1) sn(u;k)/dn²(u;k)   (6.21)

d/du cd⁻¹(u;k) = sn[cd⁻¹(u;k);k]/(u² − 1)   (6.22)

and denoting

w(s) = cd⁻¹(−is;k)/M

we obtain

d/ds H(s)H(−s) = 2iε²(λ² − 1) cd(w(s);λ) sn(w(s);λ) sn(Mw(s);k) / (M dn²(w(s);λ)(s² + 1))   (6.23)

We can then evaluate (6.23) at the transfer function poles to get the transfer function T(s): just evaluate

α_r = 1/H′(s_r)   (6.24)

Theorem 6.1 (The Elliptic Filter). Let x_in(t), x_out(t) represent the input and output signals at time t, respectively. With s_r given by (6.5), and α_r given by (6.24) and (6.23), the elliptic filter is then

x_out(t) = ∫₀^∞ Σ_{r=1}^{n} α_r e^{s_r τ} x_in(t − τ) dτ   (6.25)

6.6 Conclusions

Let's revisit the example filter specifications given at the end of Chapter 2, but now using the elliptic transfer function. We have the following lowpass elliptic filter requirements: A_max = 0.1 dB, A_min = 30 dB and ω_H/ω_b = 1.3; what degree n is necessary [Dan74, p. 72]? For the elliptic filter, the minimal degree is n = 5, which is even smaller than the required degree for the Chebyshev filter (8). In fact, the degree of the elliptic filter will always be smaller than that of the Butterworth or Chebyshev filters [Dan74, Sec. 5.14].

The elliptic filter is by far the most mathematically complicated filter that we have looked at; however, for any practical purpose, modern computers can calculate the Jacobi elliptic

functions with ease. The elliptic filter has the sharpest transition region of all the filters we have seen thus far (for equivalent degree n), meaning that areas of high attenuation (the stopband) occur almost immediately outside the passband. So if the frequencies in the interval [ω_a, ω_b] are those that you want to remain relatively unaffected by the filter, and any frequency outside that interval is to be (mostly) removed, then the elliptic filter is best for the job.
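The degree claimed for the example above follows directly from the degree equation (6.2). In the sketch below (our own; the standard spec-to-(ε, λ) conversions are assumed, with our variable names), the required modulus λ comes from the ripple specifications and the selectivity k from the transition ratio:

```python
import math
from scipy.special import ellipk

Amax, Amin, ratio = 0.1, 30.0, 1.3               # dB, dB, omega_H / omega_b

k = 1 / ratio                                    # selectivity factor
eps2 = 10 ** (Amax / 10) - 1                     # ripple factor squared
lam = math.sqrt(eps2 / (10 ** (Amin / 10) - 1))  # required modulus lambda

def KKp(mod):
    # complete elliptic integrals K, K' for a modulus (scipy wants m = mod^2)
    return ellipk(mod * mod), ellipk(1 - mod * mod)

K, Kp = KKp(k)
L, Lp = KKp(lam)

n = (K * Lp) / (Kp * L)   # degree equation (6.2)
print(math.ceil(n))       # minimal integer degree: 5
```

Here n comes out a little above 4.6, so the smallest admissible integer degree is 5, matching the text.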

Figure 6.1: Plot of |T(iω)|² for the elliptic approximation with n = 5, ε = 0.15, k = ξ⁻¹ = 0.9, to show how well it approximates the ideal lowpass transfer function. We can see that this transfer function is equiripple in both the pass- and stopbands, and that the transition from the passband to the stopband is extremely quick.

Figure 6.2: Plots of |T(iω)|² for the Butterworth, Chebyshev and elliptic filters superimposed on the same axes. Here we can directly compare the different types of approximations.

7 Appendix

Period Parallelograms for the Jacobi Elliptic Functions

Figure 7.1: Period parallelogram for z = sn(u;k), evaluated at common values of u [Akh70, p. 207].

Figure 7.2: Period parallelogram for z = cn(u;k), evaluated at common values of u [Akh70, p. 207].

Figure 7.3: Period parallelogram for z = dn(u;k), evaluated at common values of u [Akh70, p. 207].

Differentiation formulas

d/du sn u = cn u dn u
d/du cn u = −sn u dn u
d/du dn u = −k² sn u cn u
d/du cd(u;k) = (k² − 1) sn(u;k)/dn²(u;k)
d/du sn⁻¹(u;k) = 1 / (cn[sn⁻¹(u;k);k] dn[sn⁻¹(u;k);k])
d/du cn⁻¹(u;k) = −1 / (sn[cn⁻¹(u;k);k] dn[cn⁻¹(u;k);k])
d/du dn⁻¹(u;k) = −1 / (k² sn[dn⁻¹(u;k);k] cn[dn⁻¹(u;k);k])
d/du cd⁻¹(u;k) = sn[cd⁻¹(u;k);k]/(u² − 1)

[Akh70, p. 208], [Fun]

Special Values

sn(K/2) = 1/√(1 + k′)

cn(K/2) = √(k′)/√(1 + k′)

dn(K/2) = √(k′)   [Akh70, p. 209]

The First Principal nth Degree Transformation

Subjecting the periods to L = K/(nM) and L′ = K′/M, we have

M = ∏_{r=1}^{⌊n/2⌋} (c_{2r−1}/c_{2r})²,   λ = k^n ∏_{r=1}^{⌊n/2⌋} c_{2r−1}⁴,   where c_r = sn(rK/n; k)

sn(u/M;λ) = (1/M) sn(u;k) ∏_{r=1}^{(n−1)/2} (1 − sn²(u;k)/c²_{2r}) / (1 − k²c²_{2r} sn²(u;k))   if n odd

sn(u/M + L;λ) = ∏_{r=1}^{n/2} (1 − sn²(u;k)/c²_{2r−1}) / (1 − k²c²_{2r−1} sn²(u;k))   if n even

cn(u/M;λ) = cn(u;k) ∏_{r=1}^{(n−1)/2} (1 − sn²(u;k)/c²_{2r−1}) / (1 − k²c²_{2r} sn²(u;k))   if n odd

cn(u/M + L;λ) = −(λ′/M) (sn(u;k)/cn(u;k)) ∏_{r=1}^{n/2} (1 − sn²(u;k)/c²_{2r}) / (1 − k²c²_{2r−1} sn²(u;k))   if n even

dn(u/M;λ) = dn(u;k) ∏_{r=1}^{(n−1)/2} (1 − k²c²_{2r−1} sn²(u;k)) / (1 − k²c²_{2r} sn²(u;k))   if n odd

dn(u/M + L;λ) = (λ′/dn(u;k)) ∏_{r=1}^{n/2} (1 − k²c²_{2r} sn²(u;k)) / (1 − k²c²_{2r−1} sn²(u;k))   if n even

[Akh70, p. 213]

The Second Principal nth Degree Transformation

Subjecting the periods to L = K/M and L′ = K′/(nM), we have

M = ∏_{r=1}^{⌊n/2⌋} sn²((2r−1)K′/n; k′) / sn²(2rK′/n; k′),   λ = ∏_{r=1}^{n} θ₄²(rK′/(2nK)) / θ₄²((2r−1)K′/(2nK))

c_r = sn²(rK′/n; k′) / cn²(rK′/n; k′),   δ_r = dn²(rK′/n; k′)

sn(u/M;λ) = (1/M) sn(u;k) ∏_{r=1}^{⌊n/2⌋} (1 + sn²(u;k)/c_{2r}) / (1 + sn²(u;k)/c_{2r−1})

cn(u/M;λ) = cn(u;k) ∏_{r=1}^{(n−1)/2} (1 − δ_{2r} sn²(u;k)) / (1 + sn²(u;k)/c_{2r−1})   if n odd

cn(u/M;λ) = cn(u;k) dn(u;k) ∏_{r=1}^{n/2−1} (1 − δ_{2r} sn²(u;k)) / ∏_{r=1}^{n/2} (1 + sn²(u;k)/c_{2r−1})   if n even

dn(u/M;λ) = dn(u;k) ∏_{r=1}^{(n−1)/2} (1 − δ_{2r−1} sn²(u;k)) / (1 + sn²(u;k)/c_{2r−1})   if n odd

dn(u/M;λ) = ∏_{r=1}^{n/2} (1 − δ_{2r−1} sn²(u;k)) / (1 + sn²(u;k)/c_{2r−1})   if n even

[Akh70, p. 214]

Plots of the Jacobi Elliptic Functions for Real Values

Figure 7.4: Plot sn(x;k) for various values of k superimposed on top of each other.

Figure 7.5: Plot of dn(x;k) for various values of k, superimposed on top of each other.

Figure 7.6: Plot of cn(x;k) for k = 0.9. Since cn(x) isn't bounded by 1, superimposing the plots over each other would require adjusting the scale, and then we wouldn't be able to see everything properly.

Figure 7.7: Plot of cn(x;k) for k = 0.99.

Figure 7.8: Plot of sn(x;k) [black] and sin(πx/(2K)) [red]. The frequency of the sine is scaled so that the periods of sin x and sn x are both 4K.

Plots of the Jacobi Elliptic Functions for Pure Imaginary Values

Figure 7.9: Plot sn(i x;k) using Theorem 4.3.

Figure 7.10: Plot of cn(ix;k) using Theorem 4.3.

Plots of the Solutions to Zolotarev's Problems A, B, and C

Figure 7.11: Plot of Y(X), the solution to problem C with κ = 0.5, on the interval [1, 1/κ²] = [1,4]. The maximum deviation is μ = 0.0025.

Figure 7.12: Plot of y(t), the solution to problem A with κ = 0.5, on the interval [1, 1/κ] = [1,2]. The maximum deviation is μ = 0.0025.

Figure 7.13: Plot of z(x), the solution to problem B with k = 0.207, on the interval [−1,1]. The maximum deviation is m = 0.0006.

Figure 7.14: Plot of z(x), the solution to problem B with k = 0.207, on the interval [1,15]. A vertical line is at 1/k and horizontal lines at ±1 to show that for x ≥ 1/k, |z| ≥ 1. The maximum deviation is m = 0.0006.

References

[Akh70] N. I. Akhiezer, Elements of the theory of elliptic functions, AMS, 1970.

[BE55] H. Bateman and A. Erdélyi, Higher transcendental functions, McGraw-Hill, 1955.

[But30] S. Butterworth, On the theory of filter amplifiers, Experimental Wireless and the Wireless Engineer 7 (1930), 536–541.

[Cau58] W. Cauer, Synthesis of linear communication networks, McGraw-Hill, 1958.

[Dan74] Richard W. Daniels, Approximation methods for electronic filter design, McGraw-Hill, 1974.

[Fun] Wolfram Functions, Inverse jacobi elliptic function cd, http://functions. wolfram.com/09.37.20.0001.02, [Online; accessed 28-July-2013].

[HW67] M. C. Horton and R. J. Wenzel, The digital elliptic filter—a compact sharp-cutoff design for wide bandstop or bandpass requirements, IEEE Transactions on Microwave Theory and Techniques 15 (1967), no. 5, 307–314.

[LT05] Miroslav D. Lutovac and Dejan V. Tosic, Elliptic rational functions, The Mathemat- ica Journal, vol. 9, Wolfram Media, 2005, pp. 598–608.

[LTE01] Miroslav D. Lutovac, Dejan V. Tosic, and Brian L. Evans, Filter design for signal pro- cessing using MATLAB and Mathematica, Prentice Hall, 2001.

[NT13] Tuen Wai Ng and Chiu Yin Tsang, Polynomials versus finite Blaschke products, Blaschke Products and Their Applications (Javad Mashreghi and Emmanuel Fricain, eds.), Fields Institute Communications, vol. 65, Springer US, 2013, pp. 249–273.

[PTVF07] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery, Numerical recipes, Cambridge University Press, New York, NY, USA, 2007.

[Zol77] E. I. Zolotarev, Application of elliptic functions to questions of functions deviating least and most from zero, Izvestiya Imp. Akad. Nauk, 1877. Reprinted in his Collected Works, Vol. 2, Izdat. Akad. Nauk SSSR, Moscow, 1932, pp. 1–59 (Russian). Jbuch Fortschritte Math. 9, 343.
