Chapter 3

Linear Filters

Mikael Olofsson, 2003–2007

A signal is a measure of some kind, normally time-varying, and filters are devices that manipulate signals. We use mathematical descriptions to model our filters and signals. It should be emphasized that those mathematical descriptions are precisely that: models. How well such a model describes reality depends on how detailed the model is, on what frequencies we are interested in, and on how the input signals are limited. The same model can be good in one situation and bad in another, where good and bad refer to how well the model predicts the outcome of an experiment.

Linear filters belong to the class of linear systems. However, before we can say anything about systems, we need to start our presentation with signals as functions of time. After that, we continue by describing systems and their properties in the time domain, followed by both signals and systems described in terms of frequencies. Finally, standard characterizations of filters are given.

3.1 Signals

First of all, a signal is represented as a function of time, where we usually let $t$ denote time. In practice, a signal can be almost any physical quantity, such as a current, a voltage, a pressure, a position, and so forth. In telecommunication systems, we are most often dealing with voltages and currents, but also with electromagnetic phenomena like radio waves and light. Mostly, we are interested in signals that represent some information. In some situations, e.g. when recording sound, the signal can be that information in itself. In other situations, a signal can represent some particular information, e.g. a certain signal in a graphics chip in a computer can represent a blue pixel that is produced on its screen, while another signal in the same place may correspond to a red pixel. In yet other situations, a signal may not carry information at all, as is the case with

various kinds of noise. In those cases, signals are normally described in probabilistic terms. The description given here, however, is for deterministic (non-random) signals. We will use a number of fundamental signals as building blocks to describe signals, such as the following.

• The stationary sinusoidal signal: $\sin(2\pi f_0 t)$, with frequency $f_0$ and period $T = 1/f_0$.

• The complex exponential signal: $e^{j2\pi f_0 t} \triangleq \cos(2\pi f_0 t) + j\sin(2\pi f_0 t)$.

• The unit step: $u(t) \triangleq \begin{cases} 0, & t < 0, \\ 1, & t \geq 0. \end{cases}$

The complex exponential signal can be used to express sinusoidal signals as
$$\cos(2\pi f_0 t) = \frac{e^{j2\pi f_0 t} + e^{-j2\pi f_0 t}}{2}, \qquad \sin(2\pi f_0 t) = \frac{e^{j2\pi f_0 t} - e^{-j2\pi f_0 t}}{j2},$$
using the Euler formulas. Shifting the unit step by $\tau$ seconds is simply
$$u(t - \tau) = \begin{cases} 0, & t < \tau, \\ 1, & t \geq \tau. \end{cases}$$
The unit step has a discontinuity at $t = 0$, so it is not differentiable in the normal sense. However, we can make it differentiable by introducing the unit impulse. It is a so-called distribution, and is defined as follows.

Definition 1 (Unit impulse) The unit impulse $\delta(t)$ is a function such that
$$\int_{-\infty}^{\infty} \delta(t)\,x(t)\,dt = x(0)$$
holds for any limited function $x(t)$.

It is possible to prove the following properties of the unit impulse:
$$\delta(t) = \begin{cases} \infty, & t = 0, \\ 0, & \text{elsewhere}, \end{cases} \qquad\qquad \delta(t) = \frac{d}{dt}\,u(t),$$
$$\int_{-\infty}^{t} \delta(\tau)\,d\tau = u(t), \qquad\qquad \delta(at) = \frac{1}{|a|}\,\delta(t),$$
$$\int_{-\infty}^{\infty} \delta(t - \tau)\,x(\tau)\,d\tau = x(t), \qquad\qquad \delta(t) = \int_{-\infty}^{\infty} e^{j2\pi f t}\,df,$$
where at least the last property is rather tricky to prove.
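To make the sifting property concrete, here is a small numerical sketch in Python (not part of the original text): the unit impulse is approximated by a rectangular pulse of width eps and height 1/eps, and integrating it against a test signal recovers, approximately, the value of that signal at 0. The pulse width and the test signal are arbitrary choices.

```python
# Numerical sketch of the sifting property: approximate delta(t) by a
# narrow rectangular pulse and integrate it against a test signal x(t).
import numpy as np

eps = 1e-3                                   # pulse width (arbitrary, small)
t = np.linspace(-1.0, 1.0, 2_000_001)        # fine time grid, dt = 1e-6
delta_approx = np.where(np.abs(t) < eps / 2, 1.0 / eps, 0.0)

x = np.cos(2 * np.pi * 3 * t) + 0.5 * t      # arbitrary limited test signal

integral = np.trapz(delta_approx * x, t)     # ~ integral of delta_eps(t) x(t) dt
print(integral, np.interp(0.0, t, x))        # both close to x(0) = 1.0
```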


Figure 3.1: A system with input x(t) and output y(t). Usually, a condensed description of the system is given inside the rectangle representing the system.

3.2 Systems

We are not only interested in signals. We are also interested in how signals are treated by various devices that observe those signals.

Definition 2 (System) A system is a device with one or more input signals and one or more output signals.

Most systems that we will deal with have only one input and one output. That will therefore be understood if nothing else is stated. We usually illustrate a system as in Figure 3.1. The distinction between input signals and output signals is that systems observe input signals and produce output signals based on that observation. We assume that systems do not affect their input signals at all. This may or may not be a good model; in practice, most (all) systems affect their input signals as well, at least to some extent.

Systems can be in different initial states. We say that systems can have initial energy. For example, consider a simple system consisting of a rock, for which the force that we apply to the rock is the input and the position of the rock is its output. If the rock is elevated from the ground before we start observing it and then released, it will fall to the ground due to the initial energy it was given when it was elevated. The system is then said to have initial energy. If instead the rock is initially placed on the ground and released, it will stay there. The system is then said to be initially energy-free. The initial state of the rock is not only its initial position, but also its initial speed and direction. Or put in other words: an initially energy-free system is in a state such that the output will be constant (often zero) if the input is zero. What is important to realize about initial states is that the output of a system depends on its input(s) as well as on its initial state.

An example of an initially energy-free electrical system is a network consisting of capacitors and resistors, where the capacitors are not charged, i.e. the initial voltages over them are zero. The input and output signals of this system may be two voltages in the network. A mathematical example of an initially energy-free system is a system described by a linear differential equation with all initial values being 0. The electrical network mentioned above can be described by a linear differential equation, where the initial voltages over the capacitors are the initial conditions. For such systems we define the following.

Definition 3 (Impulse response) Let the input to an initially energy-free system be the unit impulse, $\delta(t)$, and let $h(t)$ denote the corresponding output. Then $h(t)$ is referred to as the impulse response of that system.

Definition 4 (Step response) Let the input to an initially energy-free system be the unit step, u(t), and let g(t) denote the corresponding output. Then g(t) is referred to as the step response of that system.

So far, we have not stated anything specific about our systems. In order to say anything about the relation between inputs and outputs, we need to classify systems. Then we can say that for a system of a certain type, we have a certain relation between its input and its output.

Definition 5 (Time-invariant system) Let x(t) be the input to an initially energy-free system, and let y(t) be the corresponding output. If y(t + τ) is the output corresponding to the input x(t + τ) for any x(t), t and τ, then the system is referred to as time-invariant. A system that is not time-invariant is referred to as time-varying.

Put more simply: a time-invariant system is a system for which a time-shift of the input results in the same time-shift of the output. If we introduce the notation $y(t) = H\{x(t)\}$ for the output from an initially energy-free system with input $x(t)$, then the above can be described as $y(t + \tau) = H\{x(t + \tau)\}$ for all inputs $x(t)$ and all time-shifts $\tau$.

Definition 6 (Linear system) Let $x_1(t)$ and $x_2(t)$ be input signals to a one-input initially energy-free system, and let $y_1(t)$ and $y_2(t)$ be the corresponding output signals. If
$$y(t) = a_1 y_1(t) + a_2 y_2(t)$$
is the output corresponding to the input
$$x(t) = a_1 x_1(t) + a_2 x_2(t)$$
for any $x_1(t)$, $x_2(t)$, $a_1$ and $a_2$, then the system is referred to as linear. A system that is not linear is referred to as non-linear.

Using the notation $H\{x(t)\}$ for the output of the initially energy-free system given the input $x(t)$, linearity can be described as
$$H\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 H\{x_1(t)\} + a_2 H\{x_2(t)\},$$
which must hold for all inputs $x_1(t)$ and $x_2(t)$ as well as for all coefficients $a_1$ and $a_2$. Systems belonging to both of the above classes are of special interest to us, and such systems have been given a special name.


Figure 3.2: A passive linear filter, built from an inductance L, a capacitance C and a resistance R, with input x(t) and output y(t).


Figure 3.3: An active filter, built around an amplifier together with resistances R1, R2 and capacitances C1, C2, with input x(t) and output y(t).

Definition 7 (LTI system) A system that is both linear and time-invariant is referred to as an LTI system (Linear Time-Invariant).

Examples of LTI systems include passive linear filters, also called RLMC circuits. Those are electrical networks built from resistances (R), inductances (L), mutual inductances (M) and/or capacitances (C). An example of a passive linear filter is given in Figure 3.2. Active filters are built around amplifiers, and those always have limited supply voltages, which limit the output. Active filters are therefore non-linear according to the definition, but they are normally described as linear within some region. As long as the input obeys some restriction imposed by the supply voltages, the linearity requirements hold. An example of an active filter is given in Figure 3.3.

Definition 8 (Causal system) Let $x(t)$ be the input to a system, and let $y(t)$ be the corresponding output. If $y(t_0)$ does not depend on $x(t)$ for $t > t_0$, for any $x(t)$ and any $t_0$, then the system is referred to as causal. A system that is not causal is referred to as non-causal. Also, if $y(t_0)$ does not depend on $x(t)$ for $t < t_0$, for any $x(t)$ and any $t_0$, then the system is referred to as anti-causal.

A causal system is simply a system that has no knowledge about the future. More precisely, the output of a causal system at any given time instance does not depend on future values of the input. Since that is normally the case for practical systems, such systems are sometimes called realizable. Finally, we are interested in systems that behave in a controlled manner.

Definition 9 (Stable system) A system is referred to as stable if any limited input corresponds to a limited output. A system that is not stable is referred to as non-stable.

Some systems behave as if they were stable for some limited inputs, and as if they were non-stable for other limited inputs. Using our definition above, such systems should be called non-stable. However, such systems are often referred to as marginally stable. The filters that we will consider are all examples of stable causal LTI systems.

3.3 The Time Domain

Let us first study signals and systems in the time domain, i.e. everything is seen as functions of time. We will limit the discussion to LTI systems, for which the following operator is useful.

Definition 10 (Convolution) The convolution of the signals $a(t)$ and $b(t)$ is denoted $(a * b)(t)$, and it is defined by
$$(a * b)(t) \triangleq \int_{-\infty}^{\infty} a(\tau)\,b(t - \tau)\,d\tau.$$

Our use of the convolution is immediate:

Theorem 1 (Output of LTI systems) Let $x(t)$ be the input to an initially energy-free LTI system with impulse response $h(t)$, illustrated in Figure 3.4. Then the output $y(t)$ of that system is given by $y(t) = (x * h)(t)$.

Proof: Let $y(t) = H\{x(t)\}$ denote the output of the system given that the input is $x(t)$. Using the definition integral of the unit impulse, we get
$$y(t) = H\left\{ \int_{-\infty}^{\infty} x(\tau)\,\delta(t - \tau)\,d\tau \right\}.$$


Figure 3.4: An LTI system with input x(t), impulse response h(t) and output y(t).

By assumption, the system is linear. Thus, we can rewrite the expression above as
$$y(t) = \int_{-\infty}^{\infty} x(\tau)\,H\{\delta(t - \tau)\}\,d\tau.$$
Now, using the assumption that the system is time-invariant, and the knowledge that we have defined the impulse response $h(t) = H\{\delta(t)\}$, we get
$$y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau.$$
Finally, we identify the last integral as the convolution $y(t) = (x * h)(t)$. ∎

So, at least for an energy-free LTI system, the impulse response $h(t)$ is a complete characterization of the system through the convolution. Therefore, all properties of the system should be possible to derive from its impulse response. For instance, based on the convolution integral, we can state that an LTI system is causal if and only if $h(t) = 0$ holds for all negative $t$.

The convolution has a number of properties. One of them is used in the proof above, namely that $x(t) = (x * \delta)(t)$ holds. A few other properties will be given here.

Theorem 2 (Commutativity of the convolution) The convolution is a commutative operation, i.e. $(a * b)(t) = (b * a)(t)$ holds.

Proof: From the definition of convolution, we have
$$(a * b)(t) = \int_{-\infty}^{\infty} a(\tau)\,b(t - \tau)\,d\tau.$$
Now set $\lambda = t - \tau$. Then we have $d\tau = -d\lambda$, and the expression above can be rewritten as
$$(a * b)(t) = -\int_{\infty}^{-\infty} a(t - \lambda)\,b(\lambda)\,d\lambda = \int_{-\infty}^{\infty} b(\lambda)\,a(t - \lambda)\,d\lambda.$$

Identifying the last integral, again using the definition of convolution, we find
$$(a * b)(t) = (b * a)(t). \qquad ∎$$

Plainly put: The order of the two signals in a convolution is irrelevant.

Example 3.1 Consider an LTI system with impulse response $h(t) = e^{-t}u(t)$, and let us determine its step response $g(t)$. Then the input is the unit step, $u(t)$, and the output is the step response. The step response is therefore given by the convolution

$$g(t) = (u * h)(t) = (h * u)(t) = \int_{-\infty}^{\infty} h(\tau)\,u(t - \tau)\,d\tau.$$

By the definition of the unit step, we have

$$u(t - \tau) = \begin{cases} 1, & \tau \leq t, \\ 0, & \tau > t. \end{cases}$$

Our convolution integral can thus be rewritten as

$$g(t) = \int_{-\infty}^{t} h(\tau)\,d\tau,$$
which holds for any LTI system. For our specific system we then have

$$g(t) = \int_{-\infty}^{t} e^{-\tau} u(\tau)\,d\tau.$$

Since the unit step is zero for negative arguments, we have g(t) = 0 for t < 0 and

$$g(t) = \int_{0}^{t} e^{-\tau}\,d\tau = \left[ -e^{-\tau} \right]_{0}^{t} = 1 - e^{-t}$$
for $t \geq 0$, since $u(\tau) = 1$ holds for positive $\tau$. In total, we get $g(t) = (1 - e^{-t})\,u(t)$.
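As a numerical cross-check of this example (a sketch, not from the original text), the convolution integral can be approximated by a discrete sum; the grid spacing dt below is an arbitrary choice.

```python
# Numerical check of Example 3.1: (u*h)(t) with h(t) = exp(-t)u(t)
# should approach g(t) = (1 - exp(-t))u(t) as the grid is refined.
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)                            # impulse response for t >= 0
u = np.ones_like(t)                       # unit step for t >= 0

g_num = np.convolve(u, h)[:len(t)] * dt   # discrete convolution integral
g_ref = 1.0 - np.exp(-t)                  # analytic step response

print(np.max(np.abs(g_num - g_ref)))      # small, and shrinks with dt
```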

Two systems are cascaded if the output of one of them is the input to the other, as in Figure 3.5. In order to see what happens if we cascade two systems, we need the following property of the convolution.


Figure 3.5: Two cascaded LTI systems with impulse responses h1(t) and h2(t); the input is x(t), the intermediate signal is z(t), and the output is y(t).

Theorem 3 (Associativity of the convolution) The convolution is an associative operation, i.e. $((a * b) * c)(t) = (a * (b * c))(t)$ holds.

Proof: Using the definition of convolution twice, we get
$$((a * b) * c)(t) = \int_{-\infty}^{\infty} (a * b)(\tau)\,c(t - \tau)\,d\tau = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} a(\lambda)\,b(\tau - \lambda)\,d\lambda \right) c(t - \tau)\,d\tau.$$
Changing the integration order, we can write the double integral as
$$((a * b) * c)(t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} a(\lambda)\,b(\tau - \lambda)\,c(t - \tau)\,d\tau\,d\lambda.$$
Now set $\sigma = \tau - \lambda$. This gives us $d\tau = d\sigma$, and we rewrite the expression as
$$((a * b) * c)(t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} a(\lambda)\,b(\sigma)\,c(t - \lambda - \sigma)\,d\sigma\,d\lambda = \int_{-\infty}^{\infty} a(\lambda) \left( \int_{-\infty}^{\infty} b(\sigma)\,c(t - \lambda - \sigma)\,d\sigma \right) d\lambda.$$
Finally, we identify twice using the definition of convolution, and we get
$$((a * b) * c)(t) = \int_{-\infty}^{\infty} a(\lambda)\,(b * c)(t - \lambda)\,d\lambda = (a * (b * c))(t). \qquad ∎$$

Plainly put: the order in which subsequent convolutions are performed is irrelevant. An immediate consequence of Theorem 3 is the following.

Theorem 4 (Cascaded LTI systems) Consider two cascaded LTI systems with impulse responses $h_1(t)$ and $h_2(t)$ respectively. Then the two systems correspond to one LTI system with impulse response $h(t)$, given by $h(t) = (h_1 * h_2)(t)$.

Proof: The intermediate signal, $z(t)$ in Figure 3.5, is given by $z(t) = (x * h_1)(t)$. The output is given by $y(t) = (z * h_2)(t)$. Thus, we have $y(t) = ((x * h_1) * h_2)(t)$. The convolution is associative, and therefore we also have $y(t) = (x * (h_1 * h_2))(t)$. Identifying in $y(t) = (x * h)(t)$, we find that the total system has impulse response $h(t) = (h_1 * h_2)(t)$. ∎
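The theorem is easy to probe numerically. The following sketch (with arbitrarily chosen impulse responses and input, not from the original text) feeds a signal through two systems in cascade and compares the result with a single system whose impulse response is $(h_1 * h_2)(t)$.

```python
# Numerical illustration of Theorem 4: cascade of h1 and h2 versus the
# single system with impulse response h1*h2.
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h1 = np.exp(-t)                       # first impulse response (arbitrary)
h2 = np.exp(-2 * t)                   # second impulse response (arbitrary)
x = np.sin(2 * np.pi * 1.5 * t)       # arbitrary input

def lti(x, h):
    """Discrete approximation of the convolution (x*h)(t) on the grid."""
    return np.convolve(x, h)[:len(x)] * dt

h12 = lti(h1, h2)                     # combined impulse response h1*h2
y_cascade = lti(lti(x, h1), h2)       # x -> h1 -> h2
y_single = lti(x, h12)                # x -> (h1*h2)

print(np.max(np.abs(y_cascade - y_single)))   # agreement up to rounding
```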

Recall that a system is said to be stable if its output is limited for all limited inputs. As we can determine if a system is causal from its impulse response, we can also determine if the system is stable by studying its impulse response.

Theorem 5 (Stability of LTI systems) An LTI system with impulse response $h(t)$ is stable if and only if $\int_{-\infty}^{\infty} |h(t)|\,dt$ is finite.

Proof: We start by proving the if part of the statement. The input $x(t)$ is supposed to be limited, i.e. $|x(t)| < M$ holds for some finite $M$. Assuming that $\int_{-\infty}^{\infty} |h(t)|\,dt$ is limited, we want to check if the output $y(t)$ is limited. Therefore, study
$$|y(t)| = |(h * x)(t)| = \left| \int_{-\infty}^{\infty} h(\tau)\,x(t - \tau)\,d\tau \right|.$$
The triangle inequality then lets us rewrite that as
$$|y(t)| \leq \int_{-\infty}^{\infty} |h(\tau)| \cdot |x(t - \tau)|\,d\tau.$$
Finally, using that the input is limited, we get
$$|y(t)| < M \int_{-\infty}^{\infty} |h(\tau)|\,d\tau,$$
and $y(t)$ is obviously limited. Thus, the system is stable if $\int_{-\infty}^{\infty} |h(t)|\,dt$ is limited.

Second, we prove the only if part of the statement by showing that there is a limited input $x(t)$ such that the output is unlimited if $\int_{-\infty}^{\infty} |h(t)|\,dt$ is unlimited. Define the input
$$x(t) = \begin{cases} 1, & \text{for all } t \text{ such that } h(-t) > 0 \text{ holds}, \\ -1, & \text{elsewhere}, \end{cases}$$
which is obviously limited. The corresponding output is then given by
$$y(t) = (h * x)(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t - \tau)\,d\tau.$$
Consider $t = 0$. The output at that time instance is
$$y(0) = \int_{-\infty}^{\infty} h(\tau)\,x(-\tau)\,d\tau = \int_{-\infty}^{\infty} |h(\tau)|\,d\tau,$$
where we have made use of our definition of $x(t)$. Obviously, if $\int_{-\infty}^{\infty} |h(t)|\,dt$ is unlimited, then so is $y(0)$. We have thus shown that there is at least one limited input that results in an unlimited output if $\int_{-\infty}^{\infty} |h(t)|\,dt$ is unlimited. Therefore, the system is stable only if $\int_{-\infty}^{\infty} |h(t)|\,dt$ is limited. ∎
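The theorem can be illustrated numerically (a sketch with arbitrarily chosen systems, not from the original text): $h(t) = e^{-t}u(t)$ is absolutely integrable and gives limited outputs, while $h(t) = u(t)$ is not, and the limited input $u(t)$ already drives its output beyond any bound.

```python
# Numerical look at Theorem 5: an absolutely integrable impulse response
# versus one that is not.
import numpy as np

dt = 1e-2
t = np.arange(0.0, 50.0, dt)

h_stable = np.exp(-t)                 # integral of |h| is 1: stable
h_unstable = np.ones_like(t)          # h(t) = u(t): integral of |h| diverges

print(np.trapz(np.abs(h_stable), t))      # ~1.0, finite
print(np.trapz(np.abs(h_unstable), t))    # ~50, grows with the horizon

u = np.ones_like(t)                            # limited input |x(t)| <= 1
y = np.convolve(u, h_unstable)[:len(t)] * dt
print(y[-1])                                   # ~50: the output ramps up without bound
```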

The following theorem, which we state without proof, is related to the stability of LTI systems.

Theorem 6 (Convergence of the convolution) The convolution of the signals $a(t)$ and $b(t)$ is convergent

• if $\int_{-\infty}^{\infty} |a(t)|\,dt$ converges and $b(t)$ is finite, or

• if $\int_{-\infty}^{\infty} |b(t)|\,dt$ converges and $a(t)$ is finite.

3.4 The Frequency Domain

In telecommunication applications, we are usually interested in keeping our signals inside a given frequency band. If we were only dealing with stationary sinusoidal signals, that demand would be simple to adhere to, since the frequency concept is well defined for such signals. However, we usually do not transmit stationary sinusoidal signals, so we need a way to give the concept of frequency a meaning for (almost) arbitrary signals. That is done by introducing the Fourier transform.

Definition 11 (Fourier transform) Let $x(t)$ be a signal. Then $F\{x(t)\}$, the Fourier transform of $x(t)$, is defined as
$$F\{x(t)\} \triangleq \int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,dt.$$

So, the Fourier transform maps a function of $t$ onto a function of $f$. Note, however, that we need to show that $f$ in this case has a natural interpretation as frequency. We will return to that. Furthermore, the integral in the Fourier transform converges if and only if $\int_{-\infty}^{\infty} |x(t)|\,dt$ converges. If the signal is $x(t)$, we usually let $X(f)$ be its Fourier transform. It would be convenient if there were an inverse mapping taking $X(f)$ to $x(t)$. So, let us try to define one.

Definition 12 (Inverse Fourier transform) Consider $X(f)$. Then $F^{-1}\{X(f)\}$, the inverse Fourier transform of $X(f)$, is defined as
$$F^{-1}\{X(f)\} \triangleq \int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df.$$

Let us tie those definitions together.

Theorem 7 (True Inverse) If X(f) is the Fourier transform of x(t), then x(t) is the inverse Fourier transform of X(f).

Proof: As stated in the theorem to be proved, let $X(f)$ be the Fourier transform of $x(t)$, but let $\tilde{x}(t)$ denote the result of the inverse Fourier transform of $X(f)$. Let us study $\tilde{x}(t)$. We have
$$\tilde{x}(t) = \int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df.$$
Expressing $X(f)$ using the Fourier transform, we get
$$\tilde{x}(t) = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} x(\tau)\,e^{-j2\pi f \tau}\,d\tau \right) e^{j2\pi f t}\,df.$$
Changing the integration order, we get
$$\tilde{x}(t) = \int_{-\infty}^{\infty} x(\tau) \left( \int_{-\infty}^{\infty} e^{j2\pi f (t - \tau)}\,df \right) d\tau.$$
We noted earlier that it is possible to show that $\delta(t) = \int_{-\infty}^{\infty} e^{j2\pi f t}\,df$ holds. Thus, we can rewrite the expression above as
$$\tilde{x}(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t - \tau)\,d\tau = x(t),$$
where we have used the definition integral of the unit impulse in the last equality. ∎

We should note that the Fourier transform of a function is defined for all real $f$, i.e. we consider both positive and negative frequencies.

Example 3.2 Consider the signal $x(t) = a e^{-bt} u(t)$ with Fourier transform $X(f)$. Then we have
$$X(f) = \int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,dt = \int_{-\infty}^{\infty} a e^{-bt} u(t)\,e^{-j2\pi f t}\,dt = \int_{0}^{\infty} a e^{-(b + j2\pi f)t}\,dt$$
$$= a \left[ \frac{e^{-(b + j2\pi f)t}}{-(b + j2\pi f)} \right]_{0}^{\infty} = 0 - a\,\frac{e^{0}}{-(b + j2\pi f)} = \frac{a}{b + j2\pi f},$$
which converges if $b > 0$.
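A quick numerical check of this transform pair (a sketch, not from the original text; the values of a, b and the sample frequencies are arbitrary choices):

```python
# Numerical check of Example 3.2: the Fourier integral of a*exp(-b t)u(t)
# against the closed form a/(b + j 2 pi f).
import numpy as np

a, b = 2.0, 3.0
dt = 1e-4
t = np.arange(0.0, 20.0, dt)     # x(t) = 0 for t < 0; tail beyond t = 20 negligible
x = a * np.exp(-b * t)

for f in (0.0, 0.5, 2.0):
    X_num = np.trapz(x * np.exp(-1j * 2 * np.pi * f * t), t)
    X_ref = a / (b + 1j * 2 * np.pi * f)
    print(f, abs(X_num - X_ref))  # small for each frequency
```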

Example 3.3 Consider the signal
$$x(t) = u(t + 1/2) - u(t - 1/2) = \begin{cases} 1, & -1/2 \leq t < 1/2, \\ 0, & \text{otherwise}, \end{cases}$$
with Fourier transform $X(f)$. For $f \neq 0$, we have
$$X(f) = \int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,dt = \int_{-1/2}^{1/2} e^{-j2\pi f t}\,dt = \left[ \frac{e^{-j2\pi f t}}{-j2\pi f} \right]_{-1/2}^{1/2}$$
$$= \frac{e^{-j\pi f} - e^{j\pi f}}{-j2\pi f} = \frac{1}{\pi f} \cdot \frac{e^{j\pi f} - e^{-j\pi f}}{j2} = \frac{\sin(\pi f)}{\pi f}.$$

For the case $f = 0$, we have
$$X(0) = \int_{-\infty}^{\infty} x(t)\,dt = \int_{-1/2}^{1/2} dt = 1.$$

However, we note that X(f) → 1 holds when f → 0, so X(f) is continuous for all f.

The function derived in the last example is so useful that it has been given its own notation.

Definition 13 (Sinc function) The function $\mathrm{sinc}(f)$ is defined as
$$\mathrm{sinc}(f) \triangleq \begin{cases} \dfrac{\sin(\pi f)}{\pi f}, & f \neq 0, \\ 1, & f = 0. \end{cases}$$
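Numerically, the transform of the rectangular pulse in Example 3.3 can be checked directly against numpy's built-in sinc, which uses the same normalization $\sin(\pi f)/(\pi f)$. This is a sketch, not from the original text; the sample frequencies are arbitrary.

```python
# Numerical companion to Example 3.3: the Fourier integral over the
# unit-width rectangular pulse evaluates to sinc(f).
import numpy as np

dt = 1e-4
t = np.arange(-0.5, 0.5, dt) + dt / 2     # midpoint grid over the pulse support

for f in (0.25, 1.0, 2.5):
    X_num = np.sum(np.exp(-1j * 2 * np.pi * f * t)) * dt
    X_ref = np.sinc(f)                    # numpy's sinc(f) = sin(pi f)/(pi f)
    print(f, abs(X_num - X_ref))          # small
```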

As usual, let $x(t)$ be a signal with Fourier transform $X(f)$. Then $X(f)$ is also referred to as the spectrum of $x(t)$. Furthermore, $|X(f)|$ is referred to as its amplitude spectrum, and $\arg\{X(f)\}$ is referred to as its phase spectrum. In many situations, we will only be interested in the amplitude spectrum. The quantity $|x(t)|^2$ is referred to as the momentary signal power of the signal $x(t)$, and $\int_{-\infty}^{\infty} |x(t)|^2\,dt$ is the signal energy of $x(t)$.

Theorem 8 (Parseval's relation) Let $X(f)$ be the Fourier transform of $x(t)$. Then
$$\int_{-\infty}^{\infty} x^2(t)\,dt = \int_{-\infty}^{\infty} |X(f)|^2\,df$$
holds.

Proof: We start by rewriting one of the two $x(t)$ in the left hand side of the relation using the inverse Fourier transform, which gives us
$$\int_{-\infty}^{\infty} x^2(t)\,dt = \int_{-\infty}^{\infty} x(t) \left( \int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df \right) dt.$$
Changing the order of the integrations gives us
$$\int_{-\infty}^{\infty} x^2(t)\,dt = \int_{-\infty}^{\infty} X(f) \left( \int_{-\infty}^{\infty} x(t)\,e^{j2\pi f t}\,dt \right) df = \int_{-\infty}^{\infty} X(f)\,X^*(f)\,df = \int_{-\infty}^{\infty} |X(f)|^2\,df,$$
which concludes the proof. ∎

We can thus calculate the signal energy of a signal both in the time domain, using the definition of signal energy, and in the frequency domain, using Parseval's relation. Therefore, $|X(f)|^2$ is often called the energy spectrum of $x(t)$. Theorem 8 is actually only a special case of Parseval's relation. A more general form is
$$\int_{-\infty}^{\infty} a(t)\,b^*(t)\,dt = \int_{-\infty}^{\infty} A(f)\,B^*(f)\,df,$$
where we have $A(f) = F\{a(t)\}$ and $B(f) = F\{b(t)\}$. The proof of that relation follows the proof of Theorem 8.
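Parseval's relation can also be verified numerically. The sketch below (not from the original text) uses the signal $x(t) = e^{-t}u(t)$, whose spectrum $1/(1 + j2\pi f)$ follows from Example 3.2 with $a = b = 1$; both integrals should come out close to the analytic energy $1/2$. The integration limits are arbitrary truncations.

```python
# Numerical check of Parseval's relation for x(t) = exp(-t)u(t).
import numpy as np

dt = 1e-4
t = np.arange(0.0, 30.0, dt)
x = np.exp(-t)
energy_time = np.trapz(x**2, t)               # analytically 1/2

f = np.arange(-200.0, 200.0, 1e-3)            # truncated frequency axis
X = 1.0 / (1.0 + 1j * 2 * np.pi * f)          # spectrum from Example 3.2, a = b = 1
energy_freq = np.trapz(np.abs(X)**2, f)

print(energy_time, energy_freq)               # both close to 0.5
```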

We have stated that we are mainly interested in LTI systems. So, a natural question at this point is: what are the spectral properties of the output of an LTI system? Recall that the output of an LTI system is the convolution of the input and the impulse response of the system.

Theorem 9 (Fourier transform of a convolution) Let $a(t)$ and $b(t)$ be signals with Fourier transforms $A(f)$ and $B(f)$. Then we have
$$F\{(a * b)(t)\} = A(f)\,B(f).$$

Proof: Based on our definitions, we have
$$F\{(a * b)(t)\} = \int_{-\infty}^{\infty} (a * b)(t)\,e^{-j2\pi f t}\,dt = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} a(\tau)\,b(t - \tau)\,d\tau \right) e^{-j2\pi f t}\,dt.$$
Rewrite the expression above as
$$F\{(a * b)(t)\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} a(\tau)\,b(t - \tau)\,e^{-j2\pi f t}\,d\tau\,dt.$$
Now, set $\lambda = t - \tau$, and we get
$$F\{(a * b)(t)\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} a(\tau)\,b(\lambda)\,e^{-j2\pi f (\lambda + \tau)}\,d\tau\,d\lambda = \int_{-\infty}^{\infty} a(\tau)\,e^{-j2\pi f \tau}\,d\tau \int_{-\infty}^{\infty} b(\lambda)\,e^{-j2\pi f \lambda}\,d\lambda.$$
Finally, we identify the last two integrals, and we get
$$F\{(a * b)(t)\} = A(f)\,B(f). \qquad ∎$$

So, the spectrum of the output of an LTI system is the product of the spectrum of the input and the spectrum of the impulse response of the system. The Fourier transform of the impulse response of an LTI system is also referred to as its frequency response.

Actually, this last theorem is what gives $f$ a natural interpretation as frequency. More precisely, if the input $x(t)$ is the stationary sinusoidal signal
$$x(t) = A \cdot \sin(2\pi f_0 t + \phi),$$
then it can be shown that the corresponding output, $y(t)$, is given by
$$y(t) = A \cdot |H(f_0)|\,\sin(2\pi f_0 t + \phi + \arg\{H(f_0)\}),$$
where $H(f)$ is the frequency response of the system. This observation together with Theorem 9 is what gives $f$ an interpretation as frequency.
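This sinusoid-in/sinusoid-out behaviour is easy to observe numerically. The sketch below (not from the original text; the choices of $f_0$ and the system are arbitrary) filters a sinusoid through the system with impulse response $h(t) = e^{-t}u(t)$, whose frequency response $H(f) = 1/(1 + j2\pi f)$ follows from Example 3.2; once the initial transient has died out, the output matches $|H(f_0)| \sin(2\pi f_0 t + \arg H(f_0))$.

```python
# Numerical illustration: a sinusoid through an LTI system comes out as a
# sinusoid, scaled by |H(f0)| and phase-shifted by arg H(f0).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
f0 = 0.5
x = np.sin(2 * np.pi * f0 * t)
h = np.exp(-t)                              # h(t) = exp(-t)u(t)

y = np.convolve(x, h)[:len(t)] * dt         # (x*h)(t)

H0 = 1.0 / (1.0 + 1j * 2 * np.pi * f0)      # H(f0) = 1/(1 + j 2 pi f0)
y_ref = np.abs(H0) * np.sin(2 * np.pi * f0 * t + np.angle(H0))

mask = t > 10.0                             # skip the initial transient
print(np.max(np.abs(y[mask] - y_ref[mask])))    # small, limited by the grid
```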

3.5 Linear Differential Equations

We start with an example.

Example 3.4 A simple passive linear filter is given in Figure 3.6. For the resistor, we have
$$R\,i(t) = x(t) - y(t),$$
while for the capacitor, we have
$$i(t) = C\,\frac{d}{dt}\,y(t).$$
Combining those equations, we get
$$RC\,\frac{d}{dt}\,y(t) = x(t) - y(t),$$
which we can rewrite as
$$y(t) + RC\,\frac{d}{dt}\,y(t) = x(t).$$

In the example above, we ended up describing the relation between the input and the output using a linear differential equation. That is the case for any passive linear filter. We are interested in the frequency response of the filter, and possibly also in the impulse response of the filter. Therefore, it is of interest to study such equations in terms of Fourier transforms. First, we seek an expression for the Fourier transform of a derivative. Expressing the signal $x(t)$ using the inverse Fourier transform, we get
$$\frac{d}{dt}\,x(t) = \frac{d}{dt} \int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df = \int_{-\infty}^{\infty} j2\pi f\,X(f)\,e^{j2\pi f t}\,df.$$
Thus, we have
$$F\left\{ \frac{d}{dt}\,x(t) \right\} = j2\pi f\,X(f),$$
and similarly
$$F\left\{ \frac{d^n}{dt^n}\,x(t) \right\} = (j2\pi f)^n X(f).$$
Now, consider a system that is described by the linear differential equation
$$\sum_{i=0}^{n} a_i\,\frac{d^i}{dt^i}\,y(t) = \sum_{k=0}^{m} b_k\,\frac{d^k}{dt^k}\,x(t),$$


Figure 3.6: Another passive linear filter, consisting of a resistance R in series with a capacitance C, with the output taken over the capacitor. The voltage x(t) is the input, the voltage y(t) is the output, and i(t) is the current through the components.

where $x(t)$ and $y(t)$ are the input and output, respectively, and where $a_n$ and $b_m$ are non-zero. If $m > n$, then the system is certain to be non-stable. If $m \leq n$, then the system may or may not be stable, depending on the coefficients. We refer to $n$ as the degree of the system. Based on the reasoning above, we can transform the differential equation to the frequency domain, and we get the equation
$$\sum_{i=0}^{n} a_i (j2\pi f)^i\,Y(f) = \sum_{k=0}^{m} b_k (j2\pi f)^k\,X(f).$$
Again, let $H(f)$ denote the frequency response of the system. Then $Y(f) = X(f)H(f)$ holds, which gives us
$$H(f) = \frac{Y(f)}{X(f)} = \frac{\sum_{k=0}^{m} b_k (j2\pi f)^k}{\sum_{i=0}^{n} a_i (j2\pi f)^i}.$$
So, $H(f)$ is directly given by the differential equation.
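The formula translates directly into a few lines of code. The sketch below (not from the original text) defines a hypothetical helper, freq_response, that evaluates $H(f)$ from the coefficient lists of the differential equation, and applies it to the RC filter of Example 3.4; the component values are arbitrary choices.

```python
# Evaluate H(f) for a system described by
#   sum_i a_i y^(i)(t) = sum_k b_k x^(k)(t),
# using H(f) = (sum_k b_k (j 2 pi f)^k) / (sum_i a_i (j 2 pi f)^i).
import numpy as np

def freq_response(a_coeffs, b_coeffs, f):
    """a_coeffs[i] multiplies y^(i)(t); b_coeffs[k] multiplies x^(k)(t)."""
    s = 1j * 2 * np.pi * np.asarray(f, dtype=complex)
    num = sum(b * s**k for k, b in enumerate(b_coeffs))
    den = sum(a * s**i for i, a in enumerate(a_coeffs))
    return num / den

# The filter of Example 3.4: y(t) + RC y'(t) = x(t), i.e. a = [1, RC], b = [1].
R, C = 1e3, 1e-6                         # 1 kOhm, 1 uF (arbitrary values)
f = np.array([0.0, 159.15, 1e4])         # Hz; 159.15 Hz ~ 1/(2 pi R C)
print(np.abs(freq_response([1.0, R * C], [1.0], f)))
# ~ [1.0, 0.707, 0.016]: a low pass response
```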

Example 3.5 We return to our previous example. We have derived the linear differential equation
$$y(t) + RC\,\frac{d}{dt}\,y(t) = x(t)$$
describing the relation between the input $x(t)$ and the output $y(t)$ of the filter in Figure 3.6. Transforming that equation to the frequency domain, we get
$$Y(f) + j2\pi f RC \cdot Y(f) = X(f),$$
from which we get the frequency response
$$H(f) = \frac{Y(f)}{X(f)} = \frac{1}{1 + j2\pi f RC}.$$

3.6 Filters and Filter Types

As an alternative to the description of filters using linear differential equations, we can actually transform the electrical network itself. This is based on the observation that the relations between voltages and currents for resistors, capacitors and inductors are simple linear differential equations. Let $v(t)$ be the voltage over a component and let $i(t)$ be the current through it. Then we have

• $v(t) = R\,i(t)$ for a resistance $R$,

• $v(t) = L\,\frac{d}{dt}\,i(t)$ for an inductance $L$, and

• $i(t) = C\,\frac{d}{dt}\,v(t)$ for a capacitance $C$.

Transforming those relations to the frequency domain, we get

• $V(f) = R\,I(f)$ for the resistance,

• $V(f) = j2\pi f L \cdot I(f)$ for the inductance, and

• $V(f) = \frac{1}{j2\pi f C} \cdot I(f)$ for the capacitance.

We say that

• the resistance has impedance $R$,

• the inductance has impedance $j2\pi f L$, and

• the capacitance has impedance $\frac{1}{j2\pi f C}$.

We can then regard the network as a complex-valued direct current network where those impedances are treated the same way as resistances are treated in ordinary direct current analysis.

Example 3.6 Again, we return to our previous example. Transforming the filter in Figure 3.6 to the frequency domain, we get the network in Figure 3.7. Using voltage division, we get
$$Y(f) = \frac{1/(j2\pi f C)}{1/(j2\pi f C) + R} \cdot X(f) = \frac{1}{1 + j2\pi f RC} \cdot X(f),$$
from which we immediately get the frequency response
$$H(f) = \frac{Y(f)}{X(f)} = \frac{1}{1 + j2\pi f RC}$$
without ever writing any differential equation.


Figure 3.7: The filter from Figure 3.6 transformed to the frequency domain, with the capacitance replaced by its impedance 1/(j2πfC). X(f) and Y(f) are the Fourier transforms of the input and the output, respectively.
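The impedance argument is also a one-liner numerically. The following sketch (not from the original text; component values arbitrary) evaluates the complex voltage divider of Figure 3.7 and confirms that it matches $1/(1 + j2\pi f RC)$.

```python
# Frequency response of the RC network via impedances (Figure 3.7):
# H(f) = Z_C / (Z_C + R) with Z_C = 1/(j 2 pi f C).
import numpy as np

R, C = 1e3, 1e-6                        # arbitrary component values
f = np.array([10.0, 159.15, 1e4])       # Hz (f = 0 excluded: Z_C is infinite there)
Zc = 1.0 / (1j * 2 * np.pi * f * C)
H = Zc / (Zc + R)

print(np.abs(H))                                         # via voltage division
print(np.abs(1.0 / (1.0 + 1j * 2 * np.pi * f * R * C)))  # closed form, identical
```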

In the very first sentence of this chapter, we stated that filters are used to manipulate signals. Especially, filters are used to remove unwanted parts of the spectrum of signals. One way to characterize filters is therefore to describe which parts of the spectrum are removed, and which are not. There are at least four standard types of filters that should be recognized, namely the following.

• Low pass filters (LP filters) only let frequencies below a certain limit pass. This limit is called the cut-off frequency of the filter. The region below that frequency is referred to as the pass band of the filter, and the region above that frequency is referred to as the stop band of the filter.

• High pass filters (HP filters) only let frequencies above its cut-off frequency pass.

• Band pass filters (BP filters) only let frequencies in a frequency interval pass. The lower limit of that interval is called the lower cut-off frequency of the filter, and consequently, the upper limit of the interval is called the upper cut-off frequency.

• Band stop filters (BS filters) only let frequencies outside a frequency interval pass. The lower limit of that interval is called the lower cut-off frequency of the filter, and consequently, the upper limit of the interval is called the upper cut-off frequency.

For ideal filters, we have $|H(f)| = 1$ (or some other positive constant) in the pass band, and $|H(f)| = 0$ in the stop band. For a filter to actually behave that way, it has to have infinite degree, which corresponds to infinitely many components in the implementation. It also has to be non-causal and marginally stable. Ideal filters are thus impossible to construct. Instead, we try to approximate the ideal behaviour using filters of finite degree. The filter in Figure 3.6, analyzed in the last examples, is an example of a non-ideal LP filter; a small numerical sketch of its magnitude response is given below. What degree is actually needed for a certain filter is determined by the demands given by the situation where we intend to use the filter. We will not go any further in that direction.
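As a closing sketch (not from the original text; component values arbitrary), here is the magnitude response of that first order RC low pass filter: $|H|$ stays near 1 well below the cut-off frequency $1/(2\pi RC)$ and falls off roughly as $1/f$ above it, a crude approximation of the ideal brick-wall behaviour.

```python
# Magnitude response of the first order RC low pass filter from Figure 3.6.
import numpy as np

R, C = 1e3, 1e-6                        # cut-off 1/(2 pi R C) ~ 159 Hz
f = np.logspace(0, 5, 6)                # 1 Hz ... 100 kHz, one point per decade
H = 1.0 / (1.0 + 1j * 2 * np.pi * f * R * C)

for fi, Hi in zip(f, H):
    print(f"{fi:>9.1f} Hz   |H| = {abs(Hi):.4f}")
# |H| ~ 1 in the pass band and decays ~ 1/f in the stop band.
```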