ECE504: Lecture 2


D. Richard Brown III

Worcester Polytechnic Institute

15-Sep-2009

Lecture 2 Major Topics

We are still in Part I of ECE504: mathematical description of systems (model → mathematical description). You should be reading Chen Chapters 2-3 now.

1. Advantages and disadvantages of different mathematical descriptions
2. CT and DT transfer functions review
3. Relationships between mathematical descriptions

Preliminary Definition: Relaxed Systems

Definition

A system is said to be "relaxed" at time $t = t_0$ if the output $y(t)$ for all $t \geq t_0$ is excited exclusively by the input $u(t)$ for $t \geq t_0$.

Input-Output DE Description: Capabilities and Limitations

Example: $a y(t) + b \dot{y}(t) + c \ddot{y}(t) = d u(t) + e \dot{u}(t)$

+ Can describe memoryless, lumped, or distributed systems.
+ Can describe causal or non-causal systems.
+ Can describe linear or non-linear systems.
+ Can describe time-invariant or time-varying systems.
+ Can describe relaxed or non-relaxed systems (non-zero initial conditions).
− No explicit access to internal behavior of systems, e.g. doesn't directly apply to systems like "sharks and sardines".
− Difficult to analyze directly (differential equations).

State-Space Description: Capabilities and Limitations

Example:
$$\dot{x}(t) = A x(t) + B u(t)$$
$$y(t) = C x(t) + D u(t)$$

− Can't describe distributed systems. Only memoryless or lumped systems.
− Can't describe non-causal systems. Only causal systems.
+ Can describe linear or non-linear systems (the example is linear).
+ Can describe time-invariant or time-varying systems.
+ Can describe relaxed or non-relaxed systems (non-zero initial conditions).
+ Explicit description of internal system behavior, e.g. we can analyze the properties of signals internal to a system.
+ Abundance of analysis techniques. Linear state-space descriptions are analyzed with linear algebra, not calculus.

Transfer Function Description: Capabilities and Limitations

Example: $\hat{g}(s) = \dfrac{a s^2 + b s + c}{d s^3 + 1}$

+ Can describe memoryless, lumped, and some distributed systems.
− Can't describe non-causal systems. Only causal systems.
− Can't describe non-linear systems. Only linear systems.
− Can't describe time-varying systems. Only time-invariant systems.
− No explicit access to internal behavior of systems.
− Can't describe systems with non-zero initial conditions. Implicitly assumes that the system is relaxed.
+ Abundance of analysis techniques. Systems are usually analyzed with basic algebra, not calculus.

Impulse Response Description: Capabilities and Limitations

◮ Recall that the impulse response of a system is the output of the system given an input $u(t) = \delta(t)$ and relaxed initial conditions.
◮ Example:

$$g(t) = \begin{cases} \beta e^{-\alpha t} & t \geq 0 \\ 0 & t < 0 \end{cases}$$

◮ What are the capabilities and limitations of the impulse-response description?

Impulse Response

Suppose you have a linear system with p inputs and q outputs. Rather than a simple impulse response function, you now need an impulse response matrix:

$$G(t,\tau) = \begin{bmatrix} g_{11}(t,\tau) & \cdots & g_{1p}(t,\tau) \\ \vdots & & \vdots \\ g_{q1}(t,\tau) & \cdots & g_{qp}(t,\tau) \end{bmatrix}$$
where $g_{k\ell}(t,\tau)$ is the response at the $k$th output from an impulse at the $\ell$th input at time $\tau$. The vector output can be computed as
$$y(t) = \int_{-\infty}^{\infty} G(t,\tau)\, u(\tau) \, d\tau$$

◮ Sanity check: What are the dimensions of everything here?
◮ How do we integrate the vector $G(t,\tau)u(\tau)$? (A numerical sketch follows below.)
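Not from the slides, but to make the dimensions concrete, here is a minimal numerical sketch; the 2-input, 1-output system, its exponential impulse responses, and the step inputs are all made up for illustration. The integrand $G(t,\tau)u(\tau)$ is a $q \times 1$ vector, integrated element by element (here with a simple Riemann sum).

import numpy as np

# Hypothetical example: q = 1 output, p = 2 inputs, so G(t, tau) is 1 x 2
# and u(tau) is 2 x 1.  The system is LTI, so G(t, tau) depends only on t - tau.
def G(t, tau):
    dt = t - tau
    if dt < 0:                       # causal: no response before the impulse
        return np.zeros((1, 2))
    return np.array([[np.exp(-dt), 2.0 * np.exp(-3.0 * dt)]])

def u(tau):
    # two unit-step inputs starting at tau = 0 (made up for illustration)
    return np.array([[1.0], [1.0]]) if tau >= 0 else np.zeros((2, 1))

# Riemann-sum approximation of y(t) = integral of G(t, tau) u(tau) dtau.
# Each term G(t, tau) @ u(tau) is a q x 1 vector, so y(t) is also q x 1.
def y(t, dtau=1e-3):
    taus = np.arange(0.0, t, dtau)   # u(tau) = 0 for tau < 0
    return sum(G(t, tau) @ u(tau) for tau in taus) * dtau

print(y(2.0))   # approximately [[1 - e^{-2} + (2/3)(1 - e^{-6})]] ~ [[1.53]]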

The One-Sided Laplace Transform

◮ Suppose $f(t) : \mathbb{R}_+ \mapsto \mathbb{R}^{q \times p}$ is a matrix-valued function of $t \geq 0$, i.e.
$$f(t) = \begin{bmatrix} f_{11}(t) & \cdots & f_{1p}(t) \\ \vdots & & \vdots \\ f_{q1}(t) & \cdots & f_{qp}(t) \end{bmatrix}$$
◮ Define the Laplace transform of $f(t)$
$$\hat{f}(s) = \int_0^{\infty} e^{-st} f(t) \, dt$$
where the integral of a matrix is done element by element.
◮ The notation here is consistent with your textbook:
$$\hat{f}(s) = \mathcal{L}[f(t)] \qquad f(t) = \mathcal{L}^{-1}\left[\hat{f}(s)\right] \qquad f(t) \leftrightarrow \hat{f}(s)$$
◮ See your textbook for more details on the convergence of the one-sided Laplace transform.

The Inverse Laplace Transform

$$f(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} e^{st} \hat{f}(s) \, ds$$
where $j := \sqrt{-1}$ and $\sigma$ is any point in the region of absolute convergence of $\hat{f}(s)$.

◮ This is complex integration on a path in the complex plane.
◮ In general, this integral is not easy to compute.
◮ Whenever possible, use tables instead.

Continuous-Time Transfer Function Matrix

Definition
Given a causal, linear, time-invariant system with $p$ input terminals, $q$ output terminals, and relaxed initial conditions at time $t = 0$

$$u(t),\ t \geq 0 \;\longrightarrow\; [\text{system},\ x(0) = 0] \;\longrightarrow\; y(t),\ t \geq 0$$
then the transfer function matrix is defined as

$$\hat{g}(s) := \begin{bmatrix} \hat{g}_{11}(s) & \cdots & \hat{g}_{1p}(s) \\ \vdots & & \vdots \\ \hat{g}_{q1}(s) & \cdots & \hat{g}_{qp}(s) \end{bmatrix} \quad \text{where} \quad \hat{g}_{k\ell}(s) := \frac{\hat{y}_k(s)}{\hat{u}_\ell(s)}$$
for $k = 1,\dots,q$ and $\ell = 1,\dots,p$.

Continuous-Time Transfer Function Matrix: Remarks

◮ By the relaxed assumption, the transfer function describes the zero-state response of the system.
◮ Since the transfer function must be the same for any input/output combination, most textbooks (including Chen) define it as the Laplace transform of the impulse response of the system.
◮ What is the Laplace transform of $\delta(t)$?
◮ Hence, given a causal, linear, time-invariant system with $p$ input terminals, $q$ output terminals, and relaxed initial conditions
$$u_\ell(t) = \delta(t),\ u_m(t) = 0 \text{ for all } m \neq \ell,\ t \geq 0 \;\longrightarrow\; [\text{system},\ x(0) = 0] \;\longrightarrow\; y(t),\ t \geq 0$$
then
$$\begin{bmatrix} \hat{y}_1(s) \\ \vdots \\ \hat{y}_q(s) \end{bmatrix} = \begin{bmatrix} \dfrac{\hat{y}_1(s)}{\hat{u}_\ell(s)} \\ \vdots \\ \dfrac{\hat{y}_q(s)}{\hat{u}_\ell(s)} \end{bmatrix} = \begin{bmatrix} \hat{g}_{1\ell}(s) \\ \vdots \\ \hat{g}_{q\ell}(s) \end{bmatrix}$$

Rational Transfer Functions and Degree

Theorem (stated as a fact, Chen p.14)
If a continuous-time, linear, time-invariant system is lumped, each transfer function in the transfer function matrix is a rational function of $s$.
The proof for this theorem can be found in some of the other textbooks mentioned in Lecture 1. In any case, this result means that

$$\hat{g}_{k\ell}(s) = \frac{N(s)}{D(s)} = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}$$
where $N(s)$ and $D(s)$ are polynomials of $s$.
Definition
The degree of a polynomial in $s$ is the highest power of $s$ (with a non-zero coefficient) in the polynomial.

Example: $\deg(0 \cdot s^4 + 2 \cdot s^3 + 6) = 3$.

The Three Most Important Laplace Transforms: #1

$$e^{at} \leftrightarrow \frac{1}{s - a}$$
◮ Easy to show directly by integrating according to the definition.
◮ Application: For a continuous-time, linear, time-invariant, lumped system, we have
$$\hat{g}(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}$$
Compute the partial fraction expansion (sketched numerically below) ...
$$\hat{g}(s) = \frac{c_1}{s - d_1} + \cdots + \frac{c_n}{s - d_n}$$
Then what can we say about $g(t)$?
◮ Note that this is slightly more complicated if the roots of the denominator are repeated.
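Not part of the lecture, but a quick numerical sketch of this recipe using scipy.signal.residue; the transfer function coefficients are made up, and the pole/residue ordering returned by residue is not guaranteed.

import numpy as np
from scipy.signal import residue

# Hypothetical g_hat(s) = (s + 3) / (s^2 + 3 s + 2)   (coefficients made up)
num = [1.0, 3.0]
den = [1.0, 3.0, 2.0]

c, d, k = residue(num, den)   # residues c_i, poles d_i, direct polynomial term
print(c, d, k)                # expansion is 2/(s+1) - 1/(s+2); k is empty
                              # because g_hat(s) is strictly proper

# By transform #1, g(t) = sum_i c_i * exp(d_i * t) for t >= 0 (distinct poles):
t = np.linspace(0.0, 5.0, 6)
g = sum(ci * np.exp(di * t) for ci, di in zip(c, d)).real
print(g)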

The Three Most Important Laplace Transforms: #2

Notation: $g^{(n)}(t) := \dfrac{d^n g(t)}{dt^n}$.

$$g^{(n)}(t) \leftrightarrow s^n \hat{g}(s) - s^{n-1} g(0) - s^{n-2} \dot{g}(0) - \cdots - g^{(n-1)}(0)$$

◮ The $n = 1$ case is especially useful: $\dot{g}(t) \leftrightarrow s\hat{g}(s) - g(0)$.
◮ The general relationship can be shown inductively using the definition and integration by parts.

The Three Most Important Laplace Transforms: #2

Application #1: Given the input-output differential equation description of a continuous-time, linear, time-invariant, lumped system, we can easily compute the transfer function.

$$a_n y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 \dot{y}(t) + a_0 y(t) = b_m u^{(m)}(t) + b_{m-1} u^{(m-1)}(t) + \cdots + b_1 \dot{u}(t) + b_0 u(t)$$
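Filling in the step (standard computation, with every initial condition equal to zero by the relaxed assumption, so transform #2 reduces to $g^{(n)}(t) \leftrightarrow s^n \hat{g}(s)$):
$$\left(a_n s^n + \cdots + a_1 s + a_0\right)\hat{y}(s) = \left(b_m s^m + \cdots + b_1 s + b_0\right)\hat{u}(s) \;\Longrightarrow\; \hat{g}(s) = \frac{\hat{y}(s)}{\hat{u}(s)} = \frac{b_m s^m + \cdots + b_1 s + b_0}{a_n s^n + \cdots + a_1 s + a_0}.$$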

The Three Most Important Laplace Transforms: #2

Application #2: Given the state space description of a continuous-time, linear, time-invariant, lumped system, we can easily compute the transfer function.

$$\dot{x}(t) = A x(t) + B u(t)$$
$$y(t) = C x(t) + D u(t)$$
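Sketch of the computation, using transform #2 with $x(0) = 0$ (the resulting formula reappears later in this lecture):
$$s\hat{x}(s) = A\hat{x}(s) + B\hat{u}(s) \;\Longrightarrow\; \hat{x}(s) = (sI - A)^{-1} B \hat{u}(s)$$
$$\hat{y}(s) = C\hat{x}(s) + D\hat{u}(s) = \left[C(sI - A)^{-1}B + D\right]\hat{u}(s) \;\Longrightarrow\; \hat{g}(s) = C(sI - A)^{-1}B + D.$$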

The Three Most Important Laplace Transforms: #3

Notation (convolution assuming a causal relaxed system):

$$f(t) * g(t) := \int_0^t f(t-\tau) g(\tau) \, d\tau = \int_0^t f(\tau) g(t-\tau) \, d\tau.$$
It isn't too hard to show that

$$f(t) * g(t) \leftrightarrow \hat{f}(s)\,\hat{g}(s)$$

◮ Application #1: By definition of the transfer function of a linear, time-invariant, causal system, $\hat{y}(s) = \hat{g}(s)\hat{u}(s)$. This implies that

$$y(t) = \int_0^t g(t-\tau) u(\tau) \, d\tau = \int_0^t g(\tau) u(t-\tau) \, d\tau.$$
◮ Application #2: When $u(t) = \delta(t)$, you can show that $y(t) = g(t)$. Hence $g(t) = \mathcal{L}^{-1}[\hat{g}(s)]$ is the impulse response of the system.

What We Know: Moving Between System Descriptions

[Diagram: conversions among the impulse response description, input-output differential equation description, transfer function description, and state-space description; the state-space conversions are labeled "we will show these soon".]

◮ You've seen transfer functions in an undergraduate course (hopefully).
◮ A transfer function (matrix) describes the zero-state response of a causal LTI system (relaxed initial conditions at time $t_0$).
◮ Transfer functions can be used to represent some distributed systems, but these systems can't be represented with a state-space description.

Moving Between SS and I/O Descriptions

For now, we will focus on the linear, time-invariant case.
Theorem
Given a continuous-time input-output differential equation

$$a_n y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 \dot{y}(t) + a_0 y(t) = b_m u^{(m)}(t) + b_{m-1} u^{(m-1)}(t) + \cdots + b_1 \dot{u}(t) + b_0 u(t)$$
a state-space description of this system exists if and only if $m \leq n < \infty$.
◮ Note that this theorem states "if and only if". What does this mean?
◮ How can we prove this theorem?

An Easy Way to Go From an LTI I/O to SS Description

To prove the "if" part of the theorem, we are going to show that given a continuous-time, linear, time-invariant, lumped system with input-output differential equation

$$a_n y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 \dot{y}(t) + a_0 y(t) = b_m u^{(m)}(t) + b_{m-1} u^{(m-1)}(t) + \cdots + b_1 \dot{u}(t) + b_0 u(t)$$
we can always write a linear, time-invariant state-space description

$$\dot{x}(t) = A x(t) + B u(t)$$
$$y(t) = C x(t) + D u(t)$$

of the system.
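Not the constructive derivation used in the lecture, but a quick numerical illustration of the claim: scipy.signal.tf2ss returns one valid realization (A, B, C, D) of an I/O description with $m \leq n$, and scipy.signal.ss2tf recovers the same transfer function. The coefficients below are made up.

from scipy.signal import tf2ss, ss2tf

# Hypothetical I/O description (m = 1 <= n = 2, coefficients made up):
#   2 y''(t) + 3 y'(t) + 1 y(t) = 1 u'(t) + 5 u(t)
num = [1.0, 5.0]           # b_1 s + b_0
den = [2.0, 3.0, 1.0]      # a_2 s^2 + a_1 s + a_0

A, B, C, D = tf2ss(num, den)   # one (controller-canonical-form) realization
print(A)                       # 2 x 2 state matrix
print(B, C, D)

# Round trip: the state-space model has the same transfer function.
num2, den2 = ss2tf(A, B, C, D)
print(num2, den2)              # same ratio as num/den (up to normalization)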

An Easy Way to Go From an LTI I/O to SS Description

Part 1: First derive a state dynamic equation...
Part 2: Now derive the output equation...
◮ Case I: $m < n$ ...

Hence, we have proved the "if" part of the theorem.

Going from an LTI SS to I/O Description

To prove the "only if" part of the theorem, we are going to show that a linear, time-invariant state-space description
$$\dot{x}(t) = A x(t) + B u(t)$$
$$y(t) = C x(t) + D u(t)$$
can always be written as an I/O differential equation
$$a_n y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 \dot{y}(t) + a_0 y(t) = b_m u^{(m)}(t) + b_{m-1} u^{(m-1)}(t) + \cdots + b_1 \dot{u}(t) + b_0 u(t)$$
with $m \leq n < \infty$.
To do this, however, we are going to need to learn a bit of linear algebra:
◮ The determinant of a square matrix.
◮ The adjoint of a square matrix.
◮ The matrix inverse.
◮ The matrix inverse in terms of the determinant and the adjoint.

The Determinant of a Square Matrix

Given $W \in \mathbb{R}^{n \times n}$, let $M_{ij}$ be the $(n-1) \times (n-1)$ square matrix formed by deleting the $i$th row and the $j$th column of $W$.
Definition
Given $W \in \mathbb{R}^{n \times n}$, the determinant is defined recursively as
$$\det[W] = \sum_{i=1}^{n} w_{ij} (-1)^{i+j} \det[M_{ij}]$$
for any $j \in \{1,\dots,n\}$, and where the determinant of any scalar $x \in \mathbb{R}$ is simply $\det[x] = x$.

Remark: $c_{ij} := (-1)^{i+j} \det[M_{ij}]$ is called the $ij$th cofactor of $W$.
Examples... (a numerical sketch follows below)
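A direct transcription of the recursive definition above (expansion down the first column), with numpy.linalg.det used only as a cross-check; the test matrix is made up.

import numpy as np

def minor(W, i, j):
    """Delete row i and column j of W (0-indexed)."""
    return np.delete(np.delete(W, i, axis=0), j, axis=1)

def det(W):
    """Recursive cofactor expansion along the first column."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    if n == 1:                       # det of a scalar is the scalar itself
        return W[0, 0]
    return sum(W[i, 0] * (-1) ** i * det(minor(W, i, 0)) for i in range(n))

W = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(det(W), np.linalg.det(W))      # both -3 (up to round-off)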

The Adjoint of a Square Matrix

Definition
The adjoint $J \in \mathbb{R}^{n \times n}$ of the matrix $W \in \mathbb{R}^{n \times n}$ is defined as
$$J = \operatorname{adj}(W) = \begin{bmatrix} c_{11} & \cdots & c_{1n} \\ \vdots & & \vdots \\ c_{n1} & \cdots & c_{nn} \end{bmatrix}^{\top}$$
where $c_{ij}$ is the $ij$th cofactor of $W$.

Remarks:
◮ Note the transpose.
◮ The adjoint of any scalar $x \in \mathbb{R}$ is simply $\operatorname{adj}[x] = 1$.
Examples...

The Matrix Inverse in Terms of the Det. and the Adjoint

Theorem (Hoffman and Kunze, Linear Algebra, 2nd Edition, p.160)
Let $A$ be an $n \times n$ matrix. When $A$ is invertible, the unique inverse for $A$ is

$$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)}$$

Remarks:
◮ See any decent linear algebra textbook for the proof.
◮ Note that $A$ is invertible if and only if $\det(A) \neq 0$.
◮ This is not how matrix inverses are actually computed in programs like Matlab and Octave (there are more computationally efficient ways to get the same answer). Nevertheless, we can use this result in our proof of the "only if" part. (A numerical check appears below.)
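A minimal numerical sketch of the theorem (again, not how numerical packages actually invert matrices): build the cofactor matrix, transpose it to get adj(A), and compare adj(A)/det(A) against numpy.linalg.inv; the test matrix is made up.

import numpy as np

def cofactor_matrix(A):
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M_ij = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M_ij)
    return C

def adj(A):
    return cofactor_matrix(A).T      # note the transpose

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])      # det(A) = 13, so A is invertible
A_inv = adj(A) / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(3)))      # True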

Going from an LTI SS to I/O Description

Back to the “only if” part of the proof. Recall that we can go from a linear, time-invariant state-space description to a transfer function by computing

$$\hat{g}(s) = C(sI - A)^{-1}B + D = \frac{N(s)}{D(s)}$$
Using what we now know about matrix inverses, we can write

$$C(sI - A)^{-1}B + D = \frac{C \operatorname{adj}(sI - A) B}{\det(sI - A)} + D = \frac{\tilde{N}(s)}{D(s)} + D$$

◮ What can you say about the degree of $D(s)$?
◮ What can you say about the degree of $\tilde{N}(s)$? Recall that the elements of $C$ and $B$ are constants (not functions of $s$).
◮ But what about $D$? (A numerical illustration of these degrees follows below.)
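A numerical illustration of the degree questions above (the random 3-state system is made up; scipy.signal.ss2tf returns the numerator and denominator polynomials of $C(sI - A)^{-1}B + D$).

import numpy as np
from scipy.signal import ss2tf

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# With D = 0: deg D(s) = n and deg N(s) <= n - 1 (strictly proper).
num0, den0 = ss2tf(A, B, C, np.zeros((1, 1)))
print(den0)     # n + 1 coefficients -> deg D(s) = 3
print(num0)     # leading coefficient is 0 -> deg N(s) <= 2

# With D != 0: the D * det(sI - A) term raises the numerator degree to n.
num1, den1 = ss2tf(A, B, C, 2.0 * np.ones((1, 1)))
print(num1)     # leading coefficient is 2 -> deg N(s) = 3, still m <= n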

Discrete-Time Systems

Our focus has been on continuous-time systems so far. What about discrete-time systems?

1. Input-output difference equation
2. Transfer function
3. Impulse response
4. State-space description
The same capabilities and limitations apply in the DT case as in the CT case. The tools are slightly different, however...

Discrete-Time Input-Output Difference Equation

Single-input, single-output, causal:
$$y[k] = f(y[k-1], y[k-2], \dots, u[k], u[k-1], \dots)$$
Example: $y[k] = y[k-1] + u[k]$. What is this?

Example: $y[k] = \dfrac{u[k] + u[k-1] + u[k-2] + u[k-3]}{4}$. What is this?
For $p$-input, $q$-output causal systems, we can write:

$$y_i[k] = f(y_i[k-1], y_i[k-2], \dots, u_1[k], u_1[k-1], \dots, u_p[k], u_p[k-1], \dots)$$
for $i = 1,\dots,q$.
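To make the two single-input examples above concrete (the first is an accumulator, the second a 4-point moving average), here is a minimal sketch using scipy.signal.lfilter with zero (relaxed) initial conditions; the input sequence is made up.

import numpy as np
from scipy.signal import lfilter

u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # made-up input sequence

# Example 1: y[k] = y[k-1] + u[k]   (an accumulator / discrete-time integrator)
# In lfilter form: b = [1], a = [1, -1]  (coefficients of y[k] - y[k-1] = u[k]).
y1 = lfilter([1.0], [1.0, -1.0], u)
print(y1)       # [ 1.  3.  6. 10. 15.]  -- running sum of u

# Example 2: y[k] = (u[k] + u[k-1] + u[k-2] + u[k-3]) / 4   (4-point moving average)
y2 = lfilter(np.ones(4) / 4.0, [1.0], u)
print(y2)       # [0.25 0.75 1.5  2.5  3.5 ]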

Transfer Function: The One-Sided z-Transform

◮ Suppose $f[k] : \mathbb{N} \mapsto \mathbb{R}^{q \times p}$ is a matrix-valued function of $k = 0, 1, \dots$
$$f[k] = \begin{bmatrix} f_{11}[k] & \cdots & f_{1p}[k] \\ \vdots & & \vdots \\ f_{q1}[k] & \cdots & f_{qp}[k] \end{bmatrix}$$
◮ Define the one-sided z-transform of $f[k]$
$$\hat{f}(z) = \sum_{k=0}^{\infty} f[k] z^{-k}$$
where $z \in \mathbb{C}$ and the sum of a matrix is done element by element.
◮ Notation:
$$\hat{f}(z) = \mathcal{Z}[f[k]] \qquad f[k] = \mathcal{Z}^{-1}\left[\hat{f}(z)\right] \qquad f[k] \leftrightarrow \hat{f}(z)$$
◮ See your textbook for details on the convergence of the z-transform.

The Inverse z-Transform

$$f[k] = \frac{1}{2\pi j} \oint \hat{f}(z) z^{k-1} \, dz$$
where $j := \sqrt{-1}$ and the integral is along a counterclockwise closed circular contour in the complex plane, centered at the origin and with radius $r > \lambda$ (in the region of absolute convergence).

◮ It can be shown that, when the z-transform is a rational function of $z$, the inverse z-transform can be computed without evaluating this integral (partial fraction expansion).
◮ In general, this integral is not easy to compute.
◮ Use tables whenever possible.

Discrete-Time Transfer Function

Definition
Given a causal, linear, time-invariant DT system with $p$ input terminals, $q$ output terminals, and relaxed initial conditions at time $k = 0$

$$u[k],\ k = 0, 1, 2, \dots \;\longrightarrow\; [\text{system},\ x[0] = 0] \;\longrightarrow\; y[k],\ k = 0, 1, 2, \dots$$
then the transfer function matrix is defined as

$$\hat{g}(z) := \begin{bmatrix} \hat{g}_{11}(z) & \cdots & \hat{g}_{1p}(z) \\ \vdots & & \vdots \\ \hat{g}_{q1}(z) & \cdots & \hat{g}_{qp}(z) \end{bmatrix} \quad \text{where} \quad \hat{g}_{i\ell}(z) := \frac{\hat{y}_i(z)}{\hat{u}_\ell(z)}$$
for $i = 1,\dots,q$ and $\ell = 1,\dots,p$.

The overall system is then $\hat{y}(z) = \hat{g}(z)\hat{u}(z)$.

Discrete-Time Transfer Function and Impulse Response

◮ The transfer function $\hat{g}(z)$ describes the zero-state response of a linear, time-invariant, discrete-time system.
◮ Let's set the input

$$u[k] = \delta[k] = \begin{cases} 1 & k = 0 \\ 0 & \text{otherwise} \end{cases}$$
What is the z-transform of $\delta[k]$? Technically, you should also always specify the region of absolute convergence.

◮ The transfer function (matrix) is the z-transform of the discrete-time impulse response (matrix).

Rational Transfer Functions

Theorem (stated as a fact, Chen pp.32-33)
If a discrete-time, linear, time-invariant system is lumped, each transfer function in the transfer function matrix is a rational function of $z$.
This result means that

$$\hat{g}_{i\ell}(z) = \frac{N(z)}{D(z)} = \frac{b_m z^m + b_{m-1} z^{m-1} + \cdots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0} \qquad (1)$$
where $N(z)$ and $D(z)$ are polynomials of $z$.

Note that discrete-time transfer functions are also often written as
$$\hat{g}_{i\ell}(z) = \frac{N'(z)}{D'(z)} = \frac{p_0 + p_1 z^{-1} + \cdots + p_m z^{-m}}{1 + q_1 z^{-1} + \cdots + q_n z^{-n}} \qquad (2)$$
where $N'(z)$ and $D'(z)$ are polynomials in $z^{-1}$.

Rational Transfer Functions

Are (1) and (2) equivalent?

When $m = n$, it should be clear that (1) and (2) are equivalent since
$$N'(z) = z^{-n} N(z) \qquad D'(z) = z^{-n} D(z)$$

$$\{p_0, p_1, \dots, p_n\} = \{b_n, b_{n-1}, \dots, b_0\} \qquad \{q_1, q_2, \dots, q_n\} = \{a_{n-1}, a_{n-2}, \dots, a_0\}$$
When $m < n$, (1) and (2) can still be matched by allowing leading zero coefficients in the numerator of (2).

What about the case $m > n$?

The Three Most Important z-Transforms: #1

$$a^k \leftrightarrow \frac{z}{z - a}$$
where $a$ can be real or complex-valued.
◮ Easy to show using our trick for computing the sum of a power series.
◮ Region of absolute convergence: $\{z \in \mathbb{C} : |z| > |a|\}$.
◮ Application: For a discrete-time, linear, time-invariant, lumped system, we have
$$\hat{g}(z) = \frac{N(z)}{D(z)} = \frac{b_m z^m + b_{m-1} z^{m-1} + \cdots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0}$$
Compute the partial fraction expansion (sketched numerically below) ...
$$\hat{g}(z) = \frac{c_1 z}{z - d_1} + \cdots + \frac{c_n z}{z - d_n}$$
Then what can we say about the impulse response $g[k]$?
◮ Note: Repeated roots make this a little bit more complicated.
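The discrete-time analogue of the earlier partial-fraction sketch, using scipy.signal.residuez, which expands in terms of the form $c_i/(1 - d_i z^{-1}) = c_i z/(z - d_i)$; the coefficients are made up and the returned ordering is not guaranteed.

import numpy as np
from scipy.signal import residuez

# Hypothetical g_hat(z) = z^2 / (z^2 - 0.9 z + 0.2) = 1 / (1 - 0.9 z^-1 + 0.2 z^-2)
b = [1.0]                  # numerator in powers of z^-1
a = [1.0, -0.9, 0.2]       # denominator in powers of z^-1

c, d, k = residuez(b, a)   # g_hat(z) = sum_i c_i / (1 - d_i z^-1) + direct terms
print(c, d, k)             # poles d_i are 0.5 and 0.4; k is empty here

# By z-transform #1, the impulse response is g[k] = sum_i c_i * d_i**k, k >= 0.
kk = np.arange(6)
g = sum(ci * di ** kk for ci, di in zip(c, d)).real
print(g)                   # 1, 0.9, 0.61, ... (matches the recursion
                           # g[k] = 0.9 g[k-1] - 0.2 g[k-2] + delta[k])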

The Three Most Important z-Transforms: #2

Advance in time:

$$g[k+\ell] \leftrightarrow z^{\ell} \hat{g}(z) - z^{\ell} g[0] - z^{\ell-1} g[1] - \cdots - z g[\ell-1]$$

$$g[k-\ell] \leftrightarrow z^{-\ell} \hat{g}(z) + z^{-\ell+1} g[-1] + \cdots + z^{-1} g[-\ell+1] + g[-\ell]$$

◮ The $\ell = 1$ case is especially useful:

$$g[k+1] \leftrightarrow z\hat{g}(z) - z g[0] \qquad g[k-1] \leftrightarrow z^{-1}\hat{g}(z) + g[-1]$$
◮ The general relationship can be shown inductively using the definition.

The Three Most Important z-Transforms: #2

Application #1: Given the input-output difference equation description of a discrete-time, linear, time-invariant, lumped system, we can easily compute the transfer function. A causal example:

$$y[k] + q_1 y[k-1] + \cdots + q_n y[k-n] = p_0 u[k] + p_1 u[k-1] + \cdots + p_m u[k-m]$$
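Filling in the step: taking the z-transform of both sides with zero initial conditions and using the delay property gives
$$\left(1 + q_1 z^{-1} + \cdots + q_n z^{-n}\right)\hat{y}(z) = \left(p_0 + p_1 z^{-1} + \cdots + p_m z^{-m}\right)\hat{u}(z) \;\Longrightarrow\; \hat{g}(z) = \frac{\hat{y}(z)}{\hat{u}(z)} = \frac{p_0 + p_1 z^{-1} + \cdots + p_m z^{-m}}{1 + q_1 z^{-1} + \cdots + q_n z^{-n}}.$$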

Note that the same idea applies to non-causal systems.

A Note About Causality in Discrete-Time Systems

Theorem
A lumped discrete-time system with rational transfer function

$$\hat{g}(z) = \frac{N(z)}{D(z)} = \frac{b_m z^m + b_{m-1} z^{m-1} + \cdots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0}$$
with $b_m \neq 0$ is causal if and only if $m \leq n$.

Note that the degrees here (m and n) are based on the transfer function representation with positive powers of z.

A Note About Causality in Discrete-Time Systems (cont)

$$\hat{g}(z) = \frac{N(z)}{D(z)} = \frac{b_m z^m + b_{m-1} z^{m-1} + \cdots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0}$$
To see why the theorem must be true, just convert $\hat{g}(z)$ to a difference equation representation:

$$y[k+n] + a_{n-1} y[k+n-1] + \cdots + a_1 y[k+1] + a_0 y[k] = b_m u[k+m] + b_{m-1} u[k+m-1] + \cdots + b_1 u[k+1] + b_0 u[k]$$

Let $\kappa = k + n$ and rearrange to get

$$y[\kappa] = -a_{n-1} y[\kappa-1] - \cdots - a_1 y[\kappa-n+1] - a_0 y[\kappa-n] + b_m u[\kappa-n+m] + b_{m-1} u[\kappa-n+m-1] + \cdots + b_1 u[\kappa-n+1] + b_0 u[\kappa-n]$$
When is this difference equation causal?

The Three Most Important z-Transforms: #2

Application #2: Given the state space description of a discrete-time, linear, time-invariant, lumped system, we can easily compute the transfer function.

$$x[k+1] = A x[k] + B u[k]$$
$$y[k] = C x[k] + D u[k]$$
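A quick numerical sketch (the matrices are made up): evaluating $C(zI - A)^{-1}B + D$ at a test point agrees with the rational transfer function returned by scipy.signal.ss2tf, which applies the same algebra with $z$ in place of $s$.

import numpy as np
from scipy.signal import ss2tf

# Made-up discrete-time state-space model (1 input, 1 output, 2 states).
A = np.array([[0.5, 1.0],
              [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[2.0]])

num, den = ss2tf(A, B, C, D)

# g_hat(z) = C (zI - A)^{-1} B + D, evaluated at an arbitrary test point z0.
z0 = 1.7
g_direct = (C @ np.linalg.inv(z0 * np.eye(2) - A) @ B + D).item()
g_tf = np.polyval(num[0], z0) / np.polyval(den, z0)
print(np.isclose(g_direct, g_tf))   # True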

The Three Most Important z-Transforms: #3

Notation (convolution assuming a causal, linear, time-invariant, relaxed system):

$$f[k] * g[k] := \sum_{i=0}^{k} f[k-i] g[i] = \sum_{i=0}^{k} f[i] g[k-i].$$
It isn't too hard to show that

$$f[k] * g[k] \leftrightarrow \hat{f}(z)\,\hat{g}(z)$$
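A small check of this property with a made-up causal example: time-domain convolution of g[k] with u[k] (numpy.convolve) matches driving u[k] through the corresponding transfer function with scipy.signal.lfilter under zero (relaxed) initial conditions.

import numpy as np
from scipy.signal import lfilter

# Made-up causal system: g_hat(z) = 1 / (1 - 0.5 z^-1)  =>  g[k] = 0.5**k, k >= 0.
K = 8
g = 0.5 ** np.arange(K)
u = np.array([1.0, 0.0, 2.0, -1.0, 0.5, 0.0, 0.0, 0.0])   # made-up input

# Time-domain convolution y[k] = sum_i g[k - i] u[i]  (first K samples).
y_conv = np.convolve(g, u)[:K]

# Same thing through the transfer function:
y_filt = lfilter([1.0], [1.0, -0.5], u)

print(np.allclose(y_conv, y_filt))   # True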

An Easy Way to Go From an LTI I/O to SS Description

We would like to be able to go from a linear time-invariant I/O difference equation

$$y[k+n] + a_{n-1} y[k+n-1] + \cdots + a_0 y[k] = b_m u[k+m] + b_{m-1} u[k+m-1] + \cdots + b_0 u[k]$$
to a state-space description. Recall that the state-space description is only applicable to causal and lumped systems, hence we can assume here that $m \leq n < \infty$.
Note that, since $m \leq n$, we can also write (without any loss of generality) our difference equation as

$$y[k+n] + a_{n-1} y[k+n-1] + \cdots + a_0 y[k] = b_n u[k+n] + b_{n-1} u[k+n-1] + \cdots + b_0 u[k]$$
where the first $n - m$ of the $b$ coefficients are equal to zero.

From an LTI SS Description to an I/O Difference Equation

Our strategy here is the same as in the continuous-time case:

state-space → transfer function → I/O difference equation

$$\hat{g}(z) = C(zI - A)^{-1}B + D = \frac{\tilde{N}(z)}{D(z)} + D = \frac{N(z)}{D(z)}$$

The only thing that has changed here from the continuous-time case is that the $s$ is now a $z$. Hence, we know that
◮ $\deg(D(z)) = n$
◮ $\deg(\tilde{N}(z)) \leq n - 1$
◮ $\deg(N(z)) \leq n$
◮ You should be able to easily convert this transfer function to a causal I/O difference equation.

What We Know: Moving Between System Descriptions

[Diagram: conversions among the impulse response description, input-output differential equation description, transfer function description, and state-space description.]

You should feel comfortable making all of these conversions when dealing with linear, time-invariant, causal, lumped continuous-time or discrete-time systems.

Conclusions

◮ Capabilities and limitations of different mathematical descriptions of systems.
◮ CT and DT transfer function review.
◮ Linear, time-invariant state-space → transfer function relationship.
◮ Linear algebra: identity matrix, matrix inverse, determinant, adjoint.
◮ Linear, time-invariant state-space ↔ I/O differential equation relationship.
