
Passivity Enforcement on Loewner-Based Descriptor System Models

Xiao, Yi Qing

Master of Engineering

Department of Electrical and Computer Engineering

McGill University, Montreal, Quebec, 2017-03-24

DEDICATION

This document is dedicated to the fun of learning and improving.

ACKNOWLEDGEMENTS

I would like to acknowledge Professor Roni Khazaka for teaching me, supporting me, and helping me from the very beginning of the project. I would like to thank Muhammad Kabir for his great contributions: teaching me how to use all the tools I needed and providing guidance and support throughout every step of the project. I also thank Marco Kassis for helping me through the initial stages of the project and for his support.

ABSTRACT

Macromodeling of passive structures such as interconnect networks is a key bottleneck for signal and power integrity analysis. For many structures, a physics-based model is difficult to obtain and numerical techniques such as Vector Fitting and Loewner Matrix interpolation are necessary to obtain a macromodel. However, these methods cannot guarantee a passive macromodel by construction. As a result, perturbation methods for passivity enforcement are necessary as a final macromodeling step. In this thesis, we study passivity enforcement methods based on the perturbation of the Hamiltonian matrix and propose efficient techniques that are suitable for Loewner Matrix-based macromodels.

ABRÉGÉ

La macromodélisation des structures passives comme les réseaux d'interconnexion est une étape difficile dans l'analyse de l'intégrité des signaux et de l'énergie. Un modèle basé sur la physique est difficile à obtenir pour beaucoup de structures et, pour obtenir un macromodèle, il est nécessaire d'utiliser des techniques numériques comme le Vector Fitting et l'interpolation par les matrices de Loewner. Cependant, les techniques numériques mentionnées ne peuvent pas garantir la passivité des macromodèles par construction, ce qui résulte dans la nécessité d'ajouter des méthodes de perturbation pour la rectification de passivité comme dernière étape dans la macromodélisation. Dans cette thèse, nous étudions les méthodes de rectification de passivité basées sur la matrice hamiltonienne et nous proposons des techniques de perturbation efficaces pour les modèles basés sur les matrices de Loewner.

TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGEMENTS
ABSTRACT
ABRÉGÉ
LIST OF TABLES
LIST OF FIGURES
1 Introduction
  1.1 Introduction
  1.2 Key Contributions
2 Literature Review
  2.1 Passivity Condition
    2.1.1 S-parameter System Passivity
    2.1.2 Y-parameter System Passivity
  2.2 Passivity Enforcement Methods Overview
    2.2.1 Convex Programming Approach
    2.2.2 Direct Residual Perturbation Approach
    2.2.3 Hamiltonian Matrix Perturbation Approach
3 Hamiltonian Matrix Eigenvalue Perturbation
  3.1 Hamiltonian Matrix Theory
  3.2 Perturbation Equation
    3.2.1 General Matrix Pencil Perturbation
    3.2.2 Perturbation Equations Properties
    3.2.3 C Matrix Imaginary Eigenvalue Perturbation Equations
    3.2.4 B Matrix Imaginary Eigenvalue Perturbation Equations
    3.2.5 B and C Matrix Imaginary Eigenvalue Perturbation Equations
  3.3 Hamiltonian Matrix Eigenvalue Properties
    3.3.1 Eigenvalue Distribution
    3.3.2 Singular Value Slope at Imaginary Eigenvalues
  3.4 Passivity Enforcement Strategy
    3.4.1 Non-Passive Regions
    3.4.2 Imaginary Eigenvalue Perturbation Analysis
    3.4.3 Perturbation Strategies
4 Example Simulations
  4.1 Example Description
  4.2 Results Comparison and Analysis
    4.2.1 Examples 3 and 5 Details
5 Conclusion and Future Work
References

LIST OF TABLES

Table 3–1: Eigenvalue Merge Simulation Results
Table 4–1: Simulation Example Details
Table 4–2: Simulation Summary, Ex. 1–3, Eq. I
Table 4–3: Simulation Summary, Ex. 1–3, Eq. II
Table 4–4: Simulation Summary, Ex. 4–5, Eq. I
Table 4–5: Simulation Summary, Ex. 4–5, Eq. II

LIST OF FIGURES

Figure 3–1: Example (J, K) pencil eigenvalue distribution
Figure 3–2: Simplified example of a singular value curve passivity violation region
Figure 3–3: Example non-passive region
Figure 3–4: Example enforced non-passive region
Figure 3–5: Example non-passive region
Figure 3–6: Example DC non-passive region
Figure 3–7: Example non-passive region of high imaginary eigenvalue density
Figure 3–8: Nearest Neighbor perturbation strategy illustration
Figure 3–9: Region Average perturbation strategy illustration
Figure 3–10: Region Bound perturbation strategy illustration
Figure 3–11: Cross Pairing Eigenvalue perturbation strategy illustration
Figure 4–1: Example 3's σmax{S(jω)} curve before and after enforcement, with added eigenvalues indicated as stars (Γ+ as blue upward stars, Γ− as red downward stars)
Figure 4–2: Example 5's σmax{S(jω)} curve before and after enforcement, with added eigenvalues indicated as stars (Γ+ as blue upward stars, Γ− as red downward stars)
Figure 4–3: Example 3 |S1,1| plot
Figure 4–4: Example 5 |S1,1| plot
Figure 4–5: Example 3's transient plot with capacitive terminations
Figure 4–6: Example 5's transient plot
Figure 4–7: Transient analysis network setup for Example 3
Figure 4–8: Transient analysis network setup for Example 5

CHAPTER 1
Introduction

1.1 Introduction

Electronic circuitry has become a defining technology of the modern era, with seemingly endless potential to improve and change our ways of life. The pace at which the technology itself evolves is equally astounding, with components becoming ever smaller and designs ever more complex. As more components are packed into tighter board space while operating frequencies rise, the design process becomes a challenge: feasible designs must be produced under limited time. It is under these circumstances that macromodeling techniques became central to the design process, with their ability to numerically represent complex circuitry as mathematical models that are readily used in simulations. Examples of macromodeling techniques include the Vector Fitting [12] and the Loewner Matrix [15,17] techniques, which transform recorded frequency measurements into models through mathematical approximation. However, as circuit designs grow more complex and PCB and interconnect operating frequencies increase, the generation of valid macromodels becomes a challenge of its own. Key model properties such as passivity, which are given for the real physical circuits, are no longer easily achieved for the generated circuit models.

The condition of passivity is roughly described as the condition that a system does not generate energy. Its importance lies in the implied stability of the

circuit, as well as its guarantee that any combination of passive systems results in a passive system [18]. Without stability, numerical simulation of the models may exhibit divergent behavior, which invalidates the simulation and prevents validation of the real physical systems that the models represent. Thus, it is highly desirable to ensure passivity of the macromodels, which enables modular design without the risk of unstable behavior upon combination. It is with the goal of improving and extending existing passivity enforcement techniques that the present thesis is written.

1.2 Key Contributions

For the work presented in this thesis, the main contributions are:
1. The relation between the Hamiltonian Matrix pencil's eigenvalues and the system's transfer function singular values (section 3.4.1).
2. Hamiltonian Matrix pencil perturbation properties (section 3.4.2).
3. Hamiltonian Matrix pencil perturbation strategies (section 3.4.3).
4. Simultaneous perturbation of matrices B and C when performing Hamiltonian Matrix pencil perturbation (section 3.2.5).

CHAPTER 2
Literature Review

Macromodels are generated using frequency data sampled over the bandwidth of a physical system. Such a system is assumed to be a linear passive p-port network modeled using its scattering (S) parameter data or admittance/impedance (Y) parameter data. The macromodel in question is in the form of a stable Linear Time Invariant (LTI) system in the general descriptor system (DS) form:

Eẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)        (2.1)

where x(t) ∈ R^n is the state vector, u(t) ∈ R^p is the incident power vector at the ports, and y(t) ∈ R^p is the reflected power vector at the ports. The system is described by the matrices E ∈ R^{n×n}, A ∈ R^{n×n}, B ∈ R^{n×p}, C ∈ R^{p×n}, and D ∈ R^{p×p}, where n is the order of the system and p is the number of ports. The parameters of the system (2.1) can be expressed as the transfer function:

H(s) = C(sE − A)^{-1}B + D        (2.2)

where H(s) can be either the admittance/impedance or scattering parameter matrix of the system evaluated at frequency s.

Although passivity was given for the original physical system from which we built the macromodel of form (2.1), the macromodel itself may not be passive due to

approximation errors or other factors inherent to most macromodeling algorithms. Thus arises the need for passivity enforcement, or measures for guaranteeing passivity of the macromodels.

Note that in the context of passivity enforcement, a stable system in the form (2.1) with singular E is treated differently from a system of the same form with non-singular E; in this thesis, the focus is solely on the latter form. This means that the DS we consider in this thesis can be easily converted into a regular system (RS) having the form:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)        (2.3)

with the transfer function:

H(s) = C(sI − A)^{-1}B + D        (2.4)
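As a brief illustration (a sketch of ours, not part of the original text), the transfer function (2.4) can be evaluated numerically at a point s by solving a linear system rather than forming the matrix inverse explicitly; the system matrices below are arbitrary stand-ins:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^{-1} B + D without forming the inverse."""
    n = A.shape[0]
    # Solve (sI - A) X = B for X, then H(s) = C X + D
    X = np.linalg.solve(s * np.eye(n) - A, B)
    return C @ X + D

# Arbitrary one-pole, one-port example: H(s) = 1 / (s + 1)
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
H0 = transfer_function(A, B, C, D, 0j)   # H(j0) = 1
```

Using `solve` instead of `inv` is the standard numerically robust choice when only H(s) at sample points is needed.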

The two main types of data used are the admittance/impedance (Y/Z) parameters and the scattering (S) parameters, and the macromodels resulting from the two types of data will have different passivity conditions. Due to the similarity between the Y and Z parameters, we use Y to refer to both.

2.1 Passivity Condition

In the most general sense, a passive system is one that does not generate energy: a passive system does not output more energy than it is given. A more concrete formulation of the passivity condition can be given for systems represented by a macromodel of the form (2.1) or (2.3). Note that in this section, we concentrate on the formulations for the RS format LTI system (2.3), though these

formulations are equally applicable to the descriptor system (2.1) in the case when E is non-singular.

2.1.1 S-parameter System Passivity

Here, we consider a system (2.3) built upon S-parameter data, where the transfer function (2.4) evaluates the S-parameters themselves, that is, S(s) = H(s).

Theorem 2.1.1. For the LTI system in the regular system format (2.3), the system is passive if and only if its S-parameters S(s) satisfy all of the following conditions [25,26]:
1. S(s) has no poles for Re{s} > 0.
2. I_p − S*(jω)S(jω) is positive semi-definite, ∀ω ∈ R.
3. S̄(jω) = S(−jω), ∀ω ∈ R, where S̄ denotes the complex conjugate of S.

The fulfillment of all 3 conditions is equivalent to the transfer function being bounded-real; that is, the condition of passivity is equivalent to the condition of bounded-realness of S(s) [2]. For a system (2.3) that is real and stable, conditions 1 and 3 are necessarily satisfied and we only need to fulfil condition 2 in order to ascertain the passivity of the system. A simple passivity check based on condition 2 can be derived by observing that condition 2 is equivalent to the condition:

sup_{ω∈R} σmax(S(jω)) ≤ 1        (2.5)

where σmax(S(jω)) denotes the largest singular value of S(jω). This condition indicates that for a stable system (2.3) to be passive, the singular values of S(jω) must

be below 1 at all frequencies ω, so a simple frequency scan can be used as a fast preliminary passivity check. An alternative method for determining the passivity of the system is given by the Linear Matrix Inequality (LMI) [2]:

\[
P > 0, \qquad
\begin{bmatrix} A^T P + PA + C^T C & PB + C^T D \\ B^T P + D^T C & D^T D - I \end{bmatrix} \le 0 \qquad (2.6)
\]

There exists a solution P = P^T ∈ R^{n×n} if and only if system (2.3) is passive.
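The frequency-scan check of condition (2.5) can be sketched as follows (our illustration, not from the thesis; the toy system and frequency grid are arbitrary choices):

```python
import numpy as np

def max_singular_value(A, B, C, D, omega):
    """sigma_max of S(j*omega) for the regular system (2.3)."""
    n = A.shape[0]
    S = C @ np.linalg.solve(1j * omega * np.eye(n) - A, B) + D
    return np.linalg.svd(S, compute_uv=False)[0]

def passivity_scan(A, B, C, D, omegas):
    """Preliminary check of (2.5): sigma_max(S(j w)) <= 1 on a grid.

    A grid scan can miss narrow violation bands between samples, which is
    why a scan is only a preliminary test rather than a robust one.
    """
    return all(max_singular_value(A, B, C, D, w) <= 1.0 for w in omegas)

# A clearly passive toy system: S(j w) = 0.5 / (j w + 1), so sigma_max <= 0.5
A = np.array([[-1.0]]); B = np.array([[0.5]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
ok = passivity_scan(A, B, C, D, np.linspace(0.0, 10.0, 101))
```

The scan's inability to detect violations entirely between grid points is what motivates the Hamiltonian-based check reviewed later in this document.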

2.1.2 Y-parameter System Passivity

Here, we consider a system (2.3) built upon Y-parameter data, where the transfer function (2.4) evaluates the Y-parameters themselves, that is, Y(s) = H(s).

Theorem 2.1.2. Let Y(s) be a square matrix rational function of the complex variable s. Then Y(s) is positive-real if and only if all of the following conditions are met [4,25]:
1. Y(s) has no poles p where Re{p} > 0.
2. Ȳ(s) = Y(s̄), ∀ Re{s} > 0.
3. G(jω) = Y(jω) + Y*(jω) ≥ 0, if jω is not a pole, for ω ∈ R.
4. All imaginary poles p = jω of the rational function Y(s) are simple poles, and each has a corresponding residue matrix that is non-negative definite.

The condition of passivity is equivalent to the condition of the transfer function Y(s) being positive-real [2]. For a stable system (2.3) representing a rational function, conditions 1 and 2 of Theorem 2.1.2 are met. Further, if there are no imaginary poles, then the only condition left to fulfil the passivity requirement is condition 3:

G(jω) ≥ 0, that is, G(jω) is positive semi-definite. This latter condition is of primary interest, as there is no direct control over it in the process of creating a rational function fitting the frequency data. Condition 3 is equivalent to the condition:

min{eig{G(jω)}} ≥ 0        (2.7)

where eig{X} denotes the set of eigenvalues of a matrix X and min{x} denotes the smallest value in a set x. Condition (2.7) implies that for a stable system (2.3) to be passive, the eigenvalues of G(jω) must be non-negative at all frequencies ω, so a simple frequency scan can be used as a fast preliminary passivity check. An alternative method for determining the positive-realness of the system is provided by the following LMI [2]:

\[
P > 0, \qquad
\begin{bmatrix} A^T P + PA & PB + C^T \\ B^T P + C & -D^T - D \end{bmatrix} \ge 0 \qquad (2.8)
\]

There exists a solution P = P^T ∈ R^{n×n} if and only if system (2.3) is passive.
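For illustration (our sketch, not from the thesis), the LMI (2.8) can be assembled for a candidate P and tested numerically via the eigenvalues of the symmetric block matrix; actually solving for P requires a semidefinite programming solver, which is outside the scope of this sketch, and the sign convention below follows (2.8) as printed:

```python
import numpy as np

def pr_lmi_block(A, B, C, D, P):
    """Assemble the block matrix of the positive-real LMI (2.8)."""
    top = np.hstack([A.T @ P + P @ A, P @ B + C.T])
    bot = np.hstack([B.T @ P + C, -D.T - D])
    return np.vstack([top, bot])

def satisfies_lmi(A, B, C, D, P, tol=1e-10):
    """Check P > 0 and the (2.8) block >= 0 via symmetric eigenvalues."""
    p_ok = np.all(np.linalg.eigvalsh(P) > tol)
    m_ok = np.all(np.linalg.eigvalsh(pr_lmi_block(A, B, C, D, P)) >= -tol)
    return bool(p_ok and m_ok)

# Scalar demo with arbitrary values: assemble the block for a candidate P
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])
P = np.array([[2.0]])
block = pr_lmi_block(A, B, C, D, P)
```

This is a feasibility check for a given P, not a search for one; LMI solvers perform the latter.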

2.2 Passivity Enforcement Methods Overview

In this section, we briefly review three of the most important passivity enforcement techniques: the convex programming approach, direct residual perturbation of relevant poles, and the Hamiltonian Matrix perturbation approach.

2.2.1 Convex Programming Approach

The convex programming approach reviewed in this section targets the Y-parameter-based LTI system (2.3), though a similar algorithm exists for the S-parameter-based system.

The technique is an optimization method based on the minimization of the perturbation error under the constraints of the LMI passivity condition (2.8) [3]. Of the LTI system matrices {A, B, C, D}, only C and D are used as variables of the minimization, so as to preserve the pole structure of the original system. With this choice, the LMI constraints (2.8) are convex [3]. The objective function of the minimization is a norm of the perturbation error and, as such, is also a convex function.

The overall minimization problem can be roughly described through the following equation:

\[
\min_{\tilde{C},\,\tilde{D},\,P} \; \sum_{k=1}^{r} \| H(j\omega_k) - \tilde{H}(j\omega_k) \|_2
\]
under the constraints:
\[
\begin{bmatrix} A^T P + PA & PB + \tilde{C}^T \\ B^T P + \tilde{C} & -\tilde{D}^T - \tilde{D} \end{bmatrix} \ge 0, \qquad P \ge 0. \qquad (2.9)
\]

where ω_k is the kth sampling point of the sampled data set from which the original LTI system is built, r is the total number of sample points, and H̃(jω) is the transfer function of the new LTI system with matrix set {A, B, C̃, D̃}. Note that the specific objective function given in the LMI problem (2.9) is an l2-norm function which assumes uniform weighting on all terms, but the l2-norm function is only used

as an example to showcase the minimization problem rather than as a suggested optimal objective function.

Because both the objective function and the LMI constraints in (2.9) are convex, a convex optimization algorithm can be used to determine an optimal solution [3]. Note that the above only presents the essential idea of the convex programming algorithm; the detailed algorithm is deferred to [3], and more advanced and polished versions are found in [5,6,8].

2.2.2 Direct Residual Perturbation Approach

The direct residual perturbation approach reviewed in this section targets the Y-parameter-based LTI system (2.3) and comes from [12]. The method is specifically designed for enforcing passivity of models based on the pole-residue form:

\[
Y_{(i,j)}(s) = \sum_{l=1}^{t} \frac{R_{(i,j)l}}{s - a_l} + D_{(i,j)} + s J_{(i,j)}, \qquad (2.10)
\]

where Y_{(i,j)}(s) denotes the element at coordinate (i, j) of the Y-parameter matrix

evaluated at frequency s [13]. The parameters R_{(i,j)l} and a_l are the residues and poles of the system, respectively, and D_{(i,j)} and J_{(i,j)} are real values. We make the assumption that the approximated Y-parameter matrix (2.10) is symmetric at all s, that is, the matrices D, J, and R_l for all l are symmetric. For example, this assumption is met if the parameters of equation (2.10) are obtained using the well-known vector-fitting method [12]. Under the assumption of symmetry of the

parameter matrices, we have G(jω) = ½(Y(jω) + Y*(jω)) = Re{Y(jω)}, and the passivity requirement (2.7) translates to the requirement that all eigenvalues of the

real part of Y(jω) must be positive. We note that, as G is symmetric and real, its eigenvalues are real.

Suppose that the Y-parameter matrix Y evaluated at a certain frequency s = jω does not fulfil the passivity condition due to one of the eigenvalues λ of Re{Y} = G being smaller than 0. The passivity enforcement method begins by stacking the columns of Y into a vector y, from the leftmost column of Y to the rightmost. Presume that to enforce passivity for this particular Y, it is required to apply the change ∆y to y. We define x as a vector containing the parameters of R in matching order with the indexing of the elements in the y vector. For example, in a 2 by 2 matrix Y case where our pole-residue form approximation has t = 3 poles, our stacked vectors will be:

\[
y = [y_{1,1},\, y_{2,1},\, y_{1,2},\, y_{2,2}]^T, \qquad
x = [r_{1,1}^T,\, r_{2,1}^T,\, r_{1,2}^T,\, r_{2,2}^T]^T \qquad (2.11)
\]
\[
\text{where } r_{i,j} = [R_{(i,j)1},\, R_{(i,j)2},\, R_{(i,j)3}]^T
\]

Suppose now that a change ∆x to x is required to achieve the desired ∆y. We have the following relation between x and y through the linearisation of (2.10):

∆y = M∆x        (2.12)

where M is the matrix holding the linear relation. For the 2 by 2 Y example from equation (2.11), M is block-defined as:

\[
M = \begin{bmatrix}
\frac{1}{s-a_1} & \frac{1}{s-a_2} & \frac{1}{s-a_3} & 0 & 0 & 0 & \cdots & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{s-a_1} & \frac{1}{s-a_2} & \frac{1}{s-a_3} & \cdots & 0 & 0 & 0 \\
\vdots & & & & & & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & 0 & 0 & \cdots & \frac{1}{s-a_1} & \frac{1}{s-a_2} & \frac{1}{s-a_3}
\end{bmatrix} \qquad (2.13)
\]
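A small numerical sketch of ours (the poles and frequency are arbitrary) of the block structure of M in (2.13); since each Y entry in (2.10) depends linearly on its residues, a residue change ∆x maps to ∆y = M∆x exactly:

```python
import numpy as np

def build_M(poles, n_entries, s):
    """Block matrix M of (2.13): one row per stacked Y entry,
    each row holding 1/(s - a_l) in its own column block."""
    t = len(poles)
    M = np.zeros((n_entries, n_entries * t), dtype=complex)
    row = 1.0 / (s - np.asarray(poles, dtype=complex))
    for k in range(n_entries):
        M[k, k * t:(k + 1) * t] = row
    return M

poles = [-1.0, -2.0, -3.0]            # arbitrary stable poles (t = 3)
s = 2j                                # evaluation frequency
M = build_M(poles, n_entries=4, s=s)  # 2x2 Y matrix -> 4 stacked entries

# Perturbing every residue by 1 shifts each stacked entry by sum_l 1/(s-a_l)
dx = np.ones(12)
dy = M @ dx
```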

As the passivity condition only involves the real part of symmetric Y, we write equation (2.12) in the more specific form:

∆g = Re{M}∆x = P∆x        (2.14)

where ∆g = Re{∆y}. To proceed further from (2.14), consider the linear relation between G and its eigenvalue λ with the corresponding right eigenvector v:

(G − λI)v = 0 (2.15)

Suppose that we perform a small change ∆G resulting in a change ∆λ and ∆v. We perform the following derivation:

[(G + ∆G) − (λ + ∆λ)I](v + ∆v) = 0

(Gv − λv) + (∆Gv − ∆λv) + (G∆v − λ∆v) + (∆G∆v − ∆λ∆v) = 0        (2.16)

(∆G − ∆λI)v + (G − λI)∆v = 0

where the last line follows by applying (2.15) to the first term and dropping the second-order term.

Multiply by the left eigenvector w of λ from the left:

w(∆G − ∆λI)v + w(G − λI)∆v = 0

w(∆G − ∆λI)v = 0        (2.17)

w∆Gv = ∆λwv

∆λ = (w∆Gv)/(wv)

As G is symmetric, we have w = vT , and if we were to normalize the eigenvectors to unit length, we would have:

∆λ = v^T∆Gv        (2.18)

Applying linearisation on equation (2.18) results in:

∆λ = Q∆g        (2.19)

where ∆g is a vector created by stacking the columns of ∆G from left to right and Q is the matrix maintaining the equality. Noting that ∆g in equation (2.19) is the same as in equation (2.14), we substitute (2.14) into (2.19) to obtain:

∆λ = QP∆x
∆λ = W∆x        (2.20)

We note that equation (2.20) is a direct relation between the eigenvalue of G and the residual vector x.
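The first-order sensitivity (2.18) is straightforward to verify numerically; the symmetric matrix and perturbation below are arbitrary examples of ours:

```python
import numpy as np

# Arbitrary real symmetric G and a small symmetric perturbation dG
G = np.array([[2.0, 0.5],
              [0.5, 1.0]])
dG = 1e-6 * np.array([[1.0, 0.3],
                      [0.3, 2.0]])

# eigh returns unit-length eigenvectors, so the first-order prediction
# (2.18) for the smallest eigenvalue is simply v^T dG v.
lam, V = np.linalg.eigh(G)
v = V[:, 0]
predicted = v @ dG @ v

# Actual shift of the smallest eigenvalue after perturbation
actual = np.linalg.eigvalsh(G + dG)[0] - lam[0]
```

The agreement is to second order in ||dG||, consistent with dropping the quadratic terms in (2.16).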

Define ỹ_u as the new Y-parameter vector resulting from applying the change of residues ∆x at the uth iteration. We effectively need to solve the least squares problem:

y − (ỹ_{(u−1)} + M∆x) → 0        (2.21)

under the constraint:

λ + ∆λ = λ + W∆x ≥ 0
W∆x ≥ −λ        (2.22)

The error term (2.21) is meant to minimize the error with respect to the original y vector in subsequent iterations. The problem can be solved using quadratic programming through the minimization of:

\[
\frac{1}{2}\Delta x^T M^T M \Delta x - (y - \tilde{y}_{(u-1)})^T M \Delta x \qquad (2.23)
\]

subject to (2.22), which is a restatement of the passivity condition (2.7) for a single eigenvalue. Note that the passivity enforcement process presented above only targets a single eigenvalue at a single frequency point. The more complete process is deferred to [13], and more refined and efficient versions of the approach are found in [10,11,20].
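The single-eigenvalue step can be sketched with a general-purpose solver; every matrix and number below is an arbitrary stand-in of ours for the quantities in (2.21)-(2.23), using scipy's SLSQP routine in place of a dedicated quadratic programming solver:

```python
import numpy as np
from scipy.optimize import minimize

# Arbitrary real-valued stand-ins: M maps residue changes dx to changes in
# the stacked vector (2.12), W maps dx to the eigenvalue change (2.20),
# and lam is a violating eigenvalue (< 0) to be pushed non-negative.
M = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4]])
W = np.array([0.2, -0.1, 0.5])
lam = -0.1
b = np.array([0.05, -0.02])          # plays the role of (y - y_tilde)

def objective(dx):
    # (2.23): 0.5 dx^T M^T M dx - (y - y_tilde)^T M dx
    return 0.5 * dx @ M.T @ M @ dx - b @ M @ dx

constraint = {"type": "ineq", "fun": lambda dx: W @ dx + lam}  # (2.22)

res = minimize(objective, x0=np.zeros(3), constraints=[constraint],
               method="SLSQP")
dx_opt = res.x   # residue perturbation making lam + W dx >= 0
```

A production implementation would use a dedicated QP solver and handle many eigenvalues and frequency points simultaneously, as in the references above.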

2.2.3 Hamiltonian Matrix Perturbation Approach

The Hamiltonian Matrix method is based on the theory of the Hamiltonian

Matrix pencil (J_λ, K), which is built using the matrices of the LTI system (2.1) and varies with the scalar control variable λ. The theory states that the magnitude of each imaginary eigenvalue of the pencil (J_λ, K) must be an angular frequency at which one of the singular value curves of the transfer function H(s) (2.2) of the system crosses the threshold λ, and the relation holds both ways. As stated in section 2.1.1, an S-parameter system is not passive if its H(s) has singular values above 1 over even one range of frequencies, so by setting λ to 1, we can determine all such frequency ranges by looking at the imaginary eigenvalues of (J_λ, K), if any.

Similarly, this form of passivity checking can be applied to Y-parameter systems by setting λ to 0 to match the passivity condition stated in section 2.1.2. By strategically setting λ, obtaining the imaginary eigenvalues of the matrix pencil (J_λ, K) allows us to determine all regions of passivity violation. In the same vein, perturbing the matrices of system (2.1) in a manner targeting the imaginary eigenvalues of (J_λ, K) provides a potential enforcement mechanism through the elimination of those imaginary eigenvalues, which is the essential idea of the Hamiltonian Matrix perturbation approach. As the main work presented in this thesis is built upon the Hamiltonian Matrix perturbation method, the full description of the method and the theory involved is deferred to Chapter 3 rather than reviewed in this chapter.

CHAPTER 3
Hamiltonian Matrix Eigenvalue Perturbation

In this chapter, we examine the theory of the Hamiltonian Matrix and its link to the passivity of the LTI system (2.1), and perform an in-depth review of the related perturbation equations for enforcing passivity. Then, a set of perturbation algorithms is explored as the main work of this thesis. As most of the work is performed on S-parameter-based LTI systems (2.1), all equations are associated with S-parameter systems only, though a completely parallel process is available for Y-parameter-based systems.

3.1 Hamiltonian Matrix Theory

A robust passivity checking mechanism is provided by the Hamiltonian Matrix pencil, which comprises the matrices J_γ and K:

     −1   A 0 B 0 −D γIp C 0 Jγ =   +        T   T   T   T  0 −A 0 −C γIp −D 0 B   A − BR−1DT C −γBR−1BT   Jγ =   , (3.1) γCT T−1C −AT + CT DR−1BT   E 0   K =   , (3.2) 0 ET

where the LTI system matrix set {E, A, B, C, D} is that of system (2.1), γ is a control variable of J_γ, T = DD^T − γ²I_p, R = D^TD − γ²I_p, and I_p is the identity matrix of size p. The matrix J_γ is Hamiltonian, while the matrix K has the property:

U^{-1}KU = K^T,        (3.3)

where U is the standard skew-symmetric matrix:

\[
U = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} \qquad (3.4)
\]

Denoting by (X, Y) a matrix pencil comprised of matrices X and Y, the spectral properties of the (J_γ, K) pencil have a direct link to the singular values of the transfer function H(s) of system (2.1), defined in (2.2). The following theorem formally expresses this link:

Theorem 3.1.1. Define σ(X) as the set of all singular values of matrix X and eig(X, Y) as the set of all generalized eigenvalues of matrix pencil (X, Y) (that is, the set of all λ for which there is a non-trivial solution x to the equation (λY − X)x = 0). Consider a purely imaginary value jω with ω ∈ R. If jω ∉ eig(A, E) and γ ∉ σ(D), then γ ∈ σ(H(jω)) if and only if jω ∈ eig(J_γ, K) [1].

Theorem 3.1.1 states that any frequency ωi at which a singular value of H(jωi) crosses threshold γ must be the magnitude of one of the imaginary eigenvalues of

(Jγ, K). Conversely, the magnitude of any imaginary eigenvalues of (Jγ, K) must be the angular frequency ωi at which a singular value of H(jωi) crosses threshold

γ. The theorem also states the preconditions jω ∉ eig(A, E) and γ ∉ σ(D); these are only relevant to proving the theorem and will be made clear later on.

Consider the scenario where we set γ to 1, in which case we denote the Hamiltonian Matrix pencil simply as (J, K). In this case, the imaginary eigenvalues of

(J, K) indicate all frequencies ω_i for which H(jω) has at least one singular value equal to 1. Combining this information with the passivity condition from section 2.1.1, which states that the singular values of the transfer function H(jω) must be below 1 at all frequencies for the system to be passive, we can formulate a robust passivity checking method:
• If there are no (J, K) imaginary eigenvalues: the system is passive.
• If there are any (J, K) imaginary eigenvalues: the system is non-passive.
This passivity checking method is based on the simple fact that for a singular value curve to lie above 1, it has to have crossed the threshold 1 at a certain point; if no such crossing point exists, then all singular values lie below 1 at all frequencies, in which case the system is passive. However, the above method ignores the extreme case where all singular values lie above 1 at all frequencies, but this scenario can be easily detected by verifying the singular values of the matrix D. Consider the limit case for the transfer function H(jω) defined at (2.2):

lim_{ω→∞} H(jω) = D        (3.5)

Indeed, H(jω) eventually becomes D as ω increases, so if all singular values of H(jω) are to be above 1, then all singular values of D must also be above 1. Thus, the

extreme case can be filtered out by checking the singular values of D. Note that we assumed E is non-singular for equation (3.5).

A requirement for proving Theorem 3.1.1 is given by the following proposition:

Proposition 3.1.1. The matrix

\[
\begin{bmatrix} -D & \gamma I_p \\ \gamma I_p & -D^T \end{bmatrix} \qquad (3.6)
\]

is non-singular for γ ≥ 0 if and only if γ ∉ σ(D).

Proof. First, perform a Singular Value Decomposition (SVD) on the D matrix:

D = WΣV^T        (3.7)

where W and V are unitary matrices and Σ is a diagonal matrix having the singular values of D on the diagonal. Matrix (3.6) can now be written as:

\[
\begin{bmatrix} -D & \gamma I_p \\ \gamma I_p & -D^T \end{bmatrix}
= \begin{bmatrix} -W\Sigma V^T & \gamma WW^T \\ \gamma VV^T & -V\Sigma W^T \end{bmatrix}
= \begin{bmatrix} -W & 0 \\ 0 & V \end{bmatrix}
\begin{bmatrix} \Sigma & \gamma I_p \\ \gamma I_p & \Sigma \end{bmatrix}
\begin{bmatrix} V^T & 0 \\ 0 & -W^T \end{bmatrix} \qquad (3.8)
\]

Because W and V are unitary matrices, the left and right matrices of eq. (3.8) are non-singular, so for matrix (3.6) to be non-singular, we need the middle matrix of eq. (3.8) to be non-singular.

If γ ≠ 0, non-singularity can be assessed by simple row reduction of the middle matrix of eq. (3.8):

\[
\begin{bmatrix} \Sigma & \gamma I_p \\ \gamma I_p & \Sigma \end{bmatrix}
\rightarrow
\begin{bmatrix} \gamma I_p & \Sigma \\ \Sigma & \gamma I_p \end{bmatrix}
\rightarrow
\begin{bmatrix} \gamma I_p & \Sigma \\ \Sigma - \frac{1}{\gamma}\Sigma(\gamma I_p) & \gamma I_p - \frac{1}{\gamma}\Sigma\Sigma \end{bmatrix}
=
\begin{bmatrix} \gamma I_p & \Sigma \\ 0 & \frac{1}{\gamma}(\gamma^2 I_p - \Sigma\Sigma) \end{bmatrix} \qquad (3.9)
\]

We have thus established the shared property of singularity/non-singularity between the matrix (3.6), the middle matrix of (3.8), and the matrix (3.9).

If γ ∉ Σ:
• γ = 0: the middle matrix of eq. (3.8) is non-singular since γ ∉ Σ. Thus, matrix (3.6) is non-singular.
• γ > 0: because γ ∉ Σ, we have γ² ∉ ΣΣ, so the row-echelon form (3.9) of (3.8) is non-singular, and thus matrix (3.6) is non-singular.

For the converse, if matrix (3.6) is non-singular:
• γ = 0: the middle matrix of (3.8) must have no 0 entries on the diagonal. In other words, γ ∉ Σ.

• γ > 0: the diagonal sub-matrix (γ²I_p − ΣΣ) from matrix (3.9) must have no 0 entries, so we must have γ² ∉ ΣΣ, and thus γ ∉ Σ.

We now give the proof of Theorem 3.1.1, which is extracted from [27]. The proof for the case of RS systems, where E is the identity matrix, is given in [1].

Proof. Given the first precondition jω ∉ eig(A, E), the only solution x of the following equation is the 0 vector:

Ax = jωEx

0 = (jωE − A)x

Thus, jωE − A is an invertible matrix, since its nullspace only contains the 0 vector. Given real system matrices {E, A, B, C, D}, we have:

S(jω) = C(jωE − A)^{-1}B + D

S*(jω) = B^T(−jωE^T − A^T)^{-1}C^T + D^T
       = B^T(jωE^T + A^T)^{-1}(−C^T) + D^T

Given that γ ∈ σ(S(jω)), the SVD provides non-zero vectors v and u such that S(jω)u = γv and S*(jω)v = γu, which we can write in matrix form:

   −1           C 0 jωE − A 0 B 0 D 0 u 0 γI u                     +     =     0 BT 0 jωET + AT 0 −CT 0 DT v γI 0 v

   −1         C 0 jωE − A 0 B 0 u −D γI u                     =     0 BT 0 jωET + AT 0 −CT v γI −DT v (3.10)   −D γI   Given the second precondition γ∈ / σ(D), by Proposition 3.1.1, we have that   γI −DT    −1 B 0 −D γI     is non-singular. Pre-multiplying equation (3.10) by     we 0 −CT γI −DT

obtain:

\[
\begin{bmatrix} B & 0 \\ 0 & -C^T \end{bmatrix}
\begin{bmatrix} -D & \gamma I \\ \gamma I & -D^T \end{bmatrix}^{-1}
\begin{bmatrix} C & 0 \\ 0 & B^T \end{bmatrix}
\begin{bmatrix} j\omega E - A & 0 \\ 0 & j\omega E^T + A^T \end{bmatrix}^{-1}
\begin{bmatrix} B & 0 \\ 0 & -C^T \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
= \begin{bmatrix} B & 0 \\ 0 & -C^T \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
\]
\[
= \begin{bmatrix} j\omega E - A & 0 \\ 0 & j\omega E^T + A^T \end{bmatrix}
\begin{bmatrix} j\omega E - A & 0 \\ 0 & j\omega E^T + A^T \end{bmatrix}^{-1}
\begin{bmatrix} B & 0 \\ 0 & -C^T \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix} \qquad (3.11)
\]

 −1     jωE − A 0 B 0 u       Define vector z =      , which is necessarily 0 jωET + AT 0 −CT v non-zero since at the right-hand side of (3.10), u and v are non-zero vectors and   −D γI     is non-singular while the term z is a factor of the left-hand side. We γI −DT


   −1     B 0 −D γI C 0 jωE − A 0               z =   z 0 −CT γI −DT 0 BT 0 jωET + AT      −1     A 0 B 0 −D γI C 0 E 0             +       z = jω   z. 0 −AT 0 −CT γI −DT 0 BT 0 ET

(3.12)

From equations (3.1) and (3.2), we can write equation (3.12) as:

Jγz = jωKz, (3.13)

which, given z is non-zero, shows that jω is an eigenvalue of matrix pencil (Jγ, K) by definition.
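As a numerical illustration of Theorem 3.1.1 (our sketch, not from the thesis), consider a one-state, one-port system with E = I, so that K = I and the pencil reduces to the ordinary eigenvalue problem for J_γ in (3.1). With γ = 1 (note γ ∉ σ(D) holds since D = 0), the imaginary eigenvalues recover exactly the frequency at which σmax(S(jω)) crosses 1:

```python
import numpy as np

# One-state, one-port toy system with E = I: S(s) = 4/(s + 1),
# which is non-passive near DC since |S(j0)| = 4 > 1.
A = np.array([[-1.0]]); B = np.array([[2.0]])
C = np.array([[2.0]]);  D = np.array([[0.0]])
gamma = 1.0
p = D.shape[0]

# R = D^T D - gamma^2 I_p and T = D D^T - gamma^2 I_p as in (3.1)
R = D.T @ D - gamma**2 * np.eye(p)
T = D @ D.T - gamma**2 * np.eye(p)
Ri, Ti = np.linalg.inv(R), np.linalg.inv(T)

# Hamiltonian matrix J_gamma of (3.1); K = I because E = I
J = np.block([
    [A - B @ Ri @ D.T @ C,  -gamma * B @ Ri @ B.T],
    [gamma * C.T @ Ti @ C,  -A.T + C.T @ D @ Ri @ B.T],
])

eigs = np.linalg.eigvals(J)
imag_eigs = eigs[np.abs(eigs.real) < 1e-9]  # purely imaginary eigenvalues

# |S(j w)| = 4/sqrt(1 + w^2) crosses 1 at w = sqrt(15),
# which matches the magnitudes of the imaginary eigenvalues.
crossing = np.sqrt(15.0)
```

Here J evaluates to [[-1, 4], [-4, 1]], whose eigenvalues ±j√15 flag the single threshold crossing of the singular value curve, in agreement with the theorem.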

3.2 Perturbation Equation

Section 3.1 has provided us with the important information that the presence of imaginary eigenvalues of the Hamiltonian Matrix pencil (J, K),

\[
J = \begin{bmatrix} A - BR^{-1}D^T C & -BR^{-1}B^T \\ C^T T^{-1}C & -A^T + C^T D R^{-1} B^T \end{bmatrix}, \qquad
K = \begin{bmatrix} E & 0 \\ 0 & E^T \end{bmatrix} \qquad (3.14)
\]

indicates non-passivity of the S-parameter-based LTI system (2.1). Not only that: these imaginary eigenvalues, if present, also clearly delimit the frequency regions over which the passivity condition is violated, that is, where σmax(S(jω)) > 1 as

explained in section 2.1.1. Thus, a passivity enforcement mechanism is readily available: perform eigenvalue perturbation on the pencil (J, K) such that its imaginary eigenvalues are eliminated. Once the imaginary eigenvalues are eliminated, the system becomes passive by virtue of Theorem 3.1.1, under the assumption that D has singular values below 1. In this section, we present the perturbation equations that can be used to modify specific eigenvalues of the pencil (J, K), along with various details on the specific ways the perturbations can be performed. Note that we now refer to the transfer function as S(jω) rather than H(jω), since we are only dealing with S-parameter systems in this chapter.

3.2.1 General Matrix Pencil Perturbation

We start with a general matrix pencil (M, N), which has a certain generalized eigenvalue λ with the corresponding right eigenvector x and left eigenvector y. We thus have the following relations:

Mx = λNx, y∗M = λy∗N (3.15)

By simple manipulation, (3.15) can be rearranged into the form:

\[ \lambda = \frac{y^* M x}{y^* N x} \quad (3.16) \]

In the most general case, both M and N can be perturbed, but in the context of perturbation of matrix pencil (J , K), we shall only concentrate on the case where M is perturbed while N is left constant. Explanation on this choice is given later in sub-section 3.2.2. We use the following theorem to provide the perturbation equation:

Theorem 3.2.1. Let λ be a simple eigenvalue of the regular matrix pencil (M, N) with right and left eigenvectors x and y. Let λ̃ be the corresponding eigenvalue of the perturbed pencil (M̃, N), where M̃ = M + dM. Then [22]

\[ \tilde{\lambda} = \frac{y^* \tilde{M} x}{y^* N x} + O(\epsilon^2) \quad (3.17) \]
where ε = ||dM||_F, the Frobenius norm of dM.

The Frobenius norm of a matrix X ∈ C^{n×m} is denoted ||X||_F and is defined as:
\[ ||X||_F = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{m} |X(i,j)|^2} \quad (3.18) \]
A first-order perturbation equation can be obtained by eliminating the second and higher order terms O(ε²) of (3.17):

\[ \tilde{\lambda} = \frac{y^* \tilde{M} x}{y^* N x} = \frac{y^* M x}{y^* N x} + \frac{y^*\, dM\, x}{y^* N x} \]
\[ \tilde{\lambda} - \lambda = \frac{y^*\, dM\, x}{y^* N x} \quad (3.19) \]

For perturbation of (J , K), we simply replace (M, N) with (J , K) to obtain:

\[ \tilde{\lambda} - \lambda = \frac{y^*\, dJ\, x}{y^* K x} \quad (3.20) \]
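The first-order formula (3.19) can be sanity-checked numerically on an arbitrary regular pencil; the matrices and the perturbation size below are illustrative choices.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 5 * np.eye(n)  # keep the pencil regular

# scipy's left eigenvectors satisfy vl[:, k]^H M = w[k] vl[:, k]^H N
w, vl, vr = eig(M, N, left=True, right=True)
k = 0
lam, x, y = w[k], vr[:, k], vl[:, k]

dM = 1e-6 * rng.standard_normal((n, n))          # small perturbation of M
pred = lam + (y.conj() @ dM @ x) / (y.conj() @ N @ x)   # eq. (3.19)

# recompute the eigenvalues of the perturbed pencil and track lambda
w2 = eig(M + dM, N, right=False)
actual = w2[np.argmin(np.abs(w2 - lam))]
```

The predicted eigenvalue agrees with the recomputed one up to the neglected second-order term in the perturbation size.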

The perturbation error ε introduced to the system is defined as the H2 norm of the resulting change in the system's transfer function S(s):

ε = ||S(·) − S˜(·)||2 = ||dS(·)||2 (3.21)

The H2 norm of a frequency-domain matrix function X(s) is denoted ||X(·)||_2 and can be written as:
\[ ||X(\cdot)||_2 = \sqrt{\frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{Trace}\left[X(j\omega)^* X(j\omega)\right] d\omega} \quad (3.22) \]
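As a worked check of definition (3.22), consider the scalar function H(s) = 1/(s + 1), whose H2 norm is 1/√2 in closed form; the quadrature grid below is an illustrative choice.

```python
import numpy as np

# H(s) = 1/(s + 1); closed-form H2 norm is sqrt(1/2)
w = np.linspace(-1000.0, 1000.0, 200001)
dw = w[1] - w[0]
mag2 = 1.0 / (1.0 + w**2)     # Trace[X(jw)^* X(jw)] = |H(jw)|^2 for a scalar

# trapezoidal approximation of the integral in (3.22)
integral = dw * (mag2.sum() - 0.5 * mag2[0] - 0.5 * mag2[-1])
h2 = np.sqrt(integral / (2.0 * np.pi))
```

The remaining error comes from truncating the infinite integration range, which contributes about 2/1000 to the integral here.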

3.2.2 Perturbation Equations Properties

For the actual perturbation equations of pencil (J, K), the perturbations are applied selectively on the matrix E, A, B, C, or D of the LTI system (2.1) rather than on J or K as a whole. This allows control over the error introduced to the LTI system. For example, perturbation of matrices A or E is generally avoided as doing so would directly affect the pole structure of the system. The D matrix is also generally avoided as its perturbation would affect the response of the system over all frequencies. Thus, standard perturbation algorithms typically concentrate on perturbation of matrix B or C, or possibly a combination of the two. The following subsections present these three possible perturbation algorithms, but before proceeding, a few properties and definitions need to be stated.

Eigenvectors of Imaginary Eigenvalues of (J, K)

As the eigenvalue perturbation targets imaginary eigenvalues only, we present an important property of the right and left eigenvectors of the imaginary eigenvalues of pencil (J, K):

Proposition 3.2.1. Let an imaginary value λ = iω be an eigenvalue of pencil (J, K) with respective right eigenvector x and left eigenvector y. Then we have the following

relation:
\[ y^* = x^* U^{-1} \quad (3.23) \]
where U is defined as (3.4).

Proof. By definition of generalized eigenvalue of pencil (J ,K), we have:

\[ J x = \lambda K x \]
\[ x^* J^T = \lambda^* x^* K^T \]
\[ x^* J^T = -\lambda\, x^* K^T \quad (3.24) \]

We take note of two properties of U defined at (3.4) which can be easily proven:

U−1 = −U, U = −UT (3.25)

Using (3.25) and the property (3.3) of matrix K, it follows that:

KT = U−1KU (3.26)

Also, due to the fact that J is Hamiltonian, the following property holds:

\[ UJ = (UJ)^T \]
\[ UJ = J^T U^{-1} \]
\[ UJU = J^T \]
\[ -U^{-1} J U = J^T \quad (3.27) \]

Plugging equations (3.26) and (3.27) into (3.24), we have:

x∗(−U−1J U) = −λx∗(U−1KU)

x∗U−1J = λx∗U−1K (3.28)

The definition of left eigenvector of generalized eigenvalue of pencil (J , K) is given as: y∗J = λy∗K (3.29) so by eq. (3.28), x∗U−1 is the left eigenvector of (J , K) by definition.

Kronecker Product

The Kronecker product between matrices X ∈ C^{n×m} and Y ∈ C^{p×q} is denoted X ⊗ Y and is defined as:
\[ X \otimes Y = \begin{bmatrix} X(1,1) \cdot Y & \cdots & X(1,m) \cdot Y \\ \vdots & \ddots & \vdots \\ X(n,1) \cdot Y & \cdots & X(n,m) \cdot Y \end{bmatrix} \quad (3.30) \]

Let S, Z, T, and W be generic matrices. The Kronecker product can be used to express the common equation SZT = W in a more convenient form as follows:

\[ SZT = W \implies (T^T \otimes S)\,\mathrm{vec}(Z) = \mathrm{vec}(W) \quad (3.31) \]
where the matrix operator vec(Z) outputs a vector built by stacking the columns of Z from left to right.
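Property (3.31) is easy to verify numerically; the shapes below are arbitrary, and vec(·) corresponds to column-major (Fortran-order) reshaping.

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 4))
Z = rng.standard_normal((4, 5))
T = rng.standard_normal((5, 2))

def vec(X):
    # stack the columns of X from left to right (column-major order)
    return X.reshape(-1, order='F')

lhs = vec(S @ Z @ T)                 # vec(SZT)
rhs = np.kron(T.T, S) @ vec(Z)       # (T^T kron S) vec(Z), eq. (3.31)
```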

3.2.3 Matrix C Imaginary Eigenvalue Perturbation Equations

In this section, I present the derivation of the matrix C perturbation equation in the context of the Hamiltonian Matrix pencil passivity enforcement.

Perturbation Equations

In the case where only matrix C is allowed to be perturbed by a certain perturbation matrix dC, the change applied on J due to dC, denoted dJ_c, is defined as:
\[ dJ_c = \begin{bmatrix} -BR^{-1}D^T dC & 0 \\ dC^T T^{-1} C + C^T T^{-1} dC & dC^T D R^{-1} B^T \end{bmatrix} \quad (3.32) \]
which is directly derived from (3.14). Let λ = iω be an imaginary eigenvalue of (J, K) with corresponding right eigenvector x. Using Proposition 3.2.1, we write (3.20) as:

\[ \tilde{\lambda} - \lambda = \frac{x^* U^{-1}\, dJ_c\, x}{x^* U^{-1} K x} = \frac{x^* U\, dJ_c\, x}{x^* U K x} \]
\[ (\tilde{\lambda} - \lambda)(x^* U K x) = x^* U\, dJ_c\, x \quad (3.33) \]

From this point, we attempt to write the perturbation equation in terms of E, A, B,

C, and D. We first introduce the notations x_u and x_l for the upper and lower halves of the right eigenvector x, respectively. We now concentrate on the right-hand side term of eq. (3.33):

\[ x^* U\, dJ_c\, x = \begin{bmatrix} x_u^* & x_l^* \end{bmatrix} \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} \begin{bmatrix} -BR^{-1}D^T dC & 0 \\ dC^T T^{-1} C + C^T T^{-1} dC & dC^T D R^{-1} B^T \end{bmatrix} \begin{bmatrix} x_u \\ x_l \end{bmatrix} \]
\[ = \begin{bmatrix} x_u^* & x_l^* \end{bmatrix} \begin{bmatrix} dC^T T^{-1} C + C^T T^{-1} dC & dC^T D R^{-1} B^T \\ BR^{-1}D^T dC & 0 \end{bmatrix} \begin{bmatrix} x_u \\ x_l \end{bmatrix} \]
\[ = x_u^*\, dC^T T^{-1} C x_u + x_u^* C^T T^{-1} dC\, x_u + x_u^*\, dC^T D R^{-1} B^T x_l + x_l^* B R^{-1} D^T dC\, x_u \quad (3.34) \]

We note that T and R are symmetric, so we have that:

\[ (dC^T T^{-1} C)^T = C^T T^{-1} dC \]
\[ (dC^T D R^{-1} B^T)^T = B R^{-1} D^T dC \quad (3.35) \]

Using (3.35) and the property:

\[ p_1^* Z p_2 + p_2^* Z^T p_1 = 2\,\mathrm{Re}\{p_1^* Z p_2\} \quad (3.36) \]
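This identity holds for real Z, since p2^* Z^T p1 is then the complex conjugate of p1^* Z p2; a quick numerical check with arbitrary random vectors:

```python
import numpy as np

rng = np.random.default_rng(5)
p1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
p2 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
Z = rng.standard_normal((4, 3))                  # real matrix

# eq. (3.36): the two bilinear terms are complex conjugates of each other
lhs = p1.conj() @ Z @ p2 + p2.conj() @ Z.T @ p1
rhs = 2 * np.real(p1.conj() @ Z @ p2)
```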

We write (3.34) as:

\[ x^* U\, dJ_c\, x = 2\,\mathrm{Re}\{x_u^*\, dC^T T^{-1} C x_u\} + 2\,\mathrm{Re}\{x_u^*\, dC^T D R^{-1} B^T x_l\} \]
\[ = 2\,\mathrm{Re}\{x_u^*\, dC^T T^{-1} C x_u + x_u^*\, dC^T D R^{-1} B^T x_l\} \]
\[ = 2\,\mathrm{Re}\{(x_u^*\, dC^T)(T^{-1} C x_u + D R^{-1} B^T x_l)\} \quad (3.37) \]

We now concentrate on the left-hand side term x^*UKx of eq. (3.33):
\[ x^* U K x = \begin{bmatrix} x_u^* & x_l^* \end{bmatrix} \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} \begin{bmatrix} E & 0 \\ 0 & E^T \end{bmatrix} \begin{bmatrix} x_u \\ x_l \end{bmatrix} \]
\[ = x_u^* E^T x_l - x_l^* E x_u = 2i\,\mathrm{Im}\{x_u^* E^T x_l\} = -2i\,\mathrm{Im}\{x_l^* E x_u\} \quad (3.38) \]

Plugging in (3.37) and (3.38) into (3.33), we have:

\[ (\tilde{\lambda} - \lambda)(-2i\,\mathrm{Im}\{x_l^* E x_u\}) = 2\,\mathrm{Re}\{(x_u^*\, dC^T)(T^{-1} C x_u + D R^{-1} B^T x_l)\} \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(x_u^*\, dC^T)(T^{-1} C x_u + D R^{-1} B^T x_l)\} \quad (3.39) \]

Performing the change of variable (3.46), which is explained where equation (3.46) is defined, we obtain from (3.39):

\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(x_u^* L^{-1}\, dC_t^T)(T^{-1} C x_u + D R^{-1} B^T x_l)\} \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(T^{-1} C x_u + D R^{-1} B^T x_l)^*\, dC_t\, (x_u^* L^{-1})^*\} \quad (3.40) \]

For simplification, we apply the shorthand z_c = T^{-1} C x_u + D R^{-1} B^T x_l:
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{z_c^*\, dC_t\, (x_u^* L^{-1})^*\} \quad (3.41) \]

Using the matrix product property (3.31), the above equation can be written as:

\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(((L^{-1})^T x_u)^T \otimes z_c^*)\,\mathrm{vec}(dC_t)\} \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(x_u^T L^{-1} \otimes z_c^*)\,\mathrm{vec}(dC_t)\} \quad (3.42) \]

In the context of perturbation of multiple eigenvalues λ_i with corresponding upper (x_iu) and lower (x_il) halves of right eigenvectors, let λ̃_i be the desired perturbed eigenvalues for i = 1, 2, ..., k. Applying the notations:
\[ v_{ci} = \mathrm{Re}(x_{iu}^T L^{-1} \otimes z_{ci}^*) \in \mathbb{R}^{1 \times np} \]
\[ q_i = (\tilde{\omega}_i - \omega_i)\,\mathrm{Im}\{x_{il}^* E x_{iu}\} \in \mathbb{R} \]
where z_{ci} = T^{-1} C x_{iu} + D R^{-1} B^T x_{il}, we obtain from (3.42) the linear system of equations:
\[ \begin{bmatrix} v_{c1} \\ \vdots \\ v_{ck} \end{bmatrix} \mathrm{vec}(dC_t) = \begin{bmatrix} q_1 \\ \vdots \\ q_k \end{bmatrix} \quad (3.43) \]
The following minimization can then be formulated:

\[ \min ||\mathrm{vec}(dC_t)||_F \quad \text{under constraint} \quad V_c\, \mathrm{vec}(dC_t) = q \quad (3.44) \]
where V_c = [v_{c1}^T ... v_{ck}^T]^T and q = [q_1 ... q_k]^T. The constrained optimization problem (3.44) is then solved to determine a dC_t matrix of least Frobenius norm satisfying the perturbation equations. The problem is a standard least-squares problem, which can be solved using the pseudo-inverse method:
\[ \mathrm{vec}(dC_t) = V_c^T (V_c V_c^T)^{-1} q \quad (3.45) \]

Once a solution dC_t is obtained, the corresponding dC solution can be recovered through equation (3.46). The resulting perturbed system is then defined as {E, A, B, C + dC, D}.
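The minimum-norm solve (3.45) is a one-liner in practice; the sizes below are illustrative (a few perturbation constraints against many unknowns in vec(dCt)):

```python
import numpy as np

rng = np.random.default_rng(3)
k, m = 3, 12                       # k perturbation constraints, m unknowns
Vc = rng.standard_normal((k, m))   # stacked rows v_ci of (3.43)
q = rng.standard_normal(k)         # stacked targets q_i

# eq. (3.45): minimum-Frobenius-norm vec(dCt) satisfying Vc vec(dCt) = q
x = Vc.T @ np.linalg.solve(Vc @ Vc.T, q)
```

`np.linalg.pinv(Vc) @ q` returns the same minimum-norm solution and is more robust when Vc is ill-conditioned.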

Error Control

Note that a change of variable is applied at equation (3.40) on the variable dC during the perturbation equation derivation. Without the change of variable, solving the standard least-squares problem (3.44) would solve directly for the dC of smallest Frobenius norm. Although minimizing the norm of dC reduces the error introduced to the system in a general sense, minimizing ||dC||_F is not equivalent to minimizing ε, the perturbation error defined in (3.21). To make the minimization of ε an integral part of the minimization problem (3.44), we consider the change of basis:

\[ dC_t = dC\, L^T \quad (3.46) \]
where L is obtained by performing the Cholesky factorization of the controllability Gramian G_pc:
\[ G_{pc} = L^T L \quad (3.47) \]

Gpc is obtained by solving the Lyapunov equation [23]

\[ E\, G_{pc}\, A^T + A\, G_{pc}\, E^T = -B B^T \quad (3.48) \]

It can be shown that the Frobenius norm ||dC_t||_F is directly proportional to ε [23]. As a result, in the context of exclusive perturbation of matrix C, minimizing ||dC_t||_F implies minimizing the perturbation error.

3.2.4 Matrix B Imaginary Eigenvalue Perturbation Equations

In this section, I present the derivation of the matrix B perturbation equation in the context of the Hamiltonian Matrix pencil passivity enforcement.

Perturbation Equations

In the case where only matrix B is allowed to be perturbed by a certain perturbation matrix dB, the change applied on J due to dB, denoted dJ_b, is defined as:
\[ dJ_b = \begin{bmatrix} -dB\, R^{-1} D^T C & -dB\, R^{-1} B^T - B R^{-1} dB^T \\ 0 & C^T D R^{-1} dB^T \end{bmatrix} \quad (3.49) \]

Let λ = iω be an imaginary eigenvalue of (J , K) with the corresponding right eigenvector x. Using proposition 3.2.1, we write (3.20) as:

\[ \tilde{\lambda} - \lambda = \frac{x^* U^{-1}\, dJ_b\, x}{x^* U^{-1} K x} = \frac{x^* U\, dJ_b\, x}{x^* U K x} \]
\[ (\tilde{\lambda} - \lambda)(x^* U K x) = x^* U\, dJ_b\, x \quad (3.50) \]

We proceed with the derivation using the right-hand side of equation (3.50) as follows:
\[ x^* U\, dJ_b\, x = \begin{bmatrix} x_u^* & x_l^* \end{bmatrix} \begin{bmatrix} 0 & C^T D R^{-1} dB^T \\ dB\, R^{-1} D^T C & dB\, R^{-1} B^T + B R^{-1} dB^T \end{bmatrix} \begin{bmatrix} x_u \\ x_l \end{bmatrix} \]
\[ = x_u^* C^T D R^{-1} dB^T x_l + x_l^*\, dB\, R^{-1} D^T C x_u + x_l^*\, dB\, R^{-1} B^T x_l + x_l^* B R^{-1} dB^T x_l \]

Using the property given as eq. (3.36):

\[ x^* U\, dJ_b\, x = 2\,\mathrm{Re}\{x_u^* C^T D R^{-1} dB^T x_l\} + 2\,\mathrm{Re}\{x_l^*\, dB\, R^{-1} B^T x_l\} \]
\[ = 2\,\mathrm{Re}\{x_u^* C^T D R^{-1} dB^T x_l + x_l^*\, dB\, R^{-1} B^T x_l\} \]
\[ = 2\,\mathrm{Re}\{x_l^*\, dB\, R^{-1} D^T C x_u + x_l^*\, dB\, R^{-1} B^T x_l\} \]
\[ = 2\,\mathrm{Re}\{(x_l^*\, dB)[R^{-1}(D^T C x_u + B^T x_l)]\} \quad (3.51) \]

Plugging (3.51) and (3.38) into (3.50), we have:

\[ (\tilde{\lambda} - \lambda)(-2i\,\mathrm{Im}\{x_l^* E x_u\}) = 2\,\mathrm{Re}\{(x_l^*\, dB)[R^{-1}(D^T C x_u + B^T x_l)]\} \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(x_l^*\, dB)[R^{-1}(D^T C x_u + B^T x_l)]\} \quad (3.52) \]

Using the change of variable (3.59), which is explained where equation (3.59) is defined, we write eq. (3.52) as:

\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(x_l^* Q^{-1}\, dB_t)[R^{-1}(D^T C x_u + B^T x_l)]\} \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{[R^{-1}(D^T C x_u + B^T x_l)]^*\, dB_t^T\, (x_l^* Q^{-1})^*\} \quad (3.53) \]

For simplification, we apply the shorthand z_b = R^{-1}(D^T C x_u + B^T x_l):
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{z_b^*\, dB_t^T\, ((Q^{-1})^T x_l)\} \quad (3.54) \]

Using the matrix product property defined at eq. (3.31), we have:

\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(((Q^{-1})^T x_l)^T \otimes z_b^*)\,\mathrm{vec}(dB_t^T)\} \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{(x_l^T Q^{-1} \otimes z_b^*)\,\mathrm{vec}(dB_t^T)\} \quad (3.55) \]

In the context of perturbation of multiple eigenvalues λ_i with corresponding upper (x_iu) and lower (x_il) halves of right eigenvectors, let λ̃_i be the desired perturbed eigenvalues for i = 1, 2, ..., k. Applying the notations:
\[ v_{bi} = \mathrm{Re}\{(x_{il}^T Q^{-1}) \otimes z_{bi}^*\} \in \mathbb{R}^{1 \times np} \]
\[ q_i = (\tilde{\omega}_i - \omega_i)\,\mathrm{Im}\{x_{il}^* E x_{iu}\} \in \mathbb{R} \]
where z_{bi} = R^{-1}(D^T C x_{iu} + B^T x_{il}), we obtain from (3.55) the linear system of equations:
\[ \begin{bmatrix} v_{b1} \\ \vdots \\ v_{bk} \end{bmatrix} \mathrm{vec}(dB_t^T) = \begin{bmatrix} q_1 \\ \vdots \\ q_k \end{bmatrix} \quad (3.56) \]
The following minimization can then be formulated:

\[ \min ||\mathrm{vec}(dB_t^T)||_F \quad \text{under constraint} \quad V_b\, \mathrm{vec}(dB_t^T) = q \quad (3.57) \]
where V_b = [v_{b1}^T ... v_{bk}^T]^T and q = [q_1 ... q_k]^T. The constrained optimization problem (3.57) is then solved to determine a dB_t matrix of least Frobenius norm satisfying the perturbation equations. The problem is a standard least-squares problem, which can be solved using the pseudo-inverse method:
\[ \mathrm{vec}(dB_t^T) = V_b^T (V_b V_b^T)^{-1} q \quad (3.58) \]

Once a solution dB_t is obtained, the corresponding dB solution can be recovered through equation (3.59). The resulting perturbed system is then defined as {E, A, B + dB, C, D}.

Error Control

Note that a change of variable is applied at equation (3.53) on the variable dB during the perturbation equation derivation. Without the change of variable, solving the standard least-squares problem (3.57) would solve directly for the dB^T of smallest Frobenius norm. Although minimizing the norm of dB^T reduces the error introduced to the system in a general sense, minimizing ||dB^T||_F is not equivalent to minimizing ε, the perturbation error defined in (3.21). To make the minimization of ε an integral part of the minimization problem (3.57), we consider the change of basis:

\[ dB_t = Q\, dB \quad (3.59) \]
where Q is obtained by performing the Cholesky factorization of the observability Gramian G_po:
\[ G_{po} = Q^T Q \quad (3.60) \]

Gpo is obtained by solving the Lyapunov equation [23]

\[ E^T G_{po}\, A + A^T G_{po}\, E = -C^T C \quad (3.61) \]

It can be shown that the Frobenius norm ||dB_t||_F is directly proportional to ε [23]. As a result, in the context of exclusive perturbation of matrix B, minimizing ||dB_t||_F implies minimizing the perturbation error.

3.2.5 Matrices B and C Imaginary Eigenvalue Perturbation Equations

In this section, I present the derivation of the simultaneous matrix B and C perturbation equation in the context of the Hamiltonian Matrix pencil passivity enforcement.

Perturbation Equations

When both the B and C matrices are specified for perturbation, we use a first-order approximation by dropping the higher order terms and directly add dJ_b defined in (3.49) and dJ_c defined in (3.32):

\[ \tilde{\lambda} - \lambda = \frac{x^* U (dJ_b + dJ_c)\, x}{x^* U K x} \quad (3.62) \]
\[ (\tilde{\lambda} - \lambda)(x^* U K x) = x^* U (dJ_b + dJ_c)\, x = x^* U\, dJ_b\, x + x^* U\, dJ_c\, x \quad (3.63) \]

We replace the terms x^* U dJ_b x and x^* U dJ_c x with the derived equivalent terms (3.51) and (3.37), respectively, to write eq. (3.63) as:
\[ (\tilde{\lambda} - \lambda)(x^* U K x) = 2\,\mathrm{Re}\{(x_l^*\, dB)[R^{-1}(D^T C x_u + B^T x_l)]\} + 2\,\mathrm{Re}\{(x_u^*\, dC^T)(T^{-1} C x_u + D R^{-1} B^T x_l)\} \]

Again, we use the shorthands z_c = T^{-1} C x_u + D R^{-1} B^T x_l and z_b = R^{-1}(D^T C x_u + B^T x_l) to write the more compact equation:
\[ (\tilde{\lambda} - \lambda)(x^* U K x) = 2\,\mathrm{Re}\{x_l^*\, dB\, z_b\} + 2\,\mathrm{Re}\{x_u^*\, dC^T z_c\} \]

Plugging eq. (3.38) into the left-hand side term x^* U K x:
\[ -2i(\tilde{\lambda} - \lambda)\,\mathrm{Im}\{x_l^* E x_u\} = 2\,\mathrm{Re}\{x_l^*\, dB\, z_b\} + 2\,\mathrm{Re}\{x_u^*\, dC^T z_c\} \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{x_l^*\, dB\, z_b\} + \mathrm{Re}\{x_u^*\, dC^T z_c\} \]

Using the matrix product property (3.31), we have:

\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \mathrm{Re}\{z_b^T \otimes x_l^*\}\,\mathrm{vec}(dB) + \mathrm{Re}\{z_c^T \otimes x_u^*\}\,\mathrm{vec}(dC^T) \]
\[ (\tilde{\omega} - \omega)\,\mathrm{Im}\{x_l^* E x_u\} = \begin{bmatrix} \mathrm{Re}\{z_b^T \otimes x_l^*\} & \mathrm{Re}\{z_c^T \otimes x_u^*\} \end{bmatrix} \begin{bmatrix} \mathrm{vec}(dB) \\ \mathrm{vec}(dC^T) \end{bmatrix} \quad (3.64) \]

In the context of perturbation of multiple eigenvalues λ_i = iω_i with corresponding upper (x_iu) and lower (x_il) halves of right eigenvectors, let λ̃_i = iω̃_i be the desired perturbed eigenvalues for i = 1, 2, ..., k. Applying the notations:
\[ v_{bci} = \begin{bmatrix} \mathrm{Re}\{z_{bi}^T \otimes x_{il}^*\} & \mathrm{Re}\{z_{ci}^T \otimes x_{iu}^*\} \end{bmatrix} \]
\[ q_i = (\tilde{\omega}_i - \omega_i)\,\mathrm{Im}\{x_{il}^* E x_{iu}\} \]
where z_{bi} = R^{-1}(D^T C x_{iu} + B^T x_{il}) and z_{ci} = T^{-1} C x_{iu} + D R^{-1} B^T x_{il}, we obtain from (3.64) the linear system of equations:
\[ \begin{bmatrix} v_{bc1} \\ \vdots \\ v_{bck} \end{bmatrix} W = \begin{bmatrix} q_1 \\ \vdots \\ q_k \end{bmatrix} \quad (3.65) \]

The following minimization can then be formulated:

\[ \min ||W||_F \quad \text{under constraint} \quad V_{bc}\, W = q \quad (3.66) \]
where
\[ V_{bc} = \begin{bmatrix} v_{bc1}^T & \ldots & v_{bck}^T \end{bmatrix}^T, \qquad q = \begin{bmatrix} q_1 & \ldots & q_k \end{bmatrix}^T, \qquad W = \begin{bmatrix} \mathrm{vec}(dB) \\ \mathrm{vec}(dC^T) \end{bmatrix} \quad (3.67) \]

The constrained optimization problem (3.66) is then solved to determine a W matrix of least Frobenius norm satisfying the perturbation equations. The problem is a standard least-square problem, which can be solved using the pseudo-inverse method:

\[ W = V_{bc}^T (V_{bc} V_{bc}^T)^{-1} q \quad (3.68) \]

Once a solution W is obtained, the corresponding dB and dC solutions can be recovered through relation (3.67). The resulting perturbed system is then defined as {E, A, B + dB, C + dC, D}.

Error Control

Unlike the cases of individual matrix B or C perturbation in sub-sections 3.2.4 and 3.2.3, respectively, no change of basis is applied when the two matrices are perturbed at the same time. This is because the observability Gramian G_po defined at (3.61) depends on C while the controllability Gramian G_pc defined at (3.48) depends on B; since we are perturbing both B and C, the Gramians are invalidated after a single perturbation iteration and would have to be recomputed after each iteration at an exorbitant cost.

Figure 3–1: Example (J, K) pencil eigenvalue distribution.

In this case, error minimization depends purely on minimizing the Frobenius norm of the combination of matrices dB and dC, with the intention that the perturbation error is spread sufficiently between the two matrices such that the overall perturbation error is reduced.

3.3 Hamiltonian Matrix Eigenvalue Properties

In this section, we discuss the most important and relevant properties pertaining to the eigenvalues of the Hamiltonian matrix pencil (J, K), that is, the pencil (Jγ, K) defined in section 3.1 evaluated at γ = 1. These properties are required in formulating an efficient imaginary eigenvalue checking method and an efficient eigenvalue perturbation strategy. The properties presented are the symmetry of the eigenvalues of (J, K) and the relation between the slope of a singular value curve at threshold 1 and the corresponding imaginary eigenvalue.

3.3.1 Eigenvalue Distribution

A useful property of the eigenvalues of the real matrix pencil (J, K) is that they are symmetrically distributed across the real and imaginary axes, as demonstrated on Fig. 3–1 where an example eigenvalue distribution is given. More formally:

Proposition 3.3.1. Let λ ∈ C be a generalized eigenvalue of matrix pencil (J, K). Then the complex values λ̄, −λ, and −λ̄ are also generalized eigenvalues of pencil (J, K).

Proof. Given λ ∈ eig(J , K), we have non-trivial x solving equation:

J x = λKx (3.69)

Applying the complex conjugate (J and K are real), we have:
\[ J \bar{x} = \bar{\lambda} K \bar{x}, \]
which means that λ̄ ∈ eig(J, K). If we apply the transpose to equation (3.69), we have

xT J T = λxT KT (3.70)

Applying the Hamiltonian matrix property J^T = −U^{-1}JU and the K matrix property (3.3), equation (3.70) becomes:

−xT U−1J U = λxT U−1KU

xT U−1J = −λxT U−1K (3.71)

which is the standard equation of the left eigenvector x^T U^{-1} of eigenvalue −λ for pencil (J, K). Thus, −λ ∈ eig(J, K). If we further apply the complex conjugate to equation (3.71), we then have
\[ \bar{x}^T U^{-1} J = -\bar{\lambda}\, \bar{x}^T U^{-1} K, \]
which is the standard equation of the left eigenvector \bar{x}^T U^{-1} of eigenvalue −λ̄ for pencil (J, K). Thus, −λ̄ ∈ eig(J, K).

Proposition 3.3.1 provides an efficient imaginary eigenvalue filtering scheme:

1. From the set of eigenvalues of (J, K), retain the eigenvalues which have only one other eigenvalue with real and imaginary parts of nearly identical magnitude.

2. From the set retained in the previous step, retain those having real parts of magnitude smaller than a detection threshold.

The final retained set of eigenvalues is then deemed purely imaginary. Note that this imaginary eigenvalue filtering scheme only differs from the standard threshold-based scheme by an extra layer of filtering eliminating the complex eigenvalues at step 1. This extra step greatly reduces the odds of numerical error in the filtering process. However, because the imaginary eigenvalues come in pairs mirroring each other across the real axis, we only need either the set of imaginary eigenvalues on the positive imaginary axis or the one on the negative imaginary axis to fully describe the non-passivity of the system. From this point on, we work only on the positive imaginary axis, which represents the positive frequency band.
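The two-step filtering scheme can be sketched as follows; the function name and the fixed tolerance are illustrative choices (in practice the tolerance would be scaled to the eigenvalue magnitudes):

```python
def filter_imaginary(eigs, tol=1e-8):
    """Return the eigenvalues deemed purely imaginary, positive axis only."""
    eigs = list(eigs)
    kept = []
    for i, lam in enumerate(eigs):
        # step 1: a purely imaginary eigenvalue has exactly one partner
        # (its mirror across the real axis) with matching |Re| and |Im|;
        # a generic complex eigenvalue of the real pencil has three.
        partners = [m for j, m in enumerate(eigs) if j != i
                    and abs(abs(m.real) - abs(lam.real)) < tol
                    and abs(abs(m.imag) - abs(lam.imag)) < tol]
        # step 2: retain only real parts below the detection threshold
        if len(partners) == 1 and abs(lam.real) < tol:
            kept.append(lam)
    # keep only the positive imaginary axis (positive frequency band)
    return [lam for lam in kept if lam.imag > 0]
```

For example, on the spectrum {2j, −2j, 1±1j, −1±1j} the quadruple of generic complex eigenvalues is rejected at step 1 and only 2j survives.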

3.3.2 Singular Value Slope at Imaginary Eigenvalues

From Theorem 3.1.1, we know that the magnitude of an imaginary eigenvalue

λ = iω of matrix pencil (Jγ, K) is the angular frequency at which a singular value curve of the system’s transfer function S(jω) crosses the threshold γ. It is further possible to obtain the slope of the singular value curve at the very point ω where it crosses threshold γ through the following:

Proposition 3.3.2. Let λ = iω ∈ eig(Jγ, K), ω ∈ R, with corresponding right eigenvector x. The slope of the singular value curve crossing threshold γ at ω can be computed as:
\[ \Gamma = \frac{i\, x^* U K x}{x^* U J_\gamma' x} \quad (3.72) \]
where Γ denotes the slope of the singular value curve and J_γ' is the derivative of J_γ with respect to γ:
\[ J_\gamma' = \frac{dJ_\gamma}{d\gamma} = \begin{bmatrix} -2\gamma B R^{-2} D^T C & -B R^{-1} B^T - 2\gamma^2 B R^{-2} B^T \\ C^T T^{-1} C + 2\gamma^2 C^T T^{-2} C & 2\gamma\, C^T D R^{-2} B^T \end{bmatrix} \quad (3.73) \]

Proof. The proof is extracted from [21] and begins with the matrix pencil eigenvalue perturbation equation (3.17). As we are dealing with pencil (Jγ, K), the perturbation equation rewrites as:
\[ \tilde{\lambda} = \frac{y^*(J_\gamma + dJ_\gamma)x}{y^* K x} + O(\epsilon^2) = \frac{y^* J_\gamma x}{y^* K x} + \frac{y^*\, dJ_\gamma\, x}{y^* K x} + O(\epsilon^2) \]
\[ \tilde{\lambda} - \lambda = \frac{y^*\, dJ_\gamma\, x}{y^* K x} + O(\epsilon^2) \quad (3.74) \]

We now consider the matrix J_γ evaluated at γ = γ_o, defined by (3.1), denoted J_{γ_o}. Suppose we apply a small change ν to γ_o, such that the resulting Hamiltonian matrix J_{γ_o+ν} is a function of ν. Accordingly, we denote λ = jω_{γ_o} an imaginary eigenvalue of (J_{γ_o}, K) and let the perturbed eigenvalue be λ̃ = jω_{γ_o+ν}, such that the same singular value σ crossing the threshold γ_o at ω_{γ_o} before perturbation crosses the threshold γ_o + ν at ω_{γ_o+ν} after perturbation.

We write J_{γ_o+ν} as a convergent power series:
\[ J_{\gamma_o+\nu} = J_{\gamma_o} + \nu J_{\gamma_o}' + O(\nu^2) \quad (3.75) \]

where J_{γ_o}' is the derivative (3.73) of J_γ with respect to γ evaluated at γ = γ_o. In the case when the only change applied to J_{γ_o} is with respect to ν, then
\[ dJ = \nu J_{\gamma_o}' + O(\nu^2) \quad (3.76) \]

With equation (3.76) and the previously made definitions, equation (3.74) now writes as:
\[ i\omega_{\gamma_o+\nu} - i\omega_{\gamma_o} = \nu\, \frac{y^* J_{\gamma_o}' x}{y^* K x} + O(\nu^2) \quad (3.77) \]
\[ \omega_{\gamma_o+\nu} - \omega_{\gamma_o} = \nu\, \frac{y^* J_{\gamma_o}' x}{i\, y^* K x} + O(\nu^2) \quad (3.78) \]

We take note that O(ε²) ⟹ O(ν²), since the change dJ in J_{γ_o} is uniquely controlled by the variable ν in the current context. Applying the formal definition of the derivative to (3.78), we find the derivative of ω with respect to γ evaluated at γ_o:
\[ \left. \frac{d\omega}{d\gamma} \right|_{\gamma_o} = \lim_{\nu \to 0} \frac{\omega_{\gamma_o+\nu} - \omega_{\gamma_o}}{\nu} = \frac{y^* J_{\gamma_o}' x}{i\, y^* K x} \quad (3.79) \]

Note that lim_{ν→0} O(ν²)/ν = 0. Thus, the derivative of γ with respect to ω evaluated at ω_{γ_o} is
\[ \left. \frac{d\gamma}{d\omega} \right|_{\omega_{\gamma_o}} = \frac{i\, y^* K x}{y^* J_{\gamma_o}' x} \quad (3.80) \]
We note that σ and γ are interchangeable, so we can define the slope of the singular

value σ's curve crossing threshold γ_o at ω_{γ_o} as:
\[ \Gamma = \left. \frac{d\sigma}{d\omega} \right|_{\omega_{\gamma_o}} = \frac{i\, y^* K x}{y^* J_{\gamma_o}' x} \quad (3.81) \]
From the relations (3.23) and (3.25), (3.81) writes as:
\[ \Gamma = \frac{i\, x^* U K x}{x^* U J_{\gamma_o}' x} \quad (3.82) \]

Because U J_{γ_o}' is positive semi-definite, the denominator of (3.82) is always positive, so the actual sign of the slope is determined by the numerator term i x^* U K x, which is a real number as shown by equation (3.38).

In order to show that the sign of Γ defined in (3.72) is determined by the sign of the term i x^* U K x, we prove here that U J_γ' is positive semi-definite, as was claimed during the proof of Proposition 3.3.2. The following proof is extracted from [21].

Proof. The proof starts from the derivative of J_γ with respect to γ as stated by eq. (3.73). The goal is to show that
\[ U J_\gamma' = \begin{bmatrix} C^T T^{-1} C + 2\gamma^2 C^T T^{-2} C & 2\gamma\, C^T D R^{-2} B^T \\ 2\gamma\, B R^{-2} D^T C & B R^{-1} B^T + 2\gamma^2 B R^{-2} B^T \end{bmatrix} \quad (3.83) \]
is a positive semi-definite matrix. We can write eq. (3.83) as follows:
\[ \begin{bmatrix} C^T T^{-1} T T^{-1} C + 2\gamma^2 C^T T^{-1} I T^{-1} C & 2\gamma\, C^T D R^{-1} R^{-1} B^T \\ 2\gamma\, B R^{-1} R^{-1} D^T C & B R^{-1} R R^{-1} B^T + 2\gamma^2 B R^{-1} I R^{-1} B^T \end{bmatrix} \quad (3.84) \]

We utilize the property:
\[ D(D^T D - \gamma^2 I) = (D D^T - \gamma^2 I) D \]
\[ (D D^T - \gamma^2 I)^{-1} D = D (D^T D - \gamma^2 I)^{-1} \]
\[ T^{-1} D = D R^{-1} \quad (3.85) \]
to write eq. (3.84) as:
\[ \begin{bmatrix} C^T T^{-1} T T^{-1} C + 2\gamma^2 C^T T^{-1} I T^{-1} C & 2\gamma\, C^T T^{-1} D R^{-1} B^T \\ 2\gamma\, B R^{-1} D^T T^{-1} C & B R^{-1} R R^{-1} B^T + 2\gamma^2 B R^{-1} I R^{-1} B^T \end{bmatrix} \quad (3.86) \]

A matrix decomposition is performed on eq. (3.86) to obtain:
\[ \begin{bmatrix} C^T T^{-1} & 0 \\ 0 & B R^{-1} \end{bmatrix} \begin{bmatrix} T + 2\gamma^2 I & 2\gamma D \\ 2\gamma D^T & R + 2\gamma^2 I \end{bmatrix} \begin{bmatrix} T^{-1} C & 0 \\ 0 & R^{-1} B^T \end{bmatrix} \]
\[ = \begin{bmatrix} C^T T^{-1} & 0 \\ 0 & B R^{-1} \end{bmatrix} \begin{bmatrix} (D D^T - \gamma^2 I) + 2\gamma^2 I & 2\gamma D \\ 2\gamma D^T & (D^T D - \gamma^2 I) + 2\gamma^2 I \end{bmatrix} \begin{bmatrix} T^{-1} C & 0 \\ 0 & R^{-1} B^T \end{bmatrix} \]
\[ = \begin{bmatrix} C^T T^{-1} & 0 \\ 0 & B R^{-1} \end{bmatrix} \begin{bmatrix} D D^T + \gamma^2 I & 2\gamma D \\ 2\gamma D^T & D^T D + \gamma^2 I \end{bmatrix} \begin{bmatrix} T^{-1} C & 0 \\ 0 & R^{-1} B^T \end{bmatrix} \quad (3.87) \]
A second matrix decomposition is performed on the middle matrix of eq. (3.87):
\[ \begin{bmatrix} C^T T^{-1} & 0 \\ 0 & B R^{-1} \end{bmatrix} \begin{bmatrix} D & \gamma I \\ \gamma I & D^T \end{bmatrix} \begin{bmatrix} D^T & \gamma I \\ \gamma I & D \end{bmatrix} \begin{bmatrix} T^{-1} C & 0 \\ 0 & R^{-1} B^T \end{bmatrix} \]
\[ = \left( \begin{bmatrix} C^T T^{-1} & 0 \\ 0 & B R^{-1} \end{bmatrix} \begin{bmatrix} D & \gamma I \\ \gamma I & D^T \end{bmatrix} \right) \left( \begin{bmatrix} C^T T^{-1} & 0 \\ 0 & B R^{-1} \end{bmatrix} \begin{bmatrix} D & \gamma I \\ \gamma I & D^T \end{bmatrix} \right)^T \quad (3.88) \]
For any real matrix X, the matrix X X^T is positive semi-definite. Thus, matrix (3.88), and therefore matrix U J_γ', is positive semi-definite.

In the case of (J, K), we merely need to evaluate J_γ' at γ = 1.

3.4 Passivity Enforcement Strategy

Having defined the required equations for eigenvalue perturbation in section 3.2, I now formulate the proposed perturbation strategies. In order to do so, key properties of the eigenvalues of the Hamiltonian matrix pencil (J, K) are presented first, then an analysis of the effect of perturbing the imaginary eigenvalues is performed. Finally, a perturbation strategy favouring passivity enforcement is formulated using the information gained from the two previous steps.

3.4.1 Non-Passive Regions

As stated in section 2.1, the condition of passivity for a real and stable LTI system is the condition that its transfer function S(jω)’s maximum singular value

σmax(S(jω)) be smaller than 1 at all frequencies. Accordingly, we define a non-passive region as any frequency region where σmax(S(jω)) > 1.

Figure 3–2: Simplified example of a singular value curve passivity violation region.

A simple example of a non-passive region is presented on Fig. 3–2, where the non-passive region is delimited by frequencies ω1 and ω2, which are the magnitudes of two imaginary eigenvalues of the matrix pencil (J, K) of the corresponding system. Though we can determine all frequencies at which the singular value curves of S(jω) cross threshold 1 by finding the imaginary eigenvalues of (J, K), as stated by Theorem 3.1.1, these imaginary eigenvalues do not directly indicate the exact extent of the non-passive regions; they only indicate the possible non-passive region boundaries. However, if we add the information from Proposition 3.3.2 about the slope of the singular value curves at their crossings of threshold 1, a simple counting method can be used to properly define the non-passive regions.

Suppose that we have obtained the imaginary eigenvalues jωi, i = 1, 2, ..., k from matrix pencil (J , K). We use the slope equation (3.72) by setting γ = 1 to define

Γi as the slope of the singular value curve at the crossing point of the curve with threshold 1 at frequency point ωi. For simplicity, we denote the slope as Γi+ when

the slope is positive and Γi− when the slope is negative. Having the slopes at each ωi, the counting method goes as follows:

1. Initialize the present slope Γp = Γk and the region starting index e = k. Let c+ and c− be the counts of positive and negative slopes, both set to zero.

2. If Γp is positive: c+ = c+ + 1. Else: c− = c− + 1.

3. If c+ = c−, a new non-passive region is identified as [ωp, ωe]; set e = p − 1.

4. Decrement the loop index: p = p − 1. If p > 0, go to step 2. Else, stop the counting scan, but if c− > c+, delimit a DC non-passive region as [0, ωe].

Note that at step 1, it is assumed that the slope at the largest eigenvalue is negative. If this slope is positive, it means that a non-passive region exists between ωk and +∞, which is an extreme case that our enforcement algorithm cannot deal with. We note that we only need to compute the numerator term i x^* U K x of the slope equation (3.72) in order to identify the sign of the slope, as shown in section 3.3.2.
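A minimal sketch of this counting scan (the function name and input format are assumptions of this example; crossings are taken sorted in increasing frequency, with their slope signs obtained from (3.72)):

```python
def nonpassive_regions(omegas, slope_signs):
    """Delimit non-passive regions from threshold-1 crossing frequencies.

    omegas: crossing frequencies sorted in increasing order
    slope_signs: +1 (rising) or -1 (falling) singular-value slope at each
    crossing; the slope at the largest frequency must be negative.
    Returns a list of (w_low, w_high) non-passive regions.
    """
    regions = []
    e = len(omegas) - 1          # region end index (step 1)
    c_pos = c_neg = 0
    for p in range(len(omegas) - 1, -1, -1):
        if slope_signs[p] > 0:   # step 2: count slope signs
            c_pos += 1
        else:
            c_neg += 1
        if c_pos == c_neg:       # step 3: region [omega_p, omega_e] closed
            regions.append((omegas[p], omegas[e]))
            e = p - 1
    if c_neg > c_pos:            # step 4: leftover falling edge -> DC region
        regions.append((0.0, omegas[e]))
    return regions
```

For the single violation of Fig. 3–2 (rising at ω1, falling at ω2), `nonpassive_regions([w1, w2], [+1, -1])` returns `[(w1, w2)]`; a lone falling crossing yields a DC region.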

3.4.2 Imaginary Eigenvalue Perturbation Analysis

We now analyse the effect of perturbing the imaginary eigenvalues of matrix pencil (J, K) and then establish some basic rules and limitations on perturbation which favour passivity-enforcing perturbations.

Imaginary Eigenvalues Merging

We present here the most basic case of passivity violation, given on Fig. 3–3. On the left-hand side of the figure is the singular value graph of a fabricated LTI system with a simple non-passive region delimited by frequencies ω1 and ω2. The right-hand side graph is a small portion of the complex eigenvalue map of the matrix pencil (J, K) of the same LTI system, containing the imaginary eigenvalues related to the two crossing points indicated on the left-hand side singular value graph.

Figure 3–3: Example Non-passive Region

From Fig. 3–3, an intuitive solution to reducing the non-passive region involves pushing the lower and upper bound imaginary eigenvalues closer together. Doing so, the frequency region over which the singular value curve lies above 1 reduces in size. Doing so iteratively eventually results in the singular value curve being pushed below threshold 1, thus enforcing passivity through elimination of passivity violation regions.

Figure 3–4: Example Enforced Non-passive Region

On Fig. 3–4 are plotted the same graphs of Fig. 3–3 after eliminating the non-passive region. From the right-hand side graph of Fig. 3–4, we note the interesting fact that the action of pushing a singular value curve below threshold 1 is equivalent to the action of merging the two corresponding imaginary eigenvalues delimiting the region over which the curve lies above 1. The pair of eigenvalues then split away at the same distance from the imaginary axis. Thus, the pair of eigenvalues are no longer imaginary and are mirror images of each other across the imaginary axis, maintaining the property of Proposition 3.3.1.

From the above observations, we draw a few conclusions about the perturbation of imaginary eigenvalues of the pencil (J, K):

1. Perturbing towards each other the pair of imaginary eigenvalues delimiting the rising point and falling point on threshold 1 of a singular value curve reduces the frequency region over which the curve lies above threshold 1.

2. When a sufficient amount of the above perturbation is performed, the pair of imaginary eigenvalues eventually merge and split at the same distance from the imaginary axis.

The first point tells us that if we wish to eliminate the non-passive regions, we should perturb upward the imaginary eigenvalues at the rising points on threshold

Figure 3–5: Example Non-passive Region (singular value magnitude vs. frequency, 3.4–3.9 GHz)

1 of singular value curves, and perturb downward the imaginary eigenvalues at the falling points on threshold 1 of singular value curves. The second point tells us that when a singular value curve is pushed below threshold 1, the imaginary eigenvalues that originally delimited its rising and falling points will merge and split onto the complex plane. An important point to take from this, which I will not prove, is that a merge can only happen between the delimiting pair of imaginary eigenvalues of the same singular value curve. Inter-merging between the rising- and falling-point imaginary eigenvalues of different overlapping singular value curves above threshold 1 does not occur.

Merging Condition

As mentioned in subsection 3.4.2, once the two imaginary eigenvalues delimiting the rising and falling points on threshold 1 of a singular value curve are perturbed towards each other by a sufficient amount, the two eigenvalues merge and split away from the imaginary axis. However, it is not specified how much is a

“sufficient” amount of perturbation for a merge to occur. This is because the real amount of perturbation required for a merge varies on a case-by-case basis, and the amount is influenced by various factors such as perturbation equation approximation errors. Even if we were to specify the final perturbed location of both eigenvalues as the midpoint between the two imaginary eigenvalues, it is still not guaranteed that the two eigenvalues will merge, although it is highly likely. However, a general trend of the amount of perturbation required for merges can be heuristically determined. Using an example system's singular value curve, perturbations of various amounts are carried out in order to assess the relation between the likelihood of imaginary eigenvalue pair merges and the perturbation applied. On Fig. 3–5 is plotted a lone singular value curve of our example, which has lower bound jωl = 21.471j and upper bound jωu = 23.903j. The perturbations are carried out using the perturbation equations presented in section 3.2.3, which perform the perturbation on the C matrix while maintaining error control using the controllability Gramian of the LTI system. The pair of imaginary eigenvalues are perturbed towards each other by a factor η of the distance between them, and this factor serves as the control parameter for this experiment. The results of the experiment are presented in Table 3–1, where we indicate the post-perturbation versions of the two delimiting imaginary eigenvalues for various values of η. When a merge occurs, the entries for jω̃l and jω̃u are void as the two eigenvalues are no longer imaginary. We observe that an η as small as 0.28 was sufficient to merge the two imaginary eigenvalues. Indeed, merges can occur

early with a seemingly insufficient amount of perturbation, intended only to place the merge pair a modest distance from each other. We note that the induced perturbation error, computed as the root mean square (RMS) error between the model's data and the original frequency data, increases with the specified perturbation amount, even past the minimum η required for the merge to occur. This creates the need to balance the requirement of merging against the requirement of minimal error. Based on various other experiments not shown here, η = 0.4 was chosen as the factor which strikes a balance between the two opposing requirements, and this factor is used in the strategies presented from this point on.

η      jω̃l       jω̃u       RMS Error   Merge
0      21.471j   23.903j   0           No
0.10   21.787j   24.097j   0.642e-4    No
0.15   21.971j   23.903j   0.962e-4    No
0.20   22.195j   23.665j   1.28e-4     No
0.25   22.528j   23.665j   1.61e-4     No
0.27   22.889j   22.943j   1.73e-4     No
0.28   n/a       n/a       1.80e-4     Yes
0.30   n/a       n/a       1.93e-4     Yes
0.40   n/a       n/a       2.57e-4     Yes
0.50   n/a       n/a       3.21e-4     Yes

Table 3–1: Eigenvalue Merge Simulation Results

Special Case: DC Non-Passive Region

In the special case where a non-passive region contains the DC (zero-frequency) point, extra measures are required for passivity enforcement. The key feature of this case is that singular value curves above threshold 1 arch over the DC point, crossing threshold 1 on both sides of the DC point at equal distances from it. This situation is illustrated by the example DC non-passive region graphed in Fig. 3–6, in which case, working with positive imaginary eigenvalues only, we would only have λ+.

Figure 3–6: Example DC Non-passive Region

It is redundant to apply or specify perturbation on both eigenvalues of a mirror-image pair, since a perturbation applied to one of the two is applied in equal amount to the other in the opposite direction. Through some heuristic tests, it was determined that the positive imaginary eigenvalue λ+ marking the falling point of a singular value curve arching over the DC point should be perturbed to a point below the DC point, that is, to a negative value. Merely reducing the positive imaginary eigenvalue's magnitude results in very slow convergence, and it was necessary to push it past the DC point for a reasonable convergence time. The merge occurs when the imaginary eigenvalue

merges with its mirror image across the DC point, after which both eigenvalues become purely real.

3.4.3 Perturbation Strategies

In this section, using the observations made in section 3.4.2, I present various strategies for performing the imaginary eigenvalue perturbations such that passivity enforcement is carried out as effectively as possible. The section starts by describing the general obstacles encountered when attempting to perturb the imaginary eigenvalues in a desired way. Then, a few strategies are presented to deal with these obstacles and to optimize the perturbation process as much as possible. These strategies are built upon the concept of imaginary eigenvalue merging pairs presented in section 3.4.2, so we attempt to assign the eigenvalue pairings which best favour merges of the imaginary eigenvalues within each non-passive region. Note that the list of presented strategies is not exhaustive; I have only selected the more apparent and logical ones. Finally, some simulation results are presented to verify the presented strategies.

General Case Consideration

In section 3.4.2, we used a simple example to demonstrate under what kind of imaginary eigenvalue perturbation the non-passivity of the system is suppressed. A key takeaway was that it is ideal to ensure the perturbation targets pairs of imaginary eigenvalues marking the rising and falling points on threshold 1 of the same singular value curve, such that they approach each other until they merge. However, in macromodels with more severe non-passivity, identification of such pairings becomes very difficult.

Figure 3–7: Example non-passive region of high imaginary eigenvalue density.

We present in Fig. 3–7 the singular value plot over the non-passive region of a macromodel with more severe non-passivity. In this case, there are over 200 imaginary eigenvalues within the region, which translates to a large number of singular value curves above threshold 1 intertwining over the non-passive region. Because the curves intertwine, there is no straightforward way of identifying the imaginary eigenvalue pairs sharing the same singular value curve. We would need to identify all curve crossing points over the non-passive region in order to correctly pair up the imaginary eigenvalues.

One possible method is to use either the left or right singular vectors. Let ωi for i = 1, 2, ..., k be the set of frequency sampling points and define the SVD operation:

[Wi, Σi, Vi] = svd(S(jωi))   (3.89)

where Wi, Vi and Σi are the left singular vector matrix, the right singular vector matrix, and the diagonal singular value matrix of the S-parameter matrix evaluated at sampling frequency ωi. Let us utilize the left singular vectors, which are the columns of Wi. We assume that the singular values are placed in descending order of magnitude on the diagonal of Σi and that the left singular vectors are orthonormal with respect to each other, which can reasonably be assumed. We then have that the matrix

Zi = Wi* Wi+1,   i = 1, 2, ..., k − 1   (3.90)

is expected to be very close to an identity matrix, assuming the ordering of the singular value magnitudes has not changed from ωi to ωi+1. If one singular value curve crossing occurred between two consecutive sampling points, two adjacent entries on the diagonal of Zi would be significantly smaller than 1. By scanning through Zi from i = 1 to i = k − 1 while keeping track of any singular value rank switching, we would then be able to identify the optimal pairing of imaginary eigenvalues such that each pair represents the rising and falling points of the same singular value curve. However, this method assumes that the sampling frequency set is dense enough to ensure there is at most one rank switch between each pair of consecutive sampling points. This is a difficult criterion to fulfil, as there is always a chance that some crossings are not detected, and the number of sampling points required may result in high CPU cost. The intertwining of singular value curves is a common characteristic of these transfer functions, and it is the key issue preventing the optimal pairings from being determined at reasonable cost. We thus suggest approximate imaginary eigenvalue pairing schemes which come at nearly no cost and yet provide good passivity enforcement results.

Figure 3–8: Nearest Neighbor perturbation strategy illustration

Strategy 0: Nearest Neighbor

The strategy presented in this section is the one presented in [24], which serves as the benchmark strategy. Under this strategy, each imaginary eigenvalue within a non-passive region is perturbed towards its immediate right-hand neighbour by a factor of the distance between them, with the exception of the region's upper bound imaginary eigenvalue, which stays fixed. The strategy is illustrated in Fig. 3–8, where the singular values over a non-passive region of an example system are plotted. The three-sided stick stars indicate the locations of the imaginary eigenvalues: a star facing up (in blue) means the corresponding singular value curve has positive slope, while a star facing down (in red) means the curve has negative slope.

Let λi = jωi, i = 1, 2, ..., k, be the set of all imaginary eigenvalues within a non-passive region, where ωi+1 > ωi. The perturbation applied is as follows:

jω̃i = jωi + η(jωi+1 − jωi),   for i = 1, 2, ..., k − 1   (3.91)

where jω̃i is the perturbed version of imaginary eigenvalue jωi. The idea of this perturbation strategy is to reduce the size of the non-passive region by forcing all imaginary eigenvalues of the region, except the upper bound one, to be displaced upward towards the upper bound, eventually resulting in merges of eigenvalue pairs. Note that the strategy does not take into account the singular value curve slopes at the region's internal eigenvalues.
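As a minimal illustration of update (3.91) (the function name and argument conventions are our own), applied to the sorted imaginary parts of a region's eigenvalues:

```python
def nearest_neighbor(omegas, eta=0.4):
    """Strategy 0: perturb each imaginary eigenvalue towards its
    right-hand neighbour by a factor eta of the gap (eq. 3.91).

    omegas: sorted imaginary parts within one non-passive region;
    the upper bound (last entry) is left fixed.
    """
    out = list(omegas)
    for i in range(len(omegas) - 1):
        out[i] = omegas[i] + eta * (omegas[i + 1] - omegas[i])
    return out
```

Note that every update reads from the original positions, so the result does not depend on the order in which the eigenvalues are processed.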

Strategy 1: Border Pairing

The first strategy proposed is to simply pair the border imaginary eigenvalues of the pencil (J, K) for each non-passive region. Although the border imaginary eigenvalues usually do not share the same singular value curve, we at least perturb each of them in the direction which favours the merge with its actual merge partner. Applied iteratively, this eventually merges all merging partners.

Let λil = jωil and λiu = jωiu, for i = 1, 2, ..., k, be the lower and upper bound delimiting imaginary eigenvalues of the ith non-passive region along the imaginary axis, respectively. The perturbation applied in the present strategy is as follows:

λ̃il = λil + η(λiu − λil)
λ̃iu = λiu − η(λiu − λil)   (3.92)

where η is the perturbation factor, set to 0.4. Equation (3.92) simply perturbs the border imaginary eigenvalues towards each other by a factor η of the distance between them. This strategy effectively treats all non-passive regions

as black boxes, where we only know the delimiting imaginary eigenvalues of each region.

Figure 3–9: Region Average perturbation strategy illustration

Strategy 2: Region Average

The region average strategy uses the average of all the pencil (J, K)'s imaginary eigenvalues within a non-passive region, and perturbs the imaginary eigenvalues towards this average point by a factor of the distance between themselves and the average. The strategy is illustrated in Fig. 3–9, which shows the same singular value graph as Fig. 3–8 but with a different perturbation pairing assigned to the imaginary eigenvalues. The idea is to shrink a non-passive region about its imaginary eigenvalue average point; once a sufficient amount of reduction is applied, merges of eigenvalue pairs are eventually forced to occur. Note that the imaginary eigenvalue average, rather than the midpoint of the non-passive region, is used as the perturbation reference point, as it is a more sensible choice for better eigenvalue pairings.

Algorithm 1 Region Average Perturbation
 1: Initial Variables
 2: λi = jωi, i = 1, 2, ..., k ← Region's imaginary eigenvalues.
 3: Γi, i = 1, 2, ..., k ← Singular value slope at jωi.
 4: λavg = jωavg ← Region's imaginary eigenvalue average.
 5: i = 1
 6: Output Variables
 7: λ̃i = jω̃i, i = 1, 2, ..., k ← Region's imaginary eigenvalues after perturbation.
 8:
 9: while i ≤ k do
10:   if λi < λavg and Γi > 0 then
11:     λ̃i = λi + η(λavg − λi)
12:   end if
13:   if λi > λavg and Γi < 0 then
14:     λ̃i = λi − η(λi − λavg)
15:   end if
16:   i = i + 1
17: end while
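Algorithm 1 can be sketched in Python as follows (names are illustrative; the slopes Γi are assumed precomputed from the singular value curves):

```python
def region_average(omegas, slopes, eta=0.4):
    """Strategy 2 (Algorithm 1): perturb each imaginary eigenvalue of a
    region towards the region average, but only when that direction
    agrees with the sign of its singular-value slope.

    omegas: imaginary parts of the region's eigenvalues.
    slopes: slope Gamma_i of the associated singular value curve
    at each j*omega_i (precomputed, assumed input).
    """
    avg = sum(omegas) / len(omegas)
    out = list(omegas)
    for i, (w, g) in enumerate(zip(omegas, slopes)):
        if w < avg and g > 0:        # below the average, curve rising
            out[i] = w + eta * (avg - w)
        elif w > avg and g < 0:      # above the average, curve falling
            out[i] = w - eta * (w - avg)
        # otherwise: leave untouched (moving would oppose a merge)
    return out
```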

Figure 3–10: Region Bound perturbation strategy illustration

The algorithm is presented in Algorithm 1: each imaginary eigenvalue λi of the region is perturbed towards λavg by a factor η of the distance between them, but only if this displaces λi in the direction indicated by the slope of its singular value curve (upward for a rising curve, downward for a falling one). If the perturbation would displace it in the opposite direction, no perturbation is applied; this avoids perturbations that work against imaginary eigenvalue merges. The region average strategy differs from border pairing in that a large portion of the imaginary eigenvalues actively participates in the perturbation, rather than a small subset.

Strategy 3: Region Bound

The region bound strategy perturbs the pencil (J, K)'s imaginary eigenvalues towards the bounds of the region, depending on the singular value curve slope at each imaginary eigenvalue. The strategy is illustrated in Fig. 3–10, which shows the same singular value graph as Fig. 3–8 but with a different perturbation pairing assigned to the imaginary eigenvalues. If the slope is positive at an imaginary eigenvalue, that eigenvalue is perturbed towards the upper bound of the region. If the slope is negative, the eigenvalue is perturbed towards the lower bound of the region.

Algorithm 2 Region Bound Perturbation
 1: Initial Variables
 2: λi = jωi, i = 1, 2, ..., k ← Region's imaginary eigenvalues, ωi+1 > ωi.
 3: Γi, i = 1, 2, ..., k ← Singular value slope at jωi.
 4: i = 2
 5: Output Variables
 6: λ̃i = jω̃i, i = 1, 2, ..., k ← Region's imaginary eigenvalues after perturbation.
 7:
 8: while i < k do
 9:   if Γi > 0 then
10:     λ̃i = λi + η(λk − λi)
11:   else if Γi < 0 then
12:     λ̃i = λi − η(λi − λ1)
13:   end if
14:   i = i + 1
15: end while

The algorithm is presented in Algorithm 2. Each imaginary eigenvalue, excluding the region's delimiting ones, is perturbed towards the bound it faces by a factor of the distance between itself and that bound. Effectively, we pair an eigenvalue with the upper bound eigenvalue if its slope is positive and with the lower bound eigenvalue if its slope is negative. We should note that this algorithm is highly susceptible to over-perturbation, as

using the region's bound as a reference merging partner can be highly inaccurate for certain eigenvalues whose actual merging partner is much closer than the boundary point.

Figure 3–11: Cross Pairing eigenvalue perturbation strategy illustration
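Algorithm 2 can be sketched analogously (again with illustrative names and precomputed slopes):

```python
def region_bound(omegas, slopes, eta=0.4):
    """Strategy 3 (Algorithm 2): perturb interior imaginary eigenvalues
    towards the region bound they face: the upper bound if the slope is
    positive, the lower bound if negative. The two delimiting
    eigenvalues (first and last entries) are left in place.
    """
    lo, hi = omegas[0], omegas[-1]
    out = list(omegas)
    for i in range(1, len(omegas) - 1):
        if slopes[i] > 0:
            out[i] = omegas[i] + eta * (hi - omegas[i])
        elif slopes[i] < 0:
            out[i] = omegas[i] - eta * (omegas[i] - lo)
    return out
```

The over-perturbation risk discussed in the text is visible in the update: the step size scales with the distance to the bound, regardless of how close the actual merging partner may be.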

Strategy 4: Cross Pairing

The cross pairing strategy pairs each imaginary eigenvalue having a positive singular value slope with its closest right-hand imaginary eigenvalue neighbour having a negative singular value slope. The pairing process is illustrated in Fig. 3–11, which shows the same singular value graph as Fig. 3–9 but with a different perturbation pairing assigned to the imaginary eigenvalues. The pairing is carried out using a scan-like approach from the upper bound to the lower bound of the region. The name of the strategy reflects the fact that the pairing lines cross each other in succession.

Let (λi1, λi2), i = 1, 2, ..., k, be the ith imaginary eigenvalue pair of a non-passive region assigned using the process described above. We then simply perturb each pair of eigenvalues towards each other, in the following manner:

λ̃i1 = λi1 + η(λi2 − λi1)
λ̃i2 = λi2 − η(λi2 − λi1)   (3.93)

where (λ̃i1, λ̃i2) is the perturbed version of the imaginary eigenvalue pair and η is the perturbation factor. This pairing strategy is similar to the region bound strategy in that a substitute reference merging partner is used for each eigenvalue, but rather than using the extreme case of the border eigenvalues as references, the pairing method yields a steadier perturbation amount for each pair, with far fewer cases of over-perturbation.
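One plausible implementation of the cross pairing scan, under the assumption (ours, not stated this precisely in the text) that each rising eigenvalue greedily takes its closest unpaired falling neighbour to the right while scanning from the upper bound downward, is:

```python
def cross_pairing(omegas, slopes, eta=0.4):
    """Strategy 4: pair each rising (positive-slope) imaginary
    eigenvalue with its closest unpaired falling (negative-slope)
    neighbour to the right, scanning from the upper bound of the
    region downward, then move each pair towards each other
    (eq. 3.93). Unpairable eigenvalues are left in place.
    """
    rising = [i for i, g in enumerate(slopes) if g > 0]
    falling = [i for i, g in enumerate(slopes) if g < 0]
    out = list(omegas)
    taken = set()
    for i in reversed(rising):            # scan: upper bound -> lower bound
        partners = [m for m in falling
                    if omegas[m] > omegas[i] and m not in taken]
        if not partners:
            continue
        j = min(partners, key=lambda m: omegas[m] - omegas[i])
        taken.add(j)
        gap = omegas[j] - omegas[i]
        out[i] = omegas[i] + eta * gap    # eq. (3.93), lower member up
        out[j] = omegas[j] - eta * gap    # eq. (3.93), upper member down
    return out
```

With two rising eigenvalues below two falling ones, the scan pairs the inner two first and the outer two last, producing the crossing pairing lines of Fig. 3–11.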

CHAPTER 4
Example Simulations

In this chapter, we present the passivity enforcement simulations performed on selected non-passive examples using the Hamiltonian Matrix pencil perturbation method presented in Chapter 3. An enforcement typically involves a perturbation strategy, such as the ones described in section 3.4.3, and a perturbation equation, such as the ones described in section 3.2.

We refer to the combination of a perturbation equation and a perturbation strategy as a perturbation method. For example, one possible perturbation method would be the perturbation of the C matrix with gramian-based error control (section 3.2.3) using the strategy of pushing the imaginary eigenvalues of each non-passive region towards the average of the region's imaginary eigenvalues (section 3.4.3, Strategy 2). The different possible perturbation methods are tested in this chapter. Note that the B matrix perturbation equation is not tested because of its similarity with the C perturbation equation; the C matrix perturbation equation is used to demonstrate the performance of both. This leaves 10 permutations of perturbation equations and strategies, resulting in 10 perturbation methods which are tagged as follows:

• Ia: perturbation eq. from section 3.2.3 with strategy 0: Nearest Neighbor.
• Ib: perturbation eq. from section 3.2.3 with strategy 1: Border Pairing.
• Ic: perturbation eq. from section 3.2.3 with strategy 2: Region Average.
• Id: perturbation eq. from section 3.2.3 with strategy 3: Region Bound.
• Ie: perturbation eq. from section 3.2.3 with strategy 4: Cross Pairing.
• IIa: perturbation eq. from section 3.2.5 with strategy 0: Nearest Neighbor.
• IIb: perturbation eq. from section 3.2.5 with strategy 1: Border Pairing.
• IIc: perturbation eq. from section 3.2.5 with strategy 2: Region Average.
• IId: perturbation eq. from section 3.2.5 with strategy 3: Region Bound.
• IIe: perturbation eq. from section 3.2.5 with strategy 4: Cross Pairing.

By extension, we refer to the perturbation equations from sections 3.2.3 and 3.2.5 as perturbation equations I and II, respectively, and we refer to perturbation strategies 0, 1, 2, 3, and 4 from section 3.4.3 as strategies a, b, c, d, and e, respectively.

4.1 Example Description

The 5 non-passive examples considered here are descriptor systems generated from S-parameter data using macromodeling methods such as Vector Fitting (V.F.) [7, 9, 12, 14] and Loewner Matrix (L.M.) interpolation [15]. We have chosen examples with different numbers of ports and different numbers of poles. Furthermore, for some examples, the original S-parameter data was obtained by solving the Telegrapher's Equations using the matrix exponential method (E.M.) [18], while for others it was obtained through full-wave simulation using commercial software. Finally, for some examples the descriptor system was obtained using the V.F. approach, while for others it was obtained using the L.M. approach. The goal is to provide a variety of cases on which to evaluate the 10 different perturbation methods for passivity enforcement. More details about the examples are presented in Table 4–1, where Data Type refers to the method of computing the original S-parameter data used in the macromodeling algorithm, which is either obtained through full-wave simulation

                        Ex. 1   Ex. 2   Ex. 3   Ex. 4   Ex. 5
Syst. order (n)          461    1900     371     816    2200
Num. of ports (p)         32     128      38       8      22
Num. of imag. eig.       184     560      22      24      42
Num. of non-pass. reg.    10       3       2       1       2
Macro. tech.            L.M.    L.M.    L.M.    V.F.    V.F.
Data type               E.M.    E.M.    F.W.    F.W.    F.W.

Table 4–1: Simulation Example Details

(F.W.) or obtained by solving the Telegrapher's Equations using the matrix exponential method (E.M.). Note that Ex. 1 contains 16 coupled microstriplines, Ex. 2 contains 64 coupled microstriplines, Ex. 3 is a 38-port network containing multilayer striplines, Ex. 4 is an 8-port network containing multilayer striplines, and Ex. 5 contains 11 coupled striplines. For all examples, the bandwidth of the original S-parameter data extends up to 10 GHz. Furthermore, for the descriptor systems generated using the L.M. method (examples 1, 2 and 3), the order of the system was intentionally chosen to generate a non-passive macromodel and thus require passivity enforcement [16]. This is typically useful in cases where a reduced order model is required. Finally, for the macromodels generated using the V.F. method, the descriptor system is realized in a way that makes the matrix B sparse [19]. For such a realization, only the C matrix was perturbed when applying perturbation equation II, which would normally perturb both B and C.

Perturbation Method        Ia       Ib       Ic       Id       Ie
Ex1  Num. of Iter.        123       16        4        5        4
     Runtime (s)        171.8     22.7      8.1      6.7      6.7
     RMS error        5.02e-3  4.08e-3  4.15e-3  4.05e-3  3.93e-3
Ex2  Num. of Iter.          \       66        8        6        4
     Runtime (s)            \   4579.2    611.9    479.6    341.9
     RMS error              \  1.88e-4  4.80e-4  2.60e-4  1.22e-4
Ex3  Num. of Iter.         36       11        6        5        4
     Runtime (s)         30.6      9.0      5.4      4.7      4.1
     RMS error        7.85e-3  4.91e-3  4.80e-3  5.19e-3  4.66e-3

Table 4–2: Simulation Summary, Ex. 1–3, Eq. I

Perturbation Method       IIa      IIb      IIc      IId      IIe
Ex1  Num. of Iter.        141       16        4        4        5
     Runtime (s)        212.4     23.3      6.9      6.9      8.2
     RMS error        5.81e-3  3.87e-3  3.76e-3  3.97e-3  3.69e-3
Ex2  Num. of Iter.          \       66        6        4        3
     Runtime (s)            \   4523.6    486.6    349.8    280.5
     RMS error              \  1.57e-4  2.20e-4  2.11e-4  0.99e-4
Ex3  Num. of Iter.         47        \       17       23       16
     Runtime (s)         39.3        \     14.6     19.7     14.0
     RMS error        3.07e15        \  1.58e-3  3.61e13  1.65e-3

Table 4–3: Simulation Summary, Ex. 1–3, Eq. II

4.2 Results Comparison and Analysis

The overall results of the simulations are presented in Tables 4–2 to 4–5. Tables 4–2 and 4–3 present the simulation results for examples 1 to 3, the L.M. examples, with Table 4–2 presenting the results obtained using perturbation methods Ia to Ie and Table 4–3 presenting the results obtained using perturbation methods IIa to IIe. Tables 4–4 and 4–5 present the simulation results for examples 4 and 5, the V.F. examples, with Table 4–4 presenting the results obtained using perturbation methods Ia to Ie and Table 4–5 presenting the results obtained using perturbation methods IIa to IIe. For each example, we compare the number of iterations taken, the runtime, and the RMS error for each of the 10 perturbation methods defined at the beginning of this chapter. For cases where more than 150 iterations are needed, the perturbation method is considered to have diverged and no data is presented. The RMS error is computed by comparing the S-parameters of the non-passive system against those of the passive system at 1000 equally distributed frequency points over the 10 GHz bandwidth.

We analyse the content of the tables by first considering the difference in performance between perturbation equations I and II. For the L.M. simulations (Tables 4–2 and 4–3), we observe that the accuracy of the models generated by equation II methods is generally higher than that of the models generated by equation I methods, though the differences are not large. On the other hand, the CPU costs of equation I methods are much more consistent than those of equation II methods in terms of number of iterations. Ex. 3's simulation data demonstrates a possible case where the CPU cost of equation II methods becomes significantly higher than that of equation I methods for all strategies.

In the case of the V.F. simulations (Tables 4–4 and 4–5), comparing the results of Table 4–5 with those of Table 4–4, we clearly observe that equation I has an advantage over equation II. For Ex. 4, the equation II methods completely failed the enforcement while the equation I methods succeeded with all 5 strategies. For Ex. 5, equation I methods outperformed equation II methods on a

Perturbation Method        Ia       Ib       Ic       Id       Ie
Ex4  Num. of Iter.         18        7        4        5        4
     Runtime (s)        111.5     46.1     29.2     35.3     32.4
     RMS error        1.02e-2  3.39e-3  3.39e-3  4.32e-3  2.96e-3
Ex5  Num. of Iter.         38        8        3        3        3
     Runtime (s)       3788.9    879.7    388.1    385.9    387.7
     RMS error        4.26e-3  1.17e-3  9.78e-4  1.38e-3  7.87e-4

Table 4–4: Simulation Summary, Ex. 4–5, Eq. I

Perturbation Method       IIa      IIb      IIc      IId      IIe
Ex4  Num. of Iter.          \        \        \        \        \
     Runtime (s)            \        \        \        \        \
     RMS error              \        \        \        \        \
Ex5  Num. of Iter.         47        8        7        5        9
     Runtime (s)       4645.5    879.7    773.9    581.5    974.3
     RMS error        1.55e-2  3.47e-3  3.75e-3  3.90e-3  4.29e-3

Table 4–5: Simulation Summary, Ex. 4–5, Eq. II

case by case basis, both in terms of CPU time and accuracy. We note that these results for the V.F. examples were to be expected because of the restriction imposed on equation II to perturb only the C matrix.

Overall, equation I is preferable due to its robustness and consistent performance, though equation II methods tend to provide more accurate passive models when used on the L.M. examples. However, the accuracy differences are generally not significant enough to outweigh the robustness of equation I methods.

Considering that equation I has an overall advantage over equation II, we can clearly see that strategy e is the optimal strategy, since it consistently outperformed the four other strategies on all examples when equation I is utilized.
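The RMS error figure reported in the tables can be reproduced with a small helper; the sketch below (function name ours) assumes the two models have been sampled into arrays of shape (k, p, p) holding S(jωi) at the same k frequency points:

```python
import numpy as np

def rms_error(S_orig, S_pert):
    """RMS deviation between two sets of sampled S-parameter matrices,
    as used to report perturbation error: S_orig and S_pert hold the
    original and perturbed model responses at identical frequencies.
    """
    diff = np.asarray(S_orig) - np.asarray(S_pert)
    return np.sqrt(np.mean(np.abs(diff) ** 2))
```

For the tables above, the sampling grid would be 1000 equally spaced points over the 10 GHz bandwidth.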

Figure 4–1: Example 3's σmax{S(jω)} curve before and after enforcement, with added eigenvalues indicated as stars (Γ+ as blue upward stars, Γ− as red downward stars)

4.2.1 Examples 3 and 5 Details

For some selected examples, I want to show more detailed data from the perturbation runs in order to provide more insight into the experiment. The chosen examples are Ex. 3 (L.M.) and Ex. 5 (V.F.), for which we present:

• The σmax{S(jω)} comparison plots (Fig. 4–1, 4–2).
• Sample S-parameter plots (Fig. 4–3, 4–4).
• Transient simulation comparison plots (Fig. 4–5, 4–6).

The passive models presented are the ones generated using methods Ie and IIe.

In the σmax{S(jω)} plots (Fig. 4–1, 4–2), the enforced systems for both examples have the entirety of their σmax{S(jω)} curves below threshold 1, indicating successful passivity enforcement. The upward three-sided stick stars in blue indicate imaginary eigenvalues with positively sloped singular value curves, while the downward

Figure 4–2: Example 5's σmax{S(jω)} curve before and after enforcement, with added eigenvalues indicated as stars (Γ+ as blue upward stars, Γ− as red downward stars)

three-sided stick stars in red indicate imaginary eigenvalues with negatively sloped singular value curves. We note from Fig. 4–1 that the major deviation of both passive models' σmax{S(jω)} curves from the original model occurs at the major non-passive region centered around 13 GHz. However, the σmax{S(jω)} curve of the model resulting from perturbation method IIe follows the original model's singular value curve more closely, which reflects the results of Tables 4–2 and 4–3, where perturbation method IIe provided a more accurate passive model than method Ie.

Fig. 4–3 and Fig. 4–4 plot |S1,1| for Ex. 3 and Ex. 5, respectively. In both cases, we observe very little change to the S-parameters from the perturbation, indicating good accuracy preservation. Note that the error is only observable if the graph scale is reduced to an order similar to that of the error, so it may be more visually pronounced for certain smaller off-diagonal S-parameters.

Figure 4–3: Ex. 3 |S1,1| plot

Figure 4–4: Ex. 5 |S1,1| plot

Figure 4–5: Example 3's transient plot with capacitive terminations.

Transient simulations are performed using the SPICE-equivalent netlists of the macromodels, set up in the configurations illustrated in Fig. 4–7 and Fig. 4–8 for Ex. 3 and Ex. 5, respectively. For Ex. 3, the simulation was performed by applying a periodic square signal with 1 V amplitude, 12 ns period, 0.1 ns rise and fall times and 4 ns pulse width. The voltage at port 1 is plotted in Fig. 4–5, where we observe corrected divergent behaviour for the enforced systems. Indeed, the non-passive model exhibits unstable behaviour, with a progressively more expansive divergence characteristic as the transient simulation progresses; though not shown in Fig. 4–5, the voltage strays towards infinity. On the other hand, both passive models' transient simulations exhibit a normal periodic wave characteristic, demonstrating that the passive nature of these models prevented unstable behaviour.

For Ex. 5, the simulation was performed by applying a single square pulse with 1 V amplitude, 0.01 ns rise and fall times and 5 ns pulse width. Notice from Fig. 4–8

that a basic CMOS inverter is connected to the probed node in order to include some non-linear behaviour. The voltage at port 22 is plotted in Fig. 4–6, in which we observe no divergent behaviour even for the non-passive model. Indeed, though a non-passive system runs the risk of producing an unstable system once combined with other systems, instability in the final system is not guaranteed. We observe from Fig. 4–6 that all 3 transient curves are nearly identical, demonstrating that accuracy was well preserved for the passive models.

Figure 4–6: Example 5's transient plot

Figure 4–7: Transient Analysis Network Setup for Example 3

Figure 4–8: Transient Analysis Network Setup for Example 5

CHAPTER 5
Conclusion and Future Work

Over the course of this thesis, we have reviewed the Hamiltonian Matrix pencil (J, K) passivity enforcement method, which involves manipulating the imaginary eigenvalues of the matrix pencil so that they disappear. In terms of contributions, a new Hamiltonian Matrix pencil perturbation method was presented in which both matrices B and C are perturbed simultaneously. This method is not as robust as perturbing B or C individually, since it cannot exploit the gramian-based error control scheme, though it did perform well for certain Loewner Matrix systems. The thesis also provided a more thorough exploration of the characteristics of the imaginary eigenvalues of the (J, K) pencil, showing that only a pair of imaginary eigenvalues belonging to the same singular value curve may merge. This observation indicated the ideal imaginary eigenvalue pairing strategy and perturbation method, but identifying all such pairings was determined to be implausible due to the seemingly random inter-crossing of the singular value curves. The thesis therefore presented alternative low-cost pairing strategies which still strongly encourage fast merging of the imaginary eigenvalues. Among these strategies, the cross pairing strategy has been shown to consistently outperform the others. Thus, a complete Hamiltonian Matrix pencil-based passivity enforcement algorithm is obtained by combining a perturbation equation with a perturbation strategy.

As for possible extensions of this work, it would be interesting to see how these perturbation strategies can be applied when the imaginary eigenvalues are computed using the frequency hopping technique. As the frequency hopping algorithm concentrates on computing the eigenvalues closest to specified points on the complex plane, it may be that not all imaginary eigenvalues can be computed at reasonable cost, providing an incomplete picture of the non-passive regions. In this scenario, different perturbation strategies may be necessary and would be interesting to investigate. On a related subject, I would also like to explore the Loewner Matrix macromodeling technique more directly, and specifically how parametric macromodeling can be performed efficiently using this relatively novel technique.
