
Minimum-gain Pole Placement with Sparse Static Feedback

Vaibhav Katewa, Member, IEEE, and Fabio Pasqualetti, Member, IEEE

Abstract—The minimum-gain eigenvalue assignment/pole input systems (m = 1), the feedback vector that assigns placement problem (MGEAP) is a classical problem in LTI the eigenvalues is unique and can be obtained using the systems with static state feedback. In this paper, we study Ackermann’s formula [3]. On the other hand, for multi-input the MGEAP when the state feedback has arbitrary sparsity constraints. We formulate the sparse MGEAP problem as an systems (m > 1), the feedback is not unique and there equality-constrained optimization problem and present an ana- exists a flexibility to choose the eigenvectors of the closed loop lytical characterization of its locally optimal solution in terms system. This flexibility can be utilized to choose a feedback of eigenvector matrices of the closed loop system. This result matrix that satisfies some auxiliary control criteria in addition is used to provide a geometric interpretation of the solution of to assigning the eigenvalues. For instance, the feedback matrix the non-sparse MGEAP, thereby providing additional insights for this classical problem. Further, we develop an iterative projected can be chosen to minimize the sensitivity of the closed loop gradient descent algorithm to obtain local solutions for the sparse system to perturbations in the system parameters, thereby MGEAP using a parametrization based on the Sylvester equation. making the system robust. This is known as Robust Eigen- We present a heuristic algorithm to compute the projections, value Assignment Problem (REAP) [4]. Alternatively, one which also provides a novel method to solve the sparse EAP. can choose the feedback matrix with minimum gain, thereby Also, a relaxed version of the sparse MGEAP is presented and an algorithm is developed to obtain approximately sparse local reducing the overall control effort. This is known as Minimum solutions to the MGEAP. Finally, numerical studies are presented Gain Eigenvalue Assignment Problem (MGEAP) [5], [6]. to compare the properties of the algorithms, which suggest that Recently, considerable attention has been given to the study the proposed projection algorithm converges in most cases. and design of sparse feedback control systems, where certain Index Terms—Eigenvalue assignment, Minimum-gain pole entries of the matrix F are required to be zero. Feedback spar- placement, Optimization, Sparse feedback, Sparse linear systems sity typically arises in decentralized control problems for large scale and interconnected systems with multiple controllers [7], where each controller has access to only some partial states of I.INTRODUCTION the system. Such constraints in decentralized control problems The Eigenvalue/Pole Assignment Problem (EAP) using are typically specified by information patterns that govern static state feedback is one of the central problems in the which controllers have access to which states of the system design of Linear Time Invariant (LTI) control systems (e.g., [8], [9]. Sparsity may also be a result of the special structure see [1], [2]). It plays a key role in system stabilization and of a centralized control system which prohibits feedback from shaping its transient behavior. Given the following LTI system some states to the controllers. The feedback design problem with sparsity constraints is considerably more difficult than the unconstrained case. 
There x(k) = Ax(k) + Bu(k), (1a) have been numerous studies to determine the optimal feedback D u(k) = F x(k), (1b) control law for H2/LQR/LQG control problems with sparsity, particularly when the controllers have access to only local where x Rn is the state of the LTI system, u Rm is ∈ × × ∈ information (see [7]–[10] and the references therein). While the control input, A Rn n, B Rn m, and denotes arXiv:1805.08762v2 [math.OC] 14 Aug 2020 ∈ ∈ D the optimal H2/LQR/LQG design problems with sparsity have either the continuous time differential operator or the discrete- a rich history, studies on the REAP/MGEAP in the presence time shift operator, the EAP involves finding a real feedback of arbitrary sparsity constraints are lacking. Even the problem matrix F Rm×n such that the eigenvalues of the closed ∈ of finding a particular (not necessary optimal) sparse feedback loop matrix Ac(F ) , A + BF coincide with a given set = matrix that solves the EAP is not well studied. In this λ , λ , , λ that is closed under complex conjugation.S { 1 2 ··· n} paper, we study the EAP and MGEAP with arbitrary sparsity It is well known that the existence of F depends on the constraints on the feedback matrix F . We provide analytical controllability properties of the pair (A, B). Further, for single characterization for the solution of sparse MGEAP and provide This work was supported in part by awards ARO-71603NSYIP and iterative algorithms to solve the sparse EAP and MGEAP. We AFOSR-FA9550-19-1-0235. also briefly discuss the feasibility of the sparse EAP problem. V. Katewa was with the Department of Mechanical Engineering, University Related work There have been numerous studies on the of California at Riverside, Riverside 92521 USA. He is now with the Department of Electrical Communication Engineering and the Robert Bosch optimal pole placement problem without sparsity constraints. Center for Cyber-Physical Systems, Indian Institute of Science, Bengaluru For the REAP, authors have considered optimizing different 560012, India (e-mail: [email protected]). metrics which capture the sensitivity of the eigenvalues, such F. Pasqualetti is with the Department of Mechanical Engineering, University of California at Riverside, Riverside 92521 USA (e-mail: as the condition number of the eigenvector matrix [4], [11]– [email protected]). [14], departure from normality [15] and others [16], [17]. Most 2 of these methods use gradient-based iterative procedures to feasibility of the sparse EAP is NP-hard and present necessary obtain the solutions. For surveys and comparisons of these and sufficient conditions for feasibility. We develop two heuris- REAP methods, see [11], [18], [19] and the references therein. tic iterative algorithms to obtain a local solution of the sparse Early works for MGEAP, including [20], [21], presented EAP. The first algorithm is based on repeated projections on approximate solutions using low rank feedback and successive linear subspaces. The second algorithm is developed using pole placement techniques. Simultaneous robust and minimum the Sylvester equation based parametrization and it obtains gain pole placement were studied in [14], [22]–[24]. For a a solution via projection of a non-sparse feedback matrix on survey and performance comparison of these MGEAP studies, the space of sparse feedback matrices that solve the EAP. see [5] and the references therein. 
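As a concrete illustration of the classical non-sparse EAP described above (this example is not part of the paper's own development), the following Python sketch assigns a desired spectrum with an off-the-shelf routine. scipy.signal.place_poles returns a gain K such that eig(A - BK) equals the desired set, so F = -K matches the A + BF convention used here; the matrices below are arbitrary examples chosen only for illustration.

import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, -1.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
S = [-1.0, -2.0, -3.0]                    # desired closed-loop spectrum

K = place_poles(A, B, S).gain_matrix      # m > 1: this assigning feedback is not unique
F = -K                                    # convert to the A + BF convention
print(np.round(np.linalg.eigvals(A + B @ F).real, 6))   # approximately [-3, -2, -1]

Such a solution ignores both the gain magnitude and any sparsity pattern on F, which is precisely what the MGEAP and its sparse variant studied in this paper address.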
The regional pole placement Third, using the latter EAP projection algorithm, we develop a problem was studied in [25], [26], where the eigenvalues were projected gradient descent method to obtain a local solution to assigned inside a specified region. While these studies have the sparse MGEAP. We also formulate a relaxed version of the provided useful insights on REAP/MGEAP, they do not con- sparse MGEAP using penalty based optimization and develop sider sparsity constraints on the feedback matrix. In contrast, an algorithm to obtain approximately-sparse local solutions. we study the sparse EAP/MGEAP by explicitly including the Paper organization The remainder of the paper is organized sparsity constraints in the problem formulation and solutions. as follows. In Section II we formulate the sparse MGEAP There have also been numerous studies on EAP with sparse optimization problem. In Section III, we obtain the solution dynamic LTI feedback. The concept of decentralized fixed of the optimization problem using the Lagrangian theory of modes (DFMs) was introduced in [27] and later refined in optimization. We also provide a geometric interpretation for [28]. Decentralized fixed modes are those eigenvalues of the optimal solutions of the non-sparse MGEAP. In Section IV, the system which cannot be shifted using a static/dynamic we present two heuristic algorithms for solving the sparse EAP. feedback with fully decentralized sparsity pattern (i.e. the case Further, we present a projected gradient descent algorithm to where controllers have access only to local states). The re- solve the sparse MGEAP and also an approximately-sparse maining eigenvalues of the system can be arbitrarily assigned. solution algorithm for a relaxed version of the sparse MGEAP. However, this cannot be achieved in general using a static Section V contains numerical studies and comparisons of the decentralized controller and requires the use of dynamic decen- proposed algorithms. In Section VI, we discuss the feasibility tralized controller [27]. Other algebraic characterizations of the of the sparse EAP. Finally, Section VII concludes the paper. DFMs were presented in [29], [30]. The notion of DFMs was generalized for an arbitrary sparsity pattern and the concept of structurally fixed modes (SFMs) was introduced in [31]. Graph theoretical characterizations of structurally fixed modes were II.SPARSE MGEAP FORMULATION provided in [32], [33]. As in the case of DFMs, assigning the non-SFMs also requires dynamic controllers. These studies on A. Mathematical notation and preliminary properties DFMs and SFMs present feasibility conditions and analysis methods for the EAP problem with sparse dynamic feedback. We use the following properties to derive our results [39], In contrast, we study both EAP and MGEAP with sparse static [40]: controllers, assuming the sparse EAP is feasible. We remark that EAP with sparsity and static feedback controller is in fact P.1 tr(A) = tr(AT) and tr(ABC) = tr(CAB), 2 T T important for several network design and control problems, P.2 A F = tr(A A) = vec (A)vec(A), and easier to implement than its dynamic counterpart. P.3 kveck(AB) = (I A)vec(B) = (BT I)vec(A), Recently, there has been a renewed interest in studying P.4 vec(ABC) = (⊗CT A)vec(B), ⊗ linear systems with sparsity constraints. 
Using a different P.5 (A B)T = AT ⊗BT and (A B)H = AH BH, T ⊗ ⊗ T ⊗ ⊗ approach than [32], the original results regarding DFMs in P.6 1n(A B)1n = tr(A B), [27] were generalized for an arbitrary sparsity pattern by P.7 A B◦= B A and A (B C) = (A B) C, the authors in [34], [35], where they also present a sparse P.8 vec◦(A B) =◦ vec(A) vec◦ (B◦), (A B)T◦= AT◦ BT, dynamic d ◦ T d ◦ T ◦ d ◦ controller synthesis algorithm. Further, there have P.9 dX tr(AX)=A , dX tr(X X)=2X, dx (Ax)=A, been many recent studies on minimum cost input/output and P.10 d(X−1) = X−1dXX−1, feedback sparsity pattern selection such that the system has − 2 P.11 Let Dxf and Dxf be the gradient and Hessian of controllability [36] and no structurally fixed modes (see [37], n T f(x): R R. Then, df = (Dxf) dx and [38] and the references therein). In contrast, we consider the 2 →T 2 d f = (dx) (Dxf)dx, problem of finding a static minimum gain feedback with a n P.12 Projection of a vector y R on the null space of given sparsity pattern that solves the EAP. m×n ∈ + A R is given by yp = [In A A]y. Contribution The contribution of this paper is three-fold. ∈ − First, we study the MGEAP with static feedback and arbitrary The Kronecker sum of two square matrices A and B with sparsity constraints (assuming feasibility of sparse EAP). We dimensions n amd m, respectively, is denoted by formulate the sparse MGEAP as an equality constrained opti- mization problem and present an analytical characterization of A B = (Im A) + B In. an locally optimal sparse solution. As a contribution, we ⊕ ⊗ ⊗ use this result to provide a geometric insight for the non-sparse MGEAP solutions. Second, we show that determining the Further, we use the following notation throughout the paper: 3

Spectral norm k · k2 in various minimum distance and eigenvalue assignment prob- F Frobenius norm k · k lems [41], including the non-sparse MGEAP. < , >F Inner (Frobenius) product · · Cardinality of a set Remark 1. (Choice of norm) The Frobenius norm measures Γ(|·|) Spectrum of a matrix the element-wise gains of a matrix, which is informative in · sparsity constrained problems arising, for instance, in network σmin( ) Minimum singular value of a matrix tr( )· Trace of a matrix control problems. It is also convenient for the analysis, par- ( )·+ Moore-Penrose pseudo inverse ticularly to compute the derivatives of the cost function.  (·)T of a matrix · Definition 1. (Fixed modes [27], [34]) The fixed modes of ( ) Range of a matrix ¯ R · (A, B) with respect to the sparsity constraints F are those A > 0 Positive definite matrix A eigenvalues of A which cannot be changed using LTI static Hadamard (element-wise) product (and also dynamic) state feedback, and are denoted by ◦ Kronecker product ⊗∗ ¯ ( ) Complex conjugate Γf (A, B, F) , Γ(A + BF ). · H ( ) Conjugate transpose c F : F \F¯ = 0 supp· ( ) Support of a vector ◦ We make the following assumptions regarding the fixed vec( ·) Vectorization of a matrix modes and feasibility of the optimization problem (2). diag(·a) n n with diagonal × elements given by n-dim vector a Assumption 1. The fixed modes of the triplet (A, B, F¯) are Re( ) Real part of a complex variable included in the desired eigenvalue set , i.e., Γf (A, B, F¯) . Im(·) Imaginary part of a complex variable S ⊆ S Assumption 2. There exists at least one feedback matrix 1 (0· ) n-dim vector of ones (zeros) F n n that satisfies constraints (2a)-(2b) for the given . 1 × (0 × ) n m-dim (zeros) n m n m × S In n-dim Assumption 1 is clearly necessary for the feasibility of ei i-th canonical vector the optimization problem (2). Assumption 2 is restrictive Tm,n that satisfies because, in general, it is possible that a static feedback matrix T m×n vec(A ) = Tm,nvec(A), A R with a given sparsity pattern cannot assign the closed loop ∈ eigenvalues to arbitrary locations (i.e. for an arbitrary set satisfying Assumption 1)1. In such cases, only a few (< nS) B. Sparse MGEAP eigenvalues can be assigned independently and other remain- The sparse MGEAP involves finding a real feedback matrix ing eigenvalues are a function of them. To the best of our × F Rm n with minimum norm that assigns the closed knowledge, there are no studies on characterizing conditions ∈ loop eigenvalues of (1a)-(1b) at some desired locations given for the existence of a static feedback matrix for an arbitrary by set = λ1, λ2, , λn , and satisfies a given sparsity sparsity pattern F¯ and eigenvalue set [42] (although such S { ··· }× constraints. Let F¯ 0, 1 m n denote a binary matrix that characterization is available for dynamicS feedback laws with ∈ { } specifies the sparsity structure of the feedback matrix F . If arbitrary sparsity pattern [31], [34], [35], and static output ¯ ¯ th Fij = 0 (respectively Fij = 1), then the j state is unavailable feedback for decentralized sparsity pattern [43]). Thus, for th (respectively available) for calculating the i input. Thus, the purpose of this paper, we focus on finding the optimal feedback matrix assuming that at least one such feedback 0 if F¯ = 0, and ij matrix exists. We provide some preliminary results on the Fij = ¯ (? if Fij = 1, feasibility of the optimization problem (2) in Section VI. ¯c ¯ where ? denotes a real number. 
Let F , 1m×n F denote the complementary sparsity structure matrix. Further,− (with a III.SOLUTION TO THE SPARSE MGEAP slight abuse of notation, c.f. (1a)) let X , [x1, x2, , xn] In this section we present the solution to the optimization n×n ··· ∈ C , xi = 0n denote the non-singular eigenvector matrix of problem (2). To this aim, we use the theory of Lagrangian 6 the closed loop matrix Ac(F ) = A + BF . multipliers for equality constrained minimization problems. The MGEAP can be mathematically stated as follows: Remark 2. (Conjugate eigenvectors) We use the convention 1 2 that the right (respectively, left) eigenvectors (xi, xj) cor- min F F (2) F,X 2 || || responding to two conjugate eigenvalues (λi, λj) are also ∗ ∗ s.t. (A + BF )X = XΛ, (2a) conjugate. Thus, if λi = λj , then xi = xj .  c F¯ F = 0m×n, (2b) We use the real counterpart of (2a) for the analysis. For ◦ two complex conjugate eigenvalues ∗ and corresponding T (λi, λi ) where Λ = diag([λ1, λ2, , λn] ) is the diagonal matrix of ∗ eigenvectors (xi, x ), the following complex equation the desired eigenvalues. Equations··· (2a) and (2b) represent the i eigenvalue assignment and sparsity constraints, respectively. ∗ ∗ λi 0 (A + BF ) xi xi = xi xi 0 λ∗ The constraint (2a) is not convex in (F,X) and, therefore, i     h i the optimization problem (2) is non-convex. Consequently, 1Note that a sparse dynamic feedback law can assign the eigenvalues to multiple local minima may exist. This is a common feature arbitrary locations under Assumption 1 [35]. 4
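For quick experimentation with candidate feedbacks, the two constraints of problem (2) are easy to check numerically. The helper below (its name and interface are ours, not the paper's) verifies that a given F respects the binary pattern F̄ and places the closed-loop spectrum at the target set S.

import numpy as np

def satisfies_sparse_eap(A, B, F, Fbar, S, tol=1e-6):
    # (2b): entries of F outside the pattern must vanish, i.e. F̄^c ∘ F = 0
    pattern_ok = np.allclose(F * (1 - Fbar), 0.0, atol=tol)
    # (2a): the closed-loop spectrum must coincide with the target set S
    eigs = np.sort_complex(np.linalg.eigvals(A + B @ F))
    target = np.sort_complex(np.asarray(S, dtype=complex))
    spectrum_ok = np.allclose(eigs, target, atol=tol)
    return pattern_ok and spectrum_ok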

ˆ ˆ∗ ˆ ˆ is equivalent to the following real equation replacing [li, li ] with [Re(li), Im(li)]. Let Jb(z) be defined in − + Lemma 3 and P (z) = In2+mn J (z)Jb(z). Further, define Re(λi) Im(λi) b (A+BF ) Re(xi) Im(xi) = Re(xi) Im(xi) . T − −Im(λi) Re(λi) L¯ 4T (B Lˆ I ) and let , n,m R ⊗ n h i T     ∗ 2 2 ¯ For each complex eigenvalue, the columns xi xi of X are ˆ 0n ×n L D , ¯ . (7) replaced by Re(x ) Im(x ) to obtain a real X , and the L 2Imn i i  R   λi 0 Re(λi) Im(λi) sub-matrix ∗ of Λ is replaced by to ˆ ˆ 0 λi  −Im(λi) Re(λi) Then, (X, F ) is a local minimum of the optimization problem obtain a realh ΛR.i The real eigenvectors inh X and XR,i and (2) if and only if real eigenvalues in Λ and Λ coincide. Clearly, X is not the R R Fˆ = F¯ (BTLˆXˆ T), (8a) eigenvector matrix of A + BF (c.f. Remark 7), and X can be − ◦ ˆ ˆ ˆ obtained through the columns of XR. Thus, (2a) becomes (A + BF )X = XΛ (8b) (A + BFˆ)TLˆ = LˆΛ (8c) (A + BF )XR = XRΛR, (3) Jb(ˆz) is full rank, (8d) and replaces the optimization variable in (2). In the XR X ˆ theory of equality constrained optimization, the first-order P (ˆz)DP (ˆz) > 0. (8e) optimality conditions are meaningful only when the optimal Proof. We prove the result using the Lagrange theorem for n×n point satisfies the following regularity condition: the Jacobian equality constrained minimization. Let LR R and m×n ∈ of the constraints, defined by Jb, is full rank. This regularity M R be the Lagrange multipliers associated with condition is mild and usually satisfied for most classes of constraints∈ (3) and (2b), respectively. The Lagrange function problems [44]. Before presenting the main result, we derive the for the optimization problem (2) is given by Jacobian and state the regularity condition for the problem (2). P.2 1 T T Computation of J requires vectorization of the matrix = tr(F F ) + 2 1n[LR (Ac(F )XR XRΛR)]1n b L 2 ◦ − constraints (3) and (2b). For this purpose, let xR , vec(XR) T ¯c n2 mn T T T ∈ +1m[M (F F )]1n C , f , vec(F ) R , and let z , [xR, f ] be the vector ◦ ◦ ∈ P.6,P.7 1 T T containing all the independent variables of the optimization = tr(F F ) + 2 tr[LR(Ac(F )XR XRΛR)] problem. Further, let denote the total number of feedback 2 − ns c T ¯c +tr[(M F¯ ) F ]. sparsity constraints (i.e. number of 1’s in F ): ◦ c c c Necessity: We next derive the first-order necessary condition ns = (i, j): F¯ = [¯f ], ¯f = 1 . |{ ij ij }| for a stationary point. Differentiating w.r.t. X and setting L R Note that the constraint (2b) consists of ns non-trivial sparsity to 0, we get constraints, and can be equivalently written as d P.9 T T = 2[Ac (F )LR LRΛR] = 0n×n. (9) dXR L − Qf = 0ns , (4) The real equation (9) is equivalent to the complex equation T ns×mn where Q = eq1 eq2 . . . eqns 0, 1 with (8c). Equation (8b) is a restatement of (2a) for the optimal ¯c ∈ { } ˆ ˆ q1, . . . , qns = supp(vec(F )) being the set of indices in- (F, X). Differentiating w.r.t. F , we get dicating{ the} ones in vec(F¯c).  L d P.9 T T c = F + 2B L X + M F¯ = 0 × . (10) Lemma 3. (Jacobian of the constraints) The Jacobian of the dF L R R ◦ m n equality constraints (2a)-(2b) is given by Taking the Hadamard product of (10) with F¯c and using (2b), T T we get (since F¯c F¯c = F¯c) Ac(F ) ( ΛR) XR B Jb(z) = ⊕ − ⊗ . (5) ◦ 2 ¯c T T ¯c 0ns×n Q F (2B L X ) + M F = 0 × (11)   ◦ R R ◦ m n Proof. 
We construct the Jacobian Jb by rewriting the con- Replacing M F¯c from (11) into (10), we get straints (3) and (2b), in vectorized form and taking their ◦ ¯ T T (a) ¯ T T derivatives with respect to z. Constraint (3) can be vectorized F = F (2B LRX ) = F (B LX ), − ◦ R − ◦ in the following two different ways (using P.3 and P.4): where (a) follows from the definition of LR and XR and T Remark 2. Equation (8d) is the necessary regularity condition [(A + BF ) ( ΛR)]xR = 0n2 , (6a) T ⊕ −T and follows from Lemma 3 [A (Λ )]x + (X B)f = 0 2 . (6b) ⊕ − R R R ⊗ n Sufficiency: Next, we derive the second-order sufficient con- Differentiating (6a) w.r.t. and (6b) w.r.t yields the first dition for a local minimum by calculating the Hessian of . xR f L (block) row of . Differentiating (4) w.r.t. yields the second Taking the differential of twice, we get Jb z L (block) row of Jb, thus completing the proof. d2 = tr((dF )TdF ) + 4tr(LT BdF dX ) L R R We now state the optimality conditions for the problem (2). (P.2) T T T T = df df + 4vec (dF B LR)dxR ˆ ˆ Theorem 4. (Optimality conditions) Let (X, F ) (equiva- (P.3,P.5) T T ¯ T ˆT T = df df + df Ldx lently zˆ = [ˆxR, f ] ) satisfy the constraints (2a)-(2b). Let ˆ ˆ 1 dx L = [li], i = 1, , n be the left eigenvector matrix of = dxT df T D , ˆ ˆ ··· 2 df Ac(F ), and let LR be its real counterpart constructed by     5 where D is the Hessian (c.f. P.11) defined in (7). The sufficient A solution of the optimization problem (2) can be obtained second-order optimality condition for the optimization prob- by numerically/iteratively solving the matrix equation (12), lem requires the Hessian to be positive definite in the kernel which resembles a Sylvester type equation with a non-linear of the Jacobian at the optimal point [44, Chapter 11]. That is, right side, and using (8a) to compute the feedback matrix. The yTDy > 0, y : J (z)y = 0. This condition is equivalent to regularity and local minimum of the solution can be verified ∀ b P (z)DP (z) > 0, since Jb(z)y = 0 if and only if y = P (z)s using (8d) and (8e), respectively. Since the optimization prob- 2 for a s Rn +mn [44]. Since the P (z) is lem is not convex, only local minima can be obtained via this symmetric,∈ (8e) follows, and this concludes the proof. procedure. To improve upon the local solutions, the procedure can be repeated for different initial conditions to solve (12). Observe that the Hadamard product in (8a) guarantees that However, convergence to a global minimum is not guaranteed. the feedback matrix satisfies the sparsity constraints given The convergence of the iterative techniques to solve (12) in (2b). However, the optimal sparse feedback ˆ cannot be F depends substantially on the initial conditions. If they are obtained by sparsification of the optimal non-sparse feedback. not chosen properly, convergence may not be guaranteed. The optimality condition (8a) is an implicit condition in terms Further, the solution of (12) can also represent a local maxima. of the closed loop right and left eigenvector matrices. Next, we Therefore, instead of solving (12) directly, we use a different provide an explicit optimality condition in terms of L,ˆ Xˆ . { } approach based on the gradient descent procedure to obtain T T T Corollary 5. (Stationary point of (2)) Zˆ , [Xˆ , Lˆ ] is a a locally minimum solution. Details of this approach and stationary point of the optimization problem (2) if and only if corresponding algorithms are presented in Section IV.
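The first-order conditions (8a)-(8c) can be checked numerically for a candidate pair of right and left eigenvector matrices. The sketch below is a direct transcription of those conditions in the real form of (3), so Lam stands for the real block-diagonal Λ_R; the helper name and interface are ours.

import numpy as np

def first_order_residuals(A, B, Fbar, Lam, Xhat, Lhat):
    # (8a): the Hadamard product with F̄ enforces the sparsity pattern
    Fhat = -Fbar * (B.T @ Lhat @ Xhat.T)
    Ac = A + B @ Fhat
    r_right = np.linalg.norm(Ac @ Xhat - Xhat @ Lam)    # residual of (8b)
    r_left = np.linalg.norm(Ac.T @ Lhat - Lhat @ Lam)   # residual of (8c)
    return Fhat, r_right, r_left

Small residuals indicate a stationary point; the regularity and second-order conditions (8d)-(8e) must still be checked to certify a local minimum.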

¯ ˆ ˆ ¯ ¯T ¯ˆ ˆT ¯ ¯T ˆ A. Results for the non-sparse MGEAP AZ ZΛ = B1[F (B1 IZZ B2)]B2 Z, (12) − ◦ In this subsection, we present some results specific to the where, case when the optimization problem (2) does not have any ¯ ¯ A 0n×n ¯ B 0n×n sparsity constraints (i.e. F = 1m×n). Although the non-sparse A , T , B1 , , 0 × A 0n×m In MGEAP has been studied previously, these results are novel  n n    ¯ and further illustrate the properties of an optimal solution. ¯ In 0n×m F 0m×m B2 , , F , T , and We begin by presenting a geometric interpretation of the 0n×n B 0n×n F¯     optimality conditions in Theorem 4 with B = In, i.e. all the 0n×n In I¯ , . entries of A can be perturbed independently. In this case, the In 0n×n   optimization problem (2) can be written as: Proof. Combining (8b) and (8c) and using ΛT = Λ, we get 1 −1 2 min A XΛX F . (13) AXˆ XˆΛ BFˆXˆ X 2 || − || − = ATLˆ LˆΛ − FˆTBTLˆ Since A and R(X) XΛX−1 are elements (or vectors) of  −    , Fˆ 0 the matrix inner product space with Frobenius norm, a solution A¯Zˆ ZˆΛ = B¯ B¯TZˆ ⇒ − − 1 0 FˆT 2 of the optimization problem (13) is given by the projection of   A on the manifold R(X): X is non-singular . This ¯ T ˆ ˆ T , ¯ F (B LX ) 0 ¯T ˆ projection can be obtainedM { by solving the normal equation,} = B1 ◦ T T B2 Z 0 F¯ (XˆLˆ B) which states that the optimal error vector ˆ ˆ ˆ −1  ◦  F = A XΛX ˆ ˆ T should be orthogonal to the tangent plane of the manifold− ¯ ¯T LX 0 ¯ ¯T ˆ = B1 F B B2 B Z ˆ M ◦ 1 0 XˆLˆT 2 at the optimal point X [45]. The next result shows that the      optimality conditions derived in Theorem 4 are in fact the ¯ ¯T ¯ ˆ ˆT ¯ ¯ ¯T ˆ = B1 F B1 I(ZZ I)B2 B2 Z normal equations for the optimization problem (13). ◦ ◦  n T T T o = B¯1 F (B¯ I¯ZˆZˆ B¯2) B¯ Z,ˆ Lemma 8. (Geometric interpretation) Let F¯ = 1 and { ◦ 1 } 2 m×n . Then, Equations - are equivalent to the where the equalities follow from the Hadamard product. B = In (8a) (8c) following normal equation: Remark 6. (Partial spectrum assignment) The results of < F,ˆ M(Xˆ) > = 0, (14) Theorem 4 and Corollary 5 are also valid when specifying only T F p < n eigenvalues (the remaining eigenvalues are functionally where M(X) denotes the tangent space of at X. related to them; see also the discussion below Assumption 2). T M p×p ˆ n×p ˆ n×p Proof. We begin by characterizing the tangent space M(X), In this case, Λ C , X C and L C . While T partial assignment∈ may be useful∈ in some applications,∈ in this which is given by the first order approximation of R(X): paper we focus on assigning all the eigenvalues.  R(X + dX) = (X + dX)Λ(X + dX)−1 (P.10) Remark 7. (General eigenstructure assignment) Although = R(X) + dXΛX−1 XΛX−1dXX−1 the optimization problem (2) is formulated by considering Λ − to be diagonal, the result in Theorem 4 is valid for any general + higher order terms. Λ satisfying Γ(Λ) = . For instance, we can choose Λ in a Thus, the tangent space is given by Jordan canonical form.S However, note that for a general , Λ −1 −1 −1 n×n M(X) = Y ΛX XΛX YX : Y C X will cease to be an eigenvector matrix.  T { − ∈ } 6

ˆ ˆ n Necessity: Using F given by (8a), we get Proof. Since Λ R, X R , xˆ with xˆ 2 = 1, and ∈ ˆ ∈ k k T −1 −1 −1 Lˆ n ˆl. Let ˆl = β˜l where β ˆl > 0. Substituting < F,ˆ M(Xˆ) >= tr(Fˆ (Y ΛXˆ XˆΛXˆ Y Xˆ )) R , , 2 T − Fˆ =∈ ˆlxˆT from (8a) into (8b)-(8c), wek getk = tr(XˆLˆTY ΛXˆ −1) + tr(XˆLˆTXˆΛXˆ −1Y Xˆ −1) − − T ˆ (P.1) T T − (A ˆlxˆ )ˆx =x ˆΛ (A ΛI )ˆx = β˜l and, = tr(Lˆ Y Λ) + tr(Lˆ XˆΛXˆ 1Y ) − ⇒ − n − T T Tˆ (a) (P.1) (A xˆˆl )ˆl = ˆlΛ (A ΛI ) ˜l = βx.ˆ = tr(LˆTY Λ) + tr(ΛLˆTXˆXˆ −1Y ) = 0, − ⇒ − n − The above two equations imply that the unit norm vectors ˆT ˆ xˆ where (a) follows from the fact that Λ and L X commute. ˜ˆ Sufficiency: From (14), we get and l are left and right singular vectors of A ΛIn associated ˆ 2 − ˆT ˆ with the singular value β. Since F F = tr(F F ) = T −1 −1 −1 k k tr(Fˆ (Y ΛXˆ XˆΛXˆ Y Xˆ )) = 0 tr(ˆxˆlTˆlxˆT) = β2, we pick β as the minimum singular value − (P.1) of A ΛI , and the proof is complete. tr[(ΛXˆ −1FˆT Xˆ −1FˆTXˆΛXˆ −1)Y ] = 0. − n ⇒ − We conclude this subsection by presenting a brief com- Since the above equation is true for all Y Cn×n, we get ∈ parison of the non-sparse MGEAP solution with deflation −1 T −1 T −1 ΛXˆ Fˆ Xˆ Fˆ XˆΛXˆ = 0 × techniques for eigenvalue assignment. For B = In, an − n n XˆΛXˆ −1FˆT = FˆTXˆΛXˆ −1 alternative method to solve the non-sparse EAP is via the ⇒ Wielandt deflation technique [47]. Wielandt deflation achieves ˆ ˆT ˆT ˆ Ac(F )F = F Ac(F ). pole assignment by modifying the matrix A in n steps A ⇒ → Thus, A (Fˆ) and FˆT commute and have common left and A1 A2 An. Step i shifts one eigenvalue of c →to a desired→ · · · location → , while keeping the remaining right eigenspaces [46], i.e., FˆT = XGˆ Xˆ −1 = XˆLˆT, Ai−1 λi eigenvalues of fixed. This is achieved by using the where G is a diagonal matrix. This completes− the proof.− Ai−1 feedback F i = (µ λ )v zT, where µ and v are any df − i − i i i i i Next, we show the equivalence of the non-sparse MGEAP eigenvalue and right eigenvector pair of Ai−1, and zi is any T for two orthogonally similar systems. vector such that zi vi = 1. Thus, the overall feedback that solves the EAP is given as n i . Lemma 9. (Invariance under orthogonal transformation) Fdf = i=1 Fdf Let F¯ = 1 and (A ,B ), (A ,B ) be two orthogonally It is interesting to compare the optimal feedback expression m×n 1 1 2 2 in (8a), ˆ n ˆ T, with theP deflation feedback. Both similar systems such that A = PA P −1 and B = PB , F = i=1 lixˆi 2 1 2 1 feedbacks are− sum of n matrices, where each matrix has rank with P being an . Let optimal solutions P of (2) for the two systems be denoted by (Xˆ , Lˆ , Fˆ ) and 1. However, the Wielandt deflation has an inherent special 1 1 1 structure and a restrictive property that, in each step, all except (Xˆ , Lˆ , Fˆ ), respectively. Then 2 2 2 one eigenvalue remain unchanged. Furthermore, each rank 1 Xˆ = P Xˆ , Lˆ = P Lˆ , Fˆ = Fˆ P T, and term in Fˆ and F involves the right/left eigenvectors of− the 2 1 2 1 2 1 (15) df closed and open loop matrix, respectively. Clearly, since Fˆ is Fˆ1 F = Fˆ2 F . || || || || the minimum-gain solution of (2), Fˆ F Fdf F . Proof. From (8b), we have || || ≤ || ||

(A2 + B2Fˆ2)Xˆ2 = Xˆ2Λ IV. SOLUTIONALGORITHMS −1 T (PA1P + PB1Fˆ1P )P Xˆ1 = P Xˆ1Λ In this section, we present an iterative algorithm to obtain ⇒ a solution to the sparse MGEAP in (2). To develop the (A + B Fˆ )Xˆ = Xˆ Λ. ⇒ 1 1 1 1 1 algorithm, we first present two algorithms for computing non- Similar relation can be shown between Lˆ1 and Lˆ2 using (8c). sparse and approximately-sparse solutions to the MGEAP, Next, from (8a), we have respectively. Next, we present two heuristic algorithms to obtain a sparse solution of the EAP (i.e. any sparse solution, Fˆ = BTLˆ Xˆ T = BTLˆ Xˆ TP T = Fˆ P T. 2 − 2 2 2 − 1 1 1 1 which is not necessarily minimum-gain). Finally, we use these (P.1) algorithms to develop the algorithm for sparse MGEAP. Note Finally, Fˆ 2 = tr(FˆTFˆ ) = tr(FˆTFˆ ) = Fˆ 2 . || 1||F 1 1 2 2 || 2||F that although our focus is to develop the sparse MGEAP Recall from Remark 6 that Theorem 4 is also valid for algorithm, the other algorithms presented in this section are MGEAP with partial spectrum assignment. Next, we consider novel in themselves to the best of our knowledge. the case when only one real eigenvalue needs to assigned for We make the following assumptions: the MGEAP while the remaining eigenvalues are unspecified. Assumption 3. The triplet (A, B, F¯) has no fixed modes, i.e., In this special case, we can explicitly characterize the global Γ (A, B, F¯) = . minimum of (2) as shown in the next result. f ∅ Assumption 4. The open and closed loop eigenvalue sets are Corollary 10. (One real eigenvalue assignment) Let F¯ = disjoint, i.e., Γ(A) Γ(Λ) = , and B has full column rank. 1m×n, Λ R, and B = In. Then, the global minima of ∩ ∅ ∈ the optimization problem (2) is given by Fˆgl = σmin(A Assumption 4 is not restrictive since if there are any T − − ΛIn)uv , where u and v are unit norm left and right singular common eigenvalues in A and Λ, we can use a preliminary vectors, respectively, corresponding to σmin(A ΛI ). Further, sparse feedback F to shift the eigenvalues of A to some − n p Fˆ = σmin(A ΛI ). other locations such that Γ(A + BF ) Γ(Λ) = . Due to k glkF − n p ∩ ∅ 7

Assumption 3, such a Fp always exists. Then, we can solve Proof. Vectorizing (16a) using P.3 and taking the differential, 2 the modified MGEAP with parameters (A + BFp,B, Λ, F¯). If F is the sparse solution of this modified problem, then the Ax˜ + (I B)g = 0 n ⊗ solution of the original problem is Fp + F . −1 dx = A˜ (In B)dg. (19) To avoid complex domain calculations in the algorithms, ⇒ − ⊗ we use the real eigenvalue assignment constraint (3). For Note that due to Assumption 4, A˜ is invertible. Taking the convenience, we use a slight abuse of notation to denote XR differential of (16b) and vectorizing, we get and ΛR as X and Λ, respectively, in this section. Note that the invertibility of X is equivalent to the invertibility of X . R dF P.=10 dGX−1 GX−1 dXX−1 (20) − F A. Algorithms for the non-sparse MGEAP P.3,P.4 −T −T df = (X Im|)dg{z } (X F )dx ⇒ ⊗ − ⊗ (19) −T −T −1 We now present two iterative algorithms to obtain non- = [(X Im) + (X F )A˜ (In B)] dg. (21) sparse and approximately-sparse solutions to the MGEAP, ⊗ ⊗ ⊗ respectively. To develop the algorithms, we use the Sylvester P.=5 ZT(F,X) equation based parametrization [14], [48]. In this parametriza- | {z } tion, instead of defining (F,X) as free variables, we define a P.2 The differential of cost J = 1 f Tf is given by dJ=f Tdf. parameter m×n as the free variable. With this 2 G , FX R Using (21) and P.11, we get (17). To derive the Hessian, parametrization, the non-sparse∈ MGEAP is stated as: we compute the second-order differentials of the involved variables. Note that since g(G) is an independent variable, 2 2 1 2 d g = 0(d G = 0) [40]. Further, the second-order differential min J = F F (16) 2 G 2 || || of (16a) yields d X = 0. Rewriting (20) as dF X = dG F dX, and taking its differential and vectorization, we get − s.t. AX XΛ + BG = 0, (16a) − F = GX−1. (16b) (d2F )X + (dF )(dX) = (dF )(dX) − d2F = 2(dF )(dX)X−1 Note that, for any given G, we can solve the Sylvester equation ⇒ − (16a) to obtain X. Assumption 4 guarantees that (16a) has a P.4 d2f = 2(X−T dF )dx. (22) unique solution [49]. Further, we can use (16b) to obtain a ⇒ − ⊗ non-sparse feedback matrix F . Thus, (16) is an unconstrained Taking the second-order differential of J we get optimization problem in the free parameter G. The Sylvester-based parametrization requires the unique d2J = (df)Tdf + f T(d2f) = (df)Tdf + (d2f)Tf solution X of (16a) to be non-singular, which holds gener- (21),(22),P.5 T T T −1 T ically if (i) (A, BG) is controllable and (ii) (Λ, BG) is = dg ZZ dg 2dx (X (dF ) )f − − − ⊗ observable [50]. Since the system has no fixed modes, P.5 (A, B) = vec((dF )TFX−T) is controllable [27]. This implies that condition (i) holds P.5 T T T | −1 T{z } generically for almost all . Further, since is of full column = dg ZZ dg 2dx (X F In)Tm,ndf G B − ⊗ (19),(21) rank, condition (ii) is guaranteed if (Λ, G) is observable. = dgTZZTdg These conditions are mild and are satisfied− for almost all + 2dgT(I BT)A˜−T(X−1F T I )T ZTdg . instances as confirmed in our simulations (see Section V). n ⊗ ⊗ n m,n The next result provides the gradient and Hessian of the T T T dg (Z1Z +ZZ1 )dg cost J w.r.t. to the parameter g vec(G). (23) , | {z } Lemma 11. (Gradient and Hessian of J The gradient and ) The Hessian in (18) follows from (23) and P.11. Hessian of the cost J in (16) with respect to g is given by

dJ −1 T ˜−T −1 T The first-order optimality condition of the unconstrained = (X Im)+(In B )A (X F ) f, (17) dJ dg ⊗ ⊗ ⊗ problem (16) is dg = Z(F,X)f = 0. The next result shows h i that this condition is equivalent to the first-order optimality , Z(F,X) conditions of Theorem 4 without sparsity constraints. d2J | {z } H(F,X) = Z(F,X)ZT(F,X) d2g , Corollary 12. (Equivalence of first-order optimality condi- tions) Let F¯ = 1 . Then, the first-order optimality condition + Z (F,X)ZT(F,X) + Z(F,X)ZT(F,X), (18) m×n 1 1 Z(F,ˆ Xˆ)fˆ = 0 of (16) is equivalent to (8a)-(8c), where T ˜−T −1 T where Z1(F,X) , (In B )A (X F In)Tm,n, ⊗ ⊗ −T −1 T and A˜ A ( ΛT). ˆl , vec(Lˆ) = A˜ (Xˆ Fˆ )f.ˆ (24) , ⊕ − ⊗

2 Proof. Although the minimization cost of the modified MGEAP is 0.5||Fp + The optimality condition (8b) follows from (16a)- 2 ˆ ˆ −T T ˆ F ||F , it can be solved using techniques similar to solving MGEAP in (2). (16b). Equation (8a) can be rewritten as F X + B L = 0 8 and its vectorization using P.3 yields Z(F,ˆ Xˆ)fˆ = 0. Finally, s.t. (16a) and (16b) hold true, vectorization of the left side of (8c) yields where W Rm×n is a weighing matrix that penalizes the vec[(A + BFˆ)TLˆ LˆΛT] = vec[ATLˆ LˆΛT + (BFˆ)TL] cost for violation∈ of sparsity constraints, and is given by − − P.3,P.5 Tˆ T ˆ = A˜ l + (In (BFˆ) )l 1 if F¯ = 1, and ⊗ W = ij (24) ij ¯ = (Xˆ −1 FˆT)fˆ+ (I (BFˆ)T)A˜−T(Xˆ −1 FˆT)fˆ ( 1 if Fij = 0. ⊗ n ⊗ ⊗  P.5 T As the penalty weights of corresponding to the sparse = (In Fˆ )Z(F,ˆ Xˆ)fˆ = 0. W ⊗ entries of F increase, an optimal solution of (25) becomes To conclude, note that Lˆ is the right eigenvector matrix. more sparse and approaches towards the optimal solution of Using Lemma 11, we next present a steepest/Newton de- (2). Note that the relaxed problem (25) corresponds closely to scent algorithm to solve the non-sparse MGEAP (16) [44]. In the non-sparse MGEAP (16). Thus, we use a similar gradient the algorithms presented in this section, we interchangeably based approach to obtain its solution. use the the matrices (G, F, X) and their respective vector- Lemma 13. (Gradient and Hessian of JW) The gradient and izations (g, f, x). The conversion of a matrix to the vector Hessian of the cost JW in (25) with respect to g is given by (and vice-versa) is not specifically stated in the steps of the dJ algorithms and is assumed wherever necessary. W = Z(F,X)W¯ f, (26) dg 2 Algorithm 1: Non-sparse solution to the MGEAP d JW ¯ T 2 , HW (F,X) = Z(F,X)WZ (F,X) Input: A, B, Λ,G . d g 0 T T Output: Local minimum (F,ˆ Xˆ) of (16). +Z1,W (F,X)Z (F,X) + Z(F,X)Z1,W (F,X), (27) −1 where W¯ , diag(vec(W W )) and, Initialize: G0,X0 Solution of (16a), F0 G0X0 ◦ ← ← T ˜−T −1 T repeat Z1,W (F,X) ,(In B )A (X (W W F ) In)Tm,n. 1 α Compute step size (see below); ⊗ ◦ ◦ ⊗ ← Proof. Since the constraints of problems (16) and (25) coin- 2 g g αZ(F,X)f or; ← − − cide, Equations (19)-(22) from Lemma 11 also hold true for 3 1 g g α[H(F,X) + V (F,X)] Z(F,X)f; P.2,P.8 ← − problem (25). Now, J = 1 (vec(W ) f)T(vec(W ) f) = X Solution of Sylvester equation (16a); W 2 1 f TW¯ f. Thus, dJ = f TW¯ df and d2J◦ = (df)TW¯◦ df + F ← GX−1; 2 W W f TW¯ d2f. Using the relation vec(W W F ) = W¯ f, the until convergence;← remainder of the proof is similar to proof◦ of◦ Lemma 11. return (F,X) Using Lemma 13, we next present an algorithm to obtain Steps 2 and 3 of Algorithm 1 represent the steepest and an approximately-sparse solution to the MGEAP. (damped) Newton descent steps, respectively. Since, in gen- eral, the Hessian H(F,X) is not positive-definite, the Newton Algorithm 2: Approximately-sparse solution to the descent step may not result in a decrease of the cost. Therefore, MGEAP we add a V (F,X) to the Hessian to make Input: A, B, Λ, W, G0 it positive definite [44]. We will comment on the choice of Output: Local minimum (F,ˆ Xˆ) of (25). V (F,X) in Section V. In step 1, the step size α can be deter- −1 Initialize: G0,X0 Solution of (16a), F0 G0X0 mined by backtracking line search or Armijo’s rule [44]. For repeat ← ← a detailed discussion of the steepest/Newton descent methods, 1 α Update step size; ← the reader is referred to [44]. 
The computationally intensive 2 g g αZ(F,X)W¯ f or; ← − −1 steps in Algorithm 1 are solving the Sylvester equation (16a) 3 g g α[HW (F,X) + VW (F,X)] Z(F,X)W¯ f; and evaluating the inverses of X and H + V . Note that the ← − X Solution of Sylvester equation (16a); expression of the gradient in (17) is similar to the expression F ← GX−1; provided in [14]. However, the expression of Hessian in (18) until convergence;← is new and it allows us to implement Newton descent whose return (F,X) convergence is considerably faster than steepest descent. Next, we present a relaxation of the optimization prob- lem (2) and a corresponding algorithm that provides The step size rule and modification of the Hessian in approximately-sparse solutions to the MGEAP. We remove Algorithm 2 is similar to Algorithm 1. the explicit feedback sparsity constraints (2b) and modify the cost function to penalize it when these sparsity constraints are B. Algorithms for the sparse EAP violated. Using the Sylvester equation based parametrization, In this subsection, we present two heuristic algorithms to the relaxed optimization problem is stated as: obtain a sparse solution to the EAP (not necessarily minimum- 1 2 gain). This involves finding a pair (F,X) which satisfies the min JW = W F F (25) G 2 || ◦ || eigenvalue assignment and sparsity constraints (2a), (2b). We begin with a result that combines these two constraints. 9
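To make the Sylvester-based parametrization concrete, the following Python sketch implements the steepest-descent variant of Algorithm 1 (a fixed step size replaces the Armijo rule, and the damped Newton step is omitted for brevity). It assumes Λ is already in the real block form of (3) and that Γ(A) ∩ Γ(Λ) = ∅, so that (16a) has a unique solution; all function names are ours. The helpers vec and Z_matrix are reused by the later sketches.

import numpy as np
from scipy.linalg import solve_sylvester

def vec(M):                                   # column-stacking vec(.)
    return M.reshape(-1, order="F")

def Z_matrix(A, B, Lam, F, X):
    # Z(F, X) from (17); the gradient of J is Z(F, X) vec(F)
    n, m = B.shape
    Atil = np.kron(np.eye(n), A) - np.kron(Lam.T, np.eye(n))   # A ⊕ (-Λ^T)
    Xinv = np.linalg.inv(X)
    return np.kron(Xinv, np.eye(m)) + \
           np.kron(np.eye(n), B.T) @ np.linalg.inv(Atil).T @ np.kron(Xinv, F.T)

def mgeap_nonsparse(A, B, Lam, G0, alpha=1e-2, iters=20000, tol=1e-6):
    # Steepest-descent sketch of Algorithm 1 for the non-sparse MGEAP (16)
    n, m = B.shape
    G = G0.copy()
    for _ in range(iters):
        X = solve_sylvester(A, -Lam, -B @ G)  # (16a): A X - X Λ + B G = 0
        F = G @ np.linalg.inv(X)              # (16b)
        grad = Z_matrix(A, B, Lam, F, X) @ vec(F)
        if np.linalg.norm(grad) < tol:
            break
        G = G - alpha * grad.reshape(m, n, order="F")
    return F, X

A typical call is F, X = mgeap_nonsparse(A, B, LamR, np.random.randn(m, n)); restarting from several random G0 helps to reach the global minimum, as discussed above.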

Lemma 14. (Feasibility of (F, X)) An X computes the projection of Fns on the convex set of sparse × ∈ Rn n satisfies (2a) and (2b) if and only if feedback matrices. The optimization problem (31) computes the projection of F on the non-convex set of feedback matrices ˜ where, a˜(x) (B(X)) (28) that assign the eigenvalues. Thus, using the heuristics of ∈ R T a˜(x) Ax,˜ B˜(X) (X B)P¯,P¯ diag(vec(F¯)). repeated sparsification of the solution of non-sparse MGEAP , , − ⊗ F F , in (31), the algorithm obtains a sparse solution. The alternating Further, if (28) holds true, then the set of sparse feedback projection method is not guaranteed to converge in general matrices that satisfy (2a) and (2b) is given by when the sets are not convex. However, if the starting point is ˜ mn X = PF¯fns : B(X)fns =a ˜(x), fns R . (29) close to the two sets, convergence is guaranteed [52]. F { ∈ } Note that a solution Fˆns of the problem (31) with parameters Proof. Any feedback f which satisfies the sparsity constraint satisfies ˆ ˆ , where ˆ is a mn (A, B, Λ,F ) Fns = F + Kns Kns (2b) can be written as f = PF¯fns where fns R is a 3 ∈ solution of the optimization problem (16) with parameters non-sparse vector . Vectorizing (2a) using P.3 and P.4, and (A + BF,B, Λ). Thus, we can use Algorithm 1 to solve (31). substituting f = PF¯fns, we get ˜ T Ax = (X B)PF¯fns, (30) Algorithm 4: Projection-based sparse solution to EAP − ⊗ ¯ from which (28) and (29) follow. Input: A, B, Λ, F,G0, itermax. Output: (F,X) satisfying (2a) and (2b). Based on Lemma 14, we develop a heuristic algorithm for Initialize: G0, X0 Solution of (16a), a sparse solution to the EAP. The algorithm starts with a non- −1 ← Fns,0 G0X0 , i 0 sparse EAP solution (F0,X0) that does not satisfy (2b) and ← ← ˜ repeat (28). Then, it takes repeated projections of a˜(x) on (B(X)) ¯ R F F Fns; to update X and F , until a sparse solution is obtained. ← ◦ 1 (K ,X) Algorithm 1(A + BF,B, Λ); ns ← Fns F + Kns; Algorithm 3: Sparse solution to EAP i ←i + 1; ¯ ← Input: A, B, Λ, F,G0, itermax. until convergence or i > itermax; Output: (F,X) satisfying (2a) and (2b). return (F,X) −1 Initialize: G0, X0 Solution of (16a), Fns,0 G0X0 , i 0 ← ← Remark 15. (Comparison of EAP Algorithms 3 and 4) repeat← 1. Projection property: In general, Algorithm 3 results in a 1 ˜ ˜ + ; a˜(x) B(X)[B(X)] a˜(x) sparse EAP solution F that is considerably different from the ←− 2 x A˜ 1a˜(x); initial non-sparse F . In contrast, Algorithm 4 provides a ← ns,0 3 X Normalize X; sparse solution F that is close to F . This is due to the fact ← ns,0 i i + 1 that Algorithm 4 updates the feedback by solving the optimiza- ← until convergence or i > itermax; tion problem (31), which minimizes the deviations between return (f in (29),X) ∈ FX successive feedback matrices. Thus, Algorithm 4 provides a good (although not necessarily orthogonal) projection of a In step 1 of Algorithm 3, we update a˜(x) by projecting it given non-sparse EAP solution Fns,0 on the space of sparse on (B˜(X)). Step 2 computes x from a˜(x) using the fact that EAP solutions. A˜ isR invertible (c.f. Assumption 4). Finally, the normalization 2. Complexity: The computational complexity of Algorithm in step 3 is performed to ensure invertibility of X4. 4 is considerably larger than that of Algorithm 3. This is Next, we develop a second heuristic algorithm for solv- because Algorithm 4 requires a solution of a non-sparse ing the sparse EAP problem using the non-sparse MGEAP MGEAP problem in each iteration. 
In contrast, Algorithm 3 solution in Algorithm 1. The algorithm starts with a non- only requires projections on the range space of a matrix in sparse EAP solution (F0,X0). In each iteration, it sparsifies each iteration. Thus, Algorithm 3 is considerably faster as ¯ the feedback to obtain f = PF¯fns (or F = F Fns), and then compared to Algorithm 4. solves the following non-sparse MGEAP ◦ 3. Convergence: Although we do not formally prove the convergence of heuristic Algorithms 3 and 4 in this paper, 1 2 min Fns F F (31) a comprehensive simulation study in Subsection V-B suggests Fns,X 2 || − || that Algorithm 4 converges in almost all instances. In contrast, s.t. (A + BFns)X = XΛ, (31a) Algorithm 3 converges in much fewer instances and its con- vergence deteriorates considerably as the number of sparsity to update F that is close to the sparse F . This algorithm ns constraints increase (see Subsection V-B). resembles to the alternating projection method [51] to find an  intersection point of two sets. The operation F = F¯ F If the starting point F of Algorithm 4 is “sufficiently ◦ ns ns,0 close” to a local minima Fˆ of (2), then its iterations will 3 Since f satisfies (4), it can also be characterized as f = (Imn − ˆ + + converge (heuristically) to F . In this case, Algorithm 4 can Q Q)fns, and thus P¯F = Imn − Q Q. 4Since X is not an eigenvector matrix, we compute the eigenvectors from be used to solve the sparse MGEAP. However, convergence to X, normalize them, and then recompute real X. Fˆ is not guaranteed for an arbitrary starting point. 10
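The following sketch mirrors Algorithm 4, reusing mgeap_nonsparse from the Algorithm 1 sketch above. It alternates sparsification with a minimum-norm non-sparse correction that re-assigns the spectrum. Unlike the paper's implementation, it performs a single random restart of Algorithm 1 per outer iteration and omits the preliminary-feedback device used to enforce Assumption 4, so it is only a rough outline; the function name is ours.

import numpy as np

def sparse_eap_projection(A, B, Lam, Fbar, Fns0, outer_iters=200, tol=1e-6, seed=0):
    # Sketch of Algorithm 4: repeated sparsification F = F̄ ∘ Fns followed by a
    # non-sparse MGEAP correction Kns solving (31) for the pair (A + B F, B, Λ)
    rng = np.random.default_rng(seed)
    m, n = Fbar.shape
    Fns = Fns0.copy()
    X = None
    for _ in range(outer_iters):
        F = Fbar * Fns                                       # project onto the sparsity pattern
        Kns, X = mgeap_nonsparse(A + B @ F, B, Lam, rng.standard_normal((m, n)))
        Fns = F + Kns
        if np.linalg.norm(Fns - Fbar * Fns) < tol:           # sparsity error e_F
            break
    return Fbar * Fns, X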

C. Algorithm for sparse MGEAP

Algorithm 5: Sparse solution to the MGEAP

In this subsection, we present an iterative projected gradient Input: A, B, Λ, F¯,G0, itermax. algorithm to compute the sparse solutions of the MGEAP in Output: Local minimum (F,ˆ Xˆ) of (2). (2). The algorithm consists of two loops. The outer loop is Initialize: same as the non-sparse MGEAP Algorithm 1 (using steepest (F0,X0) Algorithm 4(A, B, Λ, F¯,G0, itermax), descent) with an additional projection step, which constitutes ← G0 F0X0, i 0 the inner loop. Figure 1 represents one iteration of the algo- repeat← ← rithm. First, the gradient Z(Fk,Xk)fk is computed at a current α Update step size; point Gk (equivalently (Fk,Xk), where Fk is sparse). Next, ← gns g αPF Z(F,X)f; the gradient is projected on the tangent plane of the sparsity ← − Xns Solution of Sylvester equation (16a); constraints (4), which is given by F ←G X−1; ns ← ns ns T 1 (F,X) Algorithm 4(A, B, Λ, F¯,Gns, itermax); mn d(Qf) ← F = y R : y = 0 G FX; T ∈ dg ← (   ) i i + 1; mn T ← = y R : QZ (F,X)y = 0 . (32) until convergence or i > itermax; ∈ return (F,X) From P.12, the projection of the gradient on is TF given by PFk Z(Fk,Xk)fk, where PFk = Imn T + T . Next, a move is made− [QZ (Fk,Xk)] [QZ (Fk,Xk)] A. Implementation aspects of the algorithms in the direction of the projected gradient to obtain Gns,k(Fns,k,Xns,k). Finally, the orthogonal projection of In the Newton descent step (step 3) of Algorithm 1, we Gns,k is taken on the space of sparsity constraints to obtain need to choose (omitting the parameter dependence nota- Gk+1(Fk+1,Xk+1). This orthogonal projection is equivalent tion) V such that H + V is positive-definite. We choose T T to solving (31) with sparsity constraints (2b), which in turn is V = δImn Z1Z ZZ1 where, 0 < δ 1. Thus, from − − T  T equivalent to the original sparse MGEAP (2). Thus, the orthog- (18), we have: H + V = ZZ + Imn. Clearly, ZZ is onal projection step is as difficult as the original optimization positive-semidefinite and the small additive term Imn ensures problem. To address this issue, we use the heuristic Algo- that H + V is positive-definite. Note that other possible rithm 4 to compute the projections. Although the projections choices of V also exist. In step 1 of Algorithm 1, we use obtained using Algorithm 4 are not necessarily orthogonal, the Armijo rule to compute the step size α. Finally, we they are typically good (c.f. Remark 15). dJ use dg < , 0 <  1 as the convergence criteria 2  of Algorithm 1. For Algorithm 2, we analogously choose T T VW = δI mn Z1,W Z ZZ1,W and the same convergence criteria and step− size rule− as Algorithm 1. In both algorithms, Zkfk if we encounter a scenario in which the solution X of (16a) is singular (c.f. paragraph below (16b)), we perturb G slightly G such that the new solution is non-singular, and then continue PF Zkfk ns,k k the iterations. We remark that such instances occur extremely TF Gk rarely in our simulations. Algorithm 4 For the sparse EAP Algorithm 3, we use the convergence ˜ ˜ + Qf =0 Gk+1 criteria eX = [In2 B(X)(B(X)) ]˜a(x) 2 < , 0 <  1. For Algorithmk − 4, we use the convergencek criteria  Fig. 1. A single iteration of Algorithm 5. eF = F F¯ F F < , 0 <  1. Thus, the iterations |{z} of thesek algorithms− ◦ k stop when x lies in a certain subspace and Algorithm 5 is computationally intensive due to the use when the sparsity error becomes sufficiently small (within the of Algorithm 4 in step 1 to compute the projection on the specified tolerance), respectively. Further, note that Algorithm space of sparse matrices. 
In fact, the computational complexity of Algorithm 5 is one order higher than that of the non-sparse MGEAP Algorithm 1. However, a way to considerably reduce the number of iterations of Algorithm 5 is to initialize it using the approximately-sparse solution obtained by Algorithm 2. In this case, Algorithm 5 starts near the local minimum and, thus, its convergence time reduces considerably.
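For completeness, the outer loop of Algorithm 5 can be sketched as follows, reusing vec, Z_matrix, solve_sylvester and the Algorithm 4 sketch sparse_eap_projection defined above. The projector onto the tangent space of (32) is built from the rows of Z^T indexed by the constrained entries of vec(F̄), following P.12; a fixed step size replaces the Armijo rule and the other safeguards, so this is only an outline under those simplifications, with all names ours. The initial pair (F0, X0) would be obtained from the Algorithm 4 sketch.

import numpy as np
from scipy.linalg import solve_sylvester

def sparse_mgeap(A, B, Lam, Fbar, F0, X0, outer_iters=100, alpha=1e-2, tol=1e-5):
    # Sketch of Algorithm 5: projected gradient descent on g = vec(G)
    m, n = Fbar.shape
    zero_idx = np.flatnonzero(vec(Fbar) == 0)        # support of vec(F̄^c), i.e. the rows selected by Q
    F, X = F0.copy(), X0.copy()
    for _ in range(outer_iters):
        G = F @ X
        Z = Z_matrix(A, B, Lam, F, X)
        M = Z.T[zero_idx, :]                         # Q Z^T(F, X)
        PF = np.eye(m * n) - np.linalg.pinv(M) @ M   # projector onto the tangent space (P.12)
        step = PF @ (Z @ vec(F))                     # projected gradient
        if np.linalg.norm(step) < tol:               # convergence test of Section V-A
            break
        Gns = G - alpha * step.reshape(m, n, order="F")
        Xns = solve_sylvester(A, -Lam, -B @ Gns)     # (16a)
        Fns = Gns @ np.linalg.inv(Xns)               # (16b)
        F, X = sparse_eap_projection(A, B, Lam, Fbar, Fns)   # project back onto sparse EAP solutions
    return F, X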

feedback Fp, as explained below Assumption 4. TABLE I Finally, for Algorithm 5, we use the following convergence COMPARISON OF MGEAP SOLUTIONSBY ALGORITHMS 1, 2 AND 5 dJ criteria: PF dg < , 0 <  1. We choose the stopping 2  Non-sparse solutions by Algorithm 1 tolerance  between 10−6 and 10−5 for all the algorithms. −0.0831 + 0.3288j −0.5053 0.2031 0.1919 − 0.4635j 0.5612 −0.2505 Xˆ =  xˆ∗  1  0.1603 + 0.5697j 1 0.5617 0.8441 B. Numerical study 0.3546 + 0.3965j −0.3379 0.4283 −0.5569 + 1.4099j −0.9154 2.5568 We begin this subsection with the following example: −0.2967 + 1.0244j −0.5643 1.7468 Lˆ =  ˆ∗  1 −0.0687 + 0.0928j l1 0.0087 0.2722 3.7653 2.1501 0.3120 0.2484 0.2259 − 0.2523j 0.0869 −0.4692 −1.6789− 1.0374 0.5306− 1.3987 −0.1111 −0.1089 −0.0312 −0.4399 A =  − , Fˆ = kFˆ k = 0.5580 2.1829 2.5142 1.2275 0.2833 1 −0.1774 −0.2072 0.0029 0.1348 , 1 F −13.6811 −9.6804 −0.5242 2.9554    0.3817 −0.3349 0.7280 −0.2109 − − −  Fˆ = kFˆ k = 1.1286  T  2 −0.0873 −0.4488 −0.4798 −0.0476 , 2 F 1 1 2 5 ¯ 1 1 0 0 B = , F = , 0.3130 2.0160 1.2547 −0.6608 1 3 4 2 1 0 1 1 Fˆ = kFˆ k = 2.7972     3 0.0683 −0.7352 −0.0748 −1.0491 , 3 F = 2, 1, 0.5 j . S {− − − ± } Average # of Steepest/Newton descent iterations = 5402.1/15.5 The eigenvalues of A are Γ(A) = 2, 1, 1 2j . Thus, {− − ± } Approximately-sparse solutions by Algorithm 2 with w = 30 the feedback F is required to move two unstable eigenvalues 0.9652 −1.3681 0.0014 −0.0021 Fˆ = kFˆ k = 1.8636 into the stable region while keeping the other two stable eigen- 1 0.4350 −0.0023 −0.6746 −0.1594 , 1 F values fixed. Table I shows the non-sparse, approximately- −1.0599 −1.7036 −0.0013 −0.0071 Fˆ = kFˆ k = 2.0146 sparse and sparse solutions obtained by Algorithms 1, 2 and 2 −0.1702 −0.0057 0.0263 −0.0582 , 2 F 5, respectively, and Figure 2 shows a sample iteration run  3.3768 2.2570 0.0410 −0.0098 Fˆ = kFˆ k = 5.7795 of these algorithms. Since the number of iterations taken by 3 −1.4694 −0.0061 0.1242 −3.8379 , 3 F the algorithms to converge depends on their starting points, Average # of Newton descent iterations = 19.1 we report the average number of iterations taken over 1000 Sparse solutions by Algorithm 5 random starting points. Further, to obtain approximately-sparse −0.3370 + 0.3296i −0.4525 0.5168 solutions, we use the weighing matrix with Wij = w if 0.2884 − 0.1698i ∗ 0.1764 −0.3007 Xˆ1 =  xˆ  F¯ = 1. All the algorithms obtain three local minima, among −0.4705 + 0.5066i 1 −0.7478 0.7831 ij 0.2754 + 0.3347i −0.4527 0.1711 which the first is the global minimum. The second column in  25.2072 + 16.5227i 44.6547 82.3616 Xˆ and Lˆ is conjugate of the first column (c.f. Remark 2). It 11.0347 + 11.8982i 12.5245 35.8266 Lˆ =  ˆl∗  can be verified that the non-sparse and sparse solutions satisfy 1  −12.2600 − 5.8549i 1 −24.5680 −39.1736 the optimality conditions of Theorem 4. −0.6417 − 2.2060i −0.4383 −3.6464 0.9627 −1.3744 0.0000 0.0000 For the non-sparse solution, the number of iterations taken Fˆ = kFˆ k = 1.8694 1 0.4409 0.0000 −0.6774 −0.1599 , 1 F by Algorithm 1 with steepest descent are considerably larger −1.0797 −1.7362 0.0000 0.0000 Fˆ = kFˆ k = 2.0525 than the Newton descent. This is because the steepest descent 2 −0.1677 0.0000 0.0264 −0.0610 , 2 F converges very slowly near a local minimum. Therefore, we  3.4465 2.2568 0.0000 0.0000 Fˆ = kFˆ k = 6.0866 use Newton descent steps in Algorithms 1 and 2. 
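Relatedly, the slow steepest-descent convergence reported here can be avoided with the damped Newton step of Algorithm 1 combined with the Hessian modification of Section V-A, which gives H + V = ZZ^T + δI_mn. A drop-in replacement for the update used in the mgeap_nonsparse sketch above (again with our own naming, and with δ and α as tuning parameters) is:

import numpy as np

def newton_update(A, B, Lam, F, X, G, alpha=1.0, delta=1e-3):
    # Damped Newton step: G ← G − α (H + V)^{-1} dJ/dg with H + V = Z Z^T + δ I_mn,
    # which is positive definite for δ > 0 (Z_matrix and vec are defined earlier)
    m, n = F.shape
    Z = Z_matrix(A, B, Lam, F, X)
    H_mod = Z @ Z.T + delta * np.eye(m * n)
    step = np.linalg.solve(H_mod, Z @ vec(F))
    return G - alpha * step.reshape(m, n, order="F")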
Next, observe 3 −1.8506 −0.0000 0.3207 −4.0679 , 3 F that the entries at the sparsity locations of the locally minimum Average # of Newton descent iterations: feedbacks obtained by Algorithm 2 have small magnitude. 1. Using random initialization = 8231 Further, the average number of Newton descent iterations 2. Using initialization by Algorithm 2 = 715 for convergence and the norm of the feedback obtained of Algorithm 2 is larger as compared to Algorithm 1. This is because the approximately-sparse optimization problem (25) is Figure 3 shows a sample run of EAP Algorithms effectively more restricted than its non-sparse counterpart (16). −1.0138 0.6851 −0.1163 0.8929 3 and 4 for G0 = −1.8230 −2.2041 −0.1600 0.7293 . Finally, observe that the solutions of Algorithm 5 are sparse. The sparse feedback obtained by Algorithms 3 and 4 Note that Algorithm 5 involves the use of projection Algo- 0.1528 −2.6710 0.0000 0.0000  are F = −0.8382 0.0000 0.1775 −0.1768 and F = rithm 4, which in turn involves running Algorithm 1 multiple 4.2595 4.2938 0.0000 0.0000 −0.2519 0.0000 −2.1258 −1.3991 , respectively. The projection times. Thus, for a balanced comparison, we present the total error e and the sparsity error e capture the convergence number of Newton descent iterations of Algorithm 1 involved  X  F 5 of Algorithms 3 and 4, respectively. Figure 3 shows that these in the execution of Algorithm 5. From Table I, we can errors decrease, thus indicating convergence of the algorithms. observe that Algorithm 5 involves considerably more Newton Next, we provide an empirical verification of the conver- descent iterations compared to Algorithms 1 and 2, since it gence of heuristic Algorithms 3 and 4. Let the sparsity ratio involves computationally intensive projection calculations by (SR) be defined as the ratio of the number of sparse entries to Algorithm 4. One way to reduce its computation time is to 0 ¯ the total number of entries in F (i.e. SR = Number of 0 s in F ). initialize is near the local minimum using the approximately mn We perform 1000 random executions of both the algorithms. sparse solution of Algorithm 2. In each execution, n is randomly selected between 4 and 20 5The number of outer iteration of Algorithm 5 are considerably less, for and m is randomly selected between 2 and n. Then, matrices instance, 20 in Figure 2. (A, B) are randomly generated with appropriate dimensions. 12
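A small harness mirroring this simulation setup (names are ours) generates feasible random instances: a random pattern F̄ with ⌊SR·mn⌋ zeros and a target spectrum Γ(A + BFr) for a random Fr obeying the pattern, so the sparse EAP is feasible by construction. The returned spectrum may contain complex-conjugate pairs, from which the real block form Λ_R used by the sketches above still has to be assembled.

import numpy as np

def random_feasible_instance(n, m, sr, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))
    Fbar = np.ones((m, n))
    zero_idx = rng.choice(m * n, size=int(np.floor(sr * m * n)), replace=False)
    Fbar.flat[zero_idx] = 0.0                     # sparsity ratio SR = (# zeros) / mn
    Fr = Fbar * rng.standard_normal((m, n))       # a random feedback obeying the pattern
    S = np.linalg.eigvals(A + B @ Fr)             # feasible target spectrum by construction
    return A, B, Fbar, S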

TABLE II
CONVERGENCE PROPERTIES OF EAP ALGORITHMS 3 AND 4

SR     Algorithm 3                          Algorithm 4
1/4    Convergence instances = 427          Convergence instances = 997
       Average d_{F_sol,F_0} = 8.47         Average d_{F_sol,F_0} = 1.28
1/2    Convergence instances = 220          Convergence instances = 988
       Average d_{F_sol,F_0} = 4.91         Average d_{F_sol,F_0} = 1.61
2/3    Convergence instances = 53           Convergence instances = 983
       Average d_{F_sol,F_0} = 6.12         Average d_{F_sol,F_0} = 1.92

Table II shows the convergence results of Algorithms 3 and 4 for three different sparsity ratios. While the convergence of Algorithm 3 deteriorates as F becomes more sparse, Algorithm 4 converges in almost all instances. This implies that, in the context of Algorithm 5, Algorithm 4 provides a valid projection in step 1 in almost all instances. We remark that in the rare case that Algorithm 4 fails to converge, we can reduce the step size α to obtain a new G_ns and compute its projection. Further, we compute the average of the distance d_{F_sol,F_0} over all executions of Algorithms 3 and 4 that converge. Observe that the average distance for Algorithm 4 is smaller than that for Algorithm 3. This shows that Algorithm 4 provides a considerably better projection of F_0 onto the space of sparse matrices than Algorithm 3 (cf. Remark 15). Note that the above simulations focus on the convergence properties as SR changes, and they do not capture the individual effects of the number of sparsity constraints, m, and n (for instance, small systems versus large systems).

VI. FEASIBILITY OF THE SPARSE EAP

In this section we provide a discussion of the feasibility of the sparse EAP, i.e., eigenvalue assignment with sparse, static state feedback. We address certain aspects of this problem, and leave a detailed characterization for future research.

Lemma 16. (NP-hardness) Determining the feasibility of the sparse EAP is NP-hard.

Proof. We prove the result using the NP-hardness of the static output feedback pole placement (SOFPP) problem [53], which is stated as follows: for given A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{p×n}, determine if there exists a non-sparse K ∈ R^{m×p} such that the eigenvalues of A + BKC are at some desired locations. Without loss of generality, we assume that C has full row rank. Thus, there exists an invertible T = [C^+ V] ∈ R^{n×n} with CV = 0. Applying the similarity transformation by T (which preserves the eigenvalues), and using CT = [I_p 0], we get

T^{-1}(A + BKC)T = T^{-1}AT + T^{-1}B [K 0] = Ā + B̄ [K 0],

where Ā ≜ T^{-1}AT and B̄ ≜ T^{-1}B. Clearly, the above SOFPP problem is equivalent to a sparse EAP with matrices Ā, B̄ and sparsity pattern given by F̄ = [1_{m×p} 0_{m×(n−p)}]. Thus, the NP-hardness of the sparse EAP follows from the NP-hardness of the SOFPP problem.
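To make the reduction in the proof concrete, the following sketch (assuming NumPy and SciPy; the SOFPP instance is randomly generated purely for illustration) constructs the equivalent sparse-EAP data Ā, B̄ and the pattern F̄, and checks that the closed-loop spectra coincide for an arbitrary output feedback K.

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, m, p = 5, 2, 3

# A random SOFPP instance: place the eigenvalues of A + B K C by choosing K.
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))          # full row rank with probability one

# T = [C^+  V] with C V = 0, so that C T = [I_p  0] and T is invertible.
T = np.hstack([np.linalg.pinv(C), null_space(C)])
A_bar = np.linalg.solve(T, A @ T)        # T^{-1} A T
B_bar = np.linalg.solve(T, B)            # T^{-1} B

# Sparsity pattern of the equivalent sparse EAP: feedbacks of the form F = [K  0].
F_bar = np.hstack([np.ones((m, p)), np.zeros((m, n - p))])

# Check: for any K, eig(A + B K C) equals eig(A_bar + B_bar F) with F = [K, 0].
K = rng.standard_normal((m, p))
F = np.hstack([K, np.zeros((m, n - p))])
print(np.sort_complex(np.linalg.eigvals(A + B @ K @ C)))
print(np.sort_complex(np.linalg.eigvals(A_bar + B_bar @ F)))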

Next, we present graph-theoretic necessary and sufficient conditions for arbitrary eigenvalue assignment by sparse static feedback. Due to space constraints, we briefly introduce the required graph-theoretic notions and refer the reader to [54] for more details. Given a square matrix A = [a_ij] ∈ R^{n×n}, let G_A denote its associated graph with n vertices, where a_ij is the weight of the edge from vertex j to vertex i. A closed directed path (sequence of consecutive vertices) is called a cycle if the start and end vertices coincide and no vertex appears more than once along the path (except for the first vertex). A set of vertex-disjoint cycles is called a cycle family. The width of a cycle family is the number of edges contained in all its cycles.

Lemma 17. (Necessary conditions) Let H = [A B; F 0], and let G_H be its associated graph. Let n_s be the number of sparsity constraints (zero entries) in the feedback matrix F ∈ R^{m×n}. Further, let S_k denote the set of cycle families of G_H of width k, with k = 1, ..., n. Necessary conditions for arbitrary eigenvalue assignment with sparse, static state feedback are:
(i) n_s ≤ (m − 1)n, that is, F has at least n nonzero entries,
(ii) for each k = 1, ..., n, there exists a nonzero entry f_{i_k j_k} of F such that its corresponding edge appears in S_k.

Proof. (i) Arbitrary eigenvalue assignment requires that the feedback F assign all n coefficients of the characteristic polynomial det(sI − A − BF) to arbitrary values. This requires the mapping h : R^{mn−n_s} → R^n from the nonzero entries of F to the coefficients of the characteristic polynomial to be surjective, which imposes that the dimension of the domain of h not be less than the dimension of its codomain.
(ii) Let c_k, k = 1, ..., n, denote the coefficients of the polynomial det(sI − A − BF). Then, c_k is a multiaffine function of the nonzero entries of F which appear in the cycle families in S_k [54]. If there exists no feedback edge in S_k, then c_k is fixed and does not depend on F. Thus, arbitrary eigenvalue assignment is not possible in this case.

Lemma 18. (Sufficient conditions) Let H = [A B; F 0], and let G_H be its associated graph. Let S_k denote the set of cycle families of G_H of width k, and let F_k denote the set of feedback edges^6 contained in S_k, with k = 1, ..., n. Then, each of the following conditions is sufficient for arbitrary eigenvalue assignment with sparse, static state feedback:
(i) for each k = 1, ..., n, there exists a feedback edge that appears in S_k and not in S_j, for all j ≠ k,
(ii) there exists a permutation {i_1, i_2, ..., i_n} of {1, 2, ..., n} such that ∅ ≠ F_{i_1} ⊂ F_{i_2} ⊂ ... ⊂ F_{i_n}.

^6 Feedback edges are those associated with the nonzero entries of F.

Proof. (i) Similar to the proof of Lemma 17, if there exists a nonzero entry f_{i_k j_k} that is exclusive to S_k, then this variable can be used to assign c_k arbitrarily. If this holds for k = 1, ..., n, then all coefficients of the characteristic polynomial can be assigned arbitrarily, resulting in arbitrary eigenvalue assignment.
(ii) Condition (ii) guarantees that, for j = 2, ..., n, the coefficient c_{i_j} depends on the feedback variables of c_{i_{j−1}} and on some additional feedback variables. These additional variables can be used to assign the coefficient c_{i_j} arbitrarily.

To illustrate the results, consider the following example:

A = [ a_11  a_12  0
      0     0     0
      a_31  0     0 ],

B = [ b_11  0
      0     b_22
      0     0    ],

F = [ f_11  0     0
      f_21  0     f_23 ].

Fig. 4. Graph G_H and its cycle families; u_1, u_2 denote the control vertices. (a) Graph G_H, where blue and orange nodes correspond to state and control vertices, respectively. (b) Cycle families of G_H. All cycle families contain a single cycle: there are two cycle families of width 1, and one cycle family each of width 2 and 3.

The corresponding graph and cycle families are shown in Fig. 4. Note that the edges f_11, f_21 and f_23 are exclusive to cycle families of widths 1, 2 and 3, respectively. Thus, condition (i) of Lemma 18 is satisfied and arbitrary eigenvalue assignment is possible. Next, consider the feedback F = [0 0 f_13; f_21 f_22 0]. In this case, the sets of feedback edges in the cycle families of different widths are F_1 = {f_22}, F_2 = {f_22, f_13, f_21} and F_3 = {f_22, f_13}. We observe that F_1 ⊂ F_3 ⊂ F_2 (condition (ii) of Lemma 18) and arbitrary eigenvalue assignment is possible. Further, if F = [0 f_12 f_13; f_21 0 0], then F_1 = F_3 = ∅, and the coefficients c_1, c_3 of det(sI − A − BF) are fixed. This violates condition (ii) of Lemma 17 and prevents arbitrary eigenvalue assignment with the given F.
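The dependence of the characteristic-polynomial coefficients on the feedback entries in this example can also be checked symbolically. The following sketch (a minimal illustration assuming SymPy, not part of the original development) computes c_1, c_2, c_3 for the first and the last feedback patterns above, showing that the first pattern influences all three coefficients while the last leaves c_1 and c_3 fixed.

import sympy as sp

s = sp.symbols('s')
a11, a12, a31, b11, b22 = sp.symbols('a11 a12 a31 b11 b22')
f11, f12, f13, f21, f22, f23 = sp.symbols('f11 f12 f13 f21 f22 f23')

A = sp.Matrix([[a11, a12, 0], [0, 0, 0], [a31, 0, 0]])
B = sp.Matrix([[b11, 0], [0, b22], [0, 0]])

def char_poly_coeffs(F):
    """Coefficients c1, c2, c3 of det(sI - A - BF) = s^3 + c1*s^2 + c2*s + c3."""
    p = sp.expand((s * sp.eye(3) - A - B * F).det())
    return [sp.simplify(p.coeff(s, 3 - k)) for k in (1, 2, 3)]

# Pattern satisfying condition (i) of Lemma 18: f11, f21, f23 enter c1, c2, c3.
print(char_poly_coeffs(sp.Matrix([[f11, 0, 0], [f21, 0, f23]])))
# Infeasible pattern from the example: c1 and c3 come out independent of F.
print(char_poly_coeffs(sp.Matrix([[0, f12, f13], [f21, 0, 0]])))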

Note that the conditions in Lemmas 17 and 18 are constructive, and can also be used to determine a sparsity pattern that guarantees feasibility of the sparse EAP. We leave the design of such an algorithm as a topic of future investigation.

Remark 19. (Comparison with existing results) We emphasize that the conditions presented in Lemmas 17 and 18 for arbitrary eigenvalue assignment using static state feedback are not equivalent to the conditions for the non-existence of SFMs studied in [27], [31]–[35], [38]. The reason is that arbitrary assignment of non-SFMs necessarily requires a dynamic controller and cannot, in general, be achieved by a static controller. Further, our graph-theoretic results are based on the feedback edges being suitably covered by cycle families, whereas the results in [32], [33] are based on the state nodes/subgraphs being suitably covered by strong components, cycles/cactus.

VII. CONCLUSION

In this paper we studied the MGEAP for LTI systems with arbitrary sparsity constraints on the static feedback matrix. We presented an analytical characterization of its locally optimal solutions, thereby providing explicit relations between an optimal solution and the eigenvector matrices of the associated closed-loop system. We also provided a geometric interpretation of an optimal solution of the non-sparse MGEAP. Using a Sylvester-based parametrization, we developed a heuristic projected gradient descent algorithm to obtain local solutions to the MGEAP. We also presented two novel algorithms for solving the sparse EAP and an algorithm to obtain approximately-sparse local solutions to the MGEAP. Numerical studies suggest that our heuristic algorithm converges in most cases. Further, we also discussed the feasibility of the sparse EAP and provided necessary and sufficient conditions for the same.

The analysis in the paper is developed, for the most part, under the assumption that the sparse EAP problem with static feedback is feasible. Future directions of research include a more detailed characterization of the feasibility of the EAP, a constructive algorithm to determine feasible sparsity patterns, a convex relaxation of the sparse MGEAP with guaranteed distance from optimality, and a more rigorous analysis of the convergence of Algorithms 4 and 5.

REFERENCES

[1] R. Schmid, L. Ntogramatzidis, T. Nguyen, and A. Pandey. A unified method for optimal arbitrary pole placement. Automatica, 50(8):2150–2154, 2014.
[2] Y. Peretz. On parametrization of all the exact pole-assignment state-feedbacks for LTI systems. IEEE Transactions on Automatic Control, 62(7):3436–3441, 2016.
[3] P. J. Antsaklis and A. N. Michel. Linear Systems. Birkhauser, 2005.
[4] J. Kautsky, N. K. Nichols, and P. Van Dooren. Robust pole assignment in linear state feedback. International Journal of Control, 44(5):1129–1155, 1985.
[5] A. Pandey, R. Schmid, and T. Nguyen. Performance survey of minimum gain exact pole placement methods. In European Control Conference, pages 1808–1812, Linz, Austria, 2015.
[6] Y. J. Peretz. A randomized approximation algorithm for the minimal-norm static-output-feedback problem. Automatica, 63:221–234, 2016.
[7] B. Bamieh, F. Paganini, and M. A. Dahleh. Distributed control of spatially invariant systems. IEEE Transactions on Automatic Control, 47(7):1091–1107, 2002.
[8] M. Rotkowitz and S. Lall. A characterization of convex problems in decentralized control. IEEE Transactions on Automatic Control, 51(2):274–286, 2006.
[9] A. Mahajan, N. C. Martins, M. C. Rotkowitz, and S. Yuksel. Information structures in optimal decentralized control. In IEEE Conf. on Decision and Control, pages 1291–1306, Maui, HI, USA, December 2012.
[10] F. Lin, M. Fardad, and M. R. Jovanović. Augmented Lagrangian approach to design of structured optimal state feedback gains. IEEE Transactions on Automatic Control, 56(12):2923–2929, 2011.
[11] R. Schmid, A. Pandey, and T. Nguyen. Robust pole placement with Moore's algorithm. IEEE Transactions on Automatic Control, 59(2):500–505, 2014.
[12] M. Ait Rami, S. E. Faiz, A. Benzaouia, and F. Tadeo. Robust exact pole placement via an LMI-based algorithm. IEEE Transactions on Automatic Control, 54(2):394–398, 2009.
[13] R. Byers and S. G. Nash. Approaches to robust pole assignment. International Journal of Control, 49(1):97–117, 1989.
[14] A. Varga. Robust pole assignment via Sylvester equation based state feedback parametrization. In IEEE International Symposium on Computer-Aided Control System Design, Anchorage, AK, USA, 2000.
[15] E. K. Chu. Pole assignment via the Schur form. Systems & Control Letters, 56(4):303–314, 2007.
[16] A. L. Tits and Y. Yang. Globally convergent algorithms for robust pole assignment by state feedback. IEEE Transactions on Automatic Control, 41(10):1432–1452, 1996.
[17] E. K. Chu. Optimization and pole assignment in control system design. International Journal of Applied Mathematics and Computer Science, 11(5):1035–1053, 2001.
[18] A. Pandey, R. Schmid, T. Nguyen, Y. Yang, V. Sima, and A. L. Tits. Performance survey of robust pole placement methods. In IEEE Conf. on Decision and Control, pages 3186–3191, Los Angeles, CA, USA, 2014.
[19] B. A. White. Eigenstructure assignment: a survey. Proceedings of the Institution of Mechanical Engineers, 209(1):1–11, 1995.
[20] L. F. Godbout and D. Jordan. Modal control with feedback-gain constraints. Proceedings of the Institution of Electrical Engineers, 122(4):433–436, 1975.
[21] B. Kouvaritakis and R. Cameron. Pole placement with minimised norm controllers. IEE Proceedings D - Control Theory and Applications, 127(1):32–36, 1980.
[22] H. K. Tam and J. Lam. Newton's approach to gain-controlled robust pole placement. IEE Proceedings - Control Theory and Applications, 144(5):439–446, 1997.
[23] A. Varga. A numerically reliable approach to robust pole assignment for descriptor systems. Future Generation Computer Systems, 19(7):1221–1230, 2003.
[24] A. Varga. Robust and minimum norm pole assignment with periodic state feedback. IEEE Transactions on Automatic Control, 45(5):1017–1022, 2000.
[25] S. Datta, B. Chaudhuri, and D. Chakraborty. Partial pole placement with minimum norm controller. In IEEE Conf. on Decision and Control, pages 5001–5006, Atlanta, GA, USA, 2010.
[26] S. Datta and D. Chakraborty. Feedback norm minimization with regional pole placement. International Journal of Control, 87(11):2239–2251, 2014.
[27] S. H. Wang and E. J. Davison. On the stabilization of decentralized control systems. IEEE Transactions on Automatic Control, 18(5):473–478, 1973.
[28] J. P. Corfmat and A. S. Morse. Decentralized control of linear multivariable systems. Automatica, 12(5):479–495, 1976.
[29] B. D. O. Anderson and D. J. Clements. Algebraic characterization of fixed modes in decentralized control. Automatica, 17(5):703–712, 1981.
[30] E. J. Davison and U. Ozguner. Characterizations of decentralized fixed modes for interconnected systems. Automatica, 19(2):169–182, 1983.
[31] M. E. Sezer and D. D. Siljak. Structurally fixed modes. Systems & Control Letters, 1(1):60–64, 1981.
[32] V. Pichai, M. E. Sezer, and D. D. Siljak. A graph-theoretic characterization of structurally fixed modes. Automatica, 20(2):247–250, 1984.
[33] V. Pichai, M. E. Sezer, and D. D. Siljak. A graphical test for structurally fixed modes. Mathematical Modelling, 4(4):339–348, 1983.
[34] A. Alavian and M. Rotkowitz. Fixed modes of decentralized systems with arbitrary information structure. In International Symposium on Mathematical Theory of Networks and Systems, pages 913–919, Groningen, The Netherlands, 2014.
[35] A. Alavian and M. Rotkowitz. Stabilizing decentralized systems with arbitrary information structure. In IEEE Conf. on Decision and Control, pages 4032–4038, Los Angeles, CA, USA, 2014.
[36] S. Pequito, G. Ramos, S. Kar, A. P. Aguiar, and J. Ramos. The robust minimal controllability problem. Automatica, 82:261–268, 2017.
[37] S. Pequito, S. Kar, and A. P. Aguiar. A framework for structural input/output and control configuration selection in large-scale systems. IEEE Transactions on Automatic Control, 61(2):303–318, 2016.
[38] S. Moothedath, P. Chaporkar, and M. N. Belur. Minimum cost feedback selection for arbitrary pole placement in structured systems. IEEE Transactions on Automatic Control, 63(11):3881–3888, 2018.
[39] K. B. Petersen and M. S. Pedersen. The Matrix Cookbook. Technical University of Denmark, 2012.
[40] J. R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley and Sons, 1999.
[41] D. Kressner and M. Voigt. Distance problems for linear dynamical systems. In Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory, pages 559–583. Springer, 2015.
[42] J. Rosenthal and J. C. Willems. Open problems in the area of pole placement. In Open Problems in Mathematical Systems and Control Theory, pages 181–191. Springer, 1999.
[43] J. Leventides and N. Karcanias. Sufficient conditions for arbitrary pole assignment by constant decentralized output feedback. Mathematics of Control, Signals and Systems, 8(3):222–240, 1995.
[44] D. G. Luenberger and Y. Ye. Linear and Nonlinear Programming. Springer, 2008.
[45] P. A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
[46] P. Lancaster. The Theory of Matrices. Academic Press, 1985.
[47] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, 1988.
[48] S. P. Bhattacharyya and E. De Souza. Pole assignment via Sylvester's equation. Systems & Control Letters, 1(4):261–263, 1982.
[49] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, 2002.
[50] S. P. Bhattacharyya and E. De Souza. Controllability, observability and the solution of AX − XB = C. Linear Algebra and its Applications, 39(1):167–188, 1981.
[51] R. Escalante and M. Raydan. Alternating Projection Methods. SIAM, 2011.
[52] A. S. Lewis and J. Malick. Alternating projections on manifolds. Mathematics of Operations Research, 33(1):216–234, 2008.
[53] M. Fu. Pole placement via static output feedback is NP-hard. IEEE Transactions on Automatic Control, 49(5):855–857, 2004.
[54] K. J. Reinschke. Multivariable Control: A Graph-Theoretic Approach. Springer, 1988.