2016 American Control Conference (ACC), Boston Marriott Copley Place, July 6-8, 2016, Boston, MA, USA

The Observability Radius of Network Systems

Gianluca Bianchin, Paolo Frasca, Andrea Gasparri, and Fabio Pasqualetti

Abstract— This paper introduces the observability radius of network systems, which measures the robustness of a network to perturbations of the edges. We consider linear networks, where the dynamics are described by a weighted adjacency matrix, and dedicated sensors are positioned at a subset of nodes. We allow for perturbations of certain edge weights, with the objective of preventing observability of some modes of the network dynamics. Our work considers perturbations with a desired sparsity structure, thus extending the classic literature on the controllability and observability radius of linear systems. We propose an optimization framework to determine a perturbation with smallest Frobenius norm that renders a desired mode unobservable from a given set of sensor nodes. We derive optimality conditions and a heuristic optimization algorithm, which we validate through an example.

This material is based upon work supported in part by NSF awards #BCS-1430279 and #ECCS-1405330. Gianluca Bianchin and Fabio Pasqualetti are with the Mechanical Engineering Department, University of California at Riverside, [email protected], [email protected]. Andrea Gasparri is with the Department of Engineering, Roma Tre University, [email protected]. Paolo Frasca is with the Department of Applied Mathematics, University of Twente, [email protected].

978-1-4673-8682-1/$31.00 ©2016 AACC

I. INTRODUCTION

Network systems are broadly used to model engineering, social, and natural systems. An important property of such systems is their robustness to different contingencies, including failure of components affecting the flow of information, external disturbances altering individual node dynamics, and variations in the network topology and weights. It remains an outstanding problem to quantify how different topological features enable robustness, and to engineer complex network systems that ensure functionality and operability in the face of arbitrary and perhaps malicious perturbations.

Observability of a network system guarantees the ability to reconstruct the state of individual nodes from sparse measurements. While observability is a binary notion [1], the degree of observability (or controllability) of a network can be quantified in different ways, including the energy associated with the measurements [2], [3], the novelty of the output signal [4], the number of necessary sensor nodes [5], [6], [7], and the robustness to removal of interconnection edges [8]. A graded notion of observability is preferable, as it allows us to compare different networks, select optimal sensor nodes, and identify features favoring observability.

In this work we measure the robustness of a network based on the size of the smallest perturbation needed to prevent observability. Our notion of robustness is motivated by the fact that network weights are rarely known without uncertainty, and observability is a generic property [9]. For these reasons, numerical tests to assess observability may be unreliable and in fact fail to recognize unobservable systems; instead, our notion of robustness, that is, the size of the smallest perturbation preventing observability or, equivalently, the distance to the nearest unobservable realization, can be measured more reliably [10]. Among our contributions, we highlight connections between the robustness of a network and its structure, and we propose an algorithmic procedure to construct optimal perturbations. Our work finds applicability, for instance, in network control problems where the network weights are time varying, in security applications where an attacker gains control of some network edges, and in network science for the classification of network edges and the design of robust complex networks.

Related work: Our notion of robustness is inspired by classic works on the observability radius of dynamical systems [11], [12], [13], which is defined as the norm of the smallest perturbation yielding unobservability. In particular, for a linear system with matrices (A, C), the radius of observability is

  µ(A, C) = min_{∆A, ∆C} ‖ [∆A ; ∆C] ‖₂
  s.t. (A + ∆A, C + ∆C) is unobservable.

As a known result [12], the observability radius satisfies

  µ(A, C) = min_s σ_n( [sI − A ; C] ),

where s ∈ R, or s ∈ C if complex perturbations are allowed. The optimal perturbations ∆A and ∆C are typically full matrices and, to the best of our knowledge, all existing results and procedures are not applicable to the case where the perturbations must satisfy a desired sparsity constraint (e.g., see [14]). This scenario is in fact the relevant one for network systems, where the nonzero entries of the network matrices A and C correspond to existing network edges, and it would be undesirable or unrealistic for a perturbation to modify the interaction of disconnected nodes. A notable exception is [8], where, however, the discussion is limited to edge removal.

We depart from the literature by requiring the perturbation to be real, with a desired sparsity pattern, and confined to the network matrix, that is, ∆C = 0. Our approach builds on the theory of total least squares [15], [16]. With respect to existing results, our work proposes tailored procedures for network systems, fundamental bounds, and insights into the robustness of different network topologies. Although our presentation focuses on perturbations preventing observability, the extension to controllability is straightforward.

Contributions: The contribution of this paper is twofold. First, we define a metric of network robustness that captures the resilience of a network system to structural perturbations (Section II).
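The binary eigenvector test and the graded radius discussed above can both be sketched numerically. The snippet below is an illustrative Python sketch (function names, tolerance, and grid are ours, not from the paper): `is_observable` declares a pair (A, C) unobservable when some right eigenvector of A lies, numerically, in the kernel of C (assuming A is diagonalizable), while `obs_radius_real_s` approximates the unstructured radius µ(A, C) by sweeping real s and taking the smallest singular value of [sI − A; C]:

```python
import numpy as np

def is_observable(A, C, tol=1e-9):
    # Eigenvector (PBH-type) test: (A, C) is unobservable iff some right
    # eigenvector x of A satisfies C x = 0. Assumes A is diagonalizable.
    _, vecs = np.linalg.eig(A)
    return all(np.linalg.norm(C @ vecs[:, k]) >= tol
               for k in range(A.shape[0]))

def obs_radius_real_s(A, C, s_grid):
    # Unstructured radius: mu(A, C) = min over s of the smallest singular
    # value of the stacked matrix [sI - A; C]; here a grid search over
    # real s only (a finer grid or a local minimizer tightens the estimate).
    n = A.shape[0]
    return min(np.linalg.svd(np.vstack([s * np.eye(n) - A, C]),
                             compute_uv=False)[-1] for s in s_grid)

# Mode s = 2 is invisible from the single sensor: the binary test fails,
# and the grid search finds a (numerically) zero radius near s = 2.
A = np.diag([1.0, 2.0])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))                                  # False
print(obs_radius_real_s(A, C, np.linspace(0.0, 3.0, 301)))
```

For an observable pair (e.g., C = I₂) the test returns True and the grid search returns a strictly positive value; the structured radius studied in this paper is instead obtained by restricting the perturbation to the sparsity pattern of the network.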
Our metric evaluates the distance of a network from the set of unobservable networks with the same interconnection structure, and it extends existing works on the controllability and observability radius of linear systems. Second, we formulate an optimization problem to determine optimal perturbations (with smallest Frobenius norm) preventing observability. We show that the problem is not convex, derive optimality conditions, and show that any optimal solution solves a (nonlinear) generalized eigenvalue problem (Section III-A). Based on this analysis, we propose a numerical procedure based on the power iteration method to determine (sub)optimal solutions (Section III-B). Finally, we analytically compute optimal perturbations for three-dimensional line networks (Section III-C), and we validate the effectiveness of our numerical procedure in practice.

II. PROBLEM SETUP AND PRELIMINARY RESULTS

Consider a directed graph G = (V, E), where V = {1, . . . , n} and E ⊆ V × V are the vertex and edge sets, respectively. Let A = [a_ij] be the weighted adjacency matrix of G, where a_ij ∈ R denotes the weight associated with the edge (i, j) ∈ E, and a_ij = 0 whenever (i, j) ∉ E. Let e_i denote the i-th canonical vector of dimension n. Let O = {o_1, . . . , o_p} ⊆ V be the set of sensor nodes, and define the network output matrix as

  C_O = [e_{o_1} · · · e_{o_p}]^T.

Let x_i(t) ∈ R denote the state of node i at time t, and let x : N_{≥0} → R^n be the map describing the evolution over time of the network state. The network dynamics are described by the discrete time-invariant linear system

  x(t + 1) = A x(t),  y(t) = C_O x(t),  (1)

where y : N_{≥0} → R^p is the output of the sensor nodes O.

In this work we characterize structured network perturbations that prevent observability from the sensor nodes. To this aim, let H = (V_H, E_H) be the constraint graph, and define the set of matrices compatible with H as

  A_H = {M : M ∈ R^{|V|×|V|}, M_ij = 0 if (i, j) ∉ E_H}.

Recall from the eigenvector observability test that the network (1) is observable if and only if there is no right eigenvector of A that lies in the kernel of C_O, that is, C_O x ≠ 0 whenever x ≠ 0, Ax = λx, and λ ∈ C [21]. We consider and study the following optimization problem:

  min ‖∆‖²_F
  s.t. (A + ∆)x = λx,  (eigenvalue constraint)
       ‖x‖₂ = 1,  (eigenvector constraint)   (2)
       C_O x = 0,  (unobservability)
       ∆ ∈ A_H,  (structural constraint)

where the minimization is carried out over the network perturbation ∆ ∈ R^{n×n}, the eigenvector x ∈ C^n, and the unobservable eigenvalue λ ∈ C. The cost function ‖∆‖²_F quantifies the total cost, in the form of the magnitude of the edge perturbation, needed to achieve unobservability, and the constraint ∆ ∈ A_H encodes the desired sparsity pattern of the perturbation. It should be observed that (i) the minimization problem (2) is not convex because the variables ∆ and x are multiplied by each other in the eigenvalue constraint (A + ∆)x = λx, (ii) if A ∈ A_H, then the minimization problem is feasible if and only if there exists a network matrix Ã = A + ∆ ∈ A_H satisfying the eigenvalue constraint, and (iii) if H = G, then the perturbation modifies the weights of existing edges only. We make the following assumption, which implies that ∆ must be nonzero to satisfy the constraints in (2):

(A1) The pair (A, C_O) is observable.

The minimization problem (2) can be solved by two subsequent steps. First, we fix the eigenvalue λ and compute an optimal perturbation that solves the minimization problem for the fixed eigenvalue. This computation is the topic of the next section. Second, we search the complex plane for the optimal eigenvalue λ yielding the perturbation with minimum cost. We observe that (i) the exhaustive search of the optimal eigenvalue is an inherent feature of this class of problems, as also highlighted in prior work [13]; (ii) in some cases and for certain network topologies the optimal λ can be found analytically; and (iii) in several practical applications, the choice of λ is guided by the structure of the problem or it is a given constraint in the optimization.

III. OPTIMALITY CONDITIONS AND ALGORITHMS FOR THE NETWORK OBSERVABILITY RADIUS

We now consider problem (2) with a fixed choice for λ. Let ∆* be an optimal solution to (2). Then, we refer to ‖∆*‖²_F as the observability radius of the network A with sensor nodes O, constraint graph H, and unobservable eigenvalue λ.

A. Optimal network perturbation

In this section we manipulate the minimization problem (2) to facilitate its solution. Without affecting generality, relabel the network nodes so that the sensor nodes satisfy O = {1, . . . , p}, and hence

  C_O = [I_p  0].  (3)
(3) H O { } O Accordingly, = M : M R|V|×|V|,Mij = 0 if (i, j) . AH { ∈ 6∈ EH}     A11 A12 ∆11 ∆12 Recall from the eigenvector observability test that the net- A = , and ∆ = , (4) A21 A22 ∆21 ∆22 work (1) is observable if and only if there is no right p p p n p n p p eigenvector of A that lies in the kernel of C , that is, where A11 R × , A12 R × − , A21 R − × , O and ∈n p n p. Let ∈ be the∈ unweighted C x = 0 whenever x = 0, Ax = λx, and λ C [21]. A22 R − × − V = [vij] O 6 6 ∈ ∈ We consider and study the following optimization problem: adjacency matrix of , where vij = 1 if (i, j) , and H ∈ EH vij = 0 otherwise. Following the partitioning of A in (4), let 2 min ∆ F,   k k V11 V12 s.t. (A + ∆)x = λx, (eigenvalue constraint), V = . V21 V22 x 2 = 1, (eigenvector constraint), (2) We perform the following three simplifying steps. k k C x = 0, (unobservability), (1 – Rewriting the structural constraints) Let B = A + ∆, O 2 Pn Pn 2 and notice that ∆ F = (bij aij) . Then, ∆ , (structural constraint), k k i=1 j=1 − ∈ AH the minimization problem (2) can equivalently be rewritten where the minimization is carried out over the network restating the constraint ∆ , as in the following equality: ∈ AH n n n n n perturbation ∆ R × , the eigenvector x C , and the X X ∈ ∈ 2 2 2 2 1 unobservable eigenvalue . The cost function ∆ F = B A F = (bij aij) v− . λ C ∆ F k k k − k − ij quantifies the total cost in the∈ form of magnitude ofk edgek i=1 j=1 186 2 Notice that ∆ F = whenever ∆ does not satisfy the the diagonal matrix with entries d1, . . . , dn. We now derive structural constraint,k k that∞ is, when v = 0 and b = a . the optimality conditions for the minimization problem (7). ij ij 6 ij (2 – Minimization with real variables) Let λ = λ + iλ , Theorem 3.3: (Optimality conditions) Let x∗ , and x∗ be where i denotes the imaginary unit. Let < = a solution to the minimization problem (7). 
Then,< =  1   1          x x A¯ N¯ M¯ x∗ Sx Tx y1 x = 2< , and x = 2= , − < = σ , < x = x M¯ A¯ N¯ x∗ Tx Qx y2 < = − − = | {z } | {z∗ } | {z } |{z∗ } denote the real and imaginary parts of the eigenvector x, A˜ x Dx y 1 p 1 p 2 n p 2 n p T (8) with x R , x R , x R − , and x R − . We  ¯ ¯ ¯        < ∈ = ∈ < ∈ = ∈ A N M y1 Sy Ty x∗ now prove that the minimization problem (2) can be restated − = σ < , M¯ A¯ N¯ y2 Ty Qy x∗ and solved over real variables only. − − = | {z } |{z∗ } | {z } | {z∗ } Lemma 3.1: (Minimization with real eigenvector con- A˜T y Dy x straint) The constraint (A + ∆)x = λx can equivalently 2n for some σ > 0 and y∗ R with y∗ = 1, and where be written as ∈ k k D = diag(v , . . . , v , v , . . . , v , . . . , v , . . . , v ), (A + ∆ λ I)x = λ x , 1 11 1n 21 2n n1 nn − < < − = = (5) (A + ∆ λ I)x = λ x . D2 = diag(v11, . . . , vn1, v12, . . . , vn2, . . . , v1n, . . . , vnn), < = = < − T T Proof: By considering separately the real and imaginary Sx = (I x∗ ) D1(I x∗ ),Tx = (I x∗ ) D1(I x∗ ), ⊗ < ⊗ < ⊗ < ⊗ = part of the eigenvalue constraint, we have (A+∆)x = λ x+ T T < Qx = (I x∗ ) D1(I x∗ ),Sy = (I y1) D2(I y1), iλ x and (A + ∆)¯x = λ x iλ x¯, where x¯ denotes the ⊗ = ⊗ = ⊗ ⊗ = < − = T T complex conjugate of x. Notice that Ty = (I y1) D2(I y2),Qy = (I y2) D2(I y2). ⊗ ⊗ ⊗ ⊗ (9) (A + ∆)(x +x ¯) = (λ + iλ )x + (λ iλ )¯x, | {z } | < = {z < − = } Proof: We adopt the method of Lagrange multipliers (A+∆)2x 2λ 0. We obtain that − < = p σ Let σ = `T` + `T` , and observe that σ cannot be zero. Aαx˜ = α2D βy = ασD y, 1 1 2 2 αβ x x Indeed, due to Assumption (A1), the optimal perturbation can ˜T σ 2 not be zero; thus, the first constraint in (7) must be active A βy = β Dyαx = βσDyx, αβ and the corresponding multiplier must be nonzero. Then, we which concludes the proof. define y1 = `1/σ and y2 = `2/σ, and verify that Lemma 3.5 shows that a (sub)optimal network perturba-  2 L1VX¯ + L2VX¯ x = σ (Sxy1 + Txy2) , tion can in fact be constructed by solving equations (19). 
It < = <  2 should be observed that, if the matrices S , T , Q , S , T , L1VX¯ + L2VX¯ x = σ (Txy1 + Qxy2) , x x x y y < = = and Qy were constant, then (19) would describe a general- and ized eigenvalue problem, and a solution (¯σ, z) would be a pair of generalized eigenvalue and eigenvector. These facts T ¯ ¯ T ¯  T ¯ ¯  σ y1 (A N) y2 M = `1 L1VX + L2VX will be exploited in the next section to develop a heuristic − − < = 2 2 2 T algorithm to compute a (sub)optimal network perturbation. = σ Syx + Tyx , < = T ¯ ¯ T ¯  T ¯ ¯  σ y2 (A N) + y1 M = `2 L1VX + L2VX − < = B. A heuristic procedure to compute structural perturbations 2 2 2 T = σ Tyx + Qyx , < = In this section we propose an algorithm to find a solution which conclude the proof. to the set of nonlinear equations (19), and therefore to find a Note that equations (8) may admit multiple solutions, and (sub)optimal solution to the minimization problem (2). Our that every solution to (8) yields a network perturbation that procedure is motivated by (19) and Corollary 3.4: at each satisfies the constraints in the minimization problem (7). We iteration, we fix a vector z, compute the corresponding matrix now state a result to compute of feasible perturbations. D, and approximate an eigenvector associated with the smallest generalized eigenvalue of the pair (H,D). Because Corollary 3.4: (Minimum norm perturbation) Let ∆∗ be n p the size of the perturbation is bounded by the generalized a solution to (2). Then, ∆∗ = [0 × ∆¯ ∗], where eigenvalue σ as motivated in Corollary 3.4, we adopt an  ∆¯ ∗ = σ diag(y1)V¯ diag(x∗ ) diag(y2)V¯ diag(x∗ ) , iterative procedure based on the inverse iteration method for − < − = the computation of the smallest eigenvalue of a matrix [22]. and x∗ , x∗ , y1, y2, σ satisfy the equations (8). Moreover, We remark that our procedure is heuristic, because (19) is in < = 2 2 T T T fact a nonlinear generalized eigenvalue problem due to the ∆ = σ x∗ D x∗ = σx∗ A˜ y∗ σ A˜ F. 
k kF y ≤ k k dependency of the matrix D on the eigenvector z. Proof: The expression for the perturbation ∆∗ comes We start by characterizing certain properties of H and D, from Lemma 3.2 and (17), and the fact that L1 = σ diag(y1), which will be used to derive our algorithm. Let L = σ diag(y ). To show the second part, notice that 2 2 spec(H,D) = λ C : det(H λD) = 0 , { ∈ − } 2 2 ¯ ¯ 2 ∆ F = A B F = L1VX + L2VX F and recall that the pencil (H,D) is regular if the determinant k k k − k k < =k 2 X X 2 2 2 2  det(H λD) does not vanish for all values of λ [23]. Notice = σ y1ix j + y2ix j vij < = − i j that, if (H,D) is not regular, then spec(H,D) = C. 2 T T T Lemma 3.6: (Generalized eigenvalues of (H,D)) Given = σ x∗ Dyx∗ = σx∗ A˜ y∗, a vector z R2n+2p, define the matrices H and D as in (19). ∈ where the last equalities follow from (8). Finally, the theorem Then, follows from x∗ 2 = x∗ F = y∗ 2 = y∗ F = 1. (i) 0 spec(H,D); k k k k k k k k 188 ∈ (ii) if λ spec(H,D), then λ spec(H,D); and Algorithm 1: Heuristic solution to (19) (iii) if (H,D∈ ) is regular, then− spec∈(H,D) . R Input: Matrix H; max iterations maxiter; ψ (0.5, 1). ⊂ ∈ Proof: Notice that statement (i) is equivalent to Ax˜ = 0 Output: σ and z satisfying (19), or fail. and A˜Ty = 0, for some vectors x and y. Because A˜T repeat 1 (2n 2p) 2n ˜T ∈ z (H µD)− Dz; R − × with p 1, the matrix A features a nontrivial ≥ σ ← z −; null space. Thus, the two equations are satisfied with x = 0 ← k k and y Ker(A˜T), and the claimed statement follows. z z/σ; ∈ µ ←= ψ min σ spec(H,D): σ > 0 ; To prove statement (ii) notice that, due to the block · { ∈ } structure of H and D, if the triple (λ, x,¯ y¯) satisfies the update D according to (9); generalized eigenvalue equations A˜Ty¯ = λD x¯ and A˜x¯ = i i + 1 y until ←convergence or i > max ; λD y¯, so does the triple ( λ, x,¯ y¯). 
iter x return (σ + µ, z) or fail if i = max ; To show statement (iii),− let Rank−(D) = k n, and notice iter that the regularity of the pencil (H,D) implies≤ Hz¯ = 0 whenever Dz¯ = 0 and z¯ = 0. Notice that (H,D) has n6 k infinite eigenvalues [23]6 because Hz¯ = λDz¯ = λ 0−for encoded in the matrix H according to the definitions (6), (8) every nontrivial z¯ Ker(D). Because D is symmetric,· it and (19). Although convergence of Algorithm 1 is not ∈ n k guaranteed, numerical studies show that it performs well in admits an orthonormal basis of eigenvectors. Let V1 R × contain the orthonormal eigenvectors of D associated∈ with its practice; see Fig. 1 for a numerical validation. nonzero eigenvalues, let ΛD be the corresponding diagonal 1/2 C. Optimal perturbations and algorithm validation matrix of the eigenvalues, and let T1 = V1ΛD− . Then, T ˜ T ˜ T1 DT1 = I. Let H = T1 HT1, and notice that H is In this section validate Algorithm 1 on a three nodes line k k symmetric. Let T2 R × be an orthonormal matrix of network. We first provide the following analytical result on ∈ the eigenvectors of H˜ . Let T = T1T2 and note that optimal perturbations of three dimensional line networks. Theorem 3.7: (Optimal perturbations of 3-dimensional T THT = Λ, and T TDT = I, line networks with fixed λ C) Consider a network with ∈ where is a diagonal matrix. To conclude, consider the graph = ( , ), where = 3, weighted adjacency matrix Λ G V E |V| generalized eigenvalue problem . Let .   Hz¯ = λDz¯ z¯ = T z˜ a11 a12 0 Because T has full column rank k, we have A = a21 a22 a23 , T THT z˜ = Λ˜z = λT TDT z˜ = λz,˜ 0 a32 a33 and sensor node = 1 . Let B = [b ] = A + ∆∗, where which implies that (H,D) has k real eigenvalues. O { } ij ∆∗ solves the minimization problem (2) with = and Lemma 3.6 implies that the inverse iteration method is H G λ = λ + iλ C with λ = 0. Then: not directly applicable to (19). 
In fact, the zero eigenvalue < = ∈ = 6 of (H,D) leads the inverse iteration to instability, while the b11 = a11, b21 = a21, b12 = 0, presence of eigenvalues of with equal magnitude may (H,D) and b , b , b , b , satisfy: induce non-decaying oscillations in the solution vector. To 22 23 32 33 overcome these issues, we employ a shifting mechanism as b33 b22 (b22 a22) (b33 a33) + − (b23 a23) = 0, detailed in Algorithm 1, where the eigenvector z is iteratively − − − b32 − b updated by solving the equation (H µD)zk+1 = Dzk 23 − (b32 a32) (b23 a23) = 0, until a convergence criteria is met. Notice that (i) the − − b32 − eigenvalues of (H µD, D) are shifted with respect to the 2λ + b22 + b33 = 0, − < eigenvalues of (H,D), that is, if σ spec(H,D), then 2 2 ∈ b22b33 b23b32 λ + λ = 0. σ + µ spec(H µD, D), (ii) the pairs (H µD, D) − − < = (20) and (H,D∈ ) share− the same eigenvectors, and (iii)− by se- Proof: Let x satisfy Bx = λx and notice that, because lecting µ = ψ min σ spec(H,D): σ > 0 , the · { ∈ } λ is unobservable, C x = [1 0 0]x = 0. Then, x = pair (H µD, D) has nonzero eigenvalues with distinct O [x x x ]T, x = 0, b = a , and b = a . magnitude.− Thus, Algorithm 1 estimates the eigenvector z 1 2 3 1 11 11 21 21 We now show that x = 0. By contradiction and let x = associated with the smallest nonzero eigenvalue σ of (H,D), 2 2 0. Notice that the relation6 Bx = λx implies b = λ, which and converges when z and σ also satisfy equations (19). The 33 contradicts the assumption that λ = 0 and b33 R. parameter ψ determines a compromise between numerical = Because x = 0, the relation Bx6 = λx and x ∈= 0 imply stability and convergence speed; larger values of ψ improve 2 1 b = 0. Additionally,6 λ is an eigenvalue of the convergence speed. 12   When convergent, Algorithm 1 finds a solution to (19) and, b22 b23 B2 = . consequently, it allows to compute a (sub)optimal network b32 b33 perturbation preventing observability of a desired eigenvalue. 
The characteristic of B2 is All information about the network matrix, the sensor nodes, P (s) = s2 (b + b )s + b b b b . the constraint graph, and the unobservable eigenvalue is B2 − 22 33 22 33 − 23 32 189 1.4 hand, we explicitly characterize the observability radius

Then, for λ to be an eigenvalue of B2, we must have

  b22 + b33 − 2λ_ℜ = 0,  and
  b22 b33 − b23 b32 − λ_ℜ² − λ_ℑ² = 0.   (21)

The Lagrange function of the minimization problem with cost function ‖∆*‖²_F = Σ_{i=2}^3 Σ_{j=2}^3 (b_ij − a_ij)² and constraints (21) reads as

  L(b22, b23, b32, b33, p1, p2) = d22² + d23² + d32² + d33²
      + p1 (b22 + b33 − 2λ_ℜ)
      + p2 (b22 b33 − b23 b32 − (λ_ℜ² + λ_ℑ²)),

where p1 ∈ R and p2 ∈ R are Lagrange multipliers, and d_ij = b_ij − a_ij. By equating the partial derivatives of L to zero we obtain

  ∂L/∂b22 = 0  ⇒  2 d22 + p1 + p2 b33 = 0,   (22)
  ∂L/∂b33 = 0  ⇒  2 d33 + p1 + p2 b22 = 0,   (23)
  ∂L/∂b23 = 0  ⇒  2 d23 − p2 b32 = 0,   (24)
  ∂L/∂b32 = 0  ⇒  2 d32 − p2 b23 = 0,   (25)

together with (21). The statement follows by substitution of the Lagrange multipliers p1 and p2 into (22)–(25).

To validate Algorithm 1, we compute optimal perturbations for 3-dimensional line networks based on Theorem 3.7, and we compare them with the perturbations obtained at different iterations of Algorithm 1. As shown in Fig. 1, Algorithm 1 determines a network perturbation with minimum norm.

Fig. 1. This figure validates the effectiveness of Algorithm 1 in computing optimal perturbations for the line network in Section III-C. The plot shows the mean and standard deviation over 100 networks of the difference ‖∆* − ∆^(i)‖_F between ∆*, obtained via the optimality conditions (20), and ∆^(i), computed at the i-th iteration of Algorithm 1 (vertical axis from −0.2 to 1.4, iterations i from 0 to 80). The unobservable eigenvalue is λ = i, and the values a_ij are chosen independently and uniformly distributed in [0, 1].

IV. CONCLUSION

In this work we introduce the notion of observability radius for network systems, which measures the ability to maintain observability of the network modes against perturbations of the edge weights. The paper contains two sets of results. On the one hand, we perform a rigorous analysis to characterize network perturbations preventing observability and describe a heuristic algorithm to compute perturbations with minimum Frobenius norm. On the other hand, we explicitly characterize the observability radius of three-dimensional line networks, and we validate our heuristic algorithm. Several aspects are left as the subject of future investigation, including a characterization of the observability radius of random networks, the computation of eigenvalues requiring perturbations with minimum norm, and a study of the relation between the topology of a network and its observability robustness to structured perturbations.

REFERENCES

[1] R. E. Kalman, Y. C. Ho, and S. K. Narendra, "Controllability of linear dynamical systems," Contributions to Differential Equations, vol. 1, no. 2, pp. 189–213, 1963.
[2] F. Pasqualetti, S. Zampieri, and F. Bullo, "Controllability metrics, limitations and algorithms for complex networks," IEEE Transactions on Control of Network Systems, vol. 1, no. 1, pp. 40–52, 2014.
[3] F. L. Cortesi, T. H. Summers, and J. Lygeros, "Submodularity of energy related controllability metrics," in IEEE Conference on Decision and Control, Los Angeles, CA, USA, Dec. 2014, pp. 2883–2888.
[4] G. Kumar, D. Menolascino, and S. Ching, "Input novelty as a control metric for time varying linear systems," arXiv preprint arXiv:1411.5892, 2014.
[5] A. Olshevsky, "Minimal controllability problems," IEEE Transactions on Control of Network Systems, vol. 1, no. 3, pp. 249–258, Sept. 2014.
[6] S. Pequito, G. Ramos, S. Kar, A. P. Aguiar, and J. Ramos, "On the exact solution of the minimal controllability problem," arXiv preprint arXiv:1401.4209, 2014.
[7] N. Monshizadeh, S. Zhang, and M. K. Camlibel, "Zero forcing sets and controllability of dynamical systems defined on graphs," IEEE Transactions on Automatic Control, vol. 59, no. 9, pp. 2562–2567, 2014.
[8] M. Fardad, A. Diwadkar, and U. Vaidya, "On optimal link removals for controllability degradation in dynamical networks," in IEEE Conference on Decision and Control, Los Angeles, CA, USA, Dec. 2014, pp. 499–504.
[9] W. M. Wonham, Linear Multivariable Control: A Geometric Approach, 3rd ed. Springer, 1985.
[10] C. Paige, "Properties of numerical algorithms related to computing controllability," IEEE Transactions on Automatic Control, vol. 26, no. 1, pp. 130–138, 1981.
[11] R. Eising, "Between controllable and uncontrollable," Systems & Control Letters, vol. 4, no. 5, pp. 263–264, 1984.
[12] D. L. Boley and W.-S. Lu, "Measuring how far a controllable system is from an uncontrollable one," IEEE Transactions on Automatic Control, vol. 31, no. 3, pp. 249–251, 1986.
[13] G. Hu and E. J. Davison, "Real controllability/stabilizability radius of LTI systems," IEEE Transactions on Automatic Control, vol. 49, no. 2, pp. 254–257, 2004.
[14] M. Karow and D. Kressner, "On the structured distance to uncontrollability," Systems & Control Letters, vol. 58, no. 2, pp. 128–132, 2009.
[15] Y. Nievergelt, "Total least squares: State-of-the-art regression in numerical analysis," SIAM Review, vol. 36, no. 2, pp. 258–264, 1994.
[16] B. De Moor, "Total least squares for affinely structured matrices and the noisy realization problem," IEEE Transactions on Signal Processing, vol. 42, no. 11, pp. 3104–3113, 1994.
[17] A. Rahmani, M. Ji, M. Mesbahi, and M. Egerstedt, "Controllability of multi-agent systems from a graph-theoretic perspective," SIAM Journal on Control and Optimization, vol. 48, no. 1, pp. 162–186, 2009.
[18] A. Chapman and M. Mesbahi, "On symmetry and controllability of multi-agent systems," in IEEE Conference on Decision and Control, Los Angeles, CA, USA, Dec. 2014, pp. 625–630.
[19] F. Pasqualetti and S. Zampieri, "On the controllability of isotropic and anisotropic networks," in IEEE Conference on Decision and Control, Los Angeles, CA, USA, Dec. 2014, pp. 607–612.
[20] A. J. Whalen, S. N. Brennan, T. D. Sauer, and S. J. Schiff, "Observability and controllability of nonlinear networks: The role of symmetry," Physical Review X, vol. 5, no. 1, p. 011005, 2015.
[21] T. Kailath, Linear Systems. Prentice-Hall, 1980.
[22] L. N. Trefethen and D. Bau III, Numerical Linear Algebra. SIAM, 1997.
[23] G. H. Golub and C. F. Van Loan, Matrix Computations. JHU Press, 2012.
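As a numerical illustration of Theorem 3.7, the sketch below builds a perturbed 3-node line network B whose entries are hand-picked (not from the paper, and not the minimum-norm solution) to satisfy the eigenvalue conditions in (20) for λ = i, namely b12 = 0, b22 + b33 = 2λ_ℜ, and b22 b33 − b23 b32 = λ_ℜ² + λ_ℑ², and then verifies via the eigenvector test that λ is indeed unobservable from sensor node 1:

```python
import numpy as np

# Perturbed 3-node line network with sensor node O = {1} and target
# eigenvalue lambda = 1j. With lambda_R = 0 and |lambda|^2 = 1, the
# eigenvalue conditions of Theorem 3.7 require b12 = 0, b22 + b33 = 0,
# and b22*b33 - b23*b32 = 1. The values a11 = 0.5 and a21 = 0.3 below
# are arbitrary placeholders for the untouched entries.
lam = 1j
B = np.array([[0.5,  0.0, 0.0],   # b11 = a11, b12 forced to zero
              [0.3,  0.0, 1.0],   # b22 = 0, b23 = 1
              [0.0, -1.0, 0.0]])  # b32 = -1, b33 = 0
C = np.array([[1.0, 0.0, 0.0]])   # sensor at node 1

# x = (0, 1, i) is a right eigenvector of B for lambda = 1j ...
x = np.array([0.0, 1.0, 1j])
print(np.allclose(B @ x, lam * x))   # True
# ... and it lies in the kernel of C, so lambda is unobservable.
print(np.linalg.norm(C @ x))
```

Since the eigenvector has a zero entry at the sensor node, the eigenvector observability test of Section II certifies that the mode λ = i cannot be reconstructed from the output; the minimum-norm such B is the one singled out by the remaining stationarity conditions in (20).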