
W&M ScholarWorks

Dissertations, Theses, and Masters Projects

1996

Structured eigenvectors, interlacing, and matrix completions

Brenda K. Kroschel College of William & Mary - Arts & Sciences

Follow this and additional works at: https://scholarworks.wm.edu/etd

Part of the Mathematics Commons

Recommended Citation
Kroschel, Brenda K., "Structured eigenvectors, interlacing, and matrix completions" (1996). Dissertations, Theses, and Masters Projects. Paper 1539623883. https://dx.doi.org/doi:10.21220/s2-5k4y-xe76

This Dissertation is brought to you for free and open access by the Theses, Dissertations, & Master Projects at W&M ScholarWorks. It has been accepted for inclusion in Dissertations, Theses, and Masters Projects by an authorized administrator of W&M ScholarWorks. For more information, please contact [email protected].

INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order. UMI, A Bell & Howell Information Company, 300 North Zeeb Road, Ann Arbor MI 48106-1346 USA, 313/761-4700, 800/521-0600

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

STRUCTURED EIGENVECTORS,

INTERLACING,

AND MATRIX COMPLETIONS

A Dissertation Presented to the

Applied Science Department of

The College of William and Mary

In Partial Fulfillment of the

Requirements for the Degree

Doctor of Philosophy

by

Brenda K. Kroschel

April 1996

UMI Number: 9701100

UMI Microform 9701100 Copyright 1996, by UMI Company. All rights reserved. This microform edition is protected against unauthorized copying under Title 17, United States Code.

UMI 300 North Zeeb Road Ann Arbor, MI 48103

APPROVAL SHEET

This dissertation is submitted in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

Brenda K. Kroschel, Author

Approved, April 1996

Charles R. Johnson

Marc Sher, Physics

Ilya Spitkovsky

David Stanford

Hugo J. Woerdeman

Richard O. Hill, Department of Mathematics, Michigan State University


This dissertation is lovingly dedicated to my husband, Dave, without whose love and support this work would not be possible, and to my daughter, Nicole, who was not around at the beginning of this endeavor, but who makes everything more meaningful.


Contents

Acknowledgments v

List of Figures vi

Abstract viii

1 Introduction 2

1.1 Notation and Matrix Theoretic Background ...... 2

1.1.1 Eigenvectors and Eigenvalues ...... 3

1.1.2 Classical Interlacing Inequalities ...... 4

1.1.3 The Singular Value Decomposition ...... 4

1.1.4 Jacobi's Identity ...... 6

1.1.5 The Schur Complement ...... 7

1.2 Graph Theoretic Background ...... 7

1.3 Partial Matrices ...... 11

1.4 Overview ...... 12

2 Structured Eigenvectors 15

2.1 Main Result ...... 17

2.2 Structured Eigenvector Applications ...... 26

2.2.1 Application to Singular Values ...... 28


2.3 A Perturbation Result 30

3 Interlacing 35

3.1 Breadth and Depth of Interlacing ...... 35

3.2 Interlacing Diagrams ...... 40

3.3 Extended Interlacing Intervals ...... 47

3.3.1 Main Result ...... 48

3.3.2 Rank Deficient Off-Diagonal Blocks ...... 53

3.3.3 A Union of Two Intervals ...... 55

3.3.4 Applications to Singular Values and

Lehmann's Intervals ...... 60

4 Matrix Completions 63

4.1 Positive Semidefinite Completions ...... 64

4.2 The Euclidean Distance Completion Problem: Cycle Completability . 72

4.2.1 Statement of Main Result ...... 75

4.2.2 Proofs of Necessary Implications ...... 79

4.3 The Combinatorially Symmetric P-matrix

Completion Problem ...... 90

4.4 The Totally Positive Completion Problem ...... 96

4.5 The Maximum Minimum Eigenvalue

Completion Problem ...... 113

Bibliography 122

Vita 126


ACKNOWLEDGMENTS

I would like to express my appreciation to my advisor, Dr. Charles R. Johnson, whose excitement is contagious. I thank him for his patient guidance and careful mentoring throughout my studies. In addition, I would like to thank Dr. Hugo J. Woerdeman for his careful reading of this dissertation and his assistance with documentation. I would also like to thank Dr. Richard O. Hill and Catherine A. Jones with whom some of this dissertation is joint work.


List of Figures

1.1 Wheel, Wk ...... 9

1.2 Example of a non-chordal (G1) and a chordal graph (G2) ...... 9

1.3 G(A) ...... 10

1.4 Edge subdivision ...... 10

1.5 Vertex partition ...... 11

3.1 Interlacing diagram example ...... 41

3.2 General interlacing diagram ...... 41

3.3 Diagram ruled out by biorthogonality ...... 42

3.4 Diagram ruled out by breadth and depth corollaries ...... 44

3.5 Forbidden interlacing sub-diagram ...... 45

3.6 3-level interlacing diagram that is not ruled out ...... 46

3.7 Term rank: G(A) ...... 55

4.1 Cycle ...... 65

4.2 Triangulation ...... 67

4.3 G(A) ...... 68

4.4 W4 ...... 75

4.5 Non-minimal cycle ...... 77

4.6 Clique sum ...... 78


4.7 Bad data for a k-wheel ...... 80

4.8 Bad data for a W4 ...... 81

4.9 Bad data for vertex partitioning ...... 82

4.10 Data for a triangle ...... 85

4.11 Replacing edge a, b with the cycle C ...... 87

4.12 Monotonically labeled block clique graph ...... 97

4.13 Partial matrix with a monotonically labeled block clique graph . . . . 98


ABSTRACT

This dissertation presents results from three areas of applicable matrix analysis: structured eigenvectors, interlacing, and matrix completion problems. Although these are distinct topics, the structured eigenvector results provide connections.

It is a straightforward matrix calculation that if λ is an eigenvalue of A, x an associated eigenvector, and α the set of positions in which x has nonzero entries, then λ is also an eigenvalue of the submatrix of A that lies in the rows and columns indexed by α. We present a converse to this statement and apply the results to interlacing and to matrix completion problems. Several corollaries are obtained that lead to results concerning the case of equality in the interlacing inequalities for Hermitian matrices, and to the problem of the relationship among eigenvalue multiplicities in various submatrices.

Classical interlacing for an Hermitian matrix A may be viewed as describing how many eigenvalues of A must be captured by intervals determined by eigenvalues of a principal submatrix of A. We generalize the classical interlacing theorems by using singular values of off-diagonal blocks of A to construct extended intervals that capture a larger number of eigenvalues. The union of pairs of intervals is also discussed, and applications are mentioned.

The matrix completion results that we present include the positive semidefinite cycle completion problem for matrices with data from the complex numbers, cycle completability conditions, the P-matrix completion problem, and the totally nonnegative completion problem. We show that the positive semidefinite cycle completion problem for matrices with complex data is a special case of a larger real positive semidefinite completion problem. In addition, we characterize those graphs for which the cycle conditions on all minimal cycles imply that a partial distance matrix has a distance matrix completion. We also prove that every combinatorially symmetric partial P-matrix has a P-matrix completion, and we characterize the class of graphs for which every partial totally nonnegative matrix has a totally nonnegative completion. The structured eigenvector results are used to give a new solution of the maximum minimum eigenvalue completion problem for partial Hermitian matrices with a chordal graph.


Structured Eigenvectors, Interlacing, and Matrix Completions

Chapter 1

Introduction

This dissertation presents results from three areas of matrix theory: structured

eigenvectors (chapter 2), interlacing (chapter 3), and matrix completions (chapter 4).

Although these are distinct topics in matrix theory, the structured eigenvector results

of chapter 2 provide connections. We will begin by providing some notation and

background material followed by brief introductions to the chapters. More detailed

background and introductions are provided within each of the chapters.

1.1 Notation and Matrix Theoretic Background

The set of all m-by-n matrices with entries from a field F will be denoted by Mm,n(F), and if m = n, Mm,n(F) will be abbreviated to Mn(F). If F = ℂ, the complex numbers, we will often shorten this notation to Mm,n. For A ∈ Mm,n(F) the notation A = (aij) will indicate that the entries of A are aij ∈ F, i = 1, 2, ..., m, j = 1, 2, ..., n.

For A ∈ Mm,n(F), α ⊆ {1, 2, ..., m}, and β ⊆ {1, 2, ..., n}, the submatrix of A lying in the rows indexed by α and the columns indexed by β will be denoted A[α; β]. Similarly, A(α; β) is the matrix that results from the deletion of the rows indexed by α and the columns indexed by β. If A ∈ Mn(F) and α = β, then the principal submatrix A[α; α] is abbreviated to A[α] and the complementary principal submatrix is A(α). In the same fashion, for a vector x ∈ F^n, x[α] denotes the entries of x in the positions indexed by α and x(α) denotes the complementary vector. We will often denote the sets {1, 2, ..., m} and {1, 2, ..., n} by M and N, respectively.

For α, β ⊆ N the set difference α − β denotes the set of all elements in α that are not in β. The set N − α will also be denoted by α^c and N − β by β^c. Note that this means that A(α; β) = A[α^c; β^c]. The notation α + {n} will indicate the set that results from adding n to every element of α. For example, if α = {1, 3, 4} and n = 4, then α + {n} = {5, 7, 8}. The cardinality of a set α will be denoted by |α|.

1.1.1 Eigenvectors and Eigenvalues

For an n-by-n matrix A with entries from a field F, the nonzero vector x is a right eigenvector of A associated with λ if Ax = λx for some scalar λ. Similarly, y ≠ 0 is a left eigenvector if yᵀA = λyᵀ. The scalar λ is an eigenvalue of A, and σ(A) denotes the set of all eigenvalues of A, some of which may lie only in an extension field of F. The subspace of F^n spanned by the set of all eigenvectors of A associated with λ is called the eigenspace of A associated with λ. The dimension of the eigenspace of A associated with λ is the geometric multiplicity of λ as an eigenvalue of A and is denoted throughout by gλ(A). The principle of biorthogonality (see, e.g., theorem 1.4.7 [HJ1]) says that if λ, μ ∈ σ(A) with λ ≠ μ, then any left eigenvector of A corresponding to μ is orthogonal to any right eigenvector of A corresponding to λ.
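The definitions above can be checked concretely; the following sketch (an illustrative example of ours, not taken from the dissertation) verifies a right eigenvector, a left eigenvector, and the principle of biorthogonality for a small non-Hermitian matrix:

```python
# Illustrative example (not from the dissertation): verify the definitions
# of left/right eigenvectors and biorthogonality for A = [[2, 1], [0, 3]],
# which has eigenvalues 2 and 3.

def mat_vec(A, x):
    """Matrix-vector product Ax."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def vec_mat(y, A):
    """Row-vector-matrix product y^T A."""
    return [sum(y[i] * A[i][j] for i in range(len(y))) for j in range(len(A))]

A = [[2, 1],
     [0, 3]]

x = [1, 1]    # right eigenvector for the eigenvalue 3: Ax = 3x
y = [1, -1]   # left eigenvector for the eigenvalue 2: y^T A = 2 y^T

assert mat_vec(A, x) == [3 * xi for xi in x]
assert vec_mat(y, A) == [2 * yi for yi in y]

# Biorthogonality: the eigenvalues 2 and 3 differ, so y^T x must vanish.
assert sum(yi * xi for yi, xi in zip(y, x)) == 0
print("biorthogonality verified")
```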


1.1.2 Classical Interlacing Inequalities

An important result concerning the eigenvalues of Hermitian matrices is the interlacing eigenvalues theorem for bordered matrices (see, e.g., theorem 4.3.8 [HJ1]). Let A ∈ Mn be Hermitian and α ⊆ N be such that |α| = n − 1, and let the ordered eigenvalues of A be λ1 ≤ λ2 ≤ ··· ≤ λn and those of A[α] be μ1 ≤ μ2 ≤ ··· ≤ μn−1. Then

λ1 ≤ μ1 ≤ λ2 ≤ ··· ≤ λn−1 ≤ μn−1 ≤ λn.

That is, the eigenvalues of an Hermitian matrix and any of its (n − 1)-by-(n − 1) principal submatrices "interlace." Another way that classical interlacing may be viewed is that each interval [λi, λi+1], i = 1, 2, ..., n − 1, contains at least one eigenvalue from every (n − 1)-by-(n − 1) principal submatrix of A. In addition, for j ≠ i the interval [λj, λj+1] captures a μs in which s ≠ i. However, we may also say that every interval [μi, μi+1], i = 1, 2, ..., n − 2, contains a different eigenvalue of A (the other two eigenvalues are in (−∞, μ1] and [μn−1, ∞)). As interlacing is applied to successively smaller principal submatrices we find that, if λ1 ≤ λ2 ≤ ··· ≤ λn−p are the eigenvalues of a principal (n − p)-by-(n − p) submatrix of A, then the interval [λi, λj], 0 < i < j ≤ n − p, contains at least j − i − p + 1 eigenvalues of A.
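As a numerical illustration of classical interlacing (an example of ours, not from the text), consider the 3-by-3 tridiagonal matrix with 2 on the diagonal and 1 off it, whose spectrum is known in closed form:

```python
# Illustrative check (not from the dissertation) of the interlacing
# inequalities for a 3-by-3 Hermitian matrix and its leading 2-by-2
# principal submatrix.  Both spectra are known in closed form.
import math

# Eigenvalues of A = [[2,1,0],[1,2,1],[0,1,2]]: 2 + 2*cos(k*pi/4), k = 1,2,3.
lam = sorted(2 + 2 * math.cos(k * math.pi / 4) for k in (1, 2, 3))

def charpoly(t):
    """det(A - t I) for A = [[2,1,0],[1,2,1],[0,1,2]], by cofactor expansion."""
    return (2 - t) * ((2 - t) ** 2 - 1) - (2 - t)

for t in lam:
    assert abs(charpoly(t)) < 1e-9   # confirm the closed-form spectrum

# Eigenvalues of the principal submatrix A[{1,2}] = [[2,1],[1,2]]: 1 and 3.
mu = [1.0, 3.0]

# Classical interlacing: lam1 <= mu1 <= lam2 <= mu2 <= lam3.
assert lam[0] <= mu[0] <= lam[1] <= mu[1] <= lam[2]
print([round(v, 3) for v in lam], "interlaced by", mu)
```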

1.1.3 The Singular Value Decomposition

Another important result is the singular value decomposition of a matrix (see, e.g., theorem 7.3.5 [HJ1]). For m ≥ n, if the matrix A ∈ Mm,n then A may be written in the form A = VΣW* in which V ∈ Mm, W ∈ Mn are unitary, and

Σ = [ diag(σ1, σ2, ..., σn) ]
    [           0           ]

(diag(d1, d2, ..., dn) denotes the diagonal matrix with the di on the diagonal). The values σi are the singular values of A and are the nonnegative square roots of the eigenvalues of AA*. Usually the singular values are ordered in a nonincreasing fashion, i.e. σ1 ≥ σ2 ≥ ··· ≥ σn ≥ 0. Note that when it is convenient we will think of A as having singular values σ1 ≥ σ2 ≥ ··· ≥ σn ≥ σn+1 = ··· = σm = 0. The columns of V are unit eigenvectors of AA* and are called the left singular vectors of A. Similarly, the columns of W are unit eigenvectors of A*A and are called the right singular vectors of A. The case in which n > m is analogous for Σ = [ diag(σ1, σ2, ..., σm)  0 ].

The matrix AA* is Hermitian, and since the singular values are the nonnegative square roots of the eigenvalues of AA* it follows that there is also an interlacing theorem for the singular values of a matrix. Let A ∈ Mm,n be given and let Â be the matrix obtained by deleting one column from A. For m ≥ n let σ1 ≥ σ2 ≥ ··· ≥ σn ≥ 0 be the singular values of A and σ̂1 ≥ σ̂2 ≥ ··· ≥ σ̂n−1 ≥ 0 the singular values of Â. Then

σ1 ≥ σ̂1 ≥ σ2 ≥ σ̂2 ≥ ··· ≥ σ̂n−1 ≥ σn ≥ 0.

There are analogous singular value interlacing results for m < n and if a row of A is deleted instead of a column.

Oftentimes there are close analogues to eigenvalue results for the singular values. This is due to the fact that for the matrix A ∈ Mm,n the matrix

[ 0   A ]
[ A*  0 ]

is Hermitian and has eigenvalues that are plus and minus the singular values of A (with possibly some extra zeros). This is a fact dating back to Wielandt [Wi] and is used in both chapters 2 and 3 of this dissertation to generalize the main results of these chapters.
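Wielandt's observation is easy to verify directly on a tiny example (ours, not the dissertation's): for the 1-by-2 matrix A = [1 2], whose single singular value is √5, the 3-by-3 block matrix has eigenvalues ±√5 and one extra zero:

```python
# Illustrative check (not from the dissertation) of Wielandt's observation:
# for A in M_{m,n}, the Hermitian block matrix [[0, A], [A*, 0]] has
# eigenvalues +/- the singular values of A, padded with zeros.
import math

# A = [1 2] is 1-by-2 with A A* = [5], so its singular value is sqrt(5).
s = math.sqrt(5.0)

# The block matrix [[0, A], [A^T, 0]] written out as a 3-by-3 matrix.
M = [[0, 1, 2],
     [1, 0, 0],
     [2, 0, 0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def is_eigenpair(lam, v, tol=1e-12):
    Mv = mat_vec(M, v)
    return all(abs(Mv[i] - lam * v[i]) < tol for i in range(3))

# Eigenpairs: +sqrt(5), -sqrt(5), and one extra zero (since 1 + 2 = 3 > 2).
assert is_eigenpair(+s, [s, 1, 2])
assert is_eigenpair(-s, [-s, 1, 2])
assert is_eigenpair(0.0, [0, 2, -1])
print("spectrum of the block matrix:", (s, -s, 0.0))
```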

1.1.4 Jacobi’s Identity

If A ∈ Mn(F) is nonsingular, then the minors of A⁻¹ are related to those of A by Jacobi's identity. Jacobi's identity states (see, e.g., section 0.8.4 [HJ1]) that for α, β ⊆ N, both nonempty, in which |α| = |β|,

det A⁻¹[α; β] = (−1)^(s(α)+s(β)) · det A(β; α) / det A,    (1.1)

in which s(α) = Σ_{i∈α} i. Observe that if α and β have cardinality 1, i.e. α = {i}, β = {j}, 1 ≤ i, j ≤ n, then (1.1) becomes

a⁻¹_ij = (−1)^(i+j) · det A[N − {j}; N − {i}] / det A,

in which a⁻¹_ij denotes the i, j entry of A⁻¹. This expression is the adjoint formula for the inverse of a matrix. Thus, Jacobi's identity is a generalization of the adjoint formula.
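As a concrete check (an illustrative example of ours), the adjoint-formula case of (1.1) can be verified in exact arithmetic for a small integer matrix:

```python
# Illustrative check (not from the dissertation) of the singleton case of
# Jacobi's identity, i.e. the adjugate formula
#     (A^{-1})_{ij} = (-1)^{i+j} det A(j; i) / det A,
# where A(j; i) deletes row j and column i.  Exact arithmetic via Fraction.
from fractions import Fraction

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]

def det(M):
    """Determinant by cofactor expansion along the first row (small M only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def deleted(M, i, j):
    """M with row i and column j deleted (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

d = det(A)
# Inverse entry (i, j) comes from the minor deleting row j and column i.
Ainv = [[Fraction((-1) ** (i + j) * det(deleted(A, j, i)), d)
         for j in range(3)] for i in range(3)]

# Sanity check: A * Ainv is the identity.
for i in range(3):
    for j in range(3):
        assert sum(A[i][k] * Ainv[k][j] for k in range(3)) == (1 if i == j else 0)
print("det A =", d, " (A^-1)_{12} =", Ainv[0][1])
```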

1.1.5 The Schur Complement

For ∅ ≠ α ⊆ N and A ∈ Mn, if A[α] is nonsingular, then the Schur complement of A[α] in A is the matrix

A[α^c] − A[α^c; α] (A[α])⁻¹ A[α; α^c].

Let A be partitioned as

A = [ A11  A12 ]
    [ A21  A22 ]

in which A11 is nonsingular. Then

[ I           0 ] [ A11  A12 ] [ I  −A11⁻¹A12 ]   [ A11  0 ]
[ −A21A11⁻¹   I ] [ A21  A22 ] [ 0      I     ] = [  0   S ]

in which S is the Schur complement of A11 in A. It is easy to see that A is then nonsingular if and only if S is nonsingular, and det A = det A11 det S. Moreover, A is positive (semi)definite if and only if S is positive (semi)definite. For more information on Schur complements see [C].
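The determinant identity det A = det A11 det S is easy to confirm on a small example (ours, not from the text), using exact rational arithmetic:

```python
# Illustrative check (not from the dissertation) that for a partitioned
# matrix with A11 nonsingular, det A = det A11 * det S, where
# S = A22 - A21 (A11)^{-1} A12 is the Schur complement of A11 in A.
from fractions import Fraction

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A11 = [[2, 1], [1, 2]]
A12 = [[1, 0], [0, 1]]
A21 = [[0, 1], [1, 0]]
A22 = [[3, 1], [1, 3]]
A = [[2, 1, 1, 0],      # the 4-by-4 matrix assembled from the blocks above
     [1, 2, 0, 1],
     [0, 1, 3, 1],
     [1, 0, 1, 3]]

# (A11)^{-1} by the 2-by-2 adjugate formula, kept exact with Fraction.
d11 = det(A11)
A11inv = [[Fraction(A11[1][1], d11), Fraction(-A11[0][1], d11)],
          [Fraction(-A11[1][0], d11), Fraction(A11[0][0], d11)]]

T = mat_mul(mat_mul(A21, A11inv), A12)
S = [[A22[i][j] - T[i][j] for j in range(2)] for i in range(2)]

assert det(A) == det(A11) * det(S)   # 33 == 3 * 11
print("det A =", det(A), "= det A11 * det S =", det(A11), "*", det(S))
```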

1.2 Graph Theoretic Background

Many of the graph theoretic definitions and notation used in this dissertation are adopted from [G]. An undirected graph is a pair G = (V, E) in which V is a finite set called the vertex set and the set E, called the edge set, contains unordered pairs of elements from V called the edges of G. For vi, vj ∈ V the edge {vi, vj} ∈ E will often be abbreviated to vivj. If the graph is a directed graph the edge set contains ordered pairs. In this dissertation all of the graphs will be undirected, so graph will mean an undirected graph without loops or multiple edges. If G = (V, E) is a graph and vivj ∈ E, then vi and vj are said to be adjacent. A graph is said to be complete if every vertex is adjacent to every other vertex. The complete graph on n vertices is denoted Kn.

If G = (V, E) and H = (W, F) are graphs for which W ⊆ V and F ⊆ E, then H is a subgraph of G. If W ⊆ V then the subgraph G_W = (W, E_W) in which E_W = {vivj ∈ E : vi, vj ∈ W} is called the subgraph of G induced by W. An induced subgraph is called a clique if it is a complete graph. A clique that is not properly contained in any other clique is called maximal. The graph H = (V, F) is called a supergraph of G = (V, E) if F ⊇ E.

A path in a graph G = (V, E) is a sequence of vertices (v1, v2, ..., vk) such that vivi+1 ∈ E. The graph is said to be connected if there is a path from every vertex to every other vertex. If a graph is not connected, then each maximal connected induced subgraph is called a component of the graph.

A path that begins and ends with the same vertex, i.e. (v1, v2, ..., vk, v1), k ≥ 3, is called a cycle, and a simple cycle is a cycle for which the vertices v1, v2, ..., vk are distinct. In this dissertation the term cycle will mean a simple cycle. The cycle (v1, ..., vk, v1) is called a k-cycle and is denoted by Ck. A chord of a cycle Ck, k ≥ 4, in a graph G is an edge of G between two nonconsecutive vertices of Ck. A chordless cycle is a simple cycle that has no chords. A minimal cycle in a graph G is an induced subgraph of G that is a chordless cycle. A connected graph with no cycles is called a tree. A k-wheel is a (k − 1)-cycle with one additional vertex that is adjacent to every vertex of the cycle and is denoted by Wk.

Figure 1.1: Wheel, Wk

If a graph contains no chordless cycles of length 4 or more, then the graph is said to be chordal. For example,

Figure 1.2: Example of a non-chordal (G1) and a chordal graph (G2).

The graph G1 is not chordal since (1, 2, 3, 4, 1) is a cycle of length 4 that does not contain a chord. However, this same cycle in the graph G2 contains the chord {2, 4} and all other cycles of length 4 or more also contain a chord. Chordal graphs have received considerable attention largely due to their importance in the study of perfect elimination schemes for Gaussian elimination [G]. This class of graphs is also very important in the study of matrix completion problems as will be discussed in the next section.

Graphs are often used to represent the zero/nonzero structure of a matrix. For an n-by-n Hermitian (or symmetric) matrix A, the graph G(A) = (N, E) is the graph on n vertices for which {i, j} is an edge of G(A) exactly when aij ≠ 0 for i ≠ j (loops are omitted by convention, so whether or not aii is zero is irrelevant). For example, the matrix

A = [  1   2  −3  0 ]
    [  2  −1   4  0 ]
    [ −3   4   0  2 ]
    [  0   0   2  3 ]

has the associated graph

Figure 1.3: G(A)
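The convention just illustrated can be sketched in a few lines; the following (illustrative) snippet recovers the edge set of G(A) from the example matrix above:

```python
# A short sketch of the convention just described: build the edge set of
# G(A) from the zero/nonzero off-diagonal pattern of the example matrix.
A = [[ 1,  2, -3, 0],
     [ 2, -1,  4, 0],
     [-3,  4,  0, 2],
     [ 0,  0,  2, 3]]

n = len(A)
# {i, j} is an edge exactly when a_ij != 0 for i != j (1-based labels);
# the diagonal is ignored, since loops are omitted by convention.
edges = {(i + 1, j + 1) for i in range(n) for j in range(i + 1, n)
         if A[i][j] != 0}

assert edges == {(1, 2), (1, 3), (2, 3), (3, 4)}
print(sorted(edges))
```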

A new graph may be constructed from a given graph G by using one of several operations on the edges and/or vertices of G. An edge subdivision of a graph G on n vertices is a graph G' on n + 1 vertices that results from replacing an edge of G with two edges and a vertex between:

•——•   →   •——o——•

Figure 1.4: Edge subdivision.

A vertex partition of G (n vertices) is a graph G' (n + 1 vertices) in which a vertex (of degree at least 1) in G is replaced by two adjacent vertices that partition the neighbors of the original vertex; e.g.

Figure 1.5: Vertex partition.

We say that a graph G2 is homeomorphic to a graph G1 if G2 may be obtained from G1 via a finite sequence of (at least one) edge subdivisions. The graph G2 is built from G1 if G2 may be obtained from G1 via a finite sequence of (at least one) vertex partitions. Note that edge subdivision is a special case of vertex partitioning, so that homeomorphism is a special case of "built from." These operations will be used in chapter 4 in the distance matrix completion problem.

1.3 Partial Matrices

A partial matrix is one in which some entries are specified over a field F, while the remainder of the entries are unspecified and free to be chosen from F. It will be assumed throughout this dissertation that the diagonal entries are specified. In many cases this is a natural assumption. A combinatorially symmetric partial matrix A is one in which aji is specified whenever aij is specified. For example,

[  1   0   2   ? ]
[  1  −1   1   ? ]
[ −2   3   0   5 ]
[  ?   ?   1  −4 ]


in which '?' indicates an unspecified position, is a combinatorially symmetric partial matrix. A partial symmetric (Hermitian) matrix is a combinatorially symmetric partial matrix in which if aij is specified, then aji = aij (aji = āij). The specified positions of an n-by-n combinatorially symmetric partial matrix A may be described by an undirected graph G(A) on n vertices in which there is an edge between vertex i and vertex j exactly when aij is specified. Loops are omitted by convention.

A completion of a partial matrix is a choice of values for the unspecified entries

resulting in a conventional matrix. A matrix completion problem asks when a given

partial matrix has a completion with some desired property.
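The definition of combinatorial symmetry can be sketched directly (an illustrative example of ours, with None standing in for the unspecified entries of the partial matrix shown above):

```python
# A short sketch (not from the dissertation) of the definition above:
# a partial matrix, with None marking unspecified entries, is
# combinatorially symmetric when a_ji is specified whenever a_ij is.
A = [[ 1,    0,    2, None],
     [ 1,   -1,    1, None],
     [-2,    3,    0,    5],
     [None, None,  1,   -4]]

def is_combinatorially_symmetric(A):
    n = len(A)
    return all((A[i][j] is None) == (A[j][i] is None)
               for i in range(n) for j in range(n))

def diagonal_specified(A):
    return all(A[i][i] is not None for i in range(len(A)))

assert is_combinatorially_symmetric(A)
assert diagonal_specified(A)   # assumed throughout the dissertation
print("combinatorially symmetric with specified diagonal")
```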

1.4 Overview

By structured eigenvectors we mean eigenvectors that have 0's in (at least) certain specified positions. If the n-by-n matrix A has a structured eigenvector associated with λ of the form x = [x̂ᵀ 0]ᵀ, x̂ ∈ F^k, 0 < k < n, then by partitioned matrix multiplication, λ is also an eigenvalue of A[α] in which α = {1, 2, ..., k}, with associated eigenvector x̂. Note that eigenvalues are invariant under permutation similarity, so there is no loss of generality in assuming α = {1, 2, ..., k}. In chapter 2 we seek a converse: if λ is an eigenvalue of both A[α] and of A, is there a structured eigenvector of A associated with λ of the form [x̂ᵀ 0]ᵀ, x̂ ∈ F^k, 0 < k < n? As will be shown, the number of linearly independent left eigenvectors of this special form plus the number of linearly independent right eigenvectors with the given structure is at least g1 + g2 − k, in which g1 and g2 are the geometric multiplicities of λ as an eigenvalue of A and A[α], respectively. This result has remarkably many implications as will be seen in chapters 3 and 4.

multiplicities of a given eigenvalue among various submatrices. In chapter 3 we also

use singular values to extend classical interlacing intervals. If

A i(P ) < A2 (P) < ••• < An_p(B) are the eigenvalues of the (n — p)-by-(n — p)

submatrix. B. of the matrix

B C .4 = C “ D

then the singular values of the off diagonal block, C. are used to extend the classical

interlacing intervals [A,- (B ) , Xj {B)\. 0 < i < j < n — p and possibly capture more

than thej — i — p + 1 eigenvalues of A that classical interlacing insures. The main

result of chapter 3 says that the interval ^ —yJS2 + t + ^ 8 2 + in which crk is

the kth largest singular value of C. t = an([ ^ = captures at

least j — i — k + 2 eigenvalues of A. If B is (n — L)-by-(n — 1). and k = 2. then

and the interval becomes a classical interlacing interval. In this case, for j = i + I

the results of chapter 3 say that the interval contains at least (i + 1) — i — 2 + 2 = 1

eigenvalue of .4. Thus, classical interlacing is a special case of the main result of

chapter 3.

In chapter 4 we present results concerning a variety of matrix completion problems.

Among these are the positive definite cycle completion problem for matrices with data

from the complex numbers, distance matrix cycle completability conditions, the

P-matrix completion problem, and the totally nonnegative completion problem. We

show that the positive definite cycle completion problem for matrices with complex

data is a special case of a larger real positive definite completion problem. In addition


we characterize those graphs for which the cycle conditions on all minimal cycles

imply that a partial distance matrix has a distance matrix completion. We also prove

that every combinatorially symmetric partial P-matrix has a P-matrix completion

and we characterize the class of graphs for which every partial totally nonnegative

matrix, the graph of whose specified entries is in the class, has a totally nonnegative

completion. The structured eigenvector results of chapter 2 are used to solve the

maximum minimum eigenvalue completion problem for partial Hermitian matrices

with a chordal graph in a new way.

Chapter 2

Structured Eigenvectors

It is a straightforward partitioned matrix calculation that if λ is an eigenvalue of A, x an associated eigenvector, and α the set of positions in which x has entries not equal to zero, then λ is also an eigenvalue of A[α]. Converses to this statement are known in certain special situations. For example, it has been known for some time (and can be deduced from theorem 5 [D1]; see also [JK1]) that if A ∈ Mn(ℂ) is Hermitian, |α| = n − 1, and λ ∈ ℝ is an eigenvalue of both A and A[α], i.e. a case of equality in the interlacing inequalities, then there is an eigenvector x = (x1, x2, ..., xn) of A associated with λ such that if i ∉ α then xi = 0. For a general matrix A ∈ Mn(F) and λ an eigenvalue of A with geometric multiplicity k, the rank of A − λI is n − k. Then for |α| ≥ n − k the rank of A[α] − λI is at most n − k and λ is an eigenvalue of A[α]. Moreover, it is implicit in the proof of theorem 1.4.9 in [HJ1] that there is an eigenvector of A associated with λ all of whose components indexed by α^c are zero. It is our purpose here to give a converse to the opening statement that is, in some sense, the most general possible in terms of the data we use. A variety of statements, including those just mentioned, may then be easily recognized as special cases.



These results, as well as some special cases, will be valid over a general field F. For λ ∈ σ(A), denote the geometric multiplicity of λ in A by gλ(A). The most optimistic converse to the opening statement would be that if λ is an eigenvalue of both A and A[α], then there is an eigenvector x (of A associated with λ) in which all components of x(α) are zero. However, this is not always the case. Consider

A = [ 0  0  1  0 ]
    [ 0  1  0  0 ]
    [ 1  0  0  1 ]
    [ 0  0  1  0 ]

and the set α = {1, 2}. This matrix has zero as an eigenvalue, as does A[α], but any eigenvector of A associated with zero is of the form (a, 0, 0, −a)ᵀ. The converse cannot, therefore, be as general as one might hope.
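The counterexample can be verified mechanically (a sketch of ours, not from the text):

```python
# Verification sketch of the counterexample above: A and A[alpha]
# (alpha = {1, 2}) both have 0 as an eigenvalue, yet no nonzero null
# vector of A is supported on alpha.
A = [[0, 0, 1, 0],
     [0, 1, 0, 0],
     [1, 0, 0, 1],
     [0, 0, 1, 0]]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Both A and A[{1,2}] are singular, so 0 is an eigenvalue of each.
assert det(A) == 0
assert det([row[:2] for row in A[:2]]) == 0

# Every null vector of A has the form (a, 0, 0, -a):
assert mat_vec(A, [1, 0, 0, -1]) == [0, 0, 0, 0]

# ...while a vector supported on {1, 2} is never in the null space:
# A (a, b, 0, 0)^T = (0, b, a, 0)^T, which vanishes only when a = b = 0.
a, b = 5, -7          # arbitrary nonzero test values
assert mat_vec(A, [a, b, 0, 0]) == [0, b, a, 0]
print("counterexample verified")
```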

Before stating a converse, several definitions are needed. The main result will be stated in terms of the dimensions of special subspaces, of the left and right eigenspaces of a general matrix A ∈ Mn(F) associated with λ ∈ F, in which the vectors have support among the components indexed by α. These special subspaces (of the eigenspaces) are defined as follows:

LE_λ^α(A) = {y ∈ F^n : yᵀA = λyᵀ, y(α) = 0}

RE_λ^α(A) = {x ∈ F^n : Ax = λx, x(α) = 0}.

Similarly, let LN(A) and RN(A) denote the left and right nullspaces of A and define the special subspaces (of the nullspaces) LN_α(A) = LE_0^α(A) and RN_α(A) = RE_0^α(A).

It is clear that the dimensions of all these spaces are permutation similarity invariant, and, by assuming that α = {1, 2, ..., n − k}, this fact will be exploited repeatedly throughout this chapter without further mention. If x is any eigenvector of A associated with λ, then x is an eigenvector of A − λI associated with the eigenvalue zero. For this reason, results concerning the special nullspaces underlie observations concerning the special eigenspaces.

2.1 Main Result

For contrast to the main result, we note some preliminary facts that indicate circumstances under which both the left and right special subspaces are nontrivial. The rank deficiency of an n-by-n matrix A is n − rank(A) = g_0(A). For general matrices, when the rank deficiency of a principal submatrix is sufficiently large, the dimensions of the left and right nullspaces are positive (see the discussion preceding lemma 4.8). Suppose that the submatrix A[α] is such that its rank deficiency is greater than the number of rows or columns deleted from A to obtain A[α]; that is, for |α| = n − k, g_0(A[α]) > k. In this case, the rank of A[α] is n − k − g_0(A[α]) and the rank of A can be at most 2k more than the rank of A[α]. But then the rank deficiency of A is at least g_0(A[α]) − k. Since this number is positive, A is rank deficient and the left and right nullspaces of A are both nontrivial. The lemma below states that, in fact, the left and right special nullspaces of A are both nontrivial.

Lemma 2.1 Let A ∈ M_n(F) and let α ⊆ N be such that |α| = n − k.

(i) If g_0(A[α]) > k, then dim(LN_α(A)), dim(RN_α(A)) ≥ g_0(A[α]) − k.

(ii) Let 0 ≤ g_α ≤ min{k, |α|} be given. Then there is a matrix B such that g_0(B[α]) = g_α and dim(LN_α(B)) = dim(RN_α(B)) = 0.


Proof: It will be assumed, without loss of generality, that α = {1,2,...,n−k}. Then A has partitioned form

    A = [ A11  A12 ]
        [ A21  A22 ]                                                  (2.1)

in which A11 = A[α]. (Below, [A11; A21] denotes the n-by-(n−k) matrix obtained by stacking A11 on top of A21, and [A11 A12] the (n−k)-by-n matrix obtained by placing A11 beside A12.) In this case, if x ∈ RN_α(A), it is of the form x = (x̂^T, 0)^T in which x̂ ∈ F^{n−k}, and note that RN_α(A) = RN([A11; A21]). Similarly, any vector y ∈ LN_α(A) is of the form y^T = (ŷ^T, 0) in which ŷ ∈ F^{n−k}, and LN_α(A) = LN([A11 A12]).

Assume g_0(A11) > k, as in part (i) of the lemma. By elementary linear algebra,

    dim(RN_α(A)) = dim RN([A11; A21]) = n − k − rank([A11; A21]).     (2.2)

Since A21 is k-by-(n−k), the rank of A21 is less than or equal to min{k, n−k}. Therefore,

    rank([A11; A21]) ≤ rank(A11) + rank(A21)
                     = n − k − g_0(A11) + rank(A21)
                     ≤ n − k − g_0(A11) + k
                     = n − g_0(A11).


But then substituting this in (2.2) gives

    dim RN([A11; A21]) ≥ n − k − (n − g_0(A11)) = g_0(A11) − k.

So, dim(RN_α(A)) ≥ g_0(A11) − k. The proof that

    dim(LN_α(A)) = dim LN([A11 A12]) ≥ g_0(A11) − k

is analogous, and part (i) of the lemma is verified.

For part (ii) consider the matrix

        [ B11  B12 ]   [ 0        0            | I_{g_α}  0    ]
    B = [          ] = [ 0        I_{n−k−g_α}  | 0        0    ]      (2.3)
        [ B21  B22 ]   [ I_{g_α}  0            | 0        0    ]
                       [ 0        0            | 0        B̂22  ]

in which B11 is (n−k)-by-(n−k) with g_0(B11) = g_α, and B̂22 is a (k−g_α)-by-(k−g_α) block (taken to be anything convenient here). For this matrix, 0 ≤ g_α ≤ k, but there are no nonzero vectors in either LN_α(B) or RN_α(B), and part (ii) of the lemma is also proved. □
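The construction (2.3) is easy to instantiate. A sketch in numpy (the helper `build_B` and its parameter names are ours, not the thesis's; the lower-right free block is simply left zero):

```python
import numpy as np

# Instantiate (2.3): B has g_0(B[alpha]) = g_a yet trivial special nullspaces,
# because [B11; B21] has full column rank and [B11 B12] has full row rank.
def build_B(n, k, g_a):
    m = n - k                                  # |alpha|
    B = np.zeros((n, n))
    B[g_a:m, g_a:m] = np.eye(m - g_a)          # B11 = diag(0, I_{m-g_a})
    B[:g_a, m:m + g_a] = np.eye(g_a)           # identity block inside B12
    B[m:m + g_a, :g_a] = np.eye(g_a)           # identity block inside B21
    return B

n, k, g_a = 7, 3, 2
B = build_B(n, k, g_a)
B11 = B[:n - k, :n - k]
print(int((n - k) - np.linalg.matrix_rank(B11)))                        # 2 = g_a
print(int(np.linalg.matrix_rank(np.vstack([B11, B[n - k:, :n - k]]))))  # 4 = n-k
print(int(np.linalg.matrix_rank(np.hstack([B11, B[:n - k, n - k:]]))))  # 4 = n-k
```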

Replacement of A with A − λI in lemma 2.1 gives the following.


Theorem 2.1 Let A ∈ M_n(F) and let α ⊆ N be such that |α| = n − k.

(i) If g_λ(A[α]) > k, then dim(LE_α^λ(A)), dim(RE_α^λ(A)) ≥ g_λ(A[α]) − k.

(ii) Let 0 ≤ g_α ≤ min{k, |α|} be given. Then there is a matrix B such that g_λ(B[α]) = g_α and dim(LE_α^λ(B)) = dim(RE_α^λ(B)) = 0.

Statement (i) in theorem 2.1 is best possible when left and right eigenspaces are

considered separately. By considering the left and right eigenspaces simultaneously,

one arrives at a general converse to the opening statement. This main result will first

be stated in terms of the special nullspaces.

Lemma 2.2 Let A ∈ M_n(F) and let α ⊆ N be such that |α| = n − k.

(i) dim(LN_α(A)) + dim(RN_α(A)) ≥ g_0(A) + g_0(A[α]) − k.

(ii) Let g and g_α such that 0 ≤ g ≤ n, 0 ≤ g_α ≤ |α|, and |g − g_α| ≤ k be given. Then, if g + g_α − k ≥ 0, there is a matrix B such that g_0(B) = g, g_0(B[α]) = g_α, and

    dim(LN_α(B)) + dim(RN_α(B)) = g_0(B) + g_0(B[α]) − k.

If g + g_α − k < 0, then there is a matrix B, with the given parameters, such that dim(LN_α(B)) = dim(RN_α(B)) = 0.

Proof: Let A ∈ M_n(F) and partition A as in (2.1). Also, let

    i_r = rank([A11; A21]) − rank(A11)
    i_c = rank([A11 A12]) − rank(A11).                                (2.4)

Then

    dim RN([A11; A21]) = n − k − rank([A11; A21])
                       = n − k − (rank(A11) + i_r)
                       = g_0(A11) − i_r.

Similarly, dim LN([A11 A12]) = g_0(A11) − i_c. The sum of the dimensions of these two spaces is then

    dim RN([A11; A21]) + dim LN([A11 A12]) = 2 g_0(A11) − i_r − i_c.  (2.5)

Choose S, T ∈ M_{n−k}(F) nonsingular such that

    [ S  0 ] [ A11  A12 ] [ T  0 ]   [ 0   0  | Y1  ]
    [ 0  I ] [ A21  A22 ] [ 0  I ] = [ 0   I  | Y2  ] = A′,
                                     [ X1  X2 | A22 ]

in which the upper left zero block of A′ is g_0(A11)-by-g_0(A11), (Y1^T, Y2^T)^T = S A12, and [X1 X2] = A21 T. A second equivalence will zero out X2 and Y2:

    [ I  0    0 ]    [ I  0  0   ]   [ 0   0 | Y1   ]
    [ 0  I    0 ] A′ [ 0  I  −Y2 ] = [ 0   I | 0    ] = Ã,
    [ 0  −X2  I ]    [ 0  0  I   ]   [ X1  0 | Ã22  ]

in which Ã22 = A22 − X2 Y2.

Neither of these equivalences has disturbed the rank of A, of [A11; A21], or of [A11 A12]. So,

    rank([A11; A21]) = rank( [ 0   0 ] ) = rank(A11) + rank(X1)
                             [ 0   I ]
                             [ X1  0 ]

and

    rank([A11 A12]) = rank( [ 0  0  Y1 ] ) = rank(A11) + rank(Y1).
                            [ 0  I  0  ]

Then,

    rank(X1) = rank([A11; A21]) − rank(A11) = i_r

and

    rank(Y1) = rank([A11 A12]) − rank(A11) = i_c.

Since rank of the bordered block satisfies

    rank( [ 0   Y1  ] ) ≥ rank(Y1) + rank(X1),
          [ X1  Ã22 ]

we have

    rank(A) = rank(Ã) = rank(A11) + rank( [ 0   Y1  ] )              (2.6)
                                          [ X1  Ã22 ]
            ≥ rank(A11) + rank(Y1) + rank(X1)
            = rank([A11; A21]) + rank([A11 A12]) − rank(A11).

This was previously shown in a more general setting in [Wo] in another way, but we have included a proof here for completeness.

Using the definitions of i_r and i_c in (2.4) we have

    n − g_0(A) = rank(A) ≥ i_r + i_c + rank(A11) = i_r + i_c + n − k − g_0(A11).

Rearranging this inequality we have k − g_0(A) + g_0(A11) ≥ i_r + i_c, and substituting this in (2.5) yields

    dim RN([A11; A21]) + dim LN([A11 A12]) ≥ 2 g_0(A11) + (g_0(A) − g_0(A11) − k)
                                           = g_0(A) + g_0(A11) − k,

and part (i) of lemma 2.2 is proved.

There are two cases to consider in proving part (ii) of lemma 2.2. To begin, consider the case in which g + g_α − k < 0. Note that for this to be the case, g_α must be less than or equal to k. For the matrix B in (2.3), g_0(B11) = g_α and the submatrix B̂22 is (k − g_α)-by-(k − g_α). This submatrix can be chosen so that B has any rank deficiency, g, from 0 to k − g_α. Thus, B has the appropriate parameters and, as mentioned in the proof of lemma 2.1, B has dim(LN_α(B)) = dim(RN_α(B)) = 0.

For the case in which g + g_α − k ≥ 0, consider

        [ 0  0            | Y ]
    B = [ 0  I_{n−k−g_α}  | 0 ]
        [ X  0            | 0 ]

in which the upper left zero block is g_α-by-g_α, Y is g_α-by-k, and X is k-by-g_α. The submatrices Y and X can independently be chosen to have rank from zero to min{g_α, k}, inclusive, which gives B a rank deficiency, g, from |g_α − k| to g_α + k, inclusive. Now, note that there is equality in (2.6) and, therefore,

    dim(LN_α(B)) + dim(RN_α(B)) = g_0(B) + g_0(B11) − k,

which completes the proof of the lemma. □

Our main result, the proof of which follows from lemma 2.2 by translation, is then:

Theorem 2.2 Let A ∈ M_n(F) and let α ⊆ N be such that |α| = n − k.

(i) dim(LE_α^λ(A)) + dim(RE_α^λ(A)) ≥ g_λ(A) + g_λ(A[α]) − k.

(ii) Let g and g_α such that 0 ≤ g ≤ n, 0 ≤ g_α ≤ |α|, and |g − g_α| ≤ k be given. Then, if g + g_α − k ≥ 0, there is a matrix B such that g_λ(B) = g, g_λ(B[α]) = g_α, and

    dim(LE_α^λ(B)) + dim(RE_α^λ(B)) = g_λ(B) + g_λ(B[α]) − k.

If g + g_α − k < 0, then there is a matrix B, with the given parameters, such that dim(LE_α^λ(B)) = dim(RE_α^λ(B)) = 0.

In each of lemmas 2.1 and 2.2 and theorems 2.1 and 2.2, statement (ii) indicates that statement (i) is in some sense the best possible. The restrictions regarding g and g_α only avoid logical impossibilities and, otherwise, the situations not covered by statement (i) are covered in statement (ii).

At this point we make two general observations that are direct consequences of

theorem 2.2.

(1) If A ∈ M_n(F), |α| = n − 1, and λ is an eigenvalue of both A and A[α], then there is either a left or a right eigenvector of A (associated with λ) whose α^c component is zero.


(2) If A ∈ M_n(F), λ ∈ σ(A), and α ⊆ N with |α| = n − k are such that dim(LE_α^λ(A)) = dim(RE_α^λ(A)), then each of

    dim(LE_α^λ(A)), dim(RE_α^λ(A)) ≥ ⌈ (g_λ(A) + g_λ(A[α]) − k) / 2 ⌉.

In this event, if g_λ(A) + g_λ(A[α]) > k, then both dim(LE_α^λ(A)) and dim(RE_α^λ(A)) are positive.

Note that statement (1) does not follow from theorem 2.1 and is in some sense tight, as, under the given hypothesis, a general matrix may not have both a left special eigenvector and a right special eigenvector. For example,

        [  1  −1  0 ]
    A = [ −1   1  0 ]
        [  0  −1  1 ]

does not have the property assumed in (2) for 0 ∈ σ(A) and α = {1,2}: g_0(A) = 1 and g_0(A[{1,2}]) = 1. Thus, as every right null vector of A is a multiple of (1, 1, 1)^T, A has no special right eigenvector associated with 0, while it, of course, has a left such eigenvector, e.g. (1, 1, 0)^T, because of (1). Similarly, for many values of g_λ(A) and g_λ(A[α]), the conclusion of (2) does not follow from theorem 2.1.
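Both claims about this example can be verified directly; a brief numpy check:

```python
import numpy as np

# 0 is an eigenvalue of A and of A[{1,2}], with a left special eigenvector
# (1, 1, 0)^T but no right one (the right nullspace is spanned by (1, 1, 1)^T).
A = np.array([[ 1., -1.,  0.],
              [-1.,  1.,  0.],
              [ 0., -1.,  1.]])
print(np.linalg.matrix_rank(A))                           # 2, so g_0(A) = 1
print(np.linalg.matrix_rank(A[np.ix_([0, 1], [0, 1])]))   # 1, so g_0(A[{1,2}]) = 1
print(np.allclose(A @ np.array([1., 1., 1.]), 0))         # True: right null vector
print(np.allclose(np.array([1., 1., 0.]) @ A, 0))         # True: left null vector
```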

2.2 Structured Eigenvector Applications

We may now give several specific corollaries to theorem 2.2. First, note that if A ∈ M_n is normal, then, as U A U* = D for some U unitary and D diagonal, any left eigenspace of A is the conjugate transpose of a right eigenspace. Thus, the hypothesis of (2) above is satisfied for each λ and α. From this observation we can conclude the following.

Corollary 2.1 Let A ∈ M_n be normal. For α ⊆ N with |α| = n − k,

    dim(LE_α^λ(A)), dim(RE_α^λ(A)) ≥ ⌈ (g_λ(A) + g_λ(A[α]) − k) / 2 ⌉.

Of course, Hermitian matrices are normal, so the following is a special case of corollary 2.1.

Corollary 2.2 Let A ∈ M_n be Hermitian. For α ⊆ N with |α| = n − k,

    dim(LE_α^λ(A)), dim(RE_α^λ(A)) ≥ ⌈ (g_λ(A) + g_λ(A[α]) − k) / 2 ⌉.

In the opening paragraph we mentioned that if A is Hermitian, |α| = n − 1, and λ ∈ σ(A) ∩ σ(A[α]), then there is an eigenvector x of A (associated with λ) in which x(α) = 0. But then g_λ(A), g_λ(A[α]) ≥ 1, which results in a positive right-hand side in corollary 2.2. In this case, both the left and the right special eigenspaces are nontrivial, which proves the following corollary.

Corollary 2.3 Let A ∈ M_n be Hermitian, let α ⊆ N be such that |α| = n − 1, and let λ ∈ ℝ be an eigenvalue of A. Then, there is an eigenvector x of A associated with λ such that x(α) = 0 if and only if λ ∈ σ(A[α]).

Thus, the general scheme adopted here provides an algebraic proof of the statement in the opening paragraph.


2.2.1 Application to Singular Values

As noted in the introduction, there is also an interlacing property for singular values that follows from classical interlacing of eigenvalues of Hermitian matrices. Therefore, it is natural to ask if there are results for singular vectors that are similar to the above theorems. Let A be an m-by-n matrix and let σ_1 > σ_2 > ··· > σ_t, t ≤ min{m, n}, be the set of distinct nonzero singular values of A. Let m_j(A), j = 1, 2, ..., t, denote the multiplicity of σ_j as a singular value of A. Note that m_j(A) = g_{σ_j²}(A*A) = g_{σ_j²}(AA*). Then let A = VΣW* be the singular value decomposition of A. If σ_j has multiplicity m_j(A), then there are right singular vectors w_{j,1}, w_{j,2}, ..., w_{j,m_j} and left singular vectors v_{j,1}, v_{j,2}, ..., v_{j,m_j} such that

    A [ w_{j,1}  w_{j,2}  ···  w_{j,m_j} ] = σ_j [ v_{j,1}  v_{j,2}  ···  v_{j,m_j} ].

The right singular vector space of A associated with σ_j is

    span{ w_{j,1}, w_{j,2}, ..., w_{j,m_j} } = RS^{σ_j}(A)

and the left singular vector space of A associated with σ_j is

    span{ v_{j,1}, v_{j,2}, ..., v_{j,m_j} } = LS^{σ_j}(A).

Let M = {1, 2, ..., m} and, as usual, N = {1, 2, ..., n}. For α ⊆ M and β ⊆ N define the special singular vector subspaces

    LS_α^{σ_j}(A) = { v ∈ LS^{σ_j}(A) : v(α) = 0 }
    RS_β^{σ_j}(A) = { w ∈ RS^{σ_j}(A) : w(β) = 0 }.

Then

Theorem 2.3 Let A ∈ M_{m,n}(ℂ). Then for α ⊆ M, β ⊆ N with |α| = m − k_r, |β| = n − k_c,

    dim(RS_β^{σ_k}(A)), dim(LS_α^{σ_k}(A)) ≥ ⌈ (m_k(A) + m_k(A[α; β]) − k_r − k_c) / 2 ⌉.

Proof: Assume m ≥ n (if not, reverse the roles of m and n in the discussion that follows) and let σ_1 > σ_2 > ··· > σ_t be the distinct nonzero singular values of A. Let

    B = [ 0   A ]
        [ A*  0 ].

Then B is Hermitian with distinct eigenvalues −σ_1, −σ_2, ..., −σ_t, σ_t, ..., σ_2, σ_1, and possibly 0. With multiplicities considered, B has (m + n) − 2 rank(A) eigenvalues that are 0, and m − n of these zero eigenvalues do not correspond to singular values of A. Delete k_r + k_c rows and columns of B in such a way as to delete the k_r rows M − α and the k_c columns N − β from A. Also delete the corresponding k_r columns and k_c rows from A*. The resulting matrix is

    B̃ = B[α̃] = [ 0          A[α; β] ]
                 [ A[α; β]*   0       ]

in which α̃ = α ∪ {m + j : j ∈ β} and B̃ is (m + n − k_r − k_c)-by-(m + n − k_r − k_c). Corollary 2.2 gives

    dim(LE_α̃^{σ_k}(B)), dim(RE_α̃^{σ_k}(B)) ≥ ⌈ (g_{σ_k}(B) + g_{σ_k}(B̃) − k_r − k_c) / 2 ⌉.   (2.7)


The eigenvectors of B associated with σ_k are of the form (v^T, w^T)^T, in which w is a right singular vector of A associated with σ_k and v is a left singular vector of A associated with σ_k. This means that the dimension of the eigenspace associated with σ_k in which v(α) = 0 and w(β) = 0 is at least

    ⌈ (g_{σ_k}(B) + g_{σ_k}(B̃) − k_r − k_c) / 2 ⌉,

since g_{σ_k}(B) = m_k(A) and g_{σ_k}(B̃) = m_k(A[α; β]). Thus,

    dim(RS_β^{σ_k}(A)), dim(LS_α^{σ_k}(A)) ≥ ⌈ (m_k(A) + m_k(A[α; β]) − k_r − k_c) / 2 ⌉. □

Note that if m_0(A) denotes the multiplicity of 0 as a singular value of A, then g_0(B) = 2 m_0(A) + m − n and g_0(B̃) = 2 m_0(A[α; β]) + |(m − k_r) − (n − k_c)|, which when substituted in (2.7) yields

    dim(LN_α̃(B)), dim(RN_α̃(B)) ≥  m_0(A) + m_0(A[α; β]) + m − n − k_r   if m − k_r ≥ n − k_c,
                                    m_0(A) + m_0(A[α; β]) − k_c           if m − k_r < n − k_c.
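The Hermitian embedding used in this proof can be illustrated numerically. A sketch (numpy; the random test matrix is our own choice):

```python
import numpy as np

# The matrix B = [[0, A], [A*, 0]] is Hermitian; its eigenvalues are
# +/- sigma_i(A) together with |m - n| extra zeros, and its eigenvectors for
# sigma_k stack a left singular vector v on top of a right singular vector w.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
B = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])

sigma = np.linalg.svd(A, compute_uv=False)
eigs = np.sort(np.linalg.eigvalsh(B))
expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(m - n)]))
print(np.allclose(eigs, expected))   # True
```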

2.3 A Perturbation Result

Corollary 2.3 characterizes the case in which A is Hermitian, |α| = n − 1, and there is equality in the interlacing inequalities. We next consider approximate versions of the two equivalent statements in corollary 2.3, i.e. the case in which there is an eigenvalue of A[α] that is "nearby" an eigenvalue of A, or there is an eigenvector, x, of A associated with λ that has a component that is "small" relative to the other components of x. Let A_j denote the j-th column of the matrix A and note that ‖·‖_2 is the usual Euclidean norm.

Theorem 2.4 Let A ∈ M_n be Hermitian, let α ⊆ N be such that |α| = n − 1 with α^c = {j}, and let λ ∈ σ(A) with associated eigenvector x.

(i) There is a μ ∈ σ(A[α]) such that

    |λ − μ| ≤ |x_j| ‖A_j(j)‖_2 / ( ‖x‖_2² − |x_j|² )^{1/2}.

(ii) If λ ∉ σ(A[α]), then

    |x_j| = ‖x‖_2 / ( ‖(A[α] − λI)^{-1} A[α; α^c]‖_2² + 1 )^{1/2}.

Note that ‖x‖_2² − |x_j|² = 0 if and only if x(j) = 0.

Proof: Let x be an eigenvector of A associated with λ. Let A_j(j) denote the vector formed by deleting the j-th entry of A_j, and note that A(∅; {j}) (respectively, A({j}; ∅)) is the n-by-(n−1) (respectively, (n−1)-by-n) matrix in which the j-th column (row) of A has been deleted. By matrix-vector multiplication, deleting the j-th column of A results in

    A(∅; {j}) x(j) = Ax − x_j A_j.                                    (2.8)

Further deletion of the j-th row yields

    A(j) x(j) = A({j}; ∅) x − x_j A_j(j) = λ x(j) − x_j A_j(j).

Then

    ‖A(j) x(j) − λ x(j)‖_2 = |x_j| ‖A_j(j)‖_2.

By [P, theorem 4-5-1], for any scalar λ and any nonzero vector x, there is an eigenvalue γ of an n-by-n Hermitian matrix B satisfying

    |γ − λ| ≤ ‖Bx − λx‖_2 / ‖x‖_2.

Then, for the scalar λ there is a μ ∈ σ(A(j)) for which

    |μ − λ| ≤ ‖A(j) x(j) − λ x(j)‖_2 / ‖x(j)‖_2 = |x_j| ‖A_j(j)‖_2 / ( ‖x‖_2² − |x_j|² )^{1/2},

which proves (i).

To prove (ii), note that since Ax = λx, we have

    A[α] x[α] + A[α; α^c] x_j = λ x[α]

(note that x(j) = x[α], A(j) = A[α], and x_j = x(α)), or

    −(A[α] − λI) x[α] = A[α; α^c] x_j.

If λ ∉ σ(A[α]), then A[α] − λI is invertible and

    x[α] = −(A[α] − λI)^{-1} A[α; α^c] x_j.                           (2.9)

By taking the length of both sides in (2.9) we see that

    ( ‖x‖_2² − |x_j|² )^{1/2} = |x_j| ‖(A[α] − λI)^{-1} A[α; α^c]‖_2.  (2.10)

The vector (A[α] − λI)^{-1} A[α; α^c] is zero if and only if A[α; α^c] is zero, but then λ ∈ σ(A[α]), a contradiction. So, (A[α] − λI)^{-1} A[α; α^c] ≠ 0. Rearranging (2.10) results in

    |x_j| = ‖x‖_2 / ( ‖(A[α] − λI)^{-1} A[α; α^c]‖_2² + 1 )^{1/2},

which proves part (ii) of the theorem. □

We note that theorem 2.4 says that corollary 2.3 is valid in the following approximate sense: if x_j is "small", then (i) indicates there is an eigenvalue μ ∈ σ(A(j)) that is "near" some eigenvalue λ ∈ σ(A); also, if there is a μ ∈ σ(A(j)) that is "near" some eigenvalue λ ∈ σ(A), and ‖A[α; α^c]‖_2 > (n − 1)|μ − λ|, then there is an eigenvector of A associated with λ for which x_j is also "small".

We will illustrate this theorem by an example. The matrix

        [  −5  −29    5 ]
    A = [ −29  −95    0 ]
        [   5    0  −36 ]

has λ = −103.56 as an eigenvalue with associated eigenvector x = (−.28, −.96, .02)^T. The calculations in (i) (with j = 3) indicate that there is a μ ∈ σ(A[{1,2}]) such that |μ − λ| ≤ .78. The submatrix A[{1,2}] has an eigenvalue μ = −103.54 which satisfies this inequality. Similarly, since ‖x‖_2 = 1, statement (ii) indicates that |x_3| = .02, which is the case.
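This example can be reproduced with a few lines of numpy; the residual bound of part (i) and the identity of part (ii) are evaluated directly (a sketch, with the eigenpair supplied by `numpy.linalg.eigh`):

```python
import numpy as np

# The 3-by-3 Hermitian example, with alpha = {1,2} and j = 3 (index 2 here).
A = np.array([[ -5., -29.,   5.],
              [-29., -95.,   0.],
              [  5.,   0., -36.]])
w, V = np.linalg.eigh(A)
i = int(np.argmin(w))               # the eigenvalue near -103.56
lam, x = w[i], V[:, i]              # eigh returns a unit eigenvector

# Part (i): residual bound using A_3 with its 3rd entry deleted, (5, 0)^T.
Aj_del = A[[0, 1], 2]
bound = abs(x[2]) * np.linalg.norm(Aj_del) / np.sqrt(1 - x[2] ** 2)
mu = min(np.linalg.eigvalsh(A[:2, :2]), key=lambda t: abs(t - lam))
print(abs(mu - lam) <= bound)       # True: some mu in sigma(A[{1,2}]) is this close

# Part (ii): |x_3| = ||x|| / sqrt(||(A[alpha] - lam I)^(-1) A[alpha;alpha^c]||^2 + 1).
v = np.linalg.solve(A[:2, :2] - lam * np.eye(2), A[:2, 2])
print(np.isclose(abs(x[2]), 1 / np.sqrt(np.linalg.norm(v) ** 2 + 1)))   # True
print(round(abs(x[2]), 2))          # 0.02, as quoted in the text
```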

Chapter 3

Interlacing

3.1 Breadth and Depth of Interlacing

Let A ∈ M_n(ℂ) be Hermitian. Since any principal submatrix of a Hermitian matrix is Hermitian, corollary 2.3 may be applied at each "level" of interlacing. After a few basic observations, careful sequential application of corollary 2.3 will lead to the results below, but first several definitions are needed. Suppose λ is an eigenvalue of A; then A is said to have interlacing equality at λ of breadth k if there are exactly k distinct index sets α_1, α_2, ..., α_k ⊆ N in which |α_i| = n − 1 and λ ∈ σ(A[α_i]), i = 1, 2, ..., k. If g_λ(A) ≥ 2, then the breadth of interlacing equality at λ is n (see the discussion following corollary 3.3). As will be seen in the proof of corollary 3.1, if A is such that g_λ(A) = 1, then, because of corollary 2.3, the breadth of interlacing equality at λ is just the number of zero components in the essentially unique eigenvector. The matrix A is said to have interlacing equality at λ of depth k if λ ∈ σ(A[β_j]) for some index sets β_0, β_1, ..., β_k ⊆ N such that β_{j+1} ⊂ β_j, j = 0, 1, ..., k − 1, |β_j| = n − j, j = 0, 1, ..., k, and k is a maximum. If, in addition, g_λ(A[β_{j+1}]) ≥ g_λ(A[β_j]),


j = 0, 1, ..., k − 1, then A is said to have interlacing equality at λ of restricted depth k. Here, k is the number of principal submatrices in the nested sequence for which the geometric multiplicity of λ is nondecreasing, so that the depth of interlacing equality may be greater than the restricted depth. The following corollaries relate these concepts.

Corollary 3.1 Let A ∈ M_n be Hermitian and let λ ∈ σ(A) be such that g_λ(A) = 1. If A has interlacing equality at λ of breadth k, then A has interlacing equality at λ of depth at least k.

Proof: If A has interlacing equality at λ of breadth k, then there are k distinct principal submatrices A[α_i] such that λ ∈ σ(A[α_i]) and |α_i| = n − 1. In this case, g_λ(A[α_i]) ≥ 1 and, by assumption, g_λ(A) = 1. Thus, by corollary 2.3, for each α_i there is an eigenvector y_i of A associated with λ such that y_i(α_i) = 0. However, since g_λ(A) = 1, the (right) eigenspace of A associated with λ is one-dimensional, so that each of the y_i's may be taken to be the same, x. It follows that x(α_1 ∩ α_2 ∩ ··· ∩ α_k) = 0. By the partitioned calculation mentioned in the opening paragraph of chapter 2, β_0 = N and β_i = α_1 ∩ α_2 ∩ ··· ∩ α_i, i = 1, 2, ..., k, exhibit that A has interlacing equality at λ of depth at least k. □

Corollary 3.1 is stated in the Hermitian case for parallelism to the corollaries that follow. However, the concepts of breadth and depth of interlacing may also be thought of simply as coincidence of eigenvalues. In this context, the arguments in the proof of corollary 3.1 are equally valid for normal matrices (using corollary 2.1 in place of corollary 2.3, with an obvious generalization of the definitions of breadth and depth of interlacing). So, corollary 3.1 may be generalized by replacing "Hermitian" in the hypothesis with "normal." On the other hand, the coincidence of eigenvalues indicated by corollary 3.1 is not valid for general matrices, as exhibited by the example

    [ 1  1  1 ]
    [ 1  1  1 ]
    [ 0  1  1 ]

in which 0 is an eigenvalue of breadth 2, while its depth is only 1.

The converse to corollary 3.1 does not hold; a counterexample is given by the matrix

        [ 0  0  1  1 ]
    A = [ 0  0  1  1 ]
        [ 1  1  1  1 ]
        [ 1  1  1  0 ]

which has interlacing equality at 0 of depth 3 (A({4}), A({3,4}), A({2,3,4})), but interlacing equality at 0 of breadth only 2 (A({3}), A({4})). However, the geometric multiplicities of 0 in the principal submatrices that yield interlacing equality at 0 of depth 3 are

    g_0(A({4})) = 1
    g_0(A({3,4})) = 2
    g_0(A({2,3,4})) = 1.

In fact, the restricted depth of interlacing equality at 0 is only 2, and this is exactly the breadth of interlacing equality at 0. As indicated in the following corollary, the breadth of interlacing equality at λ must be at least that of the restricted depth.
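The multiplicities quoted for this example are quickly checked; a numpy sketch (the helpers `g0` and `deleted` are ours):

```python
import numpy as np

A = np.array([[0., 0., 1., 1.],
              [0., 0., 1., 1.],
              [1., 1., 1., 1.],
              [1., 1., 1., 0.]])

def g0(M):
    # geometric multiplicity of 0 = rank deficiency
    return int(M.shape[0] - np.linalg.matrix_rank(M))

def deleted(M, idx):
    # A(S): delete the rows and columns indexed by S (0-based)
    keep = [i for i in range(M.shape[0]) if i not in idx]
    return M[np.ix_(keep, keep)]

# nested sequence exhibiting depth 3: g_0 values 1, 2, 1
print(g0(deleted(A, [3])), g0(deleted(A, [2, 3])), g0(deleted(A, [1, 2, 3])))
# breadth 2: only A({3}) and A({4}) are singular among the 3-by-3 submatrices
print([g0(deleted(A, [i])) for i in range(4)])   # [0, 0, 1, 1]
```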


Corollary 3.2 Let A ∈ M_n be Hermitian and suppose λ ∈ σ(A). If A has interlacing equality at λ of restricted depth k, then A has interlacing equality at λ of breadth at least k.

Proof: If g_λ(A) > 1, the breadth at λ is n (see the discussion later, if necessary) and the conclusion is automatically valid. Thus, we suppose g_λ(A) = 1. If A has interlacing equality at λ of restricted depth k, then there is some nested sequence of k + 1 principal submatrices A[β_i] such that |β_i| = n − i, λ ∈ σ(A[β_i]), i = 0, 1, ..., k, and g_λ(A[β_{i+1}]) ≥ g_λ(A[β_i]), i = 0, 1, ..., k − 1. Assume, without loss of generality, that the rows and columns of A[β_i] are numbered 1 to n − i. Note that n − i is the index of the row and column deleted from A[β_i] to obtain A[β_{i+1}]. By corollary 2.2,

    dim(LE_{β_{i+1}}^λ(A[β_i])), dim(RE_{β_{i+1}}^λ(A[β_i])) ≥ (g_λ(A[β_i]) + g_λ(A[β_{i+1}]) − 1) / 2
                                                             ≥ g_λ(A[β_i]) − 1/2,

since g_λ(A[β_i]) ≤ g_λ(A[β_{i+1}]). Both dimensions must be integral; so, the dimensions of the special eigenspaces must both be at least g_λ(A[β_i]). Then, every (left and right) eigenvector of A[β_i] associated with λ is in the special (left and right) eigenspace and, thus, component n − i of each of these vectors is 0.

Let x be an eigenvector (essentially unique) of A associated with λ. Since

    g_λ(A) = g_λ(A[β_0]) = 1 and g_λ(A[β_1]) ≥ 1,

by corollary 2.3, x(β_1) = 0. By the preceding paragraph, if i = 1 then every eigenvector of A[β_1] associated with λ, including x[β_1], has a zero in the (n − 1)st component. Thus, x(β_1 ∩ β_2) = x(β_2) = 0.


Continuing in this manner, for each i = 0, 1, ..., k − 1, x[β_i] is an eigenvector of A[β_i] associated with λ with a zero in the (n − i)th component, so that

    x(β_1 ∩ β_2 ∩ ··· ∩ β_{i+1}) = x(β_{i+1}) = 0.

Then x(β_k) = 0, and for each j ∈ N − β_k, x({j}) is an eigenvector of A({j}) associated with λ. Thus, α_j = N − {n + 1 − j}, j = 1, 2, ..., k, exhibits that A has interlacing equality at λ of breadth at least k. □

Note that the breadth of interlacing equality can be strictly greater than the restricted depth of interlacing equality. For example, the matrix

    [ 0  0  1 ]
    [ 0  0  0 ]
    [ 1  0  0 ]

has interlacing equality at 0 of restricted depth 1, but the breadth of interlacing equality at 0 is 2.

If the matrix A is such that g_λ(A[α]) ≤ 1 for every index set α ⊆ N, and A has interlacing equality at λ of depth k, then A also has interlacing equality at λ of restricted depth k. In this case, by corollary 3.2, A has interlacing equality at λ of breadth at least k. Combining corollaries 3.1 and 3.2 then yields the following.

Corollary 3.3 Let A ∈ M_n be Hermitian and suppose, for every index set α ⊆ N, that g_λ(A[α]) ≤ 1, with g_λ(A) = 1. Then A has interlacing equality at λ of breadth k if and only if A has interlacing equality at λ of depth k.

Let A ∈ M_n be Hermitian. Due to classical interlacing, when g_λ(A) > 1, λ ∈ σ(A[α]) for any α ⊆ N such that |α| = n − 1. In addition, when g_λ(A) > 1, for each such α there is an eigenvector, z, of A associated with λ such that z(α) = 0. This may be seen in an elementary way by noting that, given any two linearly independent eigenvectors x, y in the eigenspace, there is a linear combination with a zero in any specified position. Such an A has interlacing equality at λ of breadth n, but may have depth at λ as little as 1. For example, the matrix

    [ 1  1  1 ]
    [ 1  1  1 ]
    [ 1  1  1 ]

has interlacing equality at 0 of breadth 3, while the depth at 0 is only 1. Thus, the assumption in corollaries 3.1 and 3.3 that g_λ(A) = 1 is necessary.

3.2 Interlacing Diagrams

Corollaries 3.1-3.3 indicate restrictions on the values of g_λ(A[α]) over various α's. This led us to look at the characterization of the relationships between different "levels" of interlacing. These relationships are described via what we call interlacing diagrams. For example, if

        [ 0  0  1 ]
    A = [ 0  0  0 ]
        [ 1  0  0 ]

then A has the interlacing diagram

    LEVELS
    0:        1                    ({1,2,3})
    1:    2       0       2        ({1,2}, {1,3}, {2,3})
    2:    1       1       1        ({1}, {2}, {3})

    Figure 3.1: Interlacing diagram example.

in which the entry in the cell in level 0 is the geometric multiplicity of 0 as an eigenvalue of A and the entry in each of the other cells is the geometric multiplicity of 0 as an eigenvalue of the principal submatrix of A lying in the rows and columns indicated by the index set attached to each cell. The general problem is, for an n-by-n matrix A and fixed λ ∈ σ(A), which n-level diagrams of the form

    LEVELS
    0:      g_λ(A)                            ({1,2,...,n})
    1:      one cell per (n−1)-subset         ({1,...,n−1}, {1,...,n−2,n}, ..., {2,3,...,n})
    ⋮
    n−1:    one cell per singleton            ({1}, {2}, ..., {n})

    Figure 3.2: General interlacing diagram

in which the geometric multiplicity of λ as an eigenvalue of the principal submatrix found in the index set associated with that cell appears in each cell, can occur?
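Such diagrams can be generated mechanically. A sketch computing the diagram of the 3-by-3 example at the start of this section by brute force over all principal submatrices (0-based index sets):

```python
import numpy as np
from itertools import combinations

# g_0(A[alpha]) for every principal submatrix, level by level.
A = np.array([[0., 0., 1.],
              [0., 0., 0.],
              [1., 0., 0.]])
n = A.shape[0]
for level in range(n):
    cells = {}
    for alpha in combinations(range(n), n - level):
        M = A[np.ix_(alpha, alpha)]
        cells[alpha] = int(len(alpha) - np.linalg.matrix_rank(M))
    print(level, cells)
# level 1 shows {(0, 1): 2, (0, 2): 0, (1, 2): 2}: the two 2's and the 0 for
# the 1-based index sets {1,2}, {1,3}, {2,3} in the diagram above.
```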

An obvious condition for such diagrams is

(1) Dimension: g_λ(A[α]) ≤ |α|.

So, the entries in the cells of level k can be at most n − k. We will be looking primarily at Hermitian matrices. For Hermitian matrices, classical interlacing restrictions require that the geometric multiplicity in a predecessor or successor cell may differ by at most one from that in a given cell. This gives rise to a second condition.

(2) Interlacing: Let A ∈ M_n be Hermitian and α ⊆ N. Then for j ∈ α,

    |g_λ(A[α]) − g_λ(A[α − {j}])| ≤ 1.

For the remainder of this discussion we will assume that λ = 0. The general case in which λ ≠ 0 follows easily by translation. For Hermitian matrices, the two obvious necessary conditions given above are not sufficient. That is, there are diagrams that satisfy (1) and (2) for which there is no Hermitian matrix with that diagram. For example, the diagram

    LEVELS
    0:        1                    ({1,2,3})
    1:    1       1       1        ({1,2}, {1,3}, {2,3})
    2:    0       0       0        ({1}, {2}, {3})

    Figure 3.3: Diagram ruled out by biorthogonality

satisfies conditions (1) and (2), but there is no 3-by-3 Hermitian matrix with the interlacing diagram given in figure 3.3. For λ = 0, level 2 indicates that the diagonal entries of a matrix with the diagram in figure 3.3 are nonzero. Level 1 and level 2 of the diagram, together with the fact that we are considering only Hermitian matrices, indicate that any matrix with the diagram in figure 3.3 must be of the form

        [ a²  ab  ac ]
    A = [ ab  b²  bc ]
        [ ac  bc  c² ]

in which a, b, c ≠ 0. But g_0(A) = 2, and the diagram requires this value to be 1. Therefore, there is no Hermitian matrix with the diagram in figure 3.3.

The structured eigenvector results of chapter 2 and the principle of biorthogonality lead to the following corollary, which holds for general diagonalizable matrices.

Corollary 3.4 If A ∈ M_n is diagonalizable and g_λ(A) = 1, then g_λ(A(i)) = 0 for some 1 ≤ i ≤ n.

Proof: Let A be diagonalizable, g_λ(A) = 1, and assume that g_λ(A(i)) ≥ 1 for all i = 1, 2, ..., n. Also, let x, y be right and left eigenvectors, respectively, of A associated with λ. Then, by theorem 2.2, at least one of x and y must have a 0 in the i-th component for each i = 1, 2, ..., n, so that y*x = 0. By the principle of biorthogonality, y must be orthogonal to every right eigenvector of A associated with μ ∈ σ(A), μ ≠ λ. Since A is diagonalizable, this means that y is orthogonal to n − 1 linearly independent vectors. But y is also orthogonal to x and is, therefore, orthogonal to a set of n linearly independent vectors. Then y must be the zero vector, which is a contradiction since y is an eigenvector. Therefore, there must be some i for which g_λ(A(i)) = 0. □
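Corollary 3.4 can be observed on the matrix used earlier in this section; a numpy sketch:

```python
import numpy as np

# A symmetric (hence diagonalizable) matrix with g_0(A) = 1: at least one
# principal (n-1)-by-(n-1) submatrix A(i) must have g_0(A(i)) = 0.
A = np.array([[0., 0., 1.],
              [0., 0., 0.],
              [1., 0., 0.]])
n = A.shape[0]
print(n - np.linalg.matrix_rank(A))          # 1, so g_0(A) = 1
g = []
for i in range(n):
    keep = [j for j in range(n) if j != i]
    M = A[np.ix_(keep, keep)]
    g.append(int(len(keep) - np.linalg.matrix_rank(M)))
print(g)          # [2, 0, 2]: g_0(A(2)) = 0, as the corollary requires
```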

The diagram in figure 3.3 does not satisfy corollary 3.4 and, therefore, there is no 3-by-3 Hermitian matrix with this diagram. Corollary 3.4 gives a third condition on the interlacing diagrams.

(3) Biorthogonality: For diagonalizable matrices, every diagram with a 1 in the cell at level 0 must have at least one 0 in level 1 (corollary 3.4).


Note that any pattern that is not allowed in an interlacing diagram of size k is not allowed as a subpattern of a diagram of size n, n > k. So, condition (3) indicates that if a cell in level k of a diagram contains a 1, then there must be at least one 0 among that cell's successor cells.

Conditions (1)-(3) are still not sufficient. The diagram

    LEVELS
    0:        1                    ({1,2,3})
    1:    1       1       0        ({1,2}, {1,3}, {2,3})
    2:    0       0       0        ({1}, {2}, {3})

    Figure 3.4: Diagram ruled out by breadth and depth corollaries

satisfies conditions (1)-(3), but, due to levels 1 and 2, any Hermitian matrix with this diagram is of the form

    [ a²  ab  ac ]
    [ ab  b²  0  ]
    [ ac  0   c² ]

in which a, b, c ≠ 0. But this matrix is nonsingular, and the cell in level 0 of figure 3.4 requires that any matrix with this diagram is singular.

The diagram in figure 3.4 cannot occur because it violates the breadth and depth corollaries of section 3.1. By the proof of corollary 3.1, the 1 in level 0 and the two 1's in level 1 indicate that the first cell in level 2 (corresponding to the 1st diagonal entry) must be a 1 and cannot be 0. The breadth and depth of interlacing corollaries (corollaries 3.1-3.3) give a fourth condition on the interlacing diagrams for Hermitian matrices.


(4) Breadth and Depth of Interlacing: corollaries 3.1-3.3.

The following corollary indicates a pattern that cannot occur as a subpattern of any larger diagram.

    LEVELS
    k:          1            (α)
    k+1:    1       0        (α − {i}, α − {j})
    k+2:        1            (α − {i, j})

    Figure 3.5: Forbidden interlacing sub-diagram

Corollary 3.5 For n-by-n Hermitian matrices in which n ≥ 3, the pattern in figure 3.5 cannot occur as a subpattern in any interlacing diagram.

Proof: Assume the pattern in figure 3.5 can be achieved for the eigenvalue λ of a matrix, A, and that level k corresponds to A[α], |α| = n − k. It can be assumed by permutation similarity that α = {1, 2, ..., n − k}. Then, in the (k+1)st level, the cell containing a 1 corresponds to A[α − {i}] and the cell containing a 0 corresponds to A[α − {j}], for i, j ∈ α, i ≠ j. Let x be an, essentially unique, eigenvector of A[α] associated with λ. Then, by theorem 2.2, x[{i}] = 0 and, because there is a 1 in the cell in level k + 2, x[{i, j}] is also 0. But then, by partitioned multiplication, x({j}) must be an eigenvector of A[α − {j}] associated with λ, contradicting the fact that the diagram indicates that g_λ(A[α − {j}]) = 0. □

Thus, corollary 3.5 adds one more condition to the list that governs the allowable

interlacing diagrams of Hermitian matrices:

(5) Forbidden sub-diagram: corollary 3.5


For 3-level diagrams in which A is real and g_λ(A) = 1, these conditions are enough to characterize the diagrams that can occur (proof is by exhaustion; there are 216 cases), with one exception. In other words, for each 3-level diagram that satisfies conditions (1)-(5) there is an Hermitian matrix for which that diagram represents the geometric multiplicities of λ as an eigenvalue of each of its principal submatrices, with the one exception indicated below. The diagram

Figure 3.6: 3-level interlacing diagram that is not ruled out

is not covered by the conditions given above and cannot occur for A ∈ M₃(ℝ), but can occur for A ∈ M₃(ℂ). For A ∈ M₃(ℝ) the diagram in figure 3.6 corresponds to a matrix of the form

\[
A = \begin{bmatrix} 0 & a & b \\ a & 0 & c \\ b & c & 0 \end{bmatrix}.
\]

The determinant of this matrix is 2abc, which is 0 if and only if a, b, or c is 0. If this is the case, then level 1 of the diagram is violated. However, if A ∈ M₃(ℂ), then


\[
A = \begin{bmatrix} 0 & \alpha & \beta \\ \bar{\alpha} & 0 & \gamma \\ \bar{\beta} & \bar{\gamma} & 0 \end{bmatrix}
\]

and det(A) = αβ̄γ + ᾱβγ̄, which is zero if and only if αβ̄γ = −ᾱβγ̄.
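As a quick numerical illustration (an editorial sketch, not part of the original argument; the sample values of a, b, c and α, β, γ are arbitrary), both determinant identities can be checked directly:

```python
import numpy as np

# Real case: the hollow symmetric matrix has det = 2abc.
a, b, c = 1.5, -2.0, 0.75
A = np.array([[0, a, b],
              [a, 0, c],
              [b, c, 0]], dtype=float)
assert np.isclose(np.linalg.det(A), 2 * a * b * c)

# Complex Hermitian case: det = alpha*conj(beta)*gamma + conj(alpha)*beta*conj(gamma),
# which can vanish even though every off-diagonal entry is nonzero.
alpha, beta, gamma = 1.0, 1.0, 1j
H = np.array([[0, alpha, beta],
              [np.conj(alpha), 0, gamma],
              [np.conj(beta), np.conj(gamma), 0]])
det = np.linalg.det(H)
expected = alpha * np.conj(beta) * gamma + np.conj(alpha) * beta * np.conj(gamma)
assert np.isclose(det, expected)
assert np.isclose(abs(det), 0)   # singular, with all off-diagonal entries nonzero
```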

3.3 Extended Interlacing Intervals

In the introduction to this dissertation it was mentioned that the classical interlacing inequalities may be viewed as saying that if λ₁ ≤ λ₂ ≤ ⋯ ≤ λ_{n−1} are the eigenvalues of a principal submatrix Â ∈ M_{n−1} of the Hermitian matrix A, then each interval [λ_i, λ_{i+1}], i = 1, 2, ..., n − 2, contains an eigenvalue of the full matrix, and if [λ_i, λ_{i+1}] captures the eigenvalue λ̂_s of A, then [λ_j, λ_{j+1}], i ≠ j, captures an eigenvalue λ̂_t with s ≠ t. In fact, if λ₁ ≤ λ₂ ≤ ⋯ ≤ λ_{n−p} are the eigenvalues of a principal (n − p)-by-(n − p) submatrix of A, then the interval [λ_i, λ_j], 1 ≤ i ≤ j ≤ n − p, contains at least j − i − p + 1

eigenvalues of A. However, this count may be non-positive, meaning that there may

be no eigenvalues of A in the interval. The main result of this section uses the singular values of a non-principal submatrix of A to extend such intervals so as to capture

more eigenvalues of A in the interval. Because of the relationship between positivity

of singular values and rank, our result provides convenient insights in the event that

the matrix structure limits the rank of certain off-diagonal blocks.


3.3.1 Main Result

An immediate consequence of the Courant-Fischer theorem for singular values (see, e.g., theorem 7.3.10 [HJ1]) is the following observation about general matrices.

Lemma 3.1 The matrix A ∈ M_{m,n} has k singular values less than or equal to δ ≥ 0 if and only if there exists a k-dimensional subspace S ⊆ ℂⁿ such that ‖Ax‖₂ ≤ δ for all x ∈ S, ‖x‖₂ = 1.

If A is Hermitian, then the singular values of A are the absolute values of the eigenvalues of A, so that lemma 3.1 is also a statement about eigenvalues. Translation then yields the following fact for Hermitian matrices.

Lemma 3.2 Let A ∈ M_n be Hermitian, and let t ∈ ℝ. Given δ ≥ 0, there exists a k-dimensional subspace S ⊆ ℂⁿ such that ‖(A − tI)x‖₂ ≤ δ for all x ∈ S, ‖x‖₂ = 1, if and only if A has k eigenvalues in the interval [t − δ, t + δ].

Proof: By lemma 3.1, there are k singular values of A − tI less than or equal to δ if and only if there is a k-dimensional subspace S for which ‖(A − tI)x‖₂ ≤ δ for all x ∈ S, ‖x‖₂ = 1. Since A − tI is Hermitian (A is Hermitian and t is real), its singular values are the values |λ_i − t|, in which λ₁ ≤ λ₂ ≤ ⋯ ≤ λ_n are the eigenvalues of A. But then |λ_i − t| ≤ δ for i = j, j + 1, ..., j + k − 1 and some j, 1 ≤ j ≤ n − k + 1. This means t − δ ≤ λ_i ≤ t + δ for these values of i and that there are k eigenvalues of A in the interval [t − δ, t + δ]. □
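Lemma 3.2 is easy to confirm numerically: for Hermitian A the singular values of A − tI are exactly the distances |λ_i − t|, so the two counts below always agree (an editorial sketch with an arbitrary random symmetric matrix and arbitrary t, δ):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
A = (X + X.T) / 2                        # random real symmetric (hence Hermitian)
t, delta = 0.3, 1.1

evals = np.linalg.eigvalsh(A)
svals = np.linalg.svd(A - t * np.eye(6), compute_uv=False)

in_interval = np.sum((evals >= t - delta) & (evals <= t + delta))
small_svals = np.sum(svals <= delta)     # singular values of A - tI are |lambda_i - t|
assert in_interval == small_svals
```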

If A is Hermitian, the eigenvalues of A are denoted by λ₁(A) ≤ λ₂(A) ≤ ⋯ ≤ λ_n(A) and those of A[α] by λ₁^α ≤ λ₂^α ≤ ⋯ ≤ λ_{|α|}^α. The theorem below uses the singular values σ₁^α ≥ σ₂^α ≥ ⋯ ≥ σ_{|α|}^α of A[α^c; α] to extend the intervals of the type discussed in the opening paragraph of section 3.3. In the event that σ_k^α = 0, because k is sufficiently larger than |α^c| or because A[α^c; α] has sufficiently low rank, our extended


interval reverts to a conventional interval between two, not necessarily consecutive, eigenvalues. Note that, since A is Hermitian, the nonzero singular values of A[α^c; α] are the same as those of A[α; α^c].

Permutation similarity does not change the eigenvalues of A; therefore, for the theorems that follow, it will be assumed, without loss of generality, that α = {1, 2, ..., |α|}. Also, let

\[
t = \frac{\lambda_j^\alpha + \lambda_i^\alpha}{2}, \qquad \delta = \frac{\lambda_j^\alpha - \lambda_i^\alpha}{2} \qquad \text{for fixed } i \le j \le |\alpha|
\]

and define the interval

\[
I(i, j, k, \alpha) = \left[\, t - \sqrt{\delta^2 + (\sigma_k^\alpha)^2},\; t + \sqrt{\delta^2 + (\sigma_k^\alpha)^2} \,\right].
\]

Note that [t − δ, t + δ] = [λ_i^α, λ_j^α] is an interval used in classical interlacing, discussed above, and, thus, this interval must capture at least j − i − |α^c| + 1 eigenvalues of A. The main result is:

Theorem 3.1 Let A ∈ M_n be Hermitian and let α ⊆ N; then, for each k, A has at least j − i − k + 2 eigenvalues in the interval I(i, j, k, α).

Proof: Let x_i^α, x_{i+1}^α, ..., x_j^α be an orthonormal set of eigenvectors of A[α] corresponding to λ_i^α ≤ λ_{i+1}^α ≤ ⋯ ≤ λ_j^α, respectively. Also, let w_k^α, w_{k+1}^α, ..., w_{|α|}^α be a set of (orthonormal) right singular vectors of A[α^c; α] corresponding to σ_k^α ≥ σ_{k+1}^α ≥ ⋯ ≥ σ_{|α|}^α. Define the subspace S ⊆ ℂ^{|α|} as

\[
S = \operatorname{span}(x_i^\alpha, x_{i+1}^\alpha, \ldots, x_j^\alpha) \cap \operatorname{span}(w_k^\alpha, w_{k+1}^\alpha, \ldots, w_{|\alpha|}^\alpha).
\]

Then dim(S) ≥ (j − i + 1) + (|α| − k + 1) − |α| = j − i − k + 2. Note that y ∈ S, ‖y‖₂ = 1, can be written either as a linear combination of the x^α's or of the w^α's. That is, if y ∈ S, ‖y‖₂ = 1, then

\[
y = \sum_{p=i}^{j} \beta_p x_p^\alpha = \sum_{q=k}^{|\alpha|} \gamma_q w_q^\alpha, \qquad \sum_{p=i}^{j} |\beta_p|^2 = \sum_{q=k}^{|\alpha|} |\gamma_q|^2 = 1.
\]

A straightforward calculation using the second representation given above for y ∈ S, ‖y‖₂ = 1, yields that ‖A[α^c; α]y‖₂² ≤ (σ_k^α)². Append zeros to y to obtain vectors z = [yᵀ, 0]ᵀ ∈ ℂⁿ. Then

\[
\|(A - tI)z\|_2^2 = \|(A[\alpha] - tI)y\|_2^2 + \|A[\alpha^c; \alpha]y\|_2^2
= \sum_{p=i}^{j} |\beta_p|^2 (\lambda_p^\alpha - t)^2 + \|A[\alpha^c; \alpha]y\|_2^2
\le \sum_{p=i}^{j} |\beta_p|^2 \delta^2 + (\sigma_k^\alpha)^2
= \delta^2 + (\sigma_k^\alpha)^2.
\]

The vectors z = [yᵀ, 0]ᵀ, y ∈ S, ‖y‖₂ = 1, together with the zero vector form a subspace Ŝ ⊆ ℂⁿ that has dimension at least j − i − k + 2 and for which ‖(A − tI)z‖₂ ≤ √(δ² + (σ_k^α)²) for any z ∈ Ŝ, ‖z‖₂ = 1. Thus, by lemma 3.2, there are at least j − i − k + 2 eigenvalues of A in the interval I(i, j, k, α). □
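The count guaranteed by theorem 3.1 can be spot-checked numerically; the following sketch (editorial, not from the text) uses an arbitrary random Hermitian matrix and one arbitrary choice of α, i, j, and k:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2                 # random Hermitian matrix

alpha = [0, 1, 2, 3, 4]                  # alpha = {1,...,5} in 0-based form
comp = [5, 6, 7]                         # alpha^c
lam_a = np.linalg.eigvalsh(A[np.ix_(alpha, alpha)])            # increasing
sig = np.linalg.svd(A[np.ix_(comp, alpha)], compute_uv=False)  # decreasing

i, j, k = 2, 4, 2                        # 1-based indices, i <= j <= |alpha|
t = (lam_a[j - 1] + lam_a[i - 1]) / 2
delta = (lam_a[j - 1] - lam_a[i - 1]) / 2
r = np.sqrt(delta**2 + sig[k - 1]**2)    # half-width of I(i, j, k, alpha)
evals = np.linalg.eigvalsh(A)
count = int(np.sum((evals >= t - r) & (evals <= t + r)))
assert count >= j - i - k + 2            # theorem 3.1: at least 2 here
```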


Note that for |α| = n − 1, A[α^c; α] is a row vector and σ₂^α = 0. Classical interlacing is then the special case of theorem 3.1 in which j = i + 1, |α| = n − 1, and k = 2. Another result that has been studied before, but can be seen as a special case of theorem 3.1, will be discussed in the next section.

It is immediate from theorem 3.1, since √(δ² + (σ_k^α)²) ≤ δ + σ_k^α, 1 ≤ k ≤ |α|, that there are at least j − i − k + 2 eigenvalues of A in the interval [λ_i^α − σ_k^α, λ_j^α + σ_k^α]. This weaker statement can be proven using an application of classical perturbation theory and interlacing. There are two cases, |α^c| ≥ |α| and |α^c| < |α|. For |α^c| ≥ |α| let A[α; α^c] = V Σ_α W* be the singular value decomposition of A[α; α^c], in which

\[
\Sigma_\alpha = \begin{bmatrix} \operatorname{diag}(\sigma_1^\alpha, \ldots, \sigma_{|\alpha|}^\alpha) & 0 \end{bmatrix}.
\]

Then, the following similarity can be performed:

\[
\begin{bmatrix} V^* & 0 \\ 0 & W^* \end{bmatrix}
\begin{bmatrix} A[\alpha] & A[\alpha; \alpha^c] \\ A[\alpha^c; \alpha] & A[\alpha^c] \end{bmatrix}
\begin{bmatrix} V & 0 \\ 0 & W \end{bmatrix}
=
\begin{bmatrix} V^* A[\alpha] V & \Sigma_\alpha \\ \Sigma_\alpha^* & W^* A[\alpha^c] W \end{bmatrix}.
\]

From the result, deleting the k − 1 columns that contain σ₁^α, σ₂^α, ..., σ_{k−1}^α (in Σ_α) and the corresponding rows yields

\[
\tilde{A} = \begin{bmatrix} V^* A[\alpha] V & 0 \\ 0 & \tilde{W}^* A[\alpha^c] \tilde{W} \end{bmatrix}
+ \begin{bmatrix} 0 & \tilde{\Sigma}_\alpha \\ \tilde{\Sigma}_\alpha^* & 0 \end{bmatrix}
= B + C
\]

in which \tilde{\Sigma}_\alpha = [\operatorname{diag}(\sigma_k^\alpha, \ldots, \sigma_{|\alpha|}^\alpha) \;\; 0].

The eigenvalues of B are the eigenvalues of A[α] together with those of A[α^c]. As a result, there are indices r and s such that λ_i^α = λ_r(B) and λ_j^α = λ_s(B), and then [λ_i^α − σ_k^α, λ_j^α + σ_k^α] = [λ_r(B) − σ_k^α, λ_s(B) + σ_k^α]. Note that s − r ≥ j − i since the eigenvalues of A[α] may not be grouped together as eigenvalues of B: there may be eigenvalues of A[α^c] between the eigenvalues λ_r(B) and λ_s(B). The matrix Ã is a perturbation of B in which the perturbation matrix C has eigenvalues ±σ_q^α, q = k, ..., |α|. Classical perturbation theory then gives

\[
\lambda_r(B) - \sigma_k^\alpha \le \lambda_r(\tilde{A}) \quad \text{and} \quad \lambda_s(\tilde{A}) \le \lambda_s(B) + \sigma_k^\alpha,
\]

which can be proved, for example, by using Weyl's inequalities (see, e.g., theorem 4.3.1 [HJ1]). Since Ã is a principal submatrix of a matrix similar to A, by classical interlacing there are at least s − r − k + 2 eigenvalues of A in the interval [λ_r(Ã), λ_s(Ã)]. Then there are at least as many eigenvalues of A in the larger interval [λ_r(B) − σ_k^α, λ_s(B) + σ_k^α]. Since [λ_i^α − σ_k^α, λ_j^α + σ_k^α] = [λ_r(B) − σ_k^α, λ_s(B) + σ_k^α] and s − r ≥ j − i, there are certainly at least j − i − k + 2 eigenvalues of A in the interval [λ_i^α − σ_k^α, λ_j^α + σ_k^α] (in fact there are at least s − r − k + 2).

For the case in which |α| > |α^c|, if 1 ≤ k ≤ |α^c| the proof is analogous. If k > |α^c| then σ_k^α = 0 and the result follows from classical interlacing. This result is immediate from theorem 3.1; however, we know of no direct way to achieve the stronger statement of theorem 3.1 from the usual statement of interlacing.

3.3.2 Rank Deficient Off-Diagonal Blocks

An irreducible Hermitian tridiagonal matrix is of the form

\[
T = \begin{bmatrix}
a_1 & b_1 & & & \\
\bar{b}_1 & a_2 & b_2 & & \\
& \bar{b}_2 & a_3 & \ddots & \\
& & \ddots & \ddots & b_{n-1} \\
& & & \bar{b}_{n-1} & a_n
\end{bmatrix}
\]

in which b_t ≠ 0 for t = 1, 2, ..., n − 1. It is a well known fact (see, e.g., theorem 1 [HP]) that for α = {1, 2, ..., p}, the open interval (λ_i^α, λ_{i+1}^α), in which λ_i^α and λ_{i+1}^α are eigenvalues of T[α], contains an eigenvalue of T. The closed interval version of this fact can be seen as a special case of theorem 3.1. Because of the low rank of the off-diagonal blocks (T[α^c; α] contains only one nonzero entry) σ₂^α = 0 and, then, for k = 2 and j = i + 1, by theorem 3.1 there is at least (i + 1) − i − 2 + 2 = 1 eigenvalue of T in the interval [λ_i^α, λ_{i+1}^α]. (An auxiliary argument shows that the

closed interval is actually open.) This fact can be generalized to any matrix A in which the off-diagonal block A[α^c; α] has low rank. If rank(A[α^c; α]) = k, then σ_{k+1}^α is zero, and by theorem 3.1, or careful application of classical interlacing as in the discussion above, the interval [λ_i^α, λ_{i+k}^α] contains at least (i + k) − i − (k + 1) + 2 = 1 eigenvalue of A. Thus, we have the following corollary to theorem 3.1, which was also noted in [H].

Corollary 3.6 Let A ∈ M_n be Hermitian and let α ⊆ N. If rank(A[α^c; α]) = k, then there is at least one eigenvalue of A in the interval [λ_i^α, λ_{i+k}^α].
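A numerical sketch of corollary 3.6 (editorial; the rank-1 construction is an arbitrary example): forcing the off-diagonal block to be an outer product makes every interval [λ_i^α, λ_{i+1}^α] capture an eigenvalue of A.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 7, 4
alpha = np.arange(p)
comp = np.arange(p, n)
X = rng.standard_normal((n, n))
A = (X + X.T) / 2
# Force rank(A[alpha^c; alpha]) = 1 by replacing the block with an outer product.
u = rng.standard_normal(n - p)
v = rng.standard_normal(p)
A[np.ix_(comp, alpha)] = np.outer(u, v)
A[np.ix_(alpha, comp)] = np.outer(u, v).T     # keep A symmetric

k = 1
lam_a = np.linalg.eigvalsh(A[np.ix_(alpha, alpha)])
evals = np.linalg.eigvalsh(A)
for i in range(p - k):                         # intervals [lam_i^a, lam_{i+k}^a]
    lo, hi = lam_a[i], lam_a[i + k]
    assert np.any((evals >= lo) & (evals <= hi))
```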


As seen by the Weyl inequality argument in section 3.3.1, if rank(A[α^c; α]) = k, then deletion of the k rows and columns of the transformed matrix that contain σ₁^α, ..., σ_k^α results in the direct sum

\[
\begin{bmatrix} V^* A[\alpha] V & 0 \\ 0 & \tilde{W}^* A[\alpha^c] \tilde{W} \end{bmatrix}
\]

(since the remaining perturbation C is the 0 matrix). Corollary 3.6 then follows from usual interlacing.

Recall that the term rank of a matrix is the minimum number of lines (rows or columns) of the matrix that cover all of its nonzero entries. It is a simple and classical fact [Ry] that the term rank is an upper bound for the actual rank. (It is tight in the sense that there are matrices, with the same zero pattern, whose rank is the term rank.) The term rank of A[α^c; α] is determined by the graph of A. For an n-by-n Hermitian matrix A, the term rank is simply the fewest vertices in G(A) to which all edges between the subgraphs induced by α and α^c are incident. For example, if these two parts (G(A[α]) and G(A[α^c])) of G(A) are connected by just one edge (as in the tridiagonal case), then the term rank of A[α^c; α] is 1. The term rank will also be 1 if all edges connecting G(A[α]) to G(A[α^c]) are incident with just one vertex (in G(A[α^c]) or G(A[α]), respectively); see figure 3.7. Based upon this discussion, we make several combinatorially based observations.

Corollary 3.7 Suppose that A ∈ M_n is Hermitian, α ⊆ N, and that the term rank of A[α^c; α] is k. Then, there is at least one eigenvalue of A in the interval [λ_i^α, λ_{i+k}^α].

The special case k = 1 of corollary 3.7 yields a large class of generalizations of the tridiagonal fact mentioned at the beginning of this section. In particular, if G(A) is a tree T, there will be many choices of α that generalize the "half" of a path that gives the tridiagonal case. More generally, if G(A) is a tree T, identify any vertex v

of T (degree of v = p ≥ 2) as the root and consider the branches B₁, B₂, ..., B_p of T emanating from v.

Figure 3.7: Term rank: G(A), with parts G(A[α]) and G(A[α^c])

Then, choose any proper

partition {i₁, i₂, ..., i_q} ∪ {j₁, j₂, ..., j_r} = {1, 2, ..., p} and let α be the vertices of B_{i₁}, B_{i₂}, ..., B_{i_q} together with v, so that α^c is the vertices of B_{j₁}, B_{j₂}, ..., B_{j_r}. Then, the term rank of A[α^c; α] is 1 and the interval [λ_i^α, λ_{i+1}^α] will contain an eigenvalue of A. If A ∈ M_n(ℝ) is a combinatorially symmetric matrix such that G(A) is a tree T, then there exists a diagonal matrix D for which B = D⁻¹AD is symmetric and G(B) = T. Therefore, the above discussion also applies to these non-Hermitian matrices.
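The tridiagonal instance of this discussion can be checked directly (an editorial sketch with arbitrary data; the closed-interval version guaranteed by corollary 3.6 is asserted, while the [HP] fact sharpens it to the open interval):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 5
main = rng.standard_normal(n)
off = rng.uniform(0.5, 1.5, n - 1)        # nonzero entries => irreducible tridiagonal
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam_a = np.linalg.eigvalsh(T[:p, :p])     # alpha = {1, ..., p}
evals = np.linalg.eigvalsh(T)
for i in range(p - 1):
    lo, hi = lam_a[i], lam_a[i + 1]
    # closed interval guaranteed (term rank of the off-diagonal block is 1)
    assert np.any((evals >= lo) & (evals <= hi))
```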

3.3.3 A Union of Two Intervals

The discussion surrounding theorem 3.1 considers only intervals formed by eigenvalues of the submatrix A[α]. Of course, theorem 3.1 equally applies to intervals

determined by eigenvalues of A[α^c]. This leads naturally to the question: what about

the union of two such intervals? If these two intervals are disjoint, their union will

contain the sum of the estimated number of eigenvalues of A in each interval. If the

intervals are not disjoint, however, there may be coincidence of eigenvalues and the


union may not contain the sum of the estimated number of eigenvalues of A in each

interval. An example following the next theorem shows that indeed, the union of the

two intervals from theorem 3.1 may not capture the expected sum of the number of

eigenvalues of A that the two intervals capture separately. However, if each interval

is made slightly larger, then, regardless of whether or not the intervals are disjoint,

their union contains the desired sum of the estimated number of eigenvalues of A. The two intervals [λ_{i₁}^α − σ_k, λ_{j₁}^α + σ_k] and [λ_{i₂}^{α^c} − σ_k, λ_{j₂}^{α^c} + σ_k] (in which σ_k = σ_k^α = σ_k^{α^c}), constructed using eigenvalues from A[α] and A[α^c], respectively, are generally larger than the theorem 3.1 intervals. The Weyl inequality argument shows that there are at least j₁ − i₁ − k + 2 eigenvalues of A in the first interval and j₂ − i₂ − k + 2 eigenvalues of A in the second. Although the Weyl inequality argument does not easily enumerate the number of eigenvalues of A in the union when the two intervals are not disjoint, the techniques used in the proof of theorem 3.1 may be used to show that the union of these two intervals contains at least j₁ + j₂ − (i₁ + i₂) − 2k + 4 eigenvalues of A, regardless of whether or not the intervals are disjoint.

Theorem 3.2 Let A ∈ M_n be Hermitian and let α ⊆ N; then

\[
[\lambda_{i_1}^\alpha - \sigma_k,\; \lambda_{j_1}^\alpha + \sigma_k] \cup [\lambda_{i_2}^{\alpha^c} - \sigma_k,\; \lambda_{j_2}^{\alpha^c} + \sigma_k]
\]

contains at least j₁ + j₂ − (i₁ + i₂) − 2k + 4 eigenvalues of A, in which σ_k is the kth singular value of both A[α^c; α] and A[α; α^c].

Proof: If the intervals [λ_{i₁}^α − σ_k, λ_{j₁}^α + σ_k] and [λ_{i₂}^{α^c} − σ_k, λ_{j₂}^{α^c} + σ_k] are disjoint, the conclusion is obviously true by theorem 3.1 and the above comments.

Suppose [λ_{i₁}^α − σ_k, λ_{j₁}^α + σ_k] ∩ [λ_{i₂}^{α^c} − σ_k, λ_{j₂}^{α^c} + σ_k] ≠ ∅. Then

\[
[\lambda_{i_1}^\alpha - \sigma_k,\; \lambda_{j_1}^\alpha + \sigma_k] \cup [\lambda_{i_2}^{\alpha^c} - \sigma_k,\; \lambda_{j_2}^{\alpha^c} + \sigma_k]
= [\lambda_{\min} - \sigma_k,\; \lambda_{\max} + \sigma_k]
\]

in which λ_min = min{λ_{i₁}^α, λ_{i₂}^{α^c}} and λ_max = max{λ_{j₁}^α, λ_{j₂}^{α^c}}. As in the proof of theorem 3.1, define

\[
S_1 = \operatorname{span}(x_{i_1}^\alpha, x_{i_1+1}^\alpha, \ldots, x_{j_1}^\alpha) \cap \operatorname{span}(w_k^\alpha, w_{k+1}^\alpha, \ldots, w_{|\alpha|}^\alpha)
\]

and

\[
S_2 = \operatorname{span}(x_{i_2}^{\alpha^c}, x_{i_2+1}^{\alpha^c}, \ldots, x_{j_2}^{\alpha^c}) \cap \operatorname{span}(w_k^{\alpha^c}, w_{k+1}^{\alpha^c}, \ldots, w_{|\alpha^c|}^{\alpha^c})
\]

in which x_p^α (x_p^{α^c}) are eigenvectors of A[α] (A[α^c]) associated with λ_p^α (λ_p^{α^c}) and w_q^α (w_q^{α^c}) are right singular vectors of A[α^c; α] (A[α; α^c]). Recall that, for simplicity of notation, we have assumed α = {1, 2, ..., |α|}. After embedding S₁ and S₂ in ℂⁿ by appending zeros in the appropriate spots, we have (S₁ ⊕ 0) ∩ (0 ⊕ S₂) = 0. Let S = S₁ ⊕ S₂. Then, dim(S₁) ≥ j₁ − i₁ − k + 2 and dim(S₂) ≥ j₂ − i₂ − k + 2 as before, and dim(S) ≥ (j₁ + j₂) − (i₁ + i₂) − 2k + 4. Let z ∈ S, ‖z‖₂ = 1. Then

\[
z = \begin{bmatrix} \beta_1 y_1 \\ \beta_2 y_2 \end{bmatrix}, \qquad y_1 \in S_1,\; y_2 \in S_2,\; \|y_1\|_2 = \|y_2\|_2 = 1,\; |\beta_1|^2 + |\beta_2|^2 = 1,
\]

and

\[
y_1 = \sum_{p=i_1}^{j_1} \beta_p^\alpha x_p^\alpha = \sum_{q=k}^{|\alpha|} \gamma_q^\alpha w_q^\alpha, \qquad
y_2 = \sum_{p=i_2}^{j_2} \beta_p^{\alpha^c} x_p^{\alpha^c} = \sum_{q=k}^{|\alpha^c|} \gamma_q^{\alpha^c} w_q^{\alpha^c},
\]

with each of the four coefficient sequences having unit sum of squared moduli.


Define t = (λ_min + λ_max)/2 and δ = (λ_max − λ_min)/2. Then

\[
\|(A[\alpha] - tI)y_1\|_2^2 = \sum_{p=i_1}^{j_1} |\beta_p^\alpha|^2 (\lambda_p^\alpha - t)^2 \le |\lambda_{\max} - t|^2 = \delta^2
\]

and

\[
\|A[\alpha^c; \alpha]y_1\|_2^2 = \sum_{q=k}^{|\alpha|} |\gamma_q^\alpha|^2 (\sigma_q^\alpha)^2 \le \sigma_k^2.
\]

Similarly,

\[
\|(A[\alpha^c] - tI)y_2\|_2^2 \le |\lambda_{\max} - t|^2 = \delta^2 \quad \text{and} \quad \|A[\alpha; \alpha^c]y_2\|_2^2 \le \sigma_k^2.
\]

Then

\[
\begin{aligned}
\|(A - tI)z\|_2^2 &= \|(A[\alpha] - tI)(\beta_1 y_1) + A[\alpha; \alpha^c](\beta_2 y_2)\|_2^2 \\
&\quad + \|(A[\alpha^c] - tI)(\beta_2 y_2) + A[\alpha^c; \alpha](\beta_1 y_1)\|_2^2 \\
&\le \left(|\beta_1| \|(A[\alpha] - tI)y_1\|_2 + |\beta_2| \|A[\alpha; \alpha^c]y_2\|_2\right)^2 \\
&\quad + \left(|\beta_2| \|(A[\alpha^c] - tI)y_2\|_2 + |\beta_1| \|A[\alpha^c; \alpha]y_1\|_2\right)^2 \\
&\le \left(|\beta_1| |\lambda_{\max} - t| + |\beta_2| \sigma_k\right)^2 + \left(|\beta_2| |\lambda_{\max} - t| + |\beta_1| \sigma_k\right)^2 \\
&= |\lambda_{\max} - t|^2 + 4|\beta_1||\beta_2| |\lambda_{\max} - t| \sigma_k + \sigma_k^2 \\
&\le |\lambda_{\max} - t|^2 + 2|\lambda_{\max} - t| \sigma_k + \sigma_k^2 \\
&= \left(|\lambda_{\max} - t| + \sigma_k\right)^2.
\end{aligned}
\]

Thus, ‖(A − tI)z‖₂ ≤ δ + σ_k for any z ∈ S, ‖z‖₂ = 1. By lemma 3.2, there are at least (j₁ + j₂) − (i₁ + i₂) − 2k + 4 eigenvalues of A in the interval

\[
[t - (\delta + \sigma_k),\; t + (\delta + \sigma_k)] = [\lambda_{\min} - \sigma_k,\; \lambda_{\max} + \sigma_k]. \qquad \Box
\]


An example will illustrate that the two-interval statement in theorem 3.2 cannot be as tight as the union of the two smaller intervals in theorem 3.1. The eigenvalues of

\[
J = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}
\]

are 0, 0, and 3. By theorem 3.1, the interval I(1, 1, 1, {1}) = [1 − √2, 1 + √2] captures at least one eigenvalue of J, whereas I(1, 2, 1, {2, 3}) = [1 − √3, 1 + √3] captures at least two eigenvalues of J. However, the union of the theorem 3.1 intervals,

\[
I(1, 1, 1, \{1\}) \cup I(1, 2, 1, \{2, 3\}) = [1 - \sqrt{3},\; 1 + \sqrt{3}],
\]

captures only the two 0 eigenvalues of J. The interval given in theorem 3.2 is

\[
[1 - \sqrt{2},\; 1 + \sqrt{2}] \cup [0 - \sqrt{2},\; 2 + \sqrt{2}] = [0 - \sqrt{2},\; 2 + \sqrt{2}],
\]

which does capture all three eigenvalues of J. Thus, a larger interval, such as in theorem 3.2, is necessary to capture the j₁ + j₂ − (i₁ + i₂) − 2k + 4 = 3 eigenvalues of J. This shows that theorem 3.2 cannot be improved to the union of the smaller intervals in theorem 3.1.
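This example is easy to reproduce numerically (an editorial sketch):

```python
import numpy as np

J = np.ones((3, 3))
evals = np.linalg.eigvalsh(J)              # eigenvalues 0, 0, 3 (up to roundoff)

def count_in(interval, vals):
    lo, hi = interval
    return int(np.sum((vals >= lo - 1e-12) & (vals <= hi + 1e-12)))

I1 = (1 - np.sqrt(2), 1 + np.sqrt(2))      # I(1,1,1,{1}): at least 1 eigenvalue
I2 = (1 - np.sqrt(3), 1 + np.sqrt(3))      # I(1,2,1,{2,3}): at least 2 eigenvalues
assert count_in(I1, evals) >= 1
assert count_in(I2, evals) == 2            # the union I1 ∪ I2 = I2 misses 3

I3 = (0 - np.sqrt(2), 2 + np.sqrt(2))      # theorem 3.2 interval
assert count_in(I3, evals) == 3            # captures all three eigenvalues
```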


3.3.4 Applications to Singular Values and

Lehmann’s Intervals

Application to Singular Values

Once again the similarities between classical eigenvalue interlacing and singular value interlacing indicate that there may be a theorem analogous to theorem 3.1 for singular values. For A ∈ M_{m,n}, define

\[
SI(A, i, j, k, \{\alpha, \beta\}) = [\sigma_j(A[\alpha; \beta]) - \eta_k,\; \sigma_i(A[\alpha; \beta]) + \eta_k]
\]

in which η_k is the kth largest singular value of

\[
\begin{bmatrix} 0 & A[\alpha^c; \beta] \\ A^*[\beta^c; \alpha] & 0 \end{bmatrix}.
\]

Then the singular value result is:

Corollary 3.8 Let A ∈ M_{m,n} and let α ⊆ {1, ..., m}, β ⊆ {1, ..., n}. Then A has at least j − i − k + 2 singular values in the interval SI(A, i, j, k, {α, β}).

Proof: Assume m ≥ n (if not, interchange the roles of rows and columns in the following discussion). Let C be the Hermitian matrix

\[
C = \begin{bmatrix} 0 & A \\ A^* & 0 \end{bmatrix}
\]

and let ᾱ = α ∪ (β + m), where β + m = {b + m : b ∈ β}. Then

\[
C[\bar{\alpha}] = \begin{bmatrix} 0 & A[\alpha; \beta] \\ A^*[\beta; \alpha] & 0 \end{bmatrix}.
\]

The eigenvalues of C are plus and minus the singular values of A, and those of C[ᾱ] are plus and minus the singular values of A[α; β] (both with, possibly, some extra 0 singular values). Thus, σ_s(A[α; β]) = λ_{t−s+1}^{ᾱ}, in which t = |α| + |β| and s = 1, 2, ..., min{|α|, |β|}. Then

\[
I(C, t - j + 1, t - i + 1, k, \bar{\alpha}) \subseteq SI(A, i, j, k, \{\alpha, \beta\})
\]

and by theorem 3.1 there are at least j − i − k + 2 eigenvalues of C in I(C, t − j + 1, t − i + 1, k, ᾱ) and, since σ_s(A) = λ_{m+n−s+1}(C), s = 1, 2, ..., n, there are, then, at least j − i − k + 2 singular values of A in SI(A, i, j, k, {α, β}). □
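The Hermitian embedding used in this proof — the eigenvalues of C are plus and minus the singular values of A, padded with zeros — can be confirmed numerically (an editorial sketch with an arbitrary rectangular matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 5, 3
A = rng.standard_normal((m, n))
C = np.block([[np.zeros((m, m)), A],
              [A.T, np.zeros((n, n))]])

svals = np.linalg.svd(A, compute_uv=False)   # decreasing order
evals = np.linalg.eigvalsh(C)                # increasing order
# Eigenvalues of C are +/- the singular values of A, padded with |m - n| zeros.
expected = np.sort(np.concatenate([svals, -svals, np.zeros(m - n)]))
assert np.allclose(evals, expected)
```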

Lehmann's Intervals

The techniques used in the proof of theorem 3.1 can be used to give an elementary

proof of the sufficiency of the optimal inclusion intervals attributed to Lehmann [L].

Theorem 3.3 Let A ∈ M_n be Hermitian, α ⊆ N, |α| = k, and let x ∈ ℝ and 0 < j ≤ k be given. Then, an interval of the form [x − δ, x + δ] that contains j eigenvalues of A is given by δ = σ_{k−j+1}(B) with

\[
B = \begin{bmatrix} A[\alpha] - xI \\ A[\alpha^c; \alpha] \end{bmatrix}.
\]

Proof: Assume, without loss of generality, that x = 0 (the general case follows by translation). Let the eigenvalues of A be ordered so that λ₁² ≥ λ₂² ≥ ⋯ ≥ λ_n², and let w₁, w₂, ..., w_k be the right singular vectors of B corresponding to σ₁(B), σ₂(B), ..., σ_k(B), respectively. Then, define S = span(w_{k−j+1}, ..., w_k) and, for y ∈ S with ‖y‖₂ = 1, let z = [yᵀ, 0]ᵀ ∈ ℂⁿ, so that Az = By. By the Courant-Fischer min-max conditions for eigenvalues (e.g., theorem 4.2.11 [HJ1]) and singular values (e.g., theorem 7.3.10 [HJ1]) we have:

\[
\lambda_{n-j+1}^2 = \min_{\dim(W) = j} \; \max_{\substack{z \in W \\ \|z\|_2 = 1}} z^* A^* A z \;\le\; \max_{\substack{y \in S \\ \|y\|_2 = 1}} y^* B^* B y \;\le\; \sigma_{k-j+1}(B)^2.
\]

The middle inequality follows since S has dimension j, so that S ⊕ 0_{n−k} is a candidate for W. Therefore, there are at least j eigenvalues of A in the interval [−σ_{k−j+1}(B), σ_{k−j+1}(B)]. □
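Theorem 3.3 can be spot-checked numerically (an editorial sketch; the matrix, α, x, and j are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 7, 4                                  # |alpha| = k
X = rng.standard_normal((n, n))
A = (X + X.T) / 2
x, j = 0.2, 2                                # look for j eigenvalues near x

B = np.vstack([A[:k, :k] - x * np.eye(k),    # A[alpha] - xI stacked over
               A[k:, :k]])                   # A[alpha^c; alpha]
delta = np.linalg.svd(B, compute_uv=False)[k - j]   # sigma_{k-j+1}(B), 0-based index
evals = np.linalg.eigvalsh(A)
count = int(np.sum(np.abs(evals - x) <= delta + 1e-12))
assert count >= j                            # theorem 3.3 guarantee
```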

Note that the number δ does not depend on A[α^c] and the interval may be determined whether or not A[α^c] is known. Lehmann has further shown that this value of δ is the smallest one for which j eigenvalues are captured, independent of A[α^c]. This is equivalent to

\[
\sup_{C = C^*} \sigma_{n-j+1}\!\left(\begin{bmatrix} A[\alpha] - xI & A[\alpha; \alpha^c] \\ A[\alpha^c; \alpha] & C - xI \end{bmatrix}\right)
= \sigma_{k-j+1}\!\left(\begin{bmatrix} A[\alpha] - xI \\ A[\alpha^c; \alpha] \end{bmatrix}\right).
\]

It follows from theorem 2.2 in [GRSW] that the same equality holds when the condition of Hermiticity is dropped for the matrix C.

Chapter 4

Matrix Completions

Recall that a matrix completion problem asks when a given partial matrix has a completion with some desired property. The properties we are interested in are inherited by principal submatrices, i.e. any principal submatrix of a matrix with the given property also has the property. If we are interested in property Φ, then A is said to be a partial "property Φ" matrix if every specified principal submatrix of A has property Φ. For example, a partial positive definite matrix is a partial Hermitian matrix in which every fully specified principal submatrix is positive definite.

If property Φ is inherited, then any partial matrix that has a completion with property Φ must be a partial Φ matrix. For a number of interesting properties (e.g. positive semidefinite and distance matrices) it has been shown that this obvious necessary condition on the specified principal submatrices is also sufficient exactly when the graph of the specified entries is chordal. If the graph is not chordal, more information needs to be known about the specified data. For a survey on matrix completion problems see [J].



4.1 Positive Semidefinite Completions

An important class of completion problems that has received considerable attention is the positive semidefinite completion problem. Since positive semidefinite matrices satisfy the inheritance property, it is necessary that every fully specified principal submatrix of the matrix in question is positive semidefinite. It was shown in [GJSW] that this condition is sufficient to ensure a positive semidefinite completion of a partial positive semidefinite matrix A exactly when G(A) is chordal. This

chordal result is a generalization of the earlier result by [DG] for banded matrices. If the graph of the specified entries of a partial positive semidefinite matrix is not chordal, more needs to be known about the data. A non-chordal graph contains a chordless cycle of length 4 or more. Necessary and sufficient conditions that ensure a positive semidefinite completion for the n-cycle, n ≥ 4, were given in [BJT], and [F] is a precursor from a different point of view. For an n-cycle these may be taken to be ⌈n/2⌉ conditions, transcendental in the data, for the cycle to admit a positive semidefinite completion. We will call these the cycle conditions. After a diagonal congruence transformation of the data, a partial positive semidefinite matrix A in which G(A) is an n-cycle is of the form

\[
A = \begin{bmatrix}
1 & \cos\theta_{i_1} & ? & \cdots & \cos\theta_{i_n} \\
\cos\theta_{i_1} & 1 & \cos\theta_{i_2} & & ? \\
? & \cos\theta_{i_2} & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & \cos\theta_{i_{n-1}} \\
\cos\theta_{i_n} & ? & \cdots & \cos\theta_{i_{n-1}} & 1
\end{bmatrix}
\]

in which π ≥ θ₁ ≥ θ₂ ≥ ⋯ ≥ θ_n ≥ 0 and i₁, i₂, ..., i_n is the permutation of 1, 2, ..., n in which these θ's occur around the cycle. Note that such a transformation does not


change the existence of a positive semidefinite completion. According to corollary 1 in [BJT] the conditions that ensure a positive semidefinite completion of A are

\[
\sum_{i=1}^{k} \theta_i \le (k - 1)\pi + \sum_{i=k+1}^{n} \theta_i
\]

for every 1 ≤ k ≤ n, k odd. In [BJL] those graphs for which these cycle conditions together with partial positive definiteness are sufficient to ensure a positive semidefinite completion are characterized.
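The cycle conditions are simple to evaluate as a predicate on the angle data; the following sketch (editorial; the sample angle lists are arbitrary) implements the inequalities displayed above:

```python
import numpy as np

def cycle_conditions(theta):
    """Check the [BJT] cycle conditions for angles pi >= theta_1 >= ... >= theta_n >= 0."""
    t = np.sort(np.asarray(theta, dtype=float))[::-1]   # enforce decreasing order
    n = len(t)
    for k in range(1, n + 1, 2):                        # every odd k
        if t[:k].sum() > (k - 1) * np.pi + t[k:].sum():
            return False
    return True

# All angles small: every specified entry cos(theta) is near 1.
assert cycle_conditions([0.1, 0.2, 0.15, 0.05])
# One angle near pi with the rest near 0 violates the k = 1 condition.
assert not cycle_conditions([3.0, 0.01, 0.01, 0.01])
```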

It is interesting to note that the cycle conditions are independent of the order of the θ_i's around the cycle. This can be seen without knowing the [BJT] result and is also true for the case in which the data is from the complex numbers.

Figure 4.1: Cycle

For the cycle in figure 4.1 we will adopt the convention that, in the partial matrix A with this graph, a_{i,i+1} = d_i and a_{i+1,i} = d̄_i for i = 1, 2, ..., n − 1, and a_{1n} = d_n, a_{n1} = d̄_n.


Theorem 4.1 Let A be the partial Hermitian matrix

\[
A = \begin{bmatrix}
1 & d_1 & ? & \cdots & d_n \\
\bar{d}_1 & 1 & d_2 & & ? \\
? & \bar{d}_2 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & d_{n-1} \\
\bar{d}_n & ? & \cdots & \bar{d}_{n-1} & 1
\end{bmatrix}. \tag{4.1}
\]

Then A has a positive semidefinite completion if and only if

\[
\tilde{A} = \begin{bmatrix}
1 & d_{\sigma(1)} & ? & \cdots & d_{\sigma(n)} \\
\bar{d}_{\sigma(1)} & 1 & d_{\sigma(2)} & & ? \\
? & \bar{d}_{\sigma(2)} & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & d_{\sigma(n-1)} \\
\bar{d}_{\sigma(n)} & ? & \cdots & \bar{d}_{\sigma(n-1)} & 1
\end{bmatrix}
\]

has a positive semidefinite completion for any permutation σ of the indices i = 1, 2, ..., n.

Proof: It suffices to show that the data on two consecutive edges in the cycle can be interchanged and the completability of the corresponding transformed matrix is equivalent to that of the original matrix. Then, the general case, in which any permutation of the data results in a corresponding matrix with a positive semidefinite completion, is obtained from a finite sequence of consecutive interchanges. Assume that the matrix A as in (4.1) has a positive semidefinite completion. Then G(A) is given by the graph in figure 4.1. Let G′ be a triangulation (add edges until the graph becomes chordal and no cliques of size 4 or more are created) of G(A) in which {1, 3} is an edge of G′. For example,


Figure 4.2: Triangulation.

It is a necessary condition that A be partial positive semidefinite. So, there exists a partial positive semidefinite matrix A′ such that G(A′) = G′, A′_{1n} = A_{1n}, A′_{n1} = A_{n1}, A′_{i,i+1} = A_{i,i+1}, and A′_{i+1,i} = A_{i+1,i} for i = 1, 2, ..., n − 1. Assume the value given to the edge {1, 3} is z. The triangulated graph G′ is chordal, so that by [GJSW] A′ has a positive semidefinite completion.

The principal submatrix A′[{1, 2, 3}] is fully specified and has data

\[
\begin{bmatrix} 1 & d_1 & z \\ \bar{d}_1 & 1 & d_2 \\ \bar{z} & \bar{d}_2 & 1 \end{bmatrix}.
\]

Permuting the first and third rows and columns of A′[{1, 2, 3}] results in the matrix

\[
B = \begin{bmatrix} 1 & \bar{d}_2 & \bar{z} \\ d_2 & 1 & \bar{d}_1 \\ z & d_1 & 1 \end{bmatrix},
\]

which is positive semidefinite since permutation similarity preserves positive semidefiniteness. But then

\[
B^T = \begin{bmatrix} 1 & d_2 & z \\ \bar{d}_2 & 1 & d_1 \\ \bar{z} & \bar{d}_1 & 1 \end{bmatrix}
\]

is also positive semidefinite. So, replacing A′[{1, 2, 3}] by Bᵀ results in a new matrix Ã that is partial positive semidefinite, for which G(Ã) = G′ is chordal, and whose data are those of A′ with d₁ and d₂ interchanged. Since G(Ã) is chordal, by [GJSW], Ã has a positive semidefinite completion. Let Â be the submatrix of Ã for which G(Â) is given by:

Figure 4.3: G(Â)

If Â does not have a positive semidefinite completion, then Ã cannot have a positive semidefinite completion. But G(Ã) is chordal and, thus, Ã has a positive semidefinite completion. It follows that Â has a positive semidefinite completion. Since this argument is symmetric, this completes the proof. □

Note that the proof of theorem 4.1 is equally valid for data from the real or the complex numbers and, therefore, in any list of cycle conditions for matrices with data from the complex numbers, the order of the data around the cycle must be irrelevant.

In [BJT] and in [F] only real completions of real partial positive semidefinite matrices, the graph of whose specified entries is a cycle, are considered. In this case,


for each maximal clique that a given unspecified entry completes there is an interval

of allowed values for a positive semidefinite completion. This interval becomes a

disc in the complex case. In each case we must look at intersections of the sets of

allowed values for each maximal clique that the entry completes. In the real case the

intersection of intervals is still an interval. As a result, the real case is relatively easy.

However, the intersection of two discs is no longer a disc and, therefore, the complex

case is much more difficult.

Let A = B + iC be a partial positive semidefinite matrix in which B, C are real and G(A) = G(B) = G(C) is an n-cycle. Then

\[
A = \begin{bmatrix}
1 & b_1 & ? & \cdots & b_n \\
b_1 & 1 & b_2 & & ? \\
? & b_2 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & b_{n-1} \\
b_n & ? & \cdots & b_{n-1} & 1
\end{bmatrix}
+ i \begin{bmatrix}
0 & c_1 & ? & \cdots & c_n \\
-c_1 & 0 & c_2 & & ? \\
? & -c_2 & 0 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & c_{n-1} \\
-c_n & ? & \cdots & -c_{n-1} & 0
\end{bmatrix}.
\]

If A is positive semidefinite, so is Aᵀ = Ā. Then (A + Ā)/2 = B is also positive semidefinite. Thus, the cycle conditions of [BJT] give necessary conditions on the data in B. Let

\[
D = \operatorname{diag}\left(1, -i, (-i)^2, \ldots, (-i)^{n-1}\right);
\]

then

\[
\operatorname{Re}(D^{-1} A D) = \begin{bmatrix}
1 & c_1 & ? & \cdots & c_n \\
c_1 & 1 & c_2 & & ? \\
? & c_2 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & c_{n-1} \\
c_n & ? & \cdots & c_{n-1} & 1
\end{bmatrix}.
\]

Therefore, the conditions in [BJT] also provide necessary conditions on the data in C.

This is a start in determining a list of conditions on the data that ensures a completion

in the complex cycle case. However, exactly what the complex cycle conditions are

remains an open question. The following theorem does not by itself give such a list,

but it does indicate that the complex positive semidefinite completion problem is a

special case of a larger real completion problem.

Theorem 4.2 Let G be any graph and let A = B + iC, in which B, C are real, be a partial positive semidefinite matrix with graph G. Then A has a positive semidefinite completion if and only if

\[
M = \begin{bmatrix} B & C \\ -C & B \end{bmatrix}
\]

has a positive semidefinite real completion.

Proof: Assume that .4 has a positive semidefinite completion A = B + iC in which

B and C are real matrices. Then

B C M = - C B

is a completion of M. Note that since ,4 is Hermitian. C is such thatCT = — C and M


is a real symmetric matrix. If x is an eigenvector of $\hat A$ associated with $\lambda$, then $\begin{bmatrix} x \\ ix \end{bmatrix}$ is an eigenvector of $\hat M$ associated with $\lambda$. That is, any eigenvalue of $\hat A$ is also an eigenvalue of $\hat M$. Also, $\begin{bmatrix} \bar x \\ -i\bar x \end{bmatrix}$ is an eigenvector of $\hat M$ associated with $\bar\lambda$. Since $\lambda$ is real, $\bar\lambda = \lambda$, and for $x \neq 0$ the vectors $\begin{bmatrix} x \\ ix \end{bmatrix}$ and $\begin{bmatrix} \bar x \\ -i\bar x \end{bmatrix}$ are linearly independent. Therefore, the eigenvalues of $\hat M$ are the eigenvalues of $\hat A$ (with multiplicities doubled) and are nonnegative. Thus, $\hat M$ is a positive semidefinite completion of M.

Now assume M has a real positive semidefinite completion
$$M_1 = \begin{bmatrix} B_1 & C \\ C^T & B_2 \end{bmatrix}.$$

Note that although the diagonal blocks are equal in the partial matrix, the completion may not have $B_1 = B_2$. Similarly, it may not be the case that $C^T = -C$, as required in order to recover a completion of A from $M_1$. The following similarity leads to another positive semidefinite completion, $M_2$, of M:

$$\begin{bmatrix} 0 & iI \\ -iI & 0 \end{bmatrix}
\begin{bmatrix} B_1 & C \\ C^T & B_2 \end{bmatrix}
\begin{bmatrix} 0 & iI \\ -iI & 0 \end{bmatrix}
= \begin{bmatrix} iC^T & iB_2 \\ -iB_1 & -iC \end{bmatrix}
\begin{bmatrix} 0 & iI \\ -iI & 0 \end{bmatrix}
= \begin{bmatrix} B_2 & -C^T \\ -C & B_1 \end{bmatrix} = M_2.$$

Note that the specified entries of $B_1$ and $B_2$ are identical, as are those of $C$ and $-C^T$ (on the specified positions, $C^T = -C$); so this is indeed a second completion of M. Then

$$\frac{1}{2}(M_1 + M_2) = \begin{bmatrix}
\frac{1}{2}(B_1 + B_2) & \frac{1}{2}(C - C^T) \\
-\frac{1}{2}(C - C^T) & \frac{1}{2}(B_1 + B_2)
\end{bmatrix},$$

which is yet another completion of M; call it $\tilde M$. Note that $\tilde M$ has the desired form, since the diagonal blocks are identical and the off-diagonal blocks are the negatives of one another. Since $M_1$ and $M_2$ are positive semidefinite, $\tilde M$ is also positive semidefinite. Then, the matrix $\hat A = \frac{1}{2}(B_1 + B_2) + \frac{i}{2}(C - C^T)$ is a positive semidefinite completion of A. □
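The eigenvalue-doubling argument in the proof is easy to verify numerically. The following sketch (not from the dissertation; it assumes numpy and randomly generated data) checks that the eigenvalues of M are those of A with multiplicities doubled, so M is positive semidefinite exactly when A is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random Hermitian positive semidefinite A = B + iC.
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = X @ X.conj().T          # Hermitian PSD by construction
B, C = A.real, A.imag       # B symmetric, C skew-symmetric

# The real symmetric companion matrix from Theorem 4.2.
M = np.block([[B, C], [-C, B]])

eig_A = np.sort(np.linalg.eigvalsh(A))
eig_M = np.sort(np.linalg.eigvalsh(M))

# Each eigenvalue of A appears twice among the eigenvalues of M ...
assert np.allclose(eig_M, np.sort(np.repeat(eig_A, 2)))
# ... so M is positive semidefinite exactly when A is.
assert eig_M.min() > -1e-10
```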

4.2 The Euclidean Distance Completion Problem:

Cycle Completability

An n-by-n matrix $D = (d_{ij})$ is called a (Euclidean) distance matrix if, for some k, there exist points $p_1, p_2, \ldots, p_n \in \mathbb{R}^k$ such that $d_{ij} = \|p_i - p_j\|_2$, $i, j = 1, 2, \ldots, n$, in which $\|\cdot\|_2$ is the Euclidean vector norm on $\mathbb{R}^k$. A partial distance matrix is a partial symmetric matrix in which every specified principal submatrix is a (Euclidean) distance matrix. Generally, it is assumed that all diagonal entries are specified, and in the case of distance matrices the diagonal entries are all 0.
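For concreteness, a distance matrix can be generated directly from a point configuration; this small sketch (numpy, illustrative points only) checks the defining properties just listed:

```python
import numpy as np

# Points in R^2; d_ij = ||p_i - p_j||_2 gives a (Euclidean) distance matrix.
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)

assert np.allclose(D, D.T)            # symmetric
assert np.allclose(np.diag(D), 0.0)   # zero diagonal
# The triangle inequality holds for every triple of points.
n = len(P)
assert all(D[i, k] <= D[i, j] + D[j, k] + 1e-12
           for i in range(n) for j in range(n) for k in range(n))
```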

Here our interest lies in the general question of determining, for each graph G,

conditions on the specified data in a partial distance matrix with graph G that ensure

the existence of a distance matrix completion. This interest is motivated in part by

the “molecular conformation problem” (see [HC] and [dLH]), in which some measured interatomic distances must be fit to a distance matrix of an entire molecule.

The distance completion problem enjoys the property of “inheritance”: in order that a partial symmetric matrix A have a distance matrix completion, any principal submatrix of A must have a distance matrix completion. Thus, it is necessary that A be a partial distance matrix.

Recently, it has been shown that, as in the positive semidefinite case, this obvious necessary condition is sufficient exactly when the graph G of the specified entries of A is chordal [BJ]. Otherwise, more need be known about the data. There is a very strong relationship between the distance matrices and the positive (semi-)definite matrices [B]. Unfortunately, these links do not simply extend to the two completion problems [JT], but, nonetheless, it is reasonable to expect strong analogies between completion results.

Here, the focus is upon nonchordal graphs G and, in particular, upon the role of

completability conditions for a single full cycle. In the case of the distance completion

problem, the conditions for a simple cycle of data are much simpler than in the positive

semidefinite case, both to state and to understand. The purpose here is to answer the

question, parallel to the one resolved in [BJL] for the positive (semi-)definite case,

“for which graphs are the cycle conditions sufficient for a partial distance matrix to have a distance matrix completion?” Importantly, the class of graphs is the same as in the positive (semi-)definite case but, also importantly, there are notable differences

in the details (though not the overall structure) of the proof.

As in [BJL], considerable graph theoretic structure is necessary to carry out this

proof and the same notation and definitions as in [BJL] will generally be adopted.

Nonstandard concepts and notation will be defined as they arise.

In the case of a partial distance matrix, suppose $d_{12}, d_{23}, \ldots, d_{k-1,k}, d_{k1} > 0$ is a k-cycle of specified data. Then, there is a single simple condition that is necessary


and sufficient for a distance matrix completion:
$$2 \max\{d_{12}, d_{23}, \ldots, d_{k-1,k}, d_{k1}\} \le d_{12} + d_{23} + \cdots + d_{k-1,k} + d_{k1}.$$

We refer to this inequality as the polygonal inequality. Its necessity follows from

repeated application of the triangle inequality, while its sufficiency may be seen inductively, initiated with the case of a triangle, in a variety of ways. For example, two are mentioned here. (1) By adjustment of the angle between the largest distance and one adjacent to it, the k distances $d_{12}, d_{23}, \ldots, d_{k-1,k}, d_{k1}$ may be replaced by $k - 1$ distances: a “new” distance lying between the largest plus or minus an adjacent distance, and the $k - 2$ other unused distances. The angle may be chosen so that the

distance, and the k — 2 other unused distances. The angle may be chosen so that the

new set of distances satisfies the polygonal inequality and, thus, is achievable by the

induction hypothesis. (2) If k = 3, the cycle is a triangle and in this case the cycle

condition (i.e. polygonal inequality) is necessary and sufficient for the specified data

to be a distance matrix. Therefore, it may be assumed that there are at least four

distances. Replace the adjacent pair which has the smallest sum by their sum. If this

is not then the largest distance, the polygonal inequality still holds and the induction

hypothesis applies, realizing all k distances with the two “small” distances lying on

a line. Otherwise, if the sum of the two is then the largest distance, the polygonal

inequality still holds, as there are at least four distances originally and the smallest

adjacent sum was chosen. The induction hypothesis again applies, realizing the two

“small” distances on a line. It should be noted that the cycle of distances may always be realized in a plane, i.e. $\mathbb{R}^2$, and that, as in the positive (semi-)definite case, the order of the data around the cycle is irrelevant.
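The polygonal inequality is straightforward to test in code; this is an illustrative sketch (the function name is ours, not the dissertation's):

```python
def satisfies_polygonal(dists, tol=1e-12):
    """Check 2*max(d) <= sum(d) for a cycle of nonnegative distances.

    This is the condition, stated in the text, that is necessary and
    sufficient for a cycle of specified distances to admit a distance
    matrix completion.
    """
    return 2 * max(dists) <= sum(dists) + tol

# Sides 1, 1, 2 are realizable (three collinear points): the equality case.
assert satisfies_polygonal([1, 1, 2])
# Sides 1, 1, 3 violate the triangle inequality, hence no completion.
assert not satisfies_polygonal([1, 1, 3])
# The order of the data around the cycle is irrelevant.
assert satisfies_polygonal([2, 1, 1, 2]) == satisfies_polygonal([1, 2, 2, 1])
```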

4.2.1 Statement of Main Result

In order to describe our main result of this section, several definitions are needed.

We say that a partial distance matrix A is distance cycle completable if every principal submatrix of A corresponding to a minimal cycle in the graph of the specified entries of A has a distance matrix completion, i.e. the data for the cycle satisfies the polygonal inequality. We also informally say that the data satisfy the “cycle conditions.” Recall that an edge subdivision of a graph G on n vertices is a graph G' on n + 1 vertices that results from replacing an edge of G with two edges and a vertex between, and a vertex partition of G (n vertices) is a graph G' (n + 1 vertices) in which a vertex (of degree at least 1) in G is replaced by two adjacent vertices that partition the neighbors of the original vertex. Also recall that a graph $G_2$ is homeomorphic to a graph $G_1$ if $G_2$ may be obtained from $G_1$ via a finite sequence of edge subdivisions, and the graph $G_2$ is built from $G_1$ if $G_2$ may be obtained from $G_1$ via a finite sequence of (at least one) vertex partitions. Note that edge subdivision is a special case of vertex partitioning, so that homeomorphism is a special case of “built from.” The special graph on five vertices that is a single edge subdivision of $W_4$ (= $K_4$) is denoted $\hat W_4$:

Figure 4.4: $\hat W_4$

Our main result may now be stated.


Theorem 4.3 For an undirected graph G, the following four statements are equivalent:

(0) every distance cycle completable, partial distance matrix A, the graph of whose specified entries is G, has a distance matrix completion;

(1) no induced subgraph of G is $W_k$, $k \ge 5$, or can be built from $W_k$, $k \ge 4$;

(2) every induced subgraph of G that contains a homeomorphic image of $K_4$ also contains an actual copy of $K_4$; and

(3) G has a chordal supergraph in which all edges of any 4-clique are already edges of G.

If G satisfies condition (3) of theorem 4.3, we say that G has a 3-clique chordal supergraph. A graph G that satisfies condition (0), the motivating notion, is referred to as distance cycle completable. (The other three conditions of theorem 4.3 are purely structural graph theoretic conditions.) So, according to theorem 4.3, the distance cycle completable graphs are exactly those with 3-clique chordal supergraphs. Of course, chordal graphs and minimal cycles (the most nonchordal graphs) both qualify as distance cycle completable, but many other graphs do also.

The proof of the theorem follows the logic (0)⇒(1)⇒(2)⇒(3)⇒(0). The purely graph theoretic implications (1)⇒(2) and (2)⇒(3) have already been demonstrated in [BJL], which addresses the cycle completable graphs for the positive definite completion problem. Even though these turn out to be the same graphs, we know of no immediate way to deduce the distance completion result from the result of [BJL]. So, section 4.2.2 is devoted to verifying the implications (0)⇒(1) and (3)⇒(0) in the distance case. Some purely graph theoretic technology for the latter implication is also adopted from [BJL].

Though condition (0) only requires the distance cycle conditions on minimal cycles, we shall freely use them for arbitrary cycles. This is justified by the following

observation.

Lemma 4.1 Let A be an n-by-n partial distance matrix, the graph of whose specified entries is G. The data in A satisfies the polygonal inequality for every cycle in G if and only if A is distance cycle completable.

Proof: Necessity is clear, as “distance cycle completable” formally requires less of the data (i.e. that the polygonal inequality is satisfied by just the minimal cycles). First we consider the case in which a non-minimal cycle has a single chord:

Figure 4.5: Non-minimal cycle

The non-minimal cycle is then the union of two minimal cycles, less the common edge.

Suppose the length of the common edge is d and the remaining lengths in the two minimal cycles are $d_1, d_2, \ldots, d_p$ and $d_{p+1}, d_{p+2}, \ldots, d_{p+q}$. Because of the polygonal inequality for the two minimal cycles, we have:
$$d \le d_1 + d_2 + \cdots + d_p$$
$$2d_i \le d + d_1 + d_2 + \cdots + d_p, \qquad i = 1, 2, \ldots, p$$
$$d \le d_{p+1} + d_{p+2} + \cdots + d_{p+q}$$


and
$$2d_j \le d + d_{p+1} + d_{p+2} + \cdots + d_{p+q}, \qquad j = p+1, p+2, \ldots, p+q.$$
Addition of the second and third inequalities yields
$$2d_i \le d_1 + d_2 + \cdots + d_{p+q}, \qquad i = 1, 2, \ldots, p,$$
and addition of the first and fourth yields
$$2d_j \le d_1 + d_2 + \cdots + d_{p+q}, \qquad j = p+1, p+2, \ldots, p+q.$$
Taken together this is the polygonal inequality for the non-minimal cycle. When a non-minimal cycle has more than one chord, a repeated application of this argument leads to the desired result for such a non-minimal cycle. □
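Lemma 4.1 can be spot-checked by random search: whenever the two minimal cycles sharing a chord of length d both satisfy the polygonal inequality, the outer cycle (with the chord removed) does too. A quick randomized sketch (illustrative only):

```python
import random

def polygonal(dists, tol=1e-12):
    """2*max(d) <= sum(d): the cycle condition for a cycle of distances."""
    return 2 * max(dists) <= sum(dists) + tol

random.seed(1)
for _ in range(1000):
    cycle1 = [random.uniform(0, 5) for _ in range(4)]   # d_1, ..., d_p
    cycle2 = [random.uniform(0, 5) for _ in range(3)]   # d_{p+1}, ..., d_{p+q}
    d = random.uniform(0, 5)                            # the common chord
    if polygonal(cycle1 + [d]) and polygonal(cycle2 + [d]):
        # Lemma 4.1: the non-minimal (outer) cycle must then satisfy
        # the polygonal inequality as well.
        assert polygonal(cycle1 + cycle2)
```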

We close this section by noting that, based upon recent work [JM], another graph theoretic description may be added to the list in our theorem. We say that a graph G is a clique sum of two graphs $G_1$ and $G_2$ if each of $G_1$ and $G_2$ contains a copy of $K_p$ for some p and identification of these two copies of $K_p$ results in the single graph G, e.g.

Figure 4.6: Clique sum

It is shown in [JM] that

(4) G is a sequential clique sum of chordal and series-parallel graphs (see [JM])


is equivalent to the other graph theoretic conditions in the theorem. This is useful,

as it shows that distance cycle completable graphs may be efficiently recognized.

4.2.2 Proofs of Necessary Implications

The Implication (0) ⇒ (1)

The proof that (0)⇒(1) is by contrapositive. It is shown that if a graph G contains a forbidden subgraph (an induced $W_k$, $k \ge 5$, or an induced subgraph that is built from a $W_k$, $k \ge 4$), then there is a distance cycle completable, partial distance matrix A, the graph of whose specified entries is G, that has no distance matrix completion.

The proof is constructive. First, data of the desired type is exhibited for the basic forbidden subgraphs $\hat W_4, W_5, W_6, \ldots$, and then it is shown that if such data exists for G, it also exists for a vertex partition of G. The proof is then completed by showing how to embed such data for a forbidden subgraph in a distance cycle completable, partial distance matrix the graph of whose specified entries is otherwise arbitrary. Because of the inheritance property, the resulting data matrix can have no distance completion and thus provides the data necessary for the proof.

The strategy for the proof of this implication is generally the same as it was in

the positive semidefinite case [BJL], but the details are noticeably (and probably

necessarily) different.

Lemma 4.2 None of the k-wheels $W_k$, $k \ge 5$, is a distance cycle completable graph.

Proof: For $k \ge 5$, let A be the k-by-k real partial symmetric matrix given in figure 4.7 below.


$$A = \begin{bmatrix}
0 & 1 & 1 & \cdots & 1 & 1 \\
1 & 0 & 2 & ? & \cdots & 1 \\
1 & 2 & 0 & 2 & & ? \\
\vdots & ? & 2 & \ddots & \ddots & \vdots \\
1 & \vdots & & \ddots & 0 & 2 \\
1 & 1 & ? & \cdots & 2 & 0
\end{bmatrix}$$

Figure 4.7: Bad data for a k-wheel

The graph of the specified entries of A is $W_k$ and the specified entries satisfy the cycle conditions, as each entry is a 1 or 2, so that no entry is larger than the sum of even two of the others. There are $k - 1$ maximal cliques in $W_k$, all of cardinality three: $\{1, 2, 3\}, \{1, 3, 4\}, \ldots, \{1, k-1, k\}$, and $\{1, 2, k\}$. The principal submatrix

corresponding to each of these cliques is either
$$\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 2 \\ 1 & 2 & 0 \end{bmatrix}
\quad\text{or}\quad
\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}.$$

The former is realizable on a line, the latter as an equilateral triangle, so that both are

distance matrices. Thus, A is a distance cycle completable, partial distance matrix.

Suppose the data in A were to admit a distance completion. In view of the

distances among the vertices 1, 2, and 3, the vertex 1 would be the midpoint of a line

segment joining vertices 2 and 3. Similarly, 1 is the midpoint of a line segment (of

the same length as the segment joining 2 and 3) joining 3 and 4. It follows that 2 and

4 must coincide. The same reasoning shows that 3 and 5 must coincide, 4 and 6 (and


2), 5 and 7 (and 3), and so on. Thus, vertex k must coincide with vertex 2 or vertex

3 and, then, cannot have distance 1 (as specified) from vertex 2. This contradiction

shows that A has no distance completion and completes the proof. □
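The wheel data of figure 4.7 can be assembled and checked mechanically. This sketch (numpy; k = 6 for concreteness, and nan marks unspecified entries, a convention of ours) confirms that every maximal clique of $W_k$ carries a distance matrix, so A is a distance cycle completable partial distance matrix even though, as just shown, no completion exists:

```python
import numpy as np

k = 6                              # any k >= 5 exhibits the same behavior
Q = np.full((k, k), np.nan)        # nan marks unspecified entries
np.fill_diagonal(Q, 0.0)
Q[0, 1:] = Q[1:, 0] = 1.0          # hub (vertex 1) to every rim vertex
for i in range(1, k - 1):          # consecutive rim vertices
    Q[i, i + 1] = Q[i + 1, i] = 2.0
Q[1, k - 1] = Q[k - 1, 1] = 1.0    # closing rim edge {2, k}

# The maximal cliques of W_k are triangles; each specified 3-by-3 block
# must be a distance matrix (triangle/polygonal inequality).
cliques = [(0, i, i + 1) for i in range(1, k - 1)] + [(0, 1, k - 1)]
for (a, b, c) in cliques:
    da, db, dc = Q[b, c], Q[a, c], Q[a, b]
    assert 2 * max(da, db, dc) <= da + db + dc
```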

The graph $W_4$ is distance cycle completable as it is the complete graph $K_4$. However, we now show that any graph built from $W_k$, $k \ge 4$, is not distance cycle completable. The graph $\hat W_4$ is built from $W_4$. To see that $\hat W_4$ is not distance cycle completable, consider the following data:

Figure 4.8: Bad data for $\hat W_4$

The specified distances again trivially satisfy the cycle conditions, but vertices {4, 1, 5} lie on a line with vertex 1 at the center. Similarly, {4, 1, 3} lie on a line with vertex 1 at the center. This means that vertex 5 is coincident with vertex 3, which is impossible: vertex 2 cannot be both distance one and distance two from the same point. Thus, $\hat W_4$ is not distance cycle completable. The preceding discussion yields:

Lemma 4.3 The graph $\hat W_4$ is not distance cycle completable.

Any graph built from $W_4$ is either $\hat W_4$ or can be built from $\hat W_4$, since every vertex partition of $W_4$ is isomorphic to $\hat W_4$. This observation, together with the preceding and following lemmas, suffices to prove that no graph built from $W_k$, $k \ge 4$, is distance cycle completable.

Lemma 4.4 If G is a graph that is not distance cycle completable and G' is obtained

from G either by subdividing an edge or by partitioning a vertex, then G' is also not

distance cycle completable.

Proof: If G' is an edge subdivision of G, then G' is also a vertex partition of G in which one of the neighbor sets has cardinality one. Thus, only the case in which

G' is a vertex partition of G need be considered. Without loss of generality, take the

partitioned vertex to be n and suppose that $1, 2, \ldots, j$ are the vertices adjacent to n and $j+1, j+2, \ldots, k$ are the vertices adjacent to $n + 1$ in G', $1 \le j < k$.

Figure 4.9: Bad data for vertex partitioning

Let $A = (a_{ij})$, the graph of whose specified entries is G, be any n-by-n partial distance matrix that satisfies the cycle conditions but has no distance matrix completion. Define the $(n+1)$-by-$(n+1)$ partial symmetric matrix $A' = (a'_{ij})$ as follows:
$$a'_{i,n} = a'_{n,i} = ?, \qquad i = j+1, j+2, \ldots, k,$$
$$a'_{i,n+1} = a'_{n+1,i} = ?, \qquad i = 1, 2, \ldots, j \ \text{and} \ i = k+1, \ldots, n-1,$$
$$a'_{i,n+1} = a'_{n+1,i} = a_{i,n}, \qquad i = j+1, j+2, \ldots, k,$$
$$a'_{n,n+1} = a'_{n+1,n} = a'_{n+1,n+1} = 0,$$
and
$$a'_{ij} = a_{ij} \quad \text{otherwise.}$$
The graph of A' is G' with a distance of zero on the edge $\{n, n+1\}$.

Any minimal cycle in G' is either present in G or is a cycle of G with the new edge $\{n, n+1\}$ inserted. Since the edge $\{n, n+1\}$ has a distance of zero, the cycle

conditions hold in A'. The matrix A' is also a partial distance matrix, as the edge

$\{n, n+1\}$ is, by construction, a maximal clique of G', and all other cliques in G' correspond to a unique clique in G.

Assume that there is a distance completion $\hat A'$ of A'. The vertices n and $n + 1$ must coincide spatially, and the triangle inequality on the entries of $\hat A'$ then requires that
$$\hat a'_{n,i} = \hat a'_{n+1,i}, \qquad i = 1, 2, \ldots, n-1.$$
As a result, if A' has a distance completion, then A must have a distance completion. This contradiction completes the proof. □
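The construction of A' in the proof can be written out directly; this is an illustrative sketch (the helper name and the nan-for-unspecified convention are ours, not the dissertation's):

```python
import numpy as np

def partition_vertex(A, moved):
    """Given a partial distance matrix A (nan = unspecified), partition its
    last vertex n: transfer the neighbors listed in `moved` to a new vertex
    n+1 joined to n by an edge of length 0, as in the proof of lemma 4.4.
    Illustrative helper only."""
    n = A.shape[0]
    Ap = np.full((n + 1, n + 1), np.nan)
    Ap[:n, :n] = A
    for i in moved:                            # edges moved to the new vertex
        Ap[i, n] = Ap[n, i] = A[i, n - 1]
        Ap[i, n - 1] = Ap[n - 1, i] = np.nan   # now unspecified at vertex n
    Ap[n - 1, n] = Ap[n, n - 1] = 0.0          # the zero-length edge {n, n+1}
    Ap[n, n] = 0.0
    return Ap

# Partition vertex 3 of a unit triangle, moving its edge to vertex 2.
T = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
Ap = partition_vertex(T, moved=[1])
assert Ap[2, 3] == 0.0 and Ap[1, 3] == 1.0 and np.isnan(Ap[1, 2])
```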

It follows from lemma 4.4 that if G is not distance cycle completable, then any graph G' built from G is also not distance cycle completable. To complete the proof of the implication (0)⇒(1), data for these forbidden subgraphs must be embedded in data for larger graphs in such a way that the necessary conditions still hold. The larger graphs are then seen not to be distance cycle completable because, if they were, the smaller ones would be by inheritance.

Proof of the implication (0) ⇒ (1)

Suppose that G' is a connected graph on n vertices that contains, as an induced subgraph, a graph G that is one of the forbidden subgraphs. If G' were not connected, we would need only consider the connected component containing G. For convenience, assume that the vertices of G are $1, 2, \ldots, k$. We assume, without loss of generality, that $k < n$, so that the remaining vertices of G' are $k+1, k+2, \ldots, n$. As in the proofs of lemmas 4.2, 4.3, and 4.4, assign distances to the edges of G. We need to


demonstrate data for the remaining edges of G' that does not violate the necessary

conditions.

Suppose that the subgraph of G' induced by vertices $k+1, k+2, \ldots, n$ has $p$ ($\ge 1$)

connected components. (Since G' is connected, each of these components has an edge

between one of its vertices and a vertex of G.) Within each of these components assign

the distance 0 to each edge. For each component, this has the effect of assigning all

vertices of that component the same spatial location. (Note that an edge of length 0

does not affect satisfaction of the cycle conditions.) We assume then that k + p = n

and treat each component as a single vertex, implicitly consolidating edges as needed.

We assign distances to the edges connecting vertices $k+1, k+2, \ldots, k+p$ with the

vertices of G as follows. Since G is $W_k$ or built from $W_k$, G has a natural “center”

vertex or cluster of vertices resulting from partitions of the center. In the latter

case, all specified distances among vertices in the central cluster are 0; the cluster is

connected, and, so. all vertices in the cluster must occupy the same spatial location.

We thus identify all vertices in the central cluster and, in both cases, refer to “the” center vertex, which, we suppose, is vertex 1.

Consider now a vertex $k + i$, $i = 1, 2, \ldots, p$. If $k + i$ is adjacent to 1 (case one),

we assign that edge the distance 0. All other edges connecting vertices in G to k + i

are then assigned distances by calculating the shortest path in G from that vertex

(i.e. k + i) to vertex 1. This insures that the polygonal inequality holds for any cycle

containing k + i. Any clique in G' that includes vertex k + i has cardinality at most

four. If the cardinality is less than four, there is nothing further to check. If it is four,

then the clique must include vertex 1, as well as two vertices j and $j + 1$ adjacent in the cycle of G. For the vertices j and $j + 1$, the distances assigned to the edges $\{j, k+i\}$ and $\{j+1, k+i\}$ must be $d_{1j}$ and $d_{1,j+1}$, the distances from 1 to j and to


$j + 1$, respectively. The data specified by our prescription for these four points is then
$$\begin{bmatrix}
0 & 0 & d_{1j} & d_{1,j+1} \\
0 & 0 & d_{1j} & d_{1,j+1} \\
d_{1j} & d_{1j} & 0 & d_{j,j+1} \\
d_{1,j+1} & d_{1,j+1} & d_{j,j+1} & 0
\end{bmatrix},$$
which is the distance matrix of the triangle of G:

Figure 4.10: Data for a triangle

with vertex 1 repeated.

If $k + i$ is not adjacent to vertex 1 (case 2), choose an arbitrary one of its neighbors

in G and assign that edge the distance 0. Again, assign distances to all other edges

connecting vertices of G to k + i via shortest path calculations. This ensures the

polygonal inequalities for any cycle containing $k + i$. In this case, no clique containing $k + i$ can be of cardinality more than three. Because of the triangle/polygonal

inequalities, the data associated with such cliques are distance matrices.

By assigning edge distances throughout G' we now have a partial symmetric matrix A, the graph of whose specified entries is G'. Because we have checked the data associated with cliques of G', A is a partial distance matrix. Furthermore, the polygonal inequalities associated with the cycles of G' are satisfied, so that A is distance

cycle completable. (Note that, if there is a minimal cycle in G' containing more than

one of the k + i. it may also be identified with a cycle of G by our prescription.) If A


had a distance matrix completion $\hat A$, then $\hat A[\{1, 2, \ldots, k\}]$ would be a distance matrix completion of $A[\{1, 2, \ldots, k\}]$, a contradiction that completes the proof. □

The Implication (3) ⇒ (0)

To prove (3)⇒(0), we must show that if G is a graph that has a 3-clique chordal supergraph, then every distance cycle completable, partial distance matrix whose graph is G has a distance completion. As in the positive definite case, much of the proof is combinatorial in nature. The logic of the proof is, as in [BJL], to sequentially add edges $e_1, e_2, \ldots, e_m$ to a given graph G to obtain a 3-clique chordal supergraph H. Upon addition of the edge $e_i$, all new minimal cycles are considered. For any two of these minimal cycles, a common distance can be chosen for $e_i$ so as to make each cycle completable. Then, an application of Helly's theorem (see, e.g. [Ro]) produces a common distance for all new cycles.

Because two minimal cycles may intertwine in a variety of ways, difficulties arise in considering a general pair of minimal cycles containing a given edge $e_i$. In [BJL] there are a number of graph theoretic lemmas dealing with these difficulties for graphs with 3-clique chordal supergraphs. These lemmas culminate in lemma 9.6 of [BJL], which is stated below as lemma 4.5.

Before this lemma can be stated, another definition is needed. Given a graph G, an edge $\{a, b\}$ of G and a cycle C, we say that G' is obtained from G by replacing the edge $\{a, b\}$ with the cycle C if, after two of the vertices of C are identified with a and b, the graph G' is $(G - \{a, b\}) \cup C$. Pictorially,


Figure 4.11: Replacing edge $\{a, b\}$ with the cycle C

A necklace is either a graph that is a cycle plus one additional chord, or a graph obtained from a cycle $C_k = [c_0, c_1, \ldots, c_k]$ by choosing a subset of the edges $\{c_0, c_1\}, \{c_1, c_2\}, \ldots, \{c_{k-1}, c_k\}$ and replacing each edge in the subset with a cycle. Note that the edge $\{c_k, c_0\}$ is excluded. The base of a necklace that is a cycle plus an additional chord is the chord; otherwise, it is the edge $\{c_k, c_0\}$. Recall that if $G = (V, E)$ and $W \subseteq V$, then $G_W$ denotes the subgraph of G induced by the vertex set W.

Lemma 4.5 Let $G = (N, E)$ be a graph that has a 3-clique chordal supergraph H, let $e = \{a, b\}$ be an edge of H that is not in G and let $G' = (N, E \cup \{e\})$. Let $C = (U, E')$ and $D = (V, E'')$ be distinct minimal cycles in G' with common edge e, and express C and D as the union of paths
$$C = [a_s, a_0] \cup [a_0, a_1] \cup [a_1, a_2] \cup \cdots \cup [a_{s-1}, a_s]$$
$$D = [b_s, b_0] \cup [b_0, b_1] \cup [b_1, b_2] \cup \cdots \cup [b_{s-1}, b_s]$$
in which $a_0 = b_0 = a$, $a_s = b_s = b$, $U \cap V = \{a_0, a_1, \ldots, a_s\} = \{b_0, b_1, \ldots, b_s\}$ and s is a positive integer. Then $G'_{U \cup V}$ is a necklace with base $\{a, b\}$.

One more lemma is required for the proof of (3)⇒(0). A corresponding statement for the positive definite case, lemma 9.7, is found in [BJL]. Since the non-graph-theoretic parts of the proof in [BJL] require only a chordal theorem and assumed cycle conditions, the proof in the distance case is similar.


Lemma 4.6 Let $G = (N, E)$ be a graph that has a 3-clique chordal supergraph $H = (N, F)$, and let A be a distance cycle completable, partial distance matrix whose graph is G. Let $e = \{r, s\}$ be an edge in F that is not in E and let $G' = (N, E \cup \{e\})$. For any real number x, let $A_x$ be the matrix obtained by specifying the r,s and s,r entries of A to be x. Then, if $C = (U, E')$ and $D = (V, E'')$ are distinct minimal cycles in G' with common edge e, there exists a nonnegative number x such that both $A_x[U]$ and $A_x[V]$ have distance completions.

Proof: By lemma 4.5, $G'_{U \cup V}$ is a necklace with base $\{r, s\}$. Since $G_{U \cup V}$ is $G'_{U \cup V}$ with the edge $\{r, s\}$ deleted, it is the union of graphs $G_i = (W_i, E_i)$, $i = 1, 2, \ldots, k$, in which each $G_i$ is either $K_2$ or a minimal cycle of G. If $G_i$ is a minimal cycle, let $\hat G_i$ be the complete graph on $W_i$, and if $G_i$ is $K_2$, let $\hat G_i = G_i$. Let S be the union of the $\hat G_i$, $i = 1, 2, \ldots, k$. Then S is a chordal supergraph of $G_{U \cup V}$, as any cycle in S is a cycle in one of the $\hat G_i$. Now if $G_i$ is a minimal cycle, $A[W_i]$ has a distance completion $B[W_i]$ by assumption. Let $B[U \cup V]$ be the matrix obtained from $A[U \cup V]$ by replacing each $A[W_i]$ for which $G_i$ is a minimal cycle by $B[W_i]$. Then the graph of $B[U \cup V]$ is S and, for each maximal clique W of S, $B[W]$ is a distance matrix. The graph S is chordal, so, by the chordal theorem [BJ], $B[U \cup V]$ has a distance completion $M[U \cup V]$, which is also a distance completion of $A[U \cup V]$. Let x be the r,s entry of $M[U \cup V]$. Then $A_x[U]$ and $A_x[V]$ have the distance completions $M[U]$ and $M[V]$,

respectively, which completes the proof. □

We may now complete the proof of the implication (3)⇒(0). Let $e_1, e_2, \ldots, e_m$ be the edges in a 3-clique chordal supergraph H that are not in G, and let $G_i = (N, E \cup \{e_1, e_2, \ldots, e_i\})$, $i = 0, 1, \ldots, m$. Then $G_0 = G$ and $G_m = H$. We assume, without loss of generality, that G is connected. Let A be a distance cycle completable partial distance matrix whose graph is G. We wish to exhibit a distance matrix completion of A.


Consider first $G_1$. Write $e_1 = \{r, s\}$ and, for any real x, let $A_x$ be the matrix obtained by specifying the r,s and s,r entries of A to be x. There are two possibilities regarding the minimal cycles in $G_1$.

(i) The edge $e_1$ is contained in exactly one minimal cycle $C = (W, E')$ of $G_1$. Since the induced subgraph $(G_1)_W$ is a cycle, $G_W$ is a path, and thus a chordal graph. By the chordal theorem [BJ], $A[W]$ has a distance matrix completion. Let x be the r,s entry of this completion. Then $A_x[W]$ has the same completion. Either the edge $\{r, s\}$ or W (if $|W| = 3$) is the only maximal clique of $G_1$ that is not a maximal clique of G, so $A_x$ is a partial distance matrix. Since C is the only minimal cycle in $G_1$ that is not a minimal cycle in G, $A_x$ is also distance cycle completable.

(ii) The edge $e_1$ is contained in two or more minimal cycles of $G_1$. Let $C_i = (W_i, E_i)$, $i = 1, 2, \ldots, p$, be the minimal cycles in $G_1$ containing the edge $e_1$. For $i = 1, 2, \ldots, p$, let $I_i = \{x \in \mathbb{R}_+ : A_x[W_i] \text{ has a distance matrix completion}\}$. If $|W_i| = 3$, we interpret this to mean $A_x[W_i]$ is a distance matrix. By lemma 4.6, $I_i \cap I_j \neq \emptyset$ for all distinct $i, j \in \{1, 2, \ldots, p\}$. Therefore, by Helly's theorem on the line, $\bigcap_{i=1}^p I_i \neq \emptyset$. Let $x \in \bigcap_{i=1}^p I_i$. Then $A_x[W_i]$ has a distance matrix completion for $i = 1, 2, \ldots, p$. By hypothesis, $e_1$ belongs to no 4-clique in $G_1$. Thus, the only maximal cliques in $G_1$ that are not in G are those $W_i$ with $|W_i| = 3$, or only $\{r, s\}$ if $|W_i| \ge 4$ for all $i = 1, 2, \ldots, p$. It follows that $A_x$ is a partial distance matrix and distance cycle completable.

If $m = 1$, we are done. Otherwise, $G_1$ has a 3-clique chordal supergraph H and $A_x$ is a distance cycle completable, partial distance matrix, the graph of whose specified entries is $G_1$. Therefore, repeating the above process for $e_2, e_3, \ldots, e_m$, we obtain a partial distance matrix B with chordal graph H. By the chordal theorem [BJ], B


has a distance matrix completion M, which is also a distance completion of A. This

completes the proof of the implication (3)⇒(0), which completes the proof of the

theorem.
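The only analytic fact borrowed from Helly's theorem above is its one-dimensional case: closed intervals that pairwise intersect share a common point. A minimal sketch of that step (illustrative intervals, not data from the text):

```python
def helly_on_line(intervals):
    """Helly's theorem in dimension one: if closed intervals [a, b] pairwise
    intersect, then max(lefts) <= min(rights), and any point between them is
    common to all. Returns such a point, or None if some pair is disjoint."""
    lo = max(a for a, b in intervals)
    hi = min(b for a, b in intervals)
    return lo if lo <= hi else None

# The sets I_i of workable values x for the new edge are intervals; pairwise
# intersection (lemma 4.6) yields a value of x good for every new cycle.
I = [(0.0, 3.0), (1.0, 4.0), (2.5, 5.0)]
assert all(max(a, c) <= min(b, d) for (a, b) in I for (c, d) in I)  # pairwise
x = helly_on_line(I)
assert x is not None and all(a <= x <= b for a, b in I)
```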

4.3 The Combinatorially Symmetric P-matrix

Completion Problem

Positive definite matrices are Hermitian matrices for which all principal minors are positive. If we relax the symmetry required by Hermiticity, we get the P-matrices. An n-by-n real matrix is called a P-matrix ($P_0$-matrix) if all its principal minors are positive (nonnegative) (see, e.g. [BP] or [HJ2]). This class of matrices generalizes many other important classes of matrices (such as M-matrices and totally positive matrices), has useful structure (such as inverse closure, inheritance by principal submatrices, and wedge type eigenvalue restrictions), and arises in applications (such as the linear complementarity problem, and issues of local invertibility of functions).

Here, we consider the P-matrix completion problem under the assumptions that the partial matrix is square, all diagonal entries are specified, and the data is combinatorially symmetric (the j,i entry is specified if and only if the i,j entry is specified). Further, since the property of being a P-matrix is inherited by principal submatrices, it is necessary that the partial matrix be a partial P-matrix, i.e. every fully specified principal submatrix must itself be a P-matrix. Of all these assumptions, the only one that is truly restrictive is combinatorial symmetry, and the general case, in which this assumption is relaxed, is commented upon later.

In each of the completion problems (positive definite, M-matrix, inverse M-matrix, and totally positive), there are significant combinatorial restrictions (in addition to the necessity of inheritance) on partial matrices, even when combinatorial symmetry is assumed, in order to ensure a desired completion. For example, as mentioned before, the condition on partial positive definite matrices necessary to ensure a positive

definite completion (without further knowledge of the data) is that the undirected

graph of the symmetric data be chordal [GJSW]. P-matrices are a generalization of

real positive semidefinite matrices in which the matrix is no longer required to be

symmetric. Interestingly, it is shown here that, in the case of P-matrix completions,

there are no combinatorial restrictions necessary to ensure a P-matrix completion

other than the combinatorial symmetry assumption. Every combinatorially symmet­

ric partial P-matrix has a P-matrix completion. However, when the combinatorial

symmetry assumption is relaxed, the conclusion no longer holds, and the question of

which directed graphs for the specified entries ensure that a partial P-matrix has a

P-matrix completion is, in general, open. All 3-by-3 partial P-matrices have P-matrix

completions, but we exhibit a 4-by-4 partial P-matrix with just one unspecified entry

and no P-matrix completion.

Let A be an n-by-n partial P-matrix with one pair of symmetrically placed unspecified entries. By permutation similarity it can be assumed without loss of generality that the unspecified entries are in the 1,n and n,1 positions, i.e.,

$$A = \begin{bmatrix} a_{11} & a_{12}^T & ? \\ a_{21} & A_{22} & a_{23} \\ ? & a_{32}^T & a_{33} \end{bmatrix}$$

in which A_{22} is (n-2)-by-(n-2) and a_{12}, a_{21}, a_{23}, a_{32} ∈ ℝ^{n-2}. Define

$$A(x, y) = \begin{bmatrix} a_{11} & a_{12}^T & x \\ a_{21} & A_{22} & a_{23} \\ y & a_{32}^T & a_{33} \end{bmatrix}$$

and denote A(0, 0) by A_0. Also define C = {α ⊆ N : 1, n ∈ α} and let A_α = A(x, y)[α]. Note that since 1, n ∈ α for all α ∈ C, x and y are unspecified entries in every A_α, α ∈ C.

L e m m a 4.7 Every partial P-matrix with one pair of symmetrically placed unspecified

entries has a P-matrix completion.

Proof. Let A be an n-by-n partial P-matrix with exactly one pair of symmetrically placed unspecified entries. To find a P-matrix completion of A we must find x, y such that det A_α > 0 for all α ∈ C (the remaining principal minors of A are positive by hypothesis). For each α ∈ C, α = {i_1 = 1, i_2, ..., i_{|α|} = n}, define

$$\begin{aligned} a_\alpha &= \det A(x, y)\left[\{i_2, i_3, \ldots, i_{|\alpha|-1}\}\right] \\ b_\alpha &= \det A_0\left[\{i_2, i_3, \ldots, i_{|\alpha|}\}; \{i_1, i_2, \ldots, i_{|\alpha|-1}\}\right] \\ c_\alpha &= \det A_0\left[\{i_1, i_2, \ldots, i_{|\alpha|-1}\}; \{i_2, i_3, \ldots, i_{|\alpha|}\}\right] \\ d_\alpha &= \det A_0[\alpha]. \end{aligned}$$

By using Sylvester's identity (see, e.g., section 0.8.6 of [HJ1]) we see that

$$\det A_\alpha = -a_\alpha xy + (-1)^{|\alpha|-1} b_\alpha x + (-1)^{|\alpha|-1} c_\alpha y + d_\alpha.$$

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 93

Since A(x, y) is a partial P-matrix, a_α > 0 for all α ∈ C. Then x, y can be chosen so that xy < 0 and -a_α xy > (-1)^{|α|} b_α x + (-1)^{|α|} c_α y - d_α. In order to have xy < 0, choose y = -x. Substituting y = -x gives

$$\det A_\alpha = a_\alpha x^2 + (-1)^{|\alpha|-1}(b_\alpha - c_\alpha)x + d_\alpha,$$

a quadratic in x with positive leading coefficient, so the line y = -x intersects the hyperbola det A_α = 0 in at most two points. Define m(α) to be the maximum of the absolute values of the real roots of this quadratic (and m(α) = 0 if there are none). Then det A_α > 0 for each x such that |x| > m(α). In order to find a P-matrix completion of A we must find a pair x, y that works for all α ∈ C. The matrix A(x, y) is a P-matrix for all x, y such that |x| > max_{α ∈ C} m(α) and y = -x. □

This lemma can be used sequentially to find a completion of any combinatorially

symmetric partial P-matrix. The lemma proves the case in which there is one pair of symmetrically placed unspecified entries. Assume there is a P-matrix completion of every partial P-matrix with k - 1 pairs of symmetrically placed unspecified entries and let A be a partial P-matrix with k pairs of symmetrically placed unspecified entries. Choose one symmetrically placed pair i, j of unspecified entries of A. Each maximal principal submatrix that this pair completes (there are no other unspecified entries in such a maximal submatrix) is a partial P-matrix by inheritance and, by lemma 4.7, can be completed to a P-matrix. For each such maximal principal submatrix A[α], let x_α be the value of the unspecified entry as given by lemma 4.7. Then choosing x so that |x| > max{|x_α| : i, j ∈ α, α maximal} completes each of these maximal principal submatrices. We are then left with a partial P-matrix with k - 1 pairs of symmetrically placed unspecified entries, which can be completed to a P-matrix by the induction hypothesis. Note that the order of completion is immaterial (as long as combinatorial symmetry is maintained). This proves our main result.


Theorem 4.4 Every combinatorially symmetric partial P-matrix has a P-matrix

completion.

As mentioned above, when the combinatorial symmetry assumption is relaxed, the conclusion of the theorem no longer holds. The question of which directed graphs for the specified entries ensure that a partial P-matrix has a P-matrix completion is, in general, open. However, we do know the following.

Proposition 4.1 Every 3-by-3 partial P-matrix has a P-matrix completion.

Proof. The combinatorially symmetric case is covered by the lemma above. The only case that remains to be considered is the case in which A is a 3-by-3 partial P-matrix with one unspecified entry. Note that if there are more unspecified entries, values may be assigned to entries, making sure the 2-by-2 principal minors are positive, until either one pair of symmetrically placed unspecified entries or only one unspecified entry remains.

By permutation similarity, it can be assumed without loss of generality that the unspecified entry is in the 3,1 position. It can also be assumed, by positive left diagonal multiplication and diagonal similarity (which both preserve P-matrices), that there are ones on the main diagonal and on the superdiagonal. Then A is of the form

$$A = \begin{bmatrix} 1 & 1 & c \\ a & 1 & 1 \\ y & b & 1 \end{bmatrix}$$

in which a, b, c < 1 since A is a partial P-matrix. In order to complete A to a P-matrix, y must be chosen so that the {1,3} principal minor is positive (which yields yc < 1) and det A = 1 + abc - a - b + y(1 - c) is positive.

There are several cases to consider. If c < 0, choose y > 0 and large enough to make det A positive. Similarly, if c > 1, choose y < 0 and large in absolute value. If c = 1, then det A = 1 + ab - a - b = (1 - a)(1 - b), which is positive since a, b < 1. So, y = 0 will give a P-matrix completion of A. All that remains is 0 < c < 1. In this case, if a, b > 0 or if ab < 0, choose y = ab. This gives yc = abc < 1 (since a, b, c < 1) and

$$\det A = 1 + abc - a - b + ab(1 - c) = 1 - a - b + ab = (1 - a)(1 - b) > 0.$$

For a, b < 0 the term 1 + abc - a - b in the determinant of A is positive. So, y = 0 will result in a P-matrix completion of A. Thus, every 3-by-3 partial P-matrix has a P-matrix completion. □
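The case analysis in this proof is easy to exercise numerically. The sketch below (an illustration under the proof's normalization, with test values chosen here) selects y by the rule of each case and confirms the result is a P-matrix.

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def is_P_matrix(A):
    n = len(A)
    return all(det([[A[i][j] for j in idx] for i in idx]) > 0
               for k in range(1, n + 1)
               for idx in combinations(range(n), k))

def complete_3x3(a, b, c):
    """Pick y for A = [[1,1,c],[a,1,1],[y,b,1]] (a, b, c < 1) following the
    case analysis in the proof above."""
    # det A > 0 is equivalent to comparing y with this threshold:
    bound = (a + b - 1 - a * b * c) / (1 - c) if c != 1 else 0.0
    if c <= 0:
        y = max(0.0, bound) + 1.0        # y >= 0 large: yc <= 0 < 1, det > 0
    elif c == 1:
        y = 0.0                          # det = (1-a)(1-b) > 0 already
    elif c > 1:
        y = min(0.0, bound) - 1.0        # y < 0 large: yc < 0 < 1, det > 0
    else:                                # 0 < c < 1
        y = 0.0 if (a < 0 and b < 0) else a * b
    return [[1, 1, c], [a, 1, 1], [y, b, 1]]

for a, b, c in [(0.5, 0.5, 0.5), (-2, 0.5, -1), (0.5, -3, 2), (-1, -1, 0.9)]:
    assert is_P_matrix(complete_3x3(a, b, c))
```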

Proposition 4.2 For every n ≥ 4, there is a partial P-matrix with exactly one unspecified entry for which there is no P-matrix completion.

Proof: The matrix

$$A = \begin{bmatrix} 1 & -1 & 1 & 1 \\ 2 & 1 & -1 & 1 \\ 0 & 1 & 1 & 2 \\ y & -10 & -1 & 1 \end{bmatrix}$$

is a partial P-matrix with no P-matrix completion. It is easy to check that all 2-by-2 and 3-by-3 principal minors that do not include both rows 1 and 4 are positive. However, the two 3-by-3 principal minors that involve y cannot simultaneously be made positive. In order for the determinant of A[{1,2,4}] to be positive it must be the case that y < -7/2, while a positive determinant for A[{1,3,4}] requires that y > -3. Thus, there is no P-matrix completion of A. This data can be embedded as a principal submatrix, by putting 1's on the diagonal and 0's in the other specified positions, to produce a partial P-matrix with one unspecified entry and no P-matrix completion for any n ≥ 4. □
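The obstruction can be confirmed by brute force. With the matrix as read here, det A[{1,2,4}] = -7 - 2y and det A[{1,3,4}] = 3 + y, and no y makes both positive (a numerical sketch):

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def A_of(y):
    return [[1, -1, 1, 1],
            [2, 1, -1, 1],
            [0, 1, 1, 2],
            [y, -10, -1, 1]]

def pminor(M, idx):
    return det([[M[i][j] for j in idx] for i in idx])

# Every principal minor avoiding rows 1 and 4 together is positive:
for k in (1, 2, 3):
    for idx in combinations(range(4), k):
        if not (0 in idx and 3 in idx):
            assert pminor(A_of(0), idx) > 0

# det A[{1,2,4}] = -7 - 2y and det A[{1,3,4}] = 3 + y: never both positive.
for y in [s / 10 for s in range(-60, 61)]:
    assert pminor(A_of(y), (0, 1, 3)) <= 0 or pminor(A_of(y), (0, 2, 3)) <= 0
```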

Other completion problems, intermediate between the positive definite completion problem and the combinatorially symmetric P-matrix completion problem, are open and may be of interest. For example, when does a sign symmetric partial P-matrix have a sign symmetric P-matrix completion, and what may be said about the infimum of the Frobenius norms of completions of combinatorially symmetric partial P-matrices?

4.4 The Totally Positive Completion Problem

An n-by-n matrix is said to be totally positive (nonnegative) if every minor

(principal and non-principal) is positive (nonnegative). In particular, this means

that every entry of a totally positive (nonnegative) matrix is positive (nonnegative).

Further discussion of totally positive matrices may be found in [K] or [A]. This class

of matrices arises in many applications including approximation theory, geometric

design, and wavelets. It is interesting to note that according to [GM] one can test

whether or not a matrix is strictly totally positive in polynomial time. This uses the

fact that only the determinants of submatrices consisting of consecutive rows and

columns need to be checked [K].
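Both characterizations can be sketched in code: a brute-force test over all minors for total nonnegativity, and the consecutive-rows-and-columns shortcut for strict total positivity (the Vandermonde example is an illustrative choice):

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

def is_totally_nonnegative(M):
    n = len(M)
    return all(minor(M, rows, cols) >= 0
               for k in range(1, n + 1)
               for rows in combinations(range(n), k)
               for cols in combinations(range(n), k))

def is_totally_positive(M):
    """Strict total positivity needs only the minors on consecutive rows
    and columns (the polynomial-time shortcut noted above)."""
    n = len(M)
    return all(minor(M, range(i, i + k), range(j, j + k)) > 0
               for k in range(1, n + 1)
               for i in range(n - k + 1)
               for j in range(n - k + 1))

V = [[1, 1, 1], [1, 2, 4], [1, 3, 9]]   # Vandermonde at nodes 1 < 2 < 3
assert is_totally_positive(V) and is_totally_nonnegative(V)
assert not is_totally_nonnegative([[0, 1], [1, 0]])
```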

Here we consider the question: for which graphs G does every partial totally nonnegative matrix, the graph of whose specified entries is G, have a totally nonnegative completion? Total nonnegativity is inherited by submatrices. Therefore, it is a necessary condition that every fully specified submatrix be totally nonnegative.

Total nonnegativity is not, however, preserved by permutation similarity. This can easily be seen by noting that the 1,1 entry of a matrix enters positively in every minor in which it occurs. Therefore, total positivity is preserved when this entry is arbitrarily increased. However, if the first and second rows and columns are interchanged, this entry is then in the 2,2 position and enters negatively in some minors; making it larger may then make some of these minors negative.

Because total positivity is not preserved by permutation similarity we will restrict

our attention to labeled graphs. Labeled graphs are those in which the numbering of the vertices is fixed. Note that two partial totally positive matrices, the graphs of whose specified entries are isomorphic, may not both have totally positive completions. So, the labeling of the graphs is important.

As mentioned in the introduction, a clique is an induced subgraph that is a complete graph. A block clique graph is a chordal graph in which every pair of maximal cliques C_i, C_j, C_i ≠ C_j, intersect in at most one vertex; that is, |C_i ∩ C_j| ≤ 1. A monotonically labeled block clique graph is a labeled block clique graph in which, for every pair of intersecting cliques such that C_i ∩ C_j = {u}, the labeling in the two cliques is such that v < u for all v ∈ C_i - u and w > u for all w ∈ C_j - u. Then, a monotonically labeled block clique graph is of the form

Figure 4.12: Monotonically labeled block clique graph


and a matrix, the graph of whose specified entries is a monotonically labeled block clique graph, is of the form

Figure 4.13: Partial matrix with a monotonically labeled block clique graph

In the remaining discussion all graphs will be assumed to be connected. This assumption may be made without loss of generality since totally nonnegative matrices are closed under direct sums. To prove this we will use the classical fact of Frobenius-König (see [Ry] or [S] and its references) that if an n-by-n matrix contains a zero block of size p-by-q in which p + q ≥ n + 1, then the matrix is singular [Ry]. Although we use only this zero block result, we prove the more general result below. Note that this result is a consequence of theorem 2.2 in [CJRW], which may also be viewed as a generalization of the classical Frobenius-König result.

Lemma 4.8 Suppose that A ∈ M_n(F) has a p-by-q submatrix of rank r. Then A is singular whenever p + q ≥ n + r + 1.

Proof. Without loss of generality, independently permute the rows and columns of A so that the p-by-q submatrix of rank r is in the lower left corner of A. Using row and column operations, that submatrix can be reduced so that A becomes

$$\hat{A} = \begin{bmatrix} B_1 & B_2 & B_3 \\ 0 & C & D_1 \\ 0 & 0 & D_2 \end{bmatrix}$$

in which C is r-by-r and nonsingular. Then Â is singular if either its first block column has linearly dependent columns or its last block row has linearly dependent rows. Since B_1 is (n-p)-by-(q-r), the first block column is guaranteed to have linearly dependent columns if q - r > n - p. Similarly, since D_2 is (p-r)-by-(n-q), the last block row has linearly dependent rows if p - r > n - q. Rearranging both of these inequalities, we see that Â is singular if p + q > n + r, that is, if p + q ≥ n + r + 1. Since Â is singular if and only if A is singular, this completes the proof. □

Note that if p + q ≤ n + r, it is easy to construct matrices with a p-by-q block of rank r that are nonsingular. So, this lemma is best possible. Lemma 4.8 may be used to prove that the direct sum of totally nonnegative matrices is totally nonnegative.
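Lemma 4.8 with r = 0 (the Frobenius-König case) can be sanity-checked on small examples: a p-by-q zero block with p + q ≥ n + 1 forces singularity, while p + q = n does not (an illustrative sketch):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# n = 4 with a 3-by-2 zero block in the lower left: p + q = 5 >= n + 1,
# so the matrix is singular regardless of the remaining entries (r = 0).
A = [[1, 7, 2, 5],
     [0, 0, 3, 1],
     [0, 0, 4, 6],
     [0, 0, 9, 8]]
assert det(A) == 0

# With p + q = n the lemma gives no obstruction; this one is nonsingular:
B = [[1, 2, 0, 0],
     [3, 4, 0, 0],
     [0, 0, 5, 6],
     [0, 0, 7, 8]]
assert det(B) == 4
```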


Lemma 4.9 The matrices A ∈ M_n(ℝ) and B ∈ M_m(ℝ) are totally nonnegative if and only if

$$C = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}$$

is totally nonnegative.

Proof. If C is a totally nonnegative matrix, then it is obvious that A and B are totally nonnegative, since total nonnegativity is inherited by submatrices. To prove the forward implication, assume that A and B are totally nonnegative; we will show that C is also totally nonnegative. Let α_1, β_1 ⊆ N and α_2, β_2 ⊆ M, and define α̂_2 = α_2 + {n}, β̂_2 = β_2 + {n} (the index sets α_2, β_2 shifted into the rows and columns of the B block). If α_1 = β_1 = ∅ or α_2 = β_2 = ∅, then C[α_1 ∪ α̂_2; β_1 ∪ β̂_2] is a submatrix of B or of A, respectively, and therefore, since A and B are totally nonnegative, has nonnegative determinant. On the other hand, if α_2 = β_1 = ∅ or α_1 = β_2 = ∅, then C[α_1 ∪ α̂_2; β_1 ∪ β̂_2] is a zero matrix, and thus has determinant 0. If exactly one of α_1, α_2, β_1, or β_2 is the empty set, then C[α_1 ∪ α̂_2; β_1 ∪ β̂_2] has at least one zero row or column and, therefore, has determinant 0.

So, assume α_1, α_2, β_1, β_2 ≠ ∅. In this case, if |α_1| = |β_1|, then, since |α_1| + |α_2| = |β_1| + |β_2|, it must be the case that |α_2| = |β_2|. Then, C[α_1 ∪ α̂_2; β_1 ∪ β̂_2] is a block diagonal matrix in which the diagonal blocks are submatrices of A and B and, thus,

$$\det\left(C[\alpha_1 \cup \hat\alpha_2; \beta_1 \cup \hat\beta_2]\right) = \det\left(A[\alpha_1; \beta_1]\right) \det\left(B[\alpha_2; \beta_2]\right) \geq 0.$$

The last case to check is that in which α_1, α_2, β_1, β_2 ≠ ∅ and |α_1| ≠ |β_1| (and |α_2| ≠ |β_2|). Assume |α_1| < |β_1| (the case in which |α_1| > |β_1| follows by symmetry). Note that C[α_1 ∪ α̂_2; β_1 ∪ β̂_2] contains two blocks of zeros: one of size |α_1|-by-|β_2| and one of size |α_2|-by-|β_1|. The sum of the dimensions of the zero block in the lower left corner is |α_2| + |β_1|. Since |α_1| + 1 ≤ |β_1|, the sum of the dimensions satisfies the inequality |α_2| + |β_1| ≥ |α_2| + |α_1| + 1 and, therefore, by lemma 4.8, C[α_1 ∪ α̂_2; β_1 ∪ β̂_2] is singular. Thus, det(C[α_1 ∪ α̂_2; β_1 ∪ β̂_2]) ≥ 0 for all α_1, β_1 ⊆ N, α_2, β_2 ⊆ M, and C is totally nonnegative. □
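Lemma 4.9 can be spot-checked by enumerating all minors of a small direct sum (a brute-force sketch; the summands are chosen here for illustration):

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def is_TN(M):
    """Brute-force total nonnegativity: every minor is >= 0."""
    n = len(M)
    return all(det([[M[i][j] for j in cols] for i in rows]) >= 0
               for k in range(1, n + 1)
               for rows in combinations(range(n), k)
               for cols in combinations(range(n), k))

def direct_sum(A, B):
    n, m = len(A), len(B)
    return ([row + [0] * m for row in A] +
            [[0] * n + row for row in B])

A = [[1, 1], [1, 2]]
B = [[2, 1], [1, 1]]
assert is_TN(A) and is_TN(B)
assert is_TN(direct_sum(A, B))                      # lemma 4.9, forward direction
assert not is_TN(direct_sum(A, [[0, 1], [1, 0]]))   # a bad summand breaks it
```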

By the adjoint formula for the inverse of a matrix we see that the inverse of a nonsingular totally positive matrix has a checkerboard sign pattern. That is, the sign pattern of the inverse is

$$\begin{bmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & & & & \ddots \end{bmatrix}.$$

Therefore, the inverse of a totally positive matrix cannot be totally positive. However,

as pointed out in [M, theorem 2.2], the inverse of a totally positive matrix is similar to

a totally positive matrix via a signature similarity. The proof of this result is included

here for completeness.

Lemma 4.10 Let D = diag(1, -1, 1, ..., (-1)^{n+1}). Then a nonsingular matrix A ∈ M_n(ℝ) is totally nonnegative if and only if DA^{-1}D^{-1} is totally nonnegative.

Proof: Let A be a totally nonnegative nonsingular matrix. Since D is diagonal, partitioned matrix multiplication shows that

$$(DA^{-1}D^{-1})[\alpha; \beta] = D[\alpha]\, A^{-1}[\alpha; \beta]\, D^{-1}[\beta].$$

By Jacobi's identity, for α, β ⊆ N such that |α| = |β|,

$$\det\left(A^{-1}[\alpha; \beta]\right) = (-1)^{s(\alpha) + s(\beta)}\, \frac{\det\left(A[\beta^c; \alpha^c]\right)}{\det(A)}$$

in which s(α) = Σ_{i∈α} i. Then,

$$\begin{aligned} \det\left((DA^{-1}D^{-1})[\alpha; \beta]\right) &= \det(D[\alpha]) \det\left(A^{-1}[\alpha; \beta]\right) \det(D^{-1}[\beta]) \\ &= (-1)^{s(\alpha)+|\alpha|} (-1)^{s(\beta)+|\beta|} (-1)^{s(\alpha)+s(\beta)}\, \frac{\det\left(A[\beta^c; \alpha^c]\right)}{\det(A)} \\ &= (-1)^{2(s(\alpha)+s(\beta))+|\alpha|+|\beta|}\, \frac{\det\left(A[\beta^c; \alpha^c]\right)}{\det(A)} = \frac{\det\left(A[\beta^c; \alpha^c]\right)}{\det(A)}. \end{aligned}$$

The last equality holds since |α| = |β|. If A is totally nonnegative, then det A[β^c; α^c] is nonnegative and, since A is also nonsingular, det(A) is positive. Therefore, det((DA^{-1}D^{-1})[α; β]) is nonnegative for all α, β ⊆ N.

Now assume that DA^{-1}D^{-1} is totally nonnegative and invertible. Then, by the above discussion, D(DA^{-1}D^{-1})^{-1}D^{-1} = A is totally nonnegative, which proves the lemma. □
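Lemma 4.10 and the checkerboard pattern can both be seen on a small example (a sketch in exact rational arithmetic; the Vandermonde matrix is an illustrative choice):

```python
from itertools import combinations
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def inverse(M):
    """Inverse via the adjugate, in exact rational arithmetic."""
    n = len(M)
    d = Fraction(det(M))
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
                                   for k, r in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

def is_TN(M):
    n = len(M)
    return all(det([[M[i][j] for j in cols] for i in rows]) >= 0
               for k in range(1, n + 1)
               for rows in combinations(range(n), k)
               for cols in combinations(range(n), k))

V = [[1, 1, 1], [1, 2, 4], [1, 3, 9]]        # totally positive Vandermonde
Vinv = inverse(V)
D = [1, -1, 1]                               # diag(1, -1, 1)
conj = [[D[i] * Vinv[i][j] / D[j] for j in range(3)] for i in range(3)]
assert not is_TN(Vinv)    # checkerboard signs: some entries are negative
assert is_TN(conj)        # but D V^{-1} D^{-1} is totally nonnegative
```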

We say a partial totally nonnegative matrix is regular if every maximal specified principal submatrix is nonsingular and, for every pair of maximal specified principal submatrices A[α], A[β] such that α ∩ β ≠ ∅, A[α ∩ β] is also nonsingular. The


main result of this section is then:

Theorem 4.5 Let G be a labeled graph on n vertices. Every regular partial totally

nonnegative matrix, the labeled graph of whose specified entries is G, has a totally

nonnegative completion if and only if G is a monotonically labeled block clique graph.

The forward implication of theorem 4.5 will be proven by contrapositive. This will be done by first "ruling out" non-chordal graphs. That is, it will be shown that there exist partial totally nonnegative matrices, the graph of whose specified entries is not chordal, for which there is no totally nonnegative completion. After ruling out non-chordal graphs, those chordal graphs that are not block clique graphs will be ruled out. Finally, it will be shown that a block clique graph that is not monotonically labeled is also ruled out. The reverse implication will be proven by showing that every regular partial totally nonnegative matrix the graph of whose specified entries is a monotonically labeled block clique graph has a totally nonnegative completion.

This will be done by exhibiting such a completion.

Lemma 4.11 Let G be a graph on n vertices. If every partial totally nonnegative matrix A, the graph of whose specified entries is G, has a totally nonnegative completion, then G is a monotonically labeled block clique graph.

Proof. As mentioned above, the proof is by contrapositive. We begin by ruling out non-chordal graphs. Let G be a graph on n vertices that is not chordal. Then G contains a simple cycle of length 4 or more as an induced subgraph. The graph of the specified entries of the k-by-k matrix

$$C = \begin{bmatrix} 1 & 0 & ? & \cdots & ? & 2 \\ 2 & 1 & 0 & ? & \cdots & ? \\ ? & 2 & 1 & 0 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & ? \\ ? & \cdots & ? & 2 & 1 & 0 \\ 0 & ? & \cdots & ? & 2 & 1 \end{bmatrix}$$

is a simple cycle and C is partial totally nonnegative. In order for the {2,3}, {3,k} minor to be nonnegative, the 2,k entry of C must be 0, regardless of the 3,k entry. However, if the 2,k entry is 0, then the {1,2}, {1,k} minor is -4. Thus, C has no

completion to a totally nonnegative matrix. All that remains in order to rule out non-

chordal graphs is to embed this data in a larger matrix in such a way that the matrix

is a partial totally nonnegative matrix and use the fact that total nonnegativity is

inherited by submatrices. This is done by specifying 1's on the diagonals and 0's for any other specified entries. The resulting matrix is partial totally nonnegative by lemma 4.9, since any fully specified principal submatrix is a direct sum of an identity and a principal submatrix of C (any other fully specified submatrix is a submatrix of one of these principal submatrices). However, because C does not have a totally nonnegative

completion, by inheritance the larger matrix also does not have a totally nonnegative

completion.
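For k = 4 the argument can be checked exhaustively over a grid of trial values for the unspecified entries (a numerical sketch of the computation above):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

def C_of(u13, u24, u31, u42):
    """The k = 4 cycle pattern with trial values in the unspecified slots."""
    return [[1, 0, u13, 2],
            [2, 1, 0, u24],
            [u31, 2, 1, 0],
            [0, u42, 2, 1]]

# The {2,3},{3,4} minor equals -u24, forcing u24 = 0 (entries must also be
# nonnegative); but with u24 = 0 the {1,2},{1,4} minor is 1*0 - 2*2 = -4.
for a in (0, 0.5, 1, 2, 5):
    for b in (0, 0.5, 1, 2, 5):
        for c in (0, 0.5, 1, 2, 5):
            for d in (0, 0.5, 1, 2, 5):
                M = C_of(a, b, c, d)
                assert (minor(M, (1, 2), (2, 3)) < 0 or
                        minor(M, (0, 1), (0, 3)) < 0)
```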

Next we look at chordal graphs that are not block clique graphs. Let G be a graph on n vertices that is not a block clique graph. Then there are two cliques C_1 and C_2 that are induced subgraphs of G and for which |C_1 ∩ C_2| ≥ 2. The simplest such graph is one in which G = C_1 ∪ C_2, |C_1| = |C_2| = 3, and |C_1 ∩ C_2| = 2. The graph of

$$A = \begin{bmatrix} 1 & 1 & .4 & ? \\ .4 & 1 & 1 & .4 \\ .2 & .8 & 1 & 1 \\ ? & .2 & .4 & 1 \end{bmatrix}$$

is G, and A has no totally nonnegative completion. Define

$$A(x, y) = \begin{bmatrix} 1 & 1 & .4 & x \\ .4 & 1 & 1 & .4 \\ .2 & .8 & 1 & 1 \\ y & .2 & .4 & 1 \end{bmatrix}.$$

It is easy to check that all fully specified minors of A(x, y) are positive. However,

$$\det\left(A(x, y)\right) = -.0016 - .008x - .328y - .2xy. \qquad (4.3)$$

For x and y nonnegative, (4.3) is always negative and, thus, there is no totally nonnegative completion of A(x, y).
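The expansion (4.3) can be confirmed against a cofactor determinant, along with the failure of every nonnegative choice of x and y (a numerical sketch):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def A_of(x, y):
    return [[1.0, 1.0, 0.4, x],
            [0.4, 1.0, 1.0, 0.4],
            [0.2, 0.8, 1.0, 1.0],
            [y,   0.2, 0.4, 1.0]]

for x in (0.0, 0.3, 1.7):
    for y in (0.0, 0.6, 2.5):
        lhs = det(A_of(x, y))
        rhs = -0.0016 - 0.008 * x - 0.328 * y - 0.2 * x * y
        assert abs(lhs - rhs) < 1e-9   # the expansion (4.3) checks out
        assert lhs < 0                 # so no x, y >= 0 gives a TN completion
```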

For the graphs that are not block clique graphs there is a little more work than in the non-chordal case in order to embed this data in a larger matrix. If G is not a block clique graph, then there exist cliques C_1 and C_2 that are induced subgraphs of G and for which |C_1| = |C_2| and |C_1 ∩ C_2| = |C_1| - 1 (note that C_1 and C_2 may not be maximal cliques). Let B̃(x, y) be a partial matrix with graph G and let B(x, y) be the submatrix of B̃(x, y) such that the graph of B(x, y) is C_1 ∪ C_2. Then, assign data to B(x, y) by beginning with A(x, y) and repeating row and column 2 as many times as necessary to complete the clique created by C_1 ∩ C_2. That is,

$$B(x, y) = \begin{bmatrix} 1 & 1 & 1 & .4 & x \\ .4 & 1 & 1 & 1 & .4 \\ .4 & 1 & 1 & 1 & .4 \\ .2 & .8 & .8 & 1 & 1 \\ y & .2 & .2 & .4 & 1 \end{bmatrix}.$$

Since any minor of this matrix is either 0 or is a minor of A(x, y), B(x, y) is a partial totally nonnegative matrix. For the remainder of the specified entries of B̃(x, y), specify 1's on the diagonal and 0's for the off-diagonal entries, as in the non-chordal case. The resulting matrix is partial totally nonnegative, but has A(x, y) as a principal submatrix and, by inheritance, has no totally nonnegative completion.

Finally, if G is a block clique graph that is not monotonically labeled, then any partial totally nonnegative matrix the graph of whose specified entries is G will have a submatrix of the form

$$\begin{bmatrix} * & * & * \\ * & * & ? \\ * & ? & * \end{bmatrix}$$

in which '*' indicates a specified position. The matrix

$$M(x, y) = \begin{bmatrix} 2 & 1 & 2 \\ 1 & 2 & x \\ 1 & y & 2 \end{bmatrix}$$

in which x, y are free to be chosen has this form and is partial totally positive. The {1,2}, {2,3} minor of M(x, y) requires that x ≥ 4 and the {2,3}, {1,2} minor requires that y ≥ 2. Then, xy ≥ 8, but the {2,3}, {2,3} minor requires that xy ≤ 4. Therefore, M(x, y) has no totally nonnegative completion. As usual, this data can be embedded in a larger matrix by specifying 1's on the diagonal and 0's in the other specified positions. The resulting matrix will be a partial totally nonnegative matrix, the graph of whose specified entries is a block clique graph that is not monotonically labeled, that has no totally nonnegative completion. □
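The three-minor conflict can be verified over a grid (a sketch using the matrix as read here, with x in the 2,3 position and y in the 3,2 position):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

def M_of(x, y):
    return [[2, 1, 2],
            [1, 2, x],
            [1, y, 2]]

# minor rows {1,2}, cols {2,3} = x - 4  -> needs x >= 4
# minor rows {2,3}, cols {1,2} = y - 2  -> needs y >= 2
# minor rows {2,3}, cols {2,3} = 4 - xy -> needs xy <= 4: contradiction.
for x in (0, 1, 3, 4, 5, 10):
    for y in (0, 1, 2, 3, 10):
        M = M_of(x, y)
        ok = (minor(M, (0, 1), (1, 2)) >= 0 and
              minor(M, (1, 2), (0, 1)) >= 0 and
              minor(M, (1, 2), (1, 2)) >= 0)
        assert not ok
```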

The reverse implication of theorem 4.5 will be proven by induction. We will show that every regular partial totally nonnegative matrix A such that G(A) is a monotonically labeled block clique graph with exactly two maximal cliques has a totally nonnegative completion, and that completion is nonsingular. Assume the same is true for monotonically labeled block clique graphs with k - 1 maximal cliques and let A be a regular partial totally nonnegative matrix such that G(A) is a monotonically labeled block clique graph with k maximal cliques. Then, the principal submatrix A_{C_1 ∪ C_2} determined by the two maximal cliques C_1 and C_2 is a regular partial totally nonnegative matrix such that G(A_{C_1 ∪ C_2}) is a monotonically labeled block clique graph containing the two cliques C_1 and C_2. By the induction hypothesis, A_{C_1 ∪ C_2} has a totally nonnegative completion. Let Ā be the matrix that results when the submatrix A_{C_1 ∪ C_2} of A is completed. Then, Ā is a regular partial totally nonnegative matrix such that G(Ā) is a monotonically labeled block clique graph with k - 1 maximal cliques. By the induction hypothesis, Ā has a totally nonnegative completion. But then A also has a totally nonnegative completion.

From this discussion we see that the only case that needs to be considered is the one in which A is an n-by-n matrix for which G(A) is a monotonically labeled block clique graph with two maximal cliques. Then A is of the form

$$A = \begin{bmatrix} A_{11} & a_{12} & ? \\ a_{21}^T & a_{22} & a_{23}^T \\ ? & a_{32} & A_{33} \end{bmatrix} \qquad (4.4)$$

in which a_{12}, a_{21} ∈ ℝ^p, a_{23}, a_{32} ∈ ℝ^q, and p + q = n - 1. The following is a special case of the chordal result found in [JL].

Lemma 4.12 Let

$$A = \begin{bmatrix} A_{11} & a_{12} & a_{22}^{-1} a_{12} a_{23}^T \\ a_{21}^T & a_{22} & a_{23}^T \\ a_{22}^{-1} a_{32} a_{21}^T & a_{32} & A_{33} \end{bmatrix} \qquad (4.5)$$

in which a_{12}, a_{21} ∈ ℝ^p, a_{23}, a_{32} ∈ ℝ^q, and p + q = n - 1, and

$$\begin{bmatrix} A_{11} & a_{12} \\ a_{21}^T & a_{22} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} a_{22} & a_{23}^T \\ a_{32} & A_{33} \end{bmatrix}$$

are nonsingular, and a_{22} ≠ 0. Then

$$A^{-1} = \begin{bmatrix} \begin{bmatrix} A_{11} & a_{12} \\ a_{21}^T & a_{22} \end{bmatrix}^{-1} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \begin{bmatrix} a_{22} & a_{23}^T \\ a_{32} & A_{33} \end{bmatrix}^{-1} \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & a_{22}^{-1} & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Note that the matrix in (4.5) is the completion of (4.4) that gives 0's in the unspecified entries of the inverse. This 0's-in-the-inverse completion is shown in [BJLu]. Interestingly, the same completion works for total nonnegativity. This result is used in the proof of the following lemma.
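The zeros-in-the-inverse property of lemma 4.12 is easy to check on a small instance (a sketch with p = q = 1 and a_{22} = 1, data chosen here for illustration; exact rational arithmetic):

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def inverse(M):
    """Inverse via the adjugate, in exact rational arithmetic."""
    n = len(M)
    d = Fraction(det(M))
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
                                   for k, r in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

# Pattern (4.4) with p = q = 1 and a22 = 1, corners filled as in (4.6):
a12, a21, a23, a32 = 2, 1, 3, 1
A = [[5,         a12, a12 * a23],
     [a21,       1,   a23],
     [a32 * a21, a32, 4]]
Ainv = inverse(A)
assert Ainv[0][2] == 0 and Ainv[2][0] == 0   # zeros in the unspecified slots

# The three-term inverse formula of the lemma, checked on the overlap:
Binv = inverse([[5, a12], [a21, 1]])
Cinv = inverse([[1, a23], [a32, 4]])
assert Ainv[0][0] == Binv[0][0] and Ainv[2][2] == Cinv[1][1]
assert Ainv[1][1] == Binv[1][1] + Cinv[0][0] - 1
```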

Lemma 4.13 Let G be a monotonically labeled block clique graph. If A is a regular partial totally nonnegative matrix in which G(A) = G, then A has a nonsingular totally nonnegative completion.

Proof. Let A be an n-by-n regular partial totally nonnegative matrix the graph of whose specified entries is a monotonically labeled block clique graph. The only case that needs to be considered is the case in which there are two cliques, since the others follow by induction (see the discussion above). Then A is of the form given in (4.4). Since positive left diagonal multiplication preserves total nonnegativity and a_{22} > 0, it can be assumed without loss of generality that a_{22} = 1. Then, it will be shown that the completion given by

$$\bar{A} = \begin{bmatrix} A_{11} & a_{12} & a_{12} a_{23}^T \\ a_{21}^T & 1 & a_{23}^T \\ a_{32} a_{21}^T & a_{32} & A_{33} \end{bmatrix} \qquad (4.6)$$

is totally nonnegative. The proof will show that DĀ^{-1}D^{-1} is totally nonnegative for D = diag(1, -1, ..., (-1)^{n+1}). Then, by lemma 4.10, Ā is totally nonnegative. By lemma 4.12,

$$\bar{A}^{-1} = \begin{bmatrix} \begin{bmatrix} A_{11} & a_{12} \\ a_{21}^T & 1 \end{bmatrix}^{-1} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \begin{bmatrix} 1 & a_{23}^T \\ a_{32} & A_{33} \end{bmatrix}^{-1} \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Define

$$T = D\bar{A}^{-1}D^{-1} = \begin{bmatrix} B & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & C \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

in which

$$B = \begin{bmatrix} B_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = D_p \begin{bmatrix} A_{11} & a_{12} \\ a_{21}^T & 1 \end{bmatrix}^{-1} D_p^{-1}, \qquad C = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & C_{22} \end{bmatrix} = D_q \begin{bmatrix} 1 & a_{23}^T \\ a_{32} & A_{33} \end{bmatrix}^{-1} D_q^{-1},$$

D_p = D[{1, ..., p+1}], and D_q = D[{p+1, ..., n}].

Since the two specified blocks above are totally nonnegative, B and C are also totally nonnegative by lemma 4.10. Using the adjoint formula for the inverse we see that

$$\det(B_{11}) \det(C_{22}) = \det(\bar{A}^{-1}).$$

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. I l l

Then, since B_{11} and C_{22} are principal submatrices of totally nonnegative matrices and are nonsingular, det(B_{11}), det(C_{22}) > 0 and, therefore, det(Ā^{-1}) > 0.

Now it must be shown that all minors of T of size n-1 and smaller are nonnegative. That is, det(T(γ; δ)) ≥ 0 for all γ, δ ⊆ N with |γ| = |δ|, where T(γ; δ) denotes the submatrix of T obtained by deleting rows γ and columns δ. Let α = {1, 2, ..., p} and β = {n-q+1, n-q+2, ..., n}. Then, the size of T(γ; δ) is |α| + |β| + 1 - |γ| (or |α| + |β| + 1 - |δ|). Many of the minors will be zero. This will be shown by finding a block of zeros large enough to ensure that T(γ; δ) is singular by lemma 4.8. For this to be the case, the sum of the dimensions of the zero block must be at least |α| + |β| - |γ| + 2. There are two blocks of zeros in T(γ; δ) to consider. These blocks have dimensions

$$(|\alpha| - |\alpha \cap \gamma|)\text{-by-}(|\beta| - |\beta \cap \delta|) \qquad (4.7)$$

and

$$(|\beta| - |\beta \cap \gamma|)\text{-by-}(|\alpha| - |\alpha \cap \delta|). \qquad (4.8)$$

There are several cases to consider in order to show that det(T(γ; δ)) ≥ 0 for all γ, δ ⊆ N. If p+1 ∈ γ, δ, then T(γ; δ) is a direct sum of submatrices of B and C and

$$|\gamma| - 1 = |\alpha \cap \gamma| + |\beta \cap \gamma| = |\alpha \cap \delta| + |\beta \cap \delta| = |\delta| - 1. \qquad (4.9)$$

Since B and C are totally nonnegative, if the submatrices in the direct sum are square, det(T(γ; δ)) is nonnegative. If the submatrices of B and C are not square, then, if |α ∩ δ| > |α ∩ γ|, the sum of the dimensions in (4.7) is:

$$|\alpha| + |\beta| - |\alpha \cap \gamma| - |\beta \cap \delta| \geq |\alpha| + |\beta| - (|\alpha \cap \delta| - 1) - |\beta \cap \delta| = |\alpha| + |\beta| - (|\alpha \cap \delta| + |\beta \cap \delta|) + 1 = |\alpha| + |\beta| - |\delta| + 2. \qquad (4.10)$$

By lemma 4.8, this ensures that T(γ; δ) is singular. The case in which |α ∩ γ| > |α ∩ δ| uses (4.8) and is analogous.

There are several possibilities for p+1 ∉ γ, δ. If |α ∩ γ| = |α ∩ δ|, then T(γ; δ) has the same form as T and, therefore, has positive determinant by the same argument as that for det(T). The determinant of T(γ; δ) is also nonnegative if ||α ∩ δ| - |α ∩ γ|| = 1, because in this case T(γ; δ) is block triangular with submatrices of B and C on the diagonal. For the case in which ||α ∩ δ| - |α ∩ γ|| ≥ 2 note that, since p+1 ∉ γ, δ,

$$|\gamma| = |\alpha \cap \gamma| + |\beta \cap \gamma| = |\alpha \cap \delta| + |\beta \cap \delta| = |\delta|. \qquad (4.11)$$

Then, arguments similar to those in (4.10) (using (4.7) if |α ∩ δ| - |α ∩ γ| ≥ 2 and (4.8) if |α ∩ γ| - |α ∩ δ| ≥ 2) show that T(γ; δ) is singular.

It remains to show that the minors are nonnegative for the case in which p+1 is in exactly one of γ or δ. Assume p+1 ∈ γ, p+1 ∉ δ (the other case follows by symmetry). In this case,

$$|\alpha \cap \gamma| + |\beta \cap \gamma| = |\gamma| - 1 \qquad (4.12)$$

and

$$|\alpha \cap \delta| + |\beta \cap \delta| = |\delta|. \qquad (4.13)$$

If |α ∩ γ| = |α ∩ δ| or |α ∩ γ| + 1 = |α ∩ δ|, then T(γ; δ) is block triangular with submatrices of B and C on the diagonal and det(T(γ; δ)) ≥ 0. If |α ∩ γ| = |α ∩ δ| + 1, then using (4.8) and (4.12) we see by lemma 4.8 that T(γ; δ) is singular. Similarly, if ||α ∩ δ| - |α ∩ γ|| ≥ 2, T(γ; δ) contains a zero block that is sufficiently large, by lemma 4.8, to ensure that T(γ; δ) is singular. Thus, det(T(γ; δ)) ≥ 0 for all γ, δ ⊆ N


and DĀ^{-1}D^{-1} is totally nonnegative. Then, by lemma 4.10, Ā is totally nonnegative and the lemma is proved. □

Together, lemma 4.11 and lemma 4.13 prove theorem 4.5.

4.5 The Maximum Minimum Eigenvalue

Completion Problem

The maximum minimum eigenvalue problem asks: for a partial Hermitian matrix A, what is the largest value that the minimum eigenvalue over all completions of A

can attain? By classical interlacing, the minimum eigenvalue of an Hermitian matrix

cannot exceed the minimum of the eigenvalues of any principal submatrix. Therefore,

in the maximum minimum eigenvalue completion problem, the best that can be hoped

for is the minimum of the minimum eigenvalues of the specified principal submatrices.

In [GJSVV] it was shown that every partial positive semidefinite matrix, the graph of whose specified entries is chordal, has a positive semidefinite completion. This work was preceded by [DG], which considered the case of banded matrices. Since each fully specified principal submatrix of a partial positive semidefinite matrix is positive semidefinite, by translation the [GJSVV] result is equivalent to saying that, in the chordal case, the maximum minimum eigenvalue over all completions is the minimum

of the minimum eigenvalues of the fully specified principal submatrices.
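As a concrete illustration of this equivalence, the bound can be checked numerically on a small banded (hence chordal) pattern. The example below is hypothetical and not from the text: a 3-by-3 partial matrix with one unspecified entry, where both fully specified principal submatrices have minimum eigenvalue 1.

```python
import numpy as np

# Hypothetical partial Hermitian matrix with x = a13 = a31 unspecified:
#
#     [ 2  1  x ]
#     [ 1  2  1 ]
#     [ x  1  2 ]
#
# The fully specified principal submatrices A1 = A(3) and A2 = A(1)
# are both [[2, 1], [1, 2]], with minimum eigenvalue 1, so 1 is the
# best minimum eigenvalue any completion can attain.

A1 = np.array([[2.0, 1.0], [1.0, 2.0]])
A2 = np.array([[2.0, 1.0], [1.0, 2.0]])
bound = min(np.linalg.eigvalsh(A1)[0], np.linalg.eigvalsh(A2)[0])

def min_eig(x):
    """Minimum eigenvalue of the completion with value x."""
    A = np.array([[2.0, 1.0, x], [1.0, 2.0, 1.0], [x, 1.0, 2.0]])
    return np.linalg.eigvalsh(A)[0]

xs = np.linspace(-3.0, 3.0, 601)
best = max(min_eig(x) for x in xs)
print(bound, best)
```

Every sampled completion satisfies min_eig(x) ≤ bound, as classical interlacing requires, and in this example the bound is attained near x = 1.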

The key observation in the proof in [GJSVV] is the case in which there is only

one pair of symmetrically placed unspecified entries (the one variable case). This

one variable case was done before, and dates back at least to [DG] in which banded

patterns were treated. Here we give an entirely different proof of the one variable case.

This new proof uses the structured eigenvector results of chapter 2 and, therefore, for the one variable case our approach gives information about the structure of the eigenvectors of the maximum minimum eigenvalue completion that is not present in [GJSVV]. This eigenvector information is, however, present in theorem 2.3 of [D2].

However, we prove both the completion result of [GJSVV] and the eigenvector result of

[D2] simultaneously and our approach allows us to write down the maximum minimum

eigenvalue completion determined by these structured eigenvectors. In the following

theorem let C(A) denote the set of all completions of the partial matrix A.

Theorem 4.6 Let A ∈ Mₙ be a partial Hermitian matrix in which only x = aᵢⱼ (and x̄ = aⱼᵢ) is unspecified. Also, let A₁ = A(i), A₂ = A(j) and λ₁ = min{μ : μ ∈ σ(A₁)}, λ₂ = min{μ : μ ∈ σ(A₂)}. Let λ = min{λ₁, λ₂}; then

    max_{B ∈ C(A)} min{μ : μ ∈ σ(B)} = λ.

Moreover, there is a maximum minimum eigenvalue completion Â of A such that λ = min{μ : μ ∈ σ(Â)} and there exists an associated eigenvector z such that if λ = λ₁, z[i] = 0 and if λ = λ₂, z[j] = 0.

Proof: It suffices to show that there is a completion Â of A in which λ = min{λ₁, λ₂} is the minimum eigenvalue of Â. Assume, without loss of generality, that the (1, n) position is unspecified. Then

    A = [ a₁₁  a₂₁*  x   ]
        [ a₂₁  A₂₂   a₂₃ ]
        [ x̄    a₂₃*  a₃₃ ]

in which A₂₂ ∈ M_{n−2} is Hermitian. Then

    A₁ = [ a₁₁  a₂₁* ]        A₂ = [ A₂₂   a₂₃ ]
         [ a₂₁  A₂₂  ]   and       [ a₂₃*  a₃₃ ] .

There are two cases to consider: λ₁ ≠ λ₂ and λ₁ = λ₂. To begin, assume λ₁ < λ₂

so that λ = λ₁ (the case in which λ₂ < λ₁ follows by symmetry). By corollary 2.2, λ is an eigenvalue of a completion Â of A if and only if there is an eigenvector of Â associated with λ of the form (y, 0)ᵀ in which y ∈ ℂⁿ⁻¹ is an eigenvector of A₁ associated with λ. In order for (y, 0)ᵀ to be an eigenvector of Â associated with λ, the following equality must hold:

    Â [ y ]  =  [ A₁y               ]  =  λ [ y ]
      [ 0 ]     [ x̄y[1] + a₂₃*y(1) ]        [ 0 ] .

If y[1] ≠ 0, then

    x̄ = −a₂₃*y(1)/y[1]    (4.14)

gives the desired completion. If y[1] = 0, then

    A₁y = [ a₁₁  a₂₁* ] [ 0    ]  =  [ a₂₁*y(1) ]  =  λ [ 0    ]
          [ a₂₁  A₂₂  ] [ y(1) ]     [ A₂₂y(1)  ]       [ y(1) ]

so that A₂₂y(1) = λy(1) and λ is an eigenvalue of A₂₂. But this is a contradiction since, by interlacing, every eigenvalue of A₂₂ must be greater than or equal to λ₂. Therefore, y[1] cannot be 0.
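The construction in this first case can be illustrated numerically. The sketch below uses a hypothetical real symmetric 3-by-3 partial matrix (so the conjugations are trivial) with λ₁ < λ₂, computes x from a minimum eigenvector of A₁ as in (4.14), and confirms that λ₁ becomes the minimum eigenvalue of the resulting completion:

```python
import numpy as np

# Hypothetical partial matrix with the (1,3)/(3,1) entry unspecified:
#
#     [ 0  1  x ]
#     [ 1  2  1 ]        A1 = [[0, 1], [1, 2]],  A2 = [[2, 1], [1, 3]]
#     [ x  1  3 ]
#
# lambda1 = 1 - sqrt(2) < lambda2 = (5 - sqrt(5))/2, so lambda = lambda1.

A1 = np.array([[0.0, 1.0], [1.0, 2.0]])
a23 = np.array([1.0])               # the specified (2,3) block

lam1 = np.linalg.eigvalsh(A1)[0]
y = np.linalg.eigh(A1)[1][:, 0]     # eigenvector of A1 for lam1
x = -(a23 @ y[1:]) / y[0]           # formula (4.14); y[0] != 0 by the proof

Ahat = np.array([[0.0, 1.0, x],
                 [1.0, 2.0, 1.0],
                 [x,   1.0, 3.0]])

z = np.append(y, 0.0)               # structured eigenvector (y, 0)^T
print(x, np.linalg.eigvalsh(Ahat)[0], lam1)
```

Here x = √2 − 1, the completion has minimum eigenvalue λ₁ = 1 − √2, and (y, 0)ᵀ is an associated eigenvector, exactly the structure the theorem asserts.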

Now, assume λ = λ₁ = λ₂, but g_λ(A₁) ≠ g_λ(A₂). After a similarity transformation of A that diagonalizes A₂₂, this case can be reduced to the case discussed above. Assume without loss of generality that g_λ(A₂) < g_λ(A₁) and let g = g_λ(A₂). Then, by interlacing, g_λ(A₂₂) = g and g_λ(A₁) = g + 1. Also assume that λ = 0. We may

do this without loss of generality since the general case follows easily by translation.

Then, the maximum minimum eigenvalue completion Â of A is such that Â is positive semidefinite. Let U be the unitary matrix that diagonalizes A₂₂. Then

    U*A₂₂U = [ 0  0 ]
             [ 0  D ]

in which D = diag(λ_{g+1}, …, λ_{n−2}) and λᵢ > 0, i = g + 1, …, n − 2, are the nonzero eigenvalues of A₂₂. Then

eigenvalues of A 22- Then

r 1 r 1 «11 a 21 .r I 0 I 0 0 U' A u a2i 0-23 D 0 / 0 / X ^23 «33

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 117

a n 0 ••• 0 a^i x

0 0 0

= B. (4.15)

021 D a.23

x 0 • • • 0 d ; 3 033

The last equality holds because if a positive semidefinite matrix has a zero on the

diagonal, then the entire row and column containing that diagonal entry must be 0

[HJ1, pg. 400]. The matrix B̃ in (4.15) is the direct sum of a 0 matrix and

    B = [ a₁₁  a₂₁*  x   ]
        [ a₂₁  D     a₂₃ ]    (4.16)
        [ x̄    a₂₃*  a₃₃ ]
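The zero-row fact from [HJ1] invoked for (4.15) follows from a 2-by-2 principal minor computation; a sketch:

```latex
% For A positive semidefinite with a_{ii} = 0, every 2-by-2 principal
% minor through row i is nonnegative, which forces every off-diagonal
% entry in that row (and, by Hermiticity, that column) to vanish:
\[
0 \;\le\; \det\!\begin{pmatrix} a_{ii} & a_{ij} \\ \overline{a_{ij}} & a_{jj} \end{pmatrix}
 \;=\; a_{ii}\,a_{jj} - |a_{ij}|^2 \;=\; -\,|a_{ij}|^2
\qquad\Longrightarrow\qquad a_{ij} = 0 \quad \text{for all } j.
\]
```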

Therefore, if we find a completion of B that is positive semidefinite, we will have a

completion of B̃ that is also positive semidefinite. Then, since similarity preserves

eigenvalues, we have also found a completion of A that is positive semidefinite as

desired. Let

    B₁ = [ a₁₁  a₂₁* ]        B₂ = [ D     a₂₃ ]
         [ a₂₁  D    ]   and       [ a₂₃*  a₃₃ ] .

Since g_λ(A₁) = g_λ(A₂) + 1, B₁ is positive semidefinite and B₂ is positive definite. Thus, B is a partial positive semidefinite matrix in which the smallest eigenvalue of B₁ is less than the smallest eigenvalue of B₂ so that we are in the case discussed earlier. For y an eigenvector of B₁ associated with λ, by the calculation as in (4.14) we see that x̄ = −a₂₃*y(1)/y[1] gives the desired completion.


We next look at the case in which λ = λ₁ = λ₂ and g_λ(A₁) = g_λ(A₂). Let g = g_λ(A₁) and α = {2, …, n − 1}. By interlacing, g_λ(A₂₂) is g or g − 1. First consider the case in which g_λ(A₂₂) = g − 1. In this case, by corollary 2.2,

    dim(LE_λ^α(A₁)), dim(RE_λ^α(A₁)) ≥ (g_λ(A₁) + g_λ(A₂₂) − 1)/2 = g − 1.

Thus, there are at least g − 1 linearly independent eigenvectors of A₁ of the form (0, uᵢ)ᵀ, uᵢ ∈ ℂⁿ⁻². By partitioned matrix multiplication each uᵢ, i = 1, 2, …, g − 1, is an eigenvector of A₂₂ associated with λ. But, since g_λ(A₁) = g, this means that there is another eigenvector y of A₁ associated with λ for which {(0, u₁)ᵀ, …, (0, u_{g−1})ᵀ, y} is a linearly independent set. In addition, it must be the case that y[1] ≠ 0 since if y[1] = 0, then y(1) is another eigenvector of A₂₂, which is a contradiction since g_λ(A₂₂) = g − 1 and we already have the g − 1 linearly independent vectors uᵢ of A₂₂ associated with λ. The eigenvector y can be used as in (4.14) to find a value for x that gives a completion Â of A with λ as an eigenvalue and, then, (y, 0)ᵀ is an

eigenvector of Â associated with λ. Note that the matrix A₂ is an (n − 1)-by-(n − 1) matrix that we will consider to have entries indexed from 2 to n. To see that λ is the minimum eigenvalue of Â note that by corollary 2.2,

    dim(LE_λ^α(A₂)), dim(RE_λ^α(A₂)) ≥ (g_λ(A₂) + g_λ(A₂₂) − 1)/2 = g − 1.


Then, for each uᵢ, i = 1, 2, …, g − 1, the vector (uᵢ, 0)ᵀ is an eigenvector of A₂ associated with λ. Recall that the vectors (0, uᵢ)ᵀ, i = 1, 2, …, g − 1, are eigenvectors of A₁ associated with λ. But then for each i = 1, 2, …, g − 1, (0, uᵢ, 0)ᵀ is an eigenvector of Â associated with λ. Then there are at least g linearly independent eigenvectors of Â associated with λ so that g_λ(Â) ≥ g and for γ = {2, 3, …, n} corollary 2.2 gives

    dim(LE_λ^γ(Â)), dim(RE_λ^γ(Â)) ≥ (g_λ(Â) + g_λ(A₂) − 1)/2 ≥ g − 1/2.

Since the dimension of the special eigenspaces must be an integer, this means there are at least g linearly independent eigenvectors of Â associated with λ with a 0 in the first position. We already know the g − 1 vectors (0, uᵢ, 0)ᵀ are eigenvectors of Â associated with λ. So, there must be at least one vector of the form (0, z)ᵀ in which z ∈ ℂⁿ⁻¹ and z_{n−1} ≠ 0 (otherwise we would contradict the fact that g_λ(A₂₂) = g − 1).

Then {(y, 0)ᵀ, (0, u₁, 0)ᵀ, …, (0, u_{g−1}, 0)ᵀ, (0, z)ᵀ} is a set of g + 1 linearly independent eigenvectors of Â associated with λ and, therefore, by interlacing, λ is the minimum eigenvalue of Â.

All that remains is the case in which g = g_λ(A₁) = g_λ(A₂) and g_λ(A₂₂) = g. Once again, assume that λ = 0 and perform the similarity of Â as in (4.15). Then, every vector of the form (0, w, 0)ᵀ, w ∈ ℂ^g, is an eigenvector of B̃ associated with λ = 0 and, therefore, λ = 0 is an eigenvalue of Â for all values of x. We must find x so

that λ = 0 is the minimum eigenvalue of Â. Since g_λ(A₁) = g_λ(A₂) = g_λ(A₂₂) the matrix B as defined in (4.16) has both B₁ and B₂ positive definite. In order for Â

to be positive semidefinite we must find x so that B is positive definite. The Schur

complement of D in B is

a n x a,, 12 -i D a 12 a 23 i «33 ‘23

a n — h^2D ‘d n *i’ — a l2D ^o 23 (4.17)

x — a23D ~ la.i 2 U33 ~ a23D ~la23

The diagonal entries a₁₁ − a₂₁*D⁻¹a₂₁ and a₃₃ − a₂₃*D⁻¹a₂₃ are the Schur complements of D in B₁ and B₂, respectively. Since B₁ and B₂ are both positive definite, these diagonal entries are positive. Then, choosing x = a₂₁*D⁻¹a₂₃ gives a completion of

B that is positive definite and, therefore, the resulting completion of A is positive

semidefinite with λ = 0 as the minimum eigenvalue. This completes the proof. □

Note that the completion in the last case of this proof is the completion given in

[GJSVV] and [DG]. However, by this proof we know a little bit more since in this

last case we know that the completed matrix has an eigenvector associated with the minimum eigenvalue λ of the form (0, y, 0)ᵀ in which y is an eigenvector of A₂₂ associated with λ.

As shown in [GJSVV], the general chordal problem may be solved by sequential

application of the one variable problem. So, theorem 4.6 may be applied sequentially

to solve the chordal maximum minimum eigenvalue problem. As mentioned above,

the approach to the proof of the theorem that we use provides information about

the eigenvectors associated with the maximum minimum eigenvalue that the [GJSVV]

approach does not provide. In addition, the maximum minimum eigenvalue comple­

tion as given in the above proof is easy to write down. It is shown in theorem 1.2 of

[D2] that, in the chordal case, there exists an eigenvector of the maximum minimum

eigenvalue completion with support contained in the entries that correspond to the

maximal specified principal submatrix with the smallest minimum eigenvalue of all

the maximal specified principal submatrices.

Bibliography

[A] T. Ando, Totally Positive Matrices, Linear Algebra Appl. 90 (1987), 165-219.

[B] L.M. Blumenthal, Theory and Applications of Distance Geometry, 2nd Ed., Chelsea, New York (1970).

[BJ] M. Bakonyi and C.R. Johnson, The Euclidean Distance Matrix Completion Problem, submitted.

[BJL] W. Barrett, C.R. Johnson, and R. Loewy, The Real Positive Definite Completion Problem: Cycle Completability, submitted.

[BJLu] W. Barrett, C.R. Johnson, and M. Lundquist, Determinantal Formulae for Matrix Completions Associated with Chordal Graphs, Linear Algebra Appl. 121 (1989), 265-290.

[BJT] W. Barrett, C.R. Johnson, and P. Tarazaga, The Real Positive Definite Completion Problem for a Simple Cycle, Linear Algebra Appl. 192 (1993), 3-31.

[BP] A. Berman and R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.

[C] D. Carlson, What are Schur Complements, Anyway?, Linear Algebra Appl. 74 (1986), 257-275.

[CJRW] N. Cohen, C.R. Johnson, L. Rodman, and H.J. Woerdeman, Ranks of Completions of Partial Matrices, Oper. Theory Adv. Appl. 40 (1989), 165-185.

[D1] J. Dancis, An Interlacing Theorem for Eigenvalues of Self-Adjoint Operators, Linear Algebra Appl. 88/89 (1987), 117-122.

[D2] J. Dancis, Positive Semidefinite Completions of Partial Hermitian Matrices, Linear Algebra Appl. 175 (1992), 97-114.

[DG] H. Dym and I. Gohberg, Extensions of Band Matrices with Banded Inverses, Linear Algebra Appl. 36 (1981), 1-24.

[F] M. Fiedler, Matrix Inequalities, Numerische Mathematik 9 (1966), 109-119.

[G] M. Golumbic, Algorithmic Graph Theory and Perfect Graphs, Academic Press, New York (1980).

[GJSVV] R. Grone, C.R. Johnson, E. Sá, and H. Wolkowicz, Positive Definite Completions of Partial Hermitian Matrices, Linear Algebra Appl. 58 (1984), 109-124.

[GM] M. Gasca and G. Mühlbach, A Test for Strict Total Positivity via Neville Elimination, Current Trends in Matrix Theory (1987), 225-232.

[GRSW] I. Gohberg, L. Rodman, T. Shalom, and H.J. Woerdeman, Bounds for Eigenvalues and Singular Values of Matrix Completions, Linear and Multilinear Algebra 33 (1992), 1-17.

[H] R.O. Hill, Certain Rank 2k Updates Act Like Rank k, Linear Algebra Appl. 177 (1992), 173-174.

[HC] T. Havel and G. Crippen, Distance Geometry and Molecular Conformation, Wiley, New York, 1988.

[HJ1] R. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.

[HJ2] R. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.

[HJK] R.O. Hill, C.R. Johnson, and B.K. Kroschel, Extended Interlacing Intervals, Linear Algebra Appl., accepted.

[HP] R. Hill and B.N. Parlett, Refined Interlacing Properties, SIAM J. Matrix Anal. Appl. 13 (1992), 239-247.

[J] C.R. Johnson, Matrix Completion Problems: A Survey, Proc. of Symposia in Appl. Math. 40 (1990), 171-198.

[JJK] C.R. Johnson, C.A. Jones, and B.K. Kroschel, The Euclidean Distance Completion Problem: Cycle Completability, Linear and Multilinear Algebra 39 (1995), 195-207.

[JK1] C.R. Johnson and B.K. Kroschel, Principal Submatrices, Geometric Multiplicities, and Structured Eigenvectors, SIAM J. Matrix Anal. Appl. 16 (1995), 1004-1012.

[JK2] C.R. Johnson and B.K. Kroschel, The Combinatorially Symmetric P-matrix Completion Problem, submitted.

[JL] C.R. Johnson and M. Lundquist, An Inverse Formula for Matrices with Sparse Inverses, in preparation.

[JM] C.R. Johnson and T. McKee, Structural Conditions for Cycle Completable Graphs, Discrete Math., accepted.

[JT] C.R. Johnson and P. Tarazaga, Connections Between the Real Positive Semidefinite and Distance Matrix Completion Problems, submitted.

[K] S. Karlin, Total Positivity, Vol. I, Stanford University Press, Stanford, 1968.

[L] N.J. Lehmann, Optimale Eigenwerteinschließungen, Numer. Math. 5 (1963), 246-272.

[dLH] J. de Leeuw and W. Heiser, Theory of Multidimensional Scaling, Chapter 13 in P.R. Krishnaiah and L.N. Kanal (eds.), Handbook of Statistics, Vol. 2, North-Holland, Amsterdam, 1982.

[M] H. Minc, Nonnegative Matrices, John Wiley & Sons, Inc., New York, 1988.

[P] B. Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, NJ, 1980.

[Ro] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

[Ry] H.J. Ryser, Combinatorial Mathematics, The Mathematical Association of America, Rahway, 1963.

[S] H. Schneider, The Concepts of Irreducibility and Full Indecomposability of a Matrix in the Works of Frobenius, König and Markov, Linear Algebra Appl. 18 (1977), 139-162.

[Wi] H. Wielandt, An Extremum Property of Sums of Eigenvalues, Proc. Amer. Math. Soc. 6 (1955), 106-110.

[Wo] H.J. Woerdeman, The Lower Order of Lower Triangular Operators and Minimal Rank Extensions, Integral Equations Operator Theory 10 (1987), 859-879.


VITA

Brenda K. Kroschel

Born in St. Paul, Minnesota on January 23, 1963. Valedictorian of Mahtomedi High School, 1981. Received a B.A. in Mathematics and Chemistry from St. Olaf College in May 1985 and an M.S. in Mathematical Sciences from Clemson University in May 1987. Worked as a programmer analyst for Control Data Corporation in the Surfacing and Machining group from 1987 to 1989. From 1989 to 1992 worked as a statistical programmer analyst for O'Connor and Associates, an options trading firm in Chicago, and was a member of the technical staff upon leaving O'Connor and Associates. Entered the College of William and Mary Applied Science program as a research assistant in August of 1992 to begin work on a Ph.D. degree.
