The Matrix Sign Function Method and the
Computation of Invariant Subspaces

Ralph Byers    Chunyang He    Volker Mehrmann

November

Abstract

A perturbation analysis shows that if a numerically stable procedure is used to compute the matrix sign function, then it is competitive with conventional methods for computing invariant subspaces. Stability analysis of the Newton iteration improves an earlier result of Byers and confirms that ill-conditioned iterates may cause numerical instability. Numerical examples demonstrate the theoretical results.

Introduction

If A ∈ R^{n×n} has no eigenvalue on the imaginary axis, then the matrix sign function sign(A) may be defined as

    sign(A) = (1/(πi)) ∫_Γ (zI − A)^{-1} dz − I,

where Γ is any simple closed curve in the open right half plane enclosing all eigenvalues of A with positive real part. The sign function is used to compute eigenvalues and invariant subspaces and to solve Riccati and Sylvester equations. The matrix sign function is attractive for machine computation because it can be efficiently evaluated by relatively simple numerical methods, several of which are surveyed in the literature cited below. It is particularly attractive for large, dense problems to be solved on computers with advanced architectures.

R. Byers: University of Kansas, Dept. of Mathematics, Lawrence, Kansas, USA. Partial support received from National Science Foundation grant INT. Some support was also received from a University of Kansas General Research Allocation.

C. He and V. Mehrmann: TU Chemnitz-Zwickau, Fak. f. Mathematik, Chemnitz, FRG. Partial support received from Deutsche Forschungsgemeinschaft, Projekt La.

Beavers and Denman use the following equivalent definition. Let A = XJX^{-1} be the Jordan canonical decomposition of a matrix A having no eigenvalues on the imaginary axis. Let the diagonal part of J be given by the matrix D = diag(d_1, d_2, ..., d_n). If S = diag(s_1, s_2, ..., s_n), where

    s_i =  1  if Re(d_i) > 0,
    s_i = −1  if Re(d_i) < 0,

then sign(A) is given by sign(A) = XSX^{-1}.
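For a diagonalizable matrix, this definition translates directly into code. The following sketch (an illustration added to this text, not part of the original; it is not a numerically stable algorithm) computes sign(A) from an eigendecomposition:

```python
import numpy as np

def sign_via_eig(A):
    """Matrix sign function via an eigendecomposition A = X D X^{-1}.

    Mirrors the Beavers-Denman definition sign(A) = X S X^{-1};
    assumes A is diagonalizable with no eigenvalue on the imaginary axis.
    """
    d, X = np.linalg.eig(A)
    if np.any(d.real == 0.0):
        raise ValueError("A has an eigenvalue on the imaginary axis")
    S = X @ np.diag(np.sign(d.real)) @ np.linalg.inv(X)
    return S.real if np.isrealobj(A) else S
```

For example, for A = [[-1, 3], [0, 2]] (eigenvalues -1 and 2) the routine returns [[-1, 2], [0, 1]], which squares to the identity.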

Let V_+ = V_+(A) be the invariant subspace of A corresponding to eigenvalues with positive real part, let V_− = V_−(A) be the invariant subspace of A corresponding to eigenvalues with negative real part, let P_+(A) = P_+ be the skew projection onto V_+ parallel to V_−, and let P_− = P_−(A) be the skew projection onto V_− parallel to V_+. Using the same contour Γ as above, the projection P_+ has the resolvent integral representation

    P_+ = (1/(2πi)) ∫_Γ (zI − A)^{-1} dz.

It follows from the two integral representations that sign(A) = P_+ − P_− = 2P_+ − I = I − 2P_−.

The matrix sign function was introduced, using the Jordan-form definition, by Roberts in a technical report that was not published until many years later. Kato's monograph reports that the resolvent integral representation goes back to early perturbation-theoretic work of Sz.-Nagy and Kato.

There is some concern about the numerical stability of numerical methods based upon the matrix sign function. In this paper we demonstrate that evaluating the matrix sign function is a more ill-conditioned computational problem than the problem of finding bases of the invariant subspaces V_− and V_+. Sometimes it is tremendously more ill-conditioned; see the example in the section on backward errors below. Nevertheless, we also give perturbation and error analyses which show that, at least for Newton's method for the computation of the matrix sign function, in most circumstances the accuracy is competitive with that of conventional methods for computing invariant subspaces. Our analysis improves some earlier perturbation bounds.

In the next section we establish some notation and clarify the relationship between the matrix sign function and the Schur decomposition. The two sections after that give a perturbation analysis of the matrix sign function and of its invariant subspaces. We then derive a posteriori bounds on the forward and backward error associated with a corrupted value of sign(A), give a stability analysis of the Newton iteration, and finally demonstrate the results with some numerical examples.

Throughout the paper, ||·||_2 denotes the spectral norm, ||·||_1 the 1- or column-sum norm, and ||·||_F the Frobenius norm ||A||_F = (Σ_{ij} |a_{ij}|²)^{1/2}. The set of eigenvalues of a matrix A is denoted by λ(A). The open left half plane is denoted by C^− and the open right half plane by C^+. Borrowing some terminology from engineering, we refer to the invariant subspace V_− = V_−(A) of a matrix A ∈ R^{n×n} corresponding to eigenvalues in C^− as the stable invariant subspace, and to the subspace V_+ = V_+(A) corresponding to eigenvalues in C^+ as the unstable invariant subspace. We use P_− = P_−(A) for the skew projection onto V_− parallel to V_+, and P_+ = P_+(A) for the skew projection onto V_+ parallel to V_−.

Relationship with the Schur Decomposition

Suppose that A has the Schur form

    Q^H A Q = [ A11  A12 ]  k
              [  0   A22 ]  n−k

with λ(A11) ⊂ C^− and λ(A22) ⊂ C^+. If Y is a solution of the Sylvester equation

    Y A22 − A11 Y = A12,

then

    Q^H sign(A) Q = [ −I  2Y ],
                    [  0   I ]

    Q^H P_− Q = [ I  −Y ],
                [ 0   0 ]

and

    Q^H P_+ Q = [ 0  Y ].
                [ 0  I ]

The solution of the Sylvester equation has the integral representation

    Y = (1/(2πi)) ∫_Γ (zI − A11)^{-1} A12 (zI − A22)^{-1} dz,

where Γ is a closed contour enclosing all eigenvalues of A with positive real part.

The stable invariant subspace of A is the range (column space) of sign(A) − I = −2P_−. If

    (sign(A) − I) Π = QR,    Q = [ Q1  Q2 ],    R = [ R11  R12 ]
                                                    [  0    0  ]

is a QR factorization with column pivoting, then the columns of Q1 form an orthonormal basis of this subspace. Here Q is orthogonal, Π is a permutation matrix, R is upper triangular, and the k×k block R11 is nonsingular.
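The extraction of an orthonormal basis of the stable invariant subspace from a computed sign matrix can be sketched as follows (Python/SciPy; an illustration only, with the sign function again obtained from an eigendecomposition and the numerical rank decided by an ad hoc threshold):

```python
import numpy as np
from scipy.linalg import qr

def sign_via_eig(A):
    # sign(A) from an eigendecomposition; illustration only
    d, X = np.linalg.eig(A)
    return (X @ np.diag(np.sign(d.real)) @ np.linalg.inv(X)).real

def stable_basis(A):
    """Orthonormal basis of the stable invariant subspace of A,
    via a QR factorization with column pivoting of sign(A) - I."""
    n = A.shape[0]
    M = sign_via_eig(A) - np.eye(n)
    Q, R, _ = qr(M, pivoting=True)
    # numerical rank k of M = dimension of the stable subspace
    # (ad hoc threshold relative to the largest diagonal entry of R)
    diag = np.abs(np.diag(R))
    k = int(np.sum(diag > np.sqrt(np.finfo(float).eps) * max(diag[0], 1.0)))
    return Q[:, :k]
```

For an upper triangular A with one eigenvalue in C^−, `stable_basis` returns a single column spanning the stable subspace.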

It is not difficult to use the singular value decomposition of Y to show that

    ||sign(A)||_2 = ||Y||_2 + (1 + ||Y||_2²)^{1/2}.

It follows from the Sylvester equation for Y that

    ||Y||_F ≤ ||A12||_F / sep(A11, A22),

where sep is defined, as in Stewart and Sun's book, by

    sep(A11, A22) = min_{Z ≠ 0} ||A11 Z − Z A22||_F / ||Z||_F.
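By definition, sep(A11, A22) is the smallest singular value of the Sylvester operator Z ↦ A11 Z − Z A22, and for small blocks it can be evaluated through the Kronecker product form of that operator. A sketch (an illustration added here; the cost grows quickly with the block sizes):

```python
import numpy as np

def sep(A11, A22):
    """sep(A11, A22) = min over Z != 0 of ||A11 Z - Z A22||_F / ||Z||_F.

    With column-major vec, vec(A11 Z - Z A22) equals
    (I kron A11 - A22^T kron I) vec(Z), so sep is the smallest
    singular value of that Kronecker matrix.
    """
    k = A11.shape[0]
    m = A22.shape[0]
    K = np.kron(np.eye(m), A11) - np.kron(A22.T, np.eye(k))
    return np.linalg.svd(K, compute_uv=False)[-1]
```

For 1-by-1 blocks, sep reduces to the eigenvalue gap |a − b|; for normal blocks it equals the minimal gap between the two spectra.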

The Effect of Backward Errors

In this section we discuss the sensitivity of the matrix sign function subject to perturbations. Based on Fréchet derivatives, Kenney and Laub presented a first order perturbation theory for the matrix sign function via the solution of a Sylvester equation. Mathias derives an expression for the Fréchet derivative using the Schur decomposition. Kato's encyclopedic monograph includes an extensive study of series representations and of perturbation bounds for eigenprojections. In this section we derive an expression for the Fréchet derivative using integral formulas.

For a perturbation matrix E we give estimates for sign(A + E) in terms of powers of ||E||. Partition E conformally with the Schur form of A as

    Q^H E Q = [ E11  E12 ]  k
              [ E21  E22 ]  n−k.

Consider first the relatively simple case in which A is block diagonal.

Lemma. Suppose A is block diagonal,

    A = [ A11   0  ],
        [  0   A22 ]

where λ(A11) ⊂ C^− and λ(A22) ⊂ C^+. Partition the perturbation E ∈ R^{n×n} conformally with A as

    E = [ E11  E12 ].
        [ E21  E22 ]

If ||E||_F is sufficiently small, then

    sign(A + E) = sign(A) + 2 [  0   F12 ] + O(||E||_F²),
                              [ F21   0  ]

where F12 and F21 satisfy the Sylvester equations

    F12 A22 − A11 F12 = E12,
    A22 F21 − F21 A11 = E21.

Proof. Note that the eigenvalues of A11 + E11 have negative real part and the eigenvalues of A22 + E22 have positive real part. In the contour-integral definition of the sign function, choose the contour Γ to enclose λ(A22) and λ(A22 + E22), but neither λ(A11) nor λ(A11 + E11). So

    sign(A + E) = (1/(πi)) ∫_Γ (zI − A − E)^{-1} dz − I
                = (1/(πi)) ∫_Γ [ (zI − A)^{-1} + (zI − A)^{-1} E (zI − A)^{-1} ] dz − I + O(||E||_F²)
                = sign(A) + 2F + O(||E||_F²),

where

    F = (1/(2πi)) ∫_Γ (zI − A)^{-1} E (zI − A)^{-1} dz.

Partitioning F conformally with E and A, we have

    F11 = (1/(2πi)) ∫_Γ (zI − A11)^{-1} E11 (zI − A11)^{-1} dz,
    F12 = (1/(2πi)) ∫_Γ (zI − A11)^{-1} E12 (zI − A22)^{-1} dz,
    F21 = (1/(2πi)) ∫_Γ (zI − A22)^{-1} E21 (zI − A11)^{-1} dz,
    F22 = (1/(2πi)) ∫_Γ (zI − A22)^{-1} E22 (zI − A22)^{-1} dz.

As with the integral representation of the solution Y of the Sylvester equation, the blocks F12 and F21 are the solutions of the two Sylvester equations in the statement of the Lemma. The contour Γ encloses no eigenvalues of A11, so (zI − A11)^{-1} E11 (zI − A11)^{-1} is analytic inside Γ and F11 = 0.

We next prove that F22 = 0 in the case that A22 is diagonalizable, say A22 = XΛX^{-1}, where Λ = diag(λ_1, ..., λ_{n−k}). Then

    F22 = (1/(2πi)) X [ ∫_Γ (zI − Λ)^{-1} (X^{-1} E22 X) (zI − Λ)^{-1} dz ] X^{-1}.

Each component of the above integral is of the form ∫_Γ c (z − λ_j)^{-1} (z − λ_k)^{-1} dz for some constant c. If j = k, then the integrand has a double pole with zero residue, and hence the integral vanishes. If j ≠ k, then

    ∫_Γ c (z − λ_j)^{-1} (z − λ_k)^{-1} dz = 2πi [ c/(λ_j − λ_k) + c/(λ_k − λ_j) ] = 0,

because both poles lie inside Γ. The general case follows by taking limits of the diagonalizable case and using the dominated convergence theorem.
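The first order formula of the Lemma is easy to check numerically. In the sketch below (added for illustration; the small test matrices are hypothetical), the two Sylvester equations are solved with scipy.linalg.solve_sylvester, which solves a·x + x·b = q, and the residual of the first order approximation behaves like O(||E||²):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sign_via_eig(M):
    # sign function from an eigendecomposition; illustration only
    d, X = np.linalg.eig(M)
    return (X @ np.diag(np.sign(d.real)) @ np.linalg.inv(X)).real

A11 = np.array([[-1.0, 1.0], [0.0, -2.0]])   # spectrum in C^-
A22 = np.array([[1.0, 0.0], [1.0, 2.0]])     # spectrum in C^+
A = np.block([[A11, np.zeros((2, 2))],
              [np.zeros((2, 2)), A22]])      # block diagonal, as in the Lemma

E = np.array([[ 0.1, -0.2,  0.1,  0.3],
              [ 0.2,  0.1, -0.1,  0.1],
              [-0.3,  0.1,  0.2, -0.2],
              [ 0.1,  0.2,  0.1,  0.1]])
# Sylvester equations of the Lemma:
#   F12 A22 - A11 F12 = E12   and   A22 F21 - F21 A11 = E21
F12 = solve_sylvester(-A11, A22, E[:2, 2:])
F21 = solve_sylvester(A22, -A11, E[2:, :2])
F = np.block([[np.zeros((2, 2)), F12],
              [F21, np.zeros((2, 2))]])

t = 1e-4
err = np.linalg.norm(sign_via_eig(A + t * E) - (sign_via_eig(A) + 2 * t * F))
```

Here sign(A) is diag(−1, −1, 1, 1), and the residual err is of order t², several orders of magnitude below the size t of the perturbation.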

Theorem. Let the Schur form of A be as above and let E be partitioned conformally. If ||E|| is sufficiently small, then

    sign(A + E) = sign(A) + E_t + sign(A) E_p − E_p sign(A) + O(||E||²),

where

    E_t = 2 Q [ 0   Ẽ12 − Y Ẽ21 Y ] Q^H,      E_p = Q [  0   0 ] Q^H.
              [ 0          0      ]                   [ Ẽ21  0 ]

Here Y satisfies the Sylvester equation Y A22 − A11 Y = A12, the block Ẽ21 satisfies the Sylvester equation

    A22 Ẽ21 − Ẽ21 A11 = E21,

and Ẽ12 satisfies

    Ẽ12 A22 − A11 Ẽ12 = E12 + E11 Y − Y E21 Y − Y E22.

Proof. If

    S = [ I  Y ],
        [ 0  I ]

then

    S^{-1} [ A11  A12 ] S = [ A11   0  ]
           [  0   A22 ]     [  0   A22 ]

and

    S^{-1} [ E11  E12 ] S = [ E11 − Y E21    E12 + E11 Y − Y E21 Y − Y E22 ].
           [ E21  E22 ]     [     E21                 E22 + E21 Y          ]

It follows from the Lemma that

    sign( S^{-1} Q^H (A + E) Q S ) = [  −I     2 Ẽ12 ] + O(||E||²),
                                     [ 2 Ẽ21     I   ]

with Ẽ12 and Ẽ21 as in the statement of the theorem. Since sign(S^{-1} B S) = S^{-1} sign(B) S, multiplying the above equation by QS on the left and by S^{-1} Q^H on the right, we have

    sign(A + E) = sign(A) + 2 Q S [  0    Ẽ12 ] S^{-1} Q^H + O(||E||²).
                                  [ Ẽ21    0  ]

It is easy to verify that

    2 S [  0    Ẽ12 ] S^{-1} = 2 [ Y Ẽ21   Ẽ12 − Y Ẽ21 Y ]
        [ Ẽ21    0  ]            [  Ẽ21       −Ẽ21 Y     ]

and, using

    Q^H sign(A) Q = [ −I  2Y ],
                    [  0   I ]

that

    2 Q [ Y Ẽ21      0    ] Q^H = sign(A) E_p − E_p sign(A).
        [  Ẽ21    −Ẽ21 Y  ]

The theorem follows.

Of course, the Theorem also gives first order perturbations for the projections P_−(A) and P_+(A).

Corollary. Let the Schur form of A be as above and let E be partitioned conformally. If ||E|| is sufficiently small, then

    P_+(A + E) = P_+(A) + (1/2) E_t + P_+(A) E_p − E_p P_+(A) + O(||E||²),
    P_−(A + E) = P_−(A) − (1/2) E_t − P_+(A) E_p + E_p P_+(A) + O(||E||²),

where E_t and E_p are as in the statement of the Theorem.

Taking norms in the Theorem gives first order perturbation bounds.

Corollary. Let the Schur form of A be as above, let E be partitioned conformally, and let δ = sep(A11, A22) > 0. Then the first order perturbation of the matrix sign function in the Theorem is bounded by

    ||E_t + sign(A) E_p − E_p sign(A)||_F ≤ 6 (1 + ||A12||_F/δ)² ||E||_F / δ.

The corollary follows by bounding each term through the Sylvester equations for Ẽ12 and Ẽ21 and summing the resulting bounds.

On first examination, this Corollary is discouraging. It shows that calculating the matrix sign function may be more ill-conditioned than finding bases of the stable and unstable invariant subspaces. If the matrix A whose Schur decomposition appears above is perturbed to A + E, then the stable invariant subspace Im(Q1) is perturbed to Im(Q1 + Q2 E_q), where ||E_q|| is of order ||E||/sep(A11, A22). The Corollary and the following example show that sign(A + E) may differ from sign(A) by a quantity of order (1 + ||A12||/sep(A11, A22))² ||E||/sep(A11, A22), which may be much larger than ||E|| and much larger than the perturbation of the invariant subspaces.

Example. Let

    A = [ −1  1/δ ],     E = [ 0  0 ]
        [  0   1  ]          [ ε  0 ]

with parameters 0 < δ ≤ 1 and 0 < ε ≪ δ. The matrix A is already in Schur form, with sep(A11, A22) = 2, and the Sylvester equation gives Y = 1/(2δ), so

    sign(A) = [ −1  1/δ ] = A.
              [  0   1  ]

Since (A + E)² = (1 + ε/δ) I, we have

    sign(A + E) = (1 + ε/δ)^{-1/2} (A + E).

The difference is

    sign(A + E) − sign(A) = ( (1 + ε/δ)^{-1/2} − 1 ) A + (1 + ε/δ)^{-1/2} E,

whose norm is approximately ε/(2δ²). Perturbing A to A + E does indeed perturb the matrix sign function by approximately ||E||/(2δ²), an amplification of order 1/δ².

Of course, there is no rounding error in this example, so the stable invariant subspace of A + E is also the stable invariant subspace of sign(A + E); in particular, evaluating the matrix sign function exactly has done no more damage than perturbing A. The stable invariant subspace of A is V_−(A) = Im [1, 0]^T; the stable invariant subspace of A + E and of sign(A + E) is

    V_−(A + E) = Im [ 1,  δ(1 − (1 + ε/δ)^{1/2}) ]^T = Im [ 1,  −ε/2 + O(ε²) ]^T.
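A short numerical illustration of this example (the values δ = 10⁻³ and ε = 10⁻⁹ are hypothetical choices made for this sketch):

```python
import numpy as np

delta, eps_ = 1e-3, 1e-9                      # hypothetical parameter values
A = np.array([[-1.0, 1.0 / delta], [0.0, 1.0]])
E = np.array([[0.0, 0.0], [eps_, 0.0]])

sign_A = A                                    # sign(A) = A for this matrix
# (A + E)^2 = (1 + eps/delta) I, hence:
sign_AE = (A + E) / np.sqrt(1.0 + eps_ / delta)

# the sign function is perturbed by about ||E|| / (2 delta^2) ...
amplification = np.linalg.norm(sign_AE - sign_A, 2) / np.linalg.norm(E, 2)

# ... while the stable invariant subspace barely moves:
d, V = np.linalg.eig(A + E)
v = V[:, np.argmin(d.real)]                   # stable eigenvector of A + E
subspace_rotation = abs(v[1] / v[0])          # ~ eps/2
```

The amplification factor is about 1/(2δ²) = 5·10⁵ even though the stable subspace rotates by only about ε/2 = 5·10⁻¹⁰.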

For a general small perturbation matrix E with ||E|| = ε, the angle between V_−(A) and V_−(A + E) is of order no larger than O(ε/δ). The matrix sign function and the projections P_− and P_+ may be significantly more ill-conditioned than the stable and unstable invariant subspaces. Nevertheless, we argue in the next section that, despite the possible poor conditioning of the matrix sign function, the invariant subspaces are usually preserved about as accurately as their native conditioning permits.

Perturbation Theory for Invariant Subspaces of the Matrix Sign Function

In this section we discuss the accuracy of the computation of the stable invariant subspace of A via the matrix sign function.

An easy first observation is that if a backward stable procedure were used to compute the matrix sign function, then the computed value of sign(A) would be the exact value of sign(A + E) for some perturbation matrix E proportional to the precision of the arithmetic. The exact stable invariant subspace of sign(A + E) is also an invariant subspace of A + E.

However, in general we cannot guarantee that the computed value of sign(A) is exactly the value of sign(A + E) for some small perturbation E. We probably cannot even represent such a matrix sign(A + E) within the limits of finite precision arithmetic. The best that can be hoped for is to compute sign(A + E) + F for some small perturbation matrices E and F. Consider now the effect of the hypothetical forward error F.

Let A have the Schur form above and let E be a perturbation matrix partitioned conformally. Let Q1 be the matrix of the first k columns of Q and Q2 the matrix of the remaining n − k columns. If

    4 ||E21|| ( ||A12|| + ||E12|| ) ≤ ( sep(A11, A22) − ||E11|| − ||E22|| )²,

then A has stable invariant subspace V_−(A) = Im(Q1), and A + E has an invariant subspace Im(Q1 + Q2 W), where W satisfies

    ||W|| ≤ 2 ||E21|| / ( sep(A11, A22) − ||E11|| − ||E22|| ).

The singular values of W are the tangents of the canonical angles between V_− = Im(Q1) and Im(Q1 + Q2 W). In particular, the canonical angles are at most of order 2||E21||/sep(A11, A22).

For simplicity of notation, ignore for the moment the backward error matrix E and consider only the forward error. Let B = sign(A) + F, where F represents the forward error in evaluating the matrix sign function and A has the Schur form above. Let sign(A) and F have the forms

    Q^H sign(A) Q = [ −I  2Y ]
                    [  0   I ]

and

    Q^H F Q = [ F11  F12 ],
              [ F21  F22 ]

where Q is the unitary factor from the Schur decomposition of A. Now consider the stable invariant subspace V_−(A) = V_−(sign(A)) = Im(Q1). If 4||F21||(2||Y|| + ||F12||) ≤ (sep(−I, I) − ||F11|| − ||F22||)² = (2 − ||F11|| − ||F22||)², then perturbing sign(A) to sign(A) + F perturbs the invariant subspace Im(Q1) to Im(Q1 + Q2 W_s), where ||W_s|| ≤ 2||F21||/(2 − ||F11|| − ||F22||) ≈ ||F21||. If ||F|| ≤ ε_s ||sign(A)||_2, then by the formulas for ||sign(A)||_2 and ||Y|| above,

    ||F21|| ≤ ε_s ||sign(A)||_2 = ε_s ( ||Y||_2 + (1 + ||Y||_2²)^{1/2} )
            ≤ ε_s ( 1 + 2 ||Y||_2 )
            ≤ ε_s ( 1 + 2 ||A12||_F / sep(A11, A22) ).

Since sep(A11, A22) ≤ 2 ||A||_F, W_s obeys the bounds

    ||W_s|| ≲ ||F21|| ≤ ε_s ( 1 + 2 ||A12||_F / sep(A11, A22) ) = O( ε_s ||A||_F / sep(A11, A22) ).

Comparing this with the bound on ||W|| above, we see that perturbing the computed value of sign(A) by a relative error ε_s to a nearby sign matrix disturbs the stable invariant subspace no more than about twice as much as perturbing the original data A by a relative error of size ε_s might.

In order to illustrate the results, we compare our perturbation bounds with the bounds given by Bai and Demmel for both the matrix sign function and the invariant subspaces in the case of the Example above. Their bounds are stated in terms of the distance to the ill-posed problem,

    d_A = min_{ω ∈ R} σ_min(A − iωI),

where σ_min(A − iωI) is the smallest singular value of A − iωI, ω is real, and i = (−1)^{1/2}. For the Example, d_A is of order δ, and the resulting bounds substantially overestimate the actual perturbations of both the matrix sign function and the invariant subspaces.

A Posteriori Backward and Forward Error Bounds

A priori backward and forward error bounds for the evaluation of the matrix sign function remain elusive, even for the simplest algorithms. However, it is not difficult to derive a posteriori error bounds for both the backward and the forward error.

We will need the following lemma to estimate the distance between a matrix S and sign(S).

Lemma. If S ∈ R^{n×n} has no eigenvalue with zero real part and ||sign(S) S^{-1} − I|| ≤ 1, then ||sign(S) − S|| ≤ ||S − S^{-1}||.

Proof. Let F = sign(S) − S. The matrices F, S, and sign(S) commute, so

    I = sign(S)² = (S + F)² = S² + 2SF + F².

Multiplying by S^{-1}, this implies that

    S^{-1} − S = (2I + S^{-1} F) F.

Taking norms and using ||S^{-1} F|| = ||sign(S) S^{-1} − I|| ≤ 1, we get

    ||S − S^{-1}|| ≥ (2 − ||S^{-1} F||) ||F|| ≥ ||F||,

and the lemma follows.

It is clear from the proof of the Lemma that the approximation sign(S) − S ≈ (S^{-1} − S)/2 is asymptotically correct as ||sign(S) − S|| tends to zero. The bound in the Lemma therefore tends to overestimate smaller values of ||sign(S) − S|| by a factor of two.

Suppose that a numerical procedure for evaluating sign(A), applied to a matrix A ∈ R^{n×n}, produces an approximation S ∈ R^{n×n}. Consider finding small norm solutions E ∈ R^{n×n} and F ∈ R^{n×n} of

    sign(A + E) = S + F.

Of course, E and F are not uniquely determined by this equation. Common algorithms for evaluating sign(A), like Newton's method for the square root of I, guarantee that S is very nearly a square root of I, i.e., that S is a close approximation of sign(S). In the following theorem we have arbitrarily taken F = sign(S) − S.

Theorem. If ||sign(S) S^{-1} − I|| ≤ 1, then the above equation admits a solution with ||F|| ≤ ||S − S^{-1}|| and

    ||E|| / ||A|| ≤ ||SA − AS|| / ||A|| + 2 ||S − S^{-1}||.

The right-hand side is easily computed or estimated from the known values of A and S, but it is subject to subtractive cancellation of significant digits.

Proof. The matrices S + F = sign(S) and A + E must commute, so an underdetermined but consistent system of equations for E in terms of S, A, and F = sign(S) − S is

    E sign(S) − sign(S) E = sign(S) A − A sign(S) = SA − AS + FA − AF.

Let

    U^H sign(S) U = [ −I  Ỹ ]
                    [  0   I ]

be a Schur-like decomposition of sign(S), whose unitary factor is U and whose triangular factor is displayed above. Partition U^H E U and U^H A U conformally as

    U^H E U = [ E11  E12 ],      U^H A U = [ A11  A12 ].
              [ E21  E22 ]                 [ A21  A22 ]

Multiplying the commutator equation by U^H on the left and by U on the right and partitioning gives

    [ −Ỹ E21    E11 Ỹ + 2 E12 − Ỹ E22 ]  =  [ Ỹ A21    −A11 Ỹ − 2 A12 + Ỹ A22 ].
    [ −2 E21          E21 Ỹ           ]     [ 2 A21          −A21 Ỹ           ]

A hopefully small norm solution for E is

    U^H E U = [  0     (Ỹ A22 − A11 Ỹ)/2 − A12 ].
              [ −A21              0            ]

For this choice of E we have

    ||E|| ≤ || sign(S) A − A sign(S) ||
          ≤ || SA − AS || + || FA − AF ||
          ≤ || SA − AS || + 2 ||S − S^{-1}|| ||A||,

from which the theorem follows.

The Lemma and Theorem above agree well with intuition. In order to assure a small forward error, S must be a good approximate square root of I; in addition, to assure a small backward error, S must nearly commute with the original data matrix A. Newton's method for a root of X² = I tends to do a good job of both. Note that, in general, Newton's method makes a poor algorithm for finding a square root of a matrix; the square root of I is a special case. See Higham's analysis for details.
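Both a posteriori estimates are cheap to form once S is available; the sketch below (an illustration added to this text, not from the original) evaluates them:

```python
import numpy as np

def a_posteriori_estimates(A, S):
    """A posteriori error estimates for an approximation S of sign(A).

    Returns (forward, backward):
      forward  = ||S - S^{-1}||,  a bound for ||sign(S) - S||,
      backward = ||SA - AS||/||A|| + 2 ||S - S^{-1}||,
                 a bound for ||E||/||A||,
    valid under the hypothesis ||sign(S) S^{-1} - I|| <= 1.
    """
    Sinv = np.linalg.inv(S)
    forward = np.linalg.norm(S - Sinv, 2)
    commutator = np.linalg.norm(S @ A - A @ S, 2)
    backward = commutator / np.linalg.norm(A, 2) + 2.0 * forward
    return forward, backward
```

When S is an exact sign matrix that commutes with A, both estimates vanish; perturbing S inflates them accordingly.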

The Newton Iteration for the Computation of the Matrix Sign Function

There are several numerical methods for computing the matrix sign function. Among the simplest and most commonly used is the Newton-Raphson method for a root of X² = I, starting with the initial guess X0 = A. It is easily implemented using matrix inversion subroutines from widely available, high quality linear algebra packages like LAPACK. It has been extensively studied, and many variations have been suggested.

Algorithm 1 (Newton iteration without scaling).
    X0 = A
    FOR k = 0, 1, 2, ...
        X_{k+1} = ( X_k + X_k^{-1} ) / 2
    END

If A has no eigenvalues on the imaginary axis, then Algorithm 1 converges globally, and locally quadratically in a neighborhood of sign(A). Although the iteration ultimately converges rapidly, initial convergence may be slow. The initial convergence rate may, however, be improved by scaling.

The perturbation theory above shows that the first order perturbation of sign(A) may be roughly as large as ε ||sign(A)||², where ε is the relative uncertainty in A. If there is no other uncertainty, then ε is at least as large as the unit round of the finite precision arithmetic. Thus, it is reasonable to stop the Newton iteration when

    ||X_{k+1} − X_k|| ≤ c ε ||X_k||.

The ad hoc constant c is chosen in order to avoid extreme situations, e.g., c = n. Experience shows, furthermore, that it is often advantageous to take an extra step of the iteration after the stopping criterion is satisfied.
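Algorithm 1 together with this stopping criterion and the extra step can be sketched as follows (Python/NumPy; an illustration of the scheme described above, with c = n as the ad hoc constant):

```python
import numpy as np

def newton_sign(A, maxit=100):
    """Newton iteration X_{k+1} = (X_k + X_k^{-1})/2 for sign(A),
    stopped when ||X_{k+1} - X_k||_1 <= c*eps*||X_k||_1 with c = n,
    plus one extra step after the criterion is met."""
    n = A.shape[0]
    eps = np.finfo(float).eps
    X = np.array(A, dtype=np.result_type(A, 1.0))
    for _ in range(maxit):
        Xn = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xn - X, 1) <= n * eps * np.linalg.norm(X, 1):
            return 0.5 * (Xn + np.linalg.inv(Xn))   # the extra step
        X = Xn
    return X
```

For A = [[-2, 1], [0, 3]], the Sylvester equation gives Y = 0.2, so the iteration converges to sign(A) = [[-1, 0.4], [0, 1]].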

In exact arithmetic, the stable and unstable invariant subspaces of the iterates X_k are the same as those of A. In finite precision arithmetic, however, rounding errors perturb these subspaces. The numerical stability of the Newton iteration for computing the stable invariant subspace was analyzed by Byers; we give an improved error bound here.

Let X and X̂ be, respectively, the computed k-th and (k+1)-st iterates of the Newton iteration starting from

    X0 = A = Q [ A11  A12 ] Q^H.
               [  0   A22 ]

Suppose that X and X̂ have the forms

    X = Q [ X11  X12 ] Q^H,      X̂ = Q [ X̂11  X̂12 ] Q^H.
          [ E21  X22 ]                 [ Ê21  X̂22 ]

A successful rounding error analysis must establish the relationship between E21 and Ê21. In order to do so, we assume that some stable algorithm is applied to compute the inverse X^{-1} in the Newton iteration. More precisely, we assume that X̂ satisfies

    X̂ = (1/2) ( X + (X + E_X)^{-1} + E_Z ),

where

    ||E_X|| ≤ c ε ||X||,
    ||E_Z|| ≤ c ε ||X^{-1}||² ||X||,

for some constant c. Note that this is a nontrivial assumption: ordinarily, if Gaussian elimination with partial pivoting is used to compute the inverse, the above error bounds can be shown to hold only for each column separately.

Write E_X and E_Z as

    Q^H E_X Q = [ E11^X  E12^X ],      Q^H E_Z Q = [ E11^Z  E12^Z ].
                [ E21^X  E22^X ]                   [ E21^Z  E22^Z ]

The following theorem bounds ||Ê21|| and, indirectly, the perturbation in the stable invariant subspace.

Theorem. Let X, X̂, E_X, and E_Z be as above, and suppose that

    c ε ||X|| ||X11^{-1}|| ≤ 1/4,      c ε ||X|| ||X22^{-1}|| ≤ 1/4,
    ||E21|| ||X11^{-1}|| ≤ 1/4,        ||E21|| ||X22^{-1}|| ≤ 1/4,

where c is the constant above. Then

    ||Ê21|| ≤ ( 1/2 + ||X11^{-1}|| ||X22^{-1}|| ) ||E21|| + 2 c ε ||X^{-1}||² ||X||.

Proof. The relationship between Ê21 and E21 follows from the explicit formula for the inverse of X + E_X. Write

    Q^H (X + E_X) Q = [ X̃11  X̃12 ],    X̃11 = X11 + E11^X,   X̃12 = X12 + E12^X,
                      [ X̃21  X̃22 ]     X̃21 = E21 + E21^X,   X̃22 = X22 + E22^X.

By the block inversion formula,

    Q^H (X + E_X)^{-1} Q = [ X̃11^{-1} + X̃11^{-1} X̃12 X̃c^{-1} X̃21 X̃11^{-1}    −X̃11^{-1} X̃12 X̃c^{-1} ],
                           [          −X̃c^{-1} X̃21 X̃11^{-1}                       X̃c^{-1}          ]

where X̃c = X̃22 − X̃21 X̃11^{-1} X̃12 is the Schur complement. In particular,

    Ê21 = (1/2) ( E21 − X̃c^{-1} X̃21 X̃11^{-1} + E21^Z ).

Using the Neumann lemma (if ||B|| < 1, then ||(I − B)^{-1}|| ≤ 1/(1 − ||B||)) together with the hypotheses, we obtain, for example,

    ||X̃11^{-1}|| ≤ ||X11^{-1}|| / ( 1 − ||X11^{-1}|| ||E11^X|| ) ≤ (4/3) ||X11^{-1}||,

and similar bounds for ||X̃c^{-1}|| in terms of ||X22^{-1}|| and for ||X̃21|| ≤ ||E21|| + ||E21^X||. Inserting these inequalities into the expression for Ê21 and absorbing all terms that are quadratic in the small quantities ε and ||E21|| into the constants yields the bound of the theorem.

The bound in this Theorem is stronger than the earlier bound of Byers. A step of the Newton iteration is backward stable if and only if ||Ê21|| / sep(X̂11, X̂22) is dominated by ||E21|| / sep(X11, X22). The term sep(X̂11, X̂22) is dominated by

    sep(X̂11, X̂22) ≈ (1/2) sep( X11 + X11^{-1},  X22 + X22^{-1} ).

In order to guarantee numerical stability, the factors ||X11^{-1}|| and ||X22^{-1}|| in the bound of the Theorem should not be so large as to violate the inequality

    ( 1/2 + ||X11^{-1}|| ||X22^{-1}|| ) ||E21|| / sep( X11 + X11^{-1}, X22 + X22^{-1} )  ≲  ||E21|| / sep( X11, X22 ).

Roughly speaking, to have numerical stability throughout the algorithm, neither ||X11^{-1}|| ||X22^{-1}|| nor ||X̂11^{-1}|| ||X̂22^{-1}|| should be much larger than 1/sep(A11, A22).

The following example, taken from Byers' earlier study, shows a violation of this inequality, which explains the observed numerical instability.

Example. Let A11 be a real matrix whose eigenvalues lie a small distance δ to the left of the imaginary axis, and let A22 = −A11^T. Form

    R = [ A11  A12 ]
        [  E   A22 ]

and A = Q̃ R Q̃^T, where the orthogonal matrix Q̃ is chosen to be the orthogonal factor of the QR factorization of a matrix with randomly chosen, uniformly distributed entries. The parameter δ is taken small, so that A has two eigenvalues close to the imaginary axis, one on each side. The entries of A12 are also chosen randomly and uniformly distributed, and the entries of E are chosen randomly, uniformly distributed and of size at most eps, where eps is the machine precision. In this example, both sep(A11, A22) and σ_min(A) are tiny.

The following table shows the evolution of ||Ê21|| / sep(X̂11, X̂22) during the Newton iteration, starting with X0 = A and with X0 = R, respectively, where Ê21 is the computed lower-left block as above. The norm is taken to be the 1-norm.

[Table: iteration number k versus ||Ê21||_1 / sep(X̂11, X̂22) for the starting matrices A and R; the numerical entries are not recoverable in this reproduction.]

Because ||X11^{-1}|| ||X22^{-1}|| is much larger than 1/sep(A11, A22), the inequality above is violated in the first step of the Newton iteration for the starting matrix A, which is shown in the first column of the table; Newton's method never recovers from this.

It is remarkable, however, that Newton's method applied to R directly seems to recover from the loss of accuracy in the first step. The second column shows that, although ||Ê21|| / sep(X̂11, X̂22) is large at the first step, it is reduced by a roughly constant factor at every step until it reaches a level of approximately ||E21|| / sep(A11, A22). Observe that in this case the lower-left perturbation block of E_X is zero, so that ||Ê21|| is dominated by the contribution of E_Z. It is surprising to see that, from the second step on, ||Ê21|| is as small as eps, since A11 and A22 do not explicitly appear in that term after the first step.

By our analysis, the Newton iteration may be unstable when an iterate X_k is ill-conditioned. To overcome this difficulty, the Newton iteration may be carried out with a shift along the imaginary axis. In this case we have to use complex arithmetic.

Algorithm 2 (Newton iteration with shift).
    X0 = A + iβI
    FOR k = 0, 1, 2, ...
        X_{k+1} = ( X_k + X_k^{-1} ) / 2
    END

The real parameter β is chosen such that σ_min(A + iβI) is not small. Because the shift iβI is purely imaginary, it changes no real parts of eigenvalues, and A + iβI has the same stable invariant subspace as A.

For the Example above, with a suitable choice of β, the quantity ||Ê21|| / sep(X̂11, X̂22) remains small for all k, and by our analysis the computed invariant subspace is then guaranteed to be accurate.

We can combine Algorithm 1 and Algorithm 2 in the following way.

Algorithm 3 (Computing the stable invariant subspace).
    1. Call Algorithm 1 with the stopping criterion above and get X_k.
    2. Perform a QR factorization with column pivoting of X_k − I and partition Q = [Q1, Q2].
    3. Stability test: If ||Q2^H A Q1|| ≤ ||A|| n ε ||X_k||, where ε is the machine precision, then use Q1 as the orthonormal basis of the computed stable invariant subspace. Otherwise, call Algorithm 2 and start this step again.
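A sketch of Algorithm 3 (illustration only; the shift parameter β must be supplied by the user, with 1.0 as a hypothetical default, and the simple rank test is an ad hoc choice):

```python
import numpy as np
from scipy.linalg import qr

def newton_sign(A, maxit=100):
    # Algorithm 1 with the stopping criterion (c = n) and one extra step
    n = A.shape[0]
    eps = np.finfo(float).eps
    X = np.array(A, dtype=np.result_type(A, 1.0))
    for _ in range(maxit):
        Xn = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xn - X, 1) <= n * eps * np.linalg.norm(X, 1):
            return 0.5 * (Xn + np.linalg.inv(Xn))
        X = Xn
    return X

def stable_subspace(A, beta=1.0):
    """Algorithm 3: orthonormal basis of the stable invariant subspace,
    retrying with the shifted start A + i*beta*I if the stability test fails."""
    n = A.shape[0]
    eps = np.finfo(float).eps

    def basis(X):
        Q, R, _ = qr(X - np.eye(n), pivoting=True)
        diag = np.abs(np.diag(R))
        k = int(np.sum(diag > np.sqrt(eps) * max(diag[0], 1.0)))  # ad hoc rank test
        return Q[:, :k], Q[:, k:]

    X = newton_sign(A)                                    # Algorithm 1
    Q1, Q2 = basis(X)
    if np.linalg.norm(Q2.conj().T @ A @ Q1, 1) <= \
            np.linalg.norm(A, 1) * n * eps * np.linalg.norm(X, 1):
        return Q1                                         # stability test passed
    X = newton_sign(A + 1j * beta * np.eye(n))            # Algorithm 2
    Q1, _ = basis(X)
    return Q1
```

Whichever branch is taken, the returned columns span an (approximately) A-invariant subspace associated with the eigenvalues in C^−.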

Numerical Experiments

In this section we demonstrate the theoretical results of this paper with some numerical experiments. The numerical algorithms were implemented in MATLAB on an HP workstation with machine precision eps ≈ 2.2 × 10^{-16}. The stopping criterion for the Newton iteration is as above, and an extra step of the Newton iteration is performed after the stopping criterion is satisfied.

Example. This example demonstrates the validity of our stopping criterion. We constructed a matrix

    A = Q R Q^H,

where Q is a random unitary matrix and R is an upper triangular matrix with fixed diagonal elements of both signs, a parameter α in a single off-diagonal position, and zeros everywhere else. We chose α such that the norm ||sign(A)|| varies from small to large. The typical numerical behavior of log10(||X_{k+1} − X_k||_1) is that it decreases and then becomes stationary. This behavior is shown in the following graph for the cases α = 0, α = 20, and α = 50, for which ||sign(A)|| ranges from small to large. In each case the Newton iteration with our stopping criterion terminates shortly after the curve becomes stationary.

[Figure: log10(||X_{k+1} − X_k||_1) versus the number of iterations (0 to 50) for α = 0, α = 20, and α = 50.]

Example. In this test, randomly generated matrices were considered. In all cases the condition ||Q2^H A Q1|| ≤ ||A|| n eps ||X_k|| is satisfied, which indicates, by the perturbation analysis above, that the computed stable invariant subspace is acceptable.

Example. For the Example of the previous section, however, we have ||Q2^H A Q1|| > ||A|| n eps ||X_k||, and hence the computed stable invariant subspace is not acceptable. When the Newton iteration is started with X0 = A + iβI, however, the stability condition is satisfied.

Conclusions

We have given a first order perturbation theory for the matrix sign function and an error analysis for Newton's method to compute it. This analysis suggests that computing the stable (or unstable) invariant subspace of a matrix via the Newton iteration in most circumstances yields results as good as those obtained from the Schur form.

Acknowledgments

The authors would like to express their thanks to N. Higham for valuable comments on an earlier draft of the paper, and to Z. Bai and P. Benner for helpful discussions.

References

E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov, and D. Sorensen. LAPACK Users' Guide. Society for Industrial and Applied Mathematics, Philadelphia.

Z. Bai and J. W. Demmel. Design of a parallel nonsymmetric eigenroutine toolbox, Part I. Technical report, Dept. of Mathematics, University of California at Berkeley, Berkeley, CA.

Z. Bai and J. W. Demmel. Design of a parallel nonsymmetric eigenroutine toolbox. In Proceedings of the SIAM Conference on Parallel Processing for Scientific Computing, SIAM, Philadelphia. Long version available from Dept. of Mathematics, University of California at Berkeley.

Z. Bai and J. W. Demmel. Design of a parallel nonsymmetric eigenroutine toolbox, Part II. Technical report, Dept. of Mathematics, University of California at Berkeley, Berkeley, CA.

Z. Bai, J. W. Demmel, and M. Gu. Inverse free parallel spectral divide and conquer algorithms for nonsymmetric eigenproblems. Technical report, Dept. of Mathematics, University of California at Berkeley, Berkeley, CA.

L. A. Balzer. Accelerated convergence of the matrix sign function method of solving Lyapunov, Riccati and other matrix equations. Int. J. Control.

A. N. Beavers and E. D. Denman. A computational method for eigenvalues and eigenvectors of a matrix with real eigenvalues. Numer. Math.

R. Byers. Numerical stability and instability in matrix sign function based algorithms. In C. I. Byrnes and A. Lindquist, editors, Computational and Combinatorial Methods in Systems Theory. Elsevier North Holland.

R. Byers. Solving the algebraic Riccati equation with the matrix sign function. Lin. Alg. Appl.

E. D. Denman and A. N. Beavers. The matrix sign function and computations in systems. Appl. Math. and Comput.

E. D. Denman and J. Leyva-Ramos. Spectral decomposition of a matrix using the generalized sign matrix. Appl. Math. and Comput.

J. D. Gardiner and A. J. Laub. A generalization of the matrix-sign-function solution for algebraic Riccati equations. Int. J. Control.

J. D. Gardiner and A. J. Laub. Parallel algorithms for algebraic Riccati equations. Int. J. Control.

G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland, 2nd edition.

N. J. Higham. Computing the polar decomposition, with applications. SIAM J. Sci. Stat. Comput.

N. J. Higham. Newton's method for the matrix square root. Math. Comp.

T. Kato. On the convergence of the perturbation method, I. Progr. Theor. Phys.

T. Kato. On the convergence of the perturbation method, II. Progr. Theor. Phys.

T. Kato. Perturbation Theory for Linear Operators. Springer-Verlag, Berlin.

C. Kenney and A. J. Laub. Polar decomposition and matrix sign function condition estimates. SIAM J. Sci. Stat. Comput.

C. Kenney and A. J. Laub. Rational iterative methods for the matrix sign function. SIAM J. Matrix Anal. Appl.

C. Kenney and A. J. Laub. Matrix-sign algorithms for Riccati equations. IMA J. Math. Contr. Info.

C. Kenney and A. J. Laub. On scaling Newton's method for polar decomposition and the matrix sign function. SIAM J. Matrix Anal. Appl.

C. Kenney and A. J. Laub. The matrix sign function. Technical report, Department of Electrical and Computer Engineering, University of California, Santa Barbara.

P. Lancaster and M. Tismenetsky. The Theory of Matrices. Academic Press, Orlando.

R. Mathias. Condition estimation for the matrix sign function via the Schur decomposition. In John G. Lewis, editor, Proceedings of the Fifth SIAM Conference on Applied Linear Algebra.

D. V. Ouellette. Schur complements and statistics. Lin. Alg. Appl.

C. Kenney, P. Pandey, and A. J. Laub. A parallel algorithm for the matrix sign function. Int. J. High Speed Computing.

J. Roberts. Linear model reduction and solution of algebraic Riccati equations by use of the sign function. Engineering report, Cambridge University, Cambridge, England.

J. D. Roberts. Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Int. J. Control. Reprint of the Cambridge University technical report.

M. Rosenblum. On the operator equation BX − XA = Q. Duke Math. J.

G. W. Stewart. Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM Review.

G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York.

G. W. Stewart and Ji-Guang Sun. Matrix Perturbation Theory. Academic Press, London.

B. Sz.-Nagy. Perturbations des transformations autoadjointes dans l'espace de Hilbert. Comment. Math. Helv.

J. H. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, Oxford University Press, Oxford.