Journal of Mathematical Analysis and Applications 213, 393–405 (1997), Article No. AY975517

A Power Method for Computing Square Roots of Complex Matrices

Mohammed A. Hasan

Department of Electrical Engineering, Colorado State University, Fort Collins, Colorado 80523

Submitted by Harlan W. Stech

Received August 22, 1995

In this paper, higher order convergent methods for computing square roots of nonsingular complex matrices are derived. These methods are globally convergent and are based on eigenvalue shifting and powering. Specifically, it is shown that for each integer $r \ge 2$, a convergent method of order $r$ can be developed. These algorithms can be used to compute square roots of general nonsingular complex matrices, including matrices with negative eigenvalues. © 1997 Academic Press

1. INTRODUCTION

A square root of a complex matrix $A \in C^{m \times m}$ is defined to be any matrix $B \in C^{m \times m}$ such that $B^2 = A$, where $C$ is the field of complex numbers. If all eigenvalues of an $m \times m$ matrix $A$ are distinct, then the matrix equation $X^2 = A$ generally has exactly $2^m$ solutions. This follows from the fact that $A$ is diagonalizable, i.e., there exists a similarity matrix $U$ such that $A = UDU^{-1}$, where $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_m)$, and thus $B = UD^{1/2}U^{-1}$, where $D^{1/2} = \mathrm{diag}((-1)^{i_1}\sqrt{\lambda_1}, \ldots, (-1)^{i_m}\sqrt{\lambda_m})$ and $i_k = 0$ or $1$ for $k = 1, 2, \ldots, m$. However, if $A$ has multiple eigenvalues, the number of solutions may differ from $2^m$, as shown next. Let $m = 2$; then without loss of generality we can assume that

$$A = \begin{pmatrix} \lambda_1^2 & 0 \\ 0 & \lambda_2^2 \end{pmatrix} \quad\text{or}\quad A = \begin{pmatrix} \lambda^2 & 1 \\ 0 & \lambda^2 \end{pmatrix}.$$

Assume that $A = \lambda^2 I$. Then the family

$$\left\{ \lambda\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} : 0 \le \theta < 2\pi \right\}$$

forms an infinite set of square roots of $A$. On the other hand, if $A = \begin{pmatrix} \lambda^2 & 1 \\ 0 & \lambda^2 \end{pmatrix}$, then $A$ has only the two square roots

$$\pm\begin{pmatrix} \lambda & 1/(2\lambda) \\ 0 & \lambda \end{pmatrix},$$

provided that $\lambda \neq 0$. Unlike the square roots of complex numbers, square roots of complex


0022-247X/97 $25.00. Copyright © 1997 by Academic Press. All rights of reproduction in any form reserved.

matrices may not exist. For example, when $\lambda = 0$ in the last matrix no square root exists. From this observation it is obvious that for $2 \times 2$ matrices the equation $B^2 = A \neq 0$ has a solution if and only if $A$ has a nonzero eigenvalue. To understand the structure of the solutions of the equation $B^2 = A$ for $m = 2$, let $B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$ and $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ be such that $B^2 = A$. Then we have the four equations $b_{i1}b_{1j} + b_{i2}b_{2j} = a_{ij}$ for $i, j = 1, 2$, which are equivalent to $F = 0$, where

$$F(b_{11}, b_{12}, b_{21}, b_{22}) = \begin{pmatrix} b_{11}^2 + b_{12}b_{21} - a_{11} \\ b_{11}b_{12} + b_{12}b_{22} - a_{12} \\ b_{21}b_{11} + b_{22}b_{21} - a_{21} \\ b_{21}b_{12} + b_{22}^2 - a_{22} \end{pmatrix}. \qquad (1)$$

The Jacobian of this system can be shown to be

$$J = \frac{\partial F(b_{11}, b_{12}, b_{21}, b_{22})}{\partial (b_{11}, b_{12}, b_{21}, b_{22})} = \begin{pmatrix} 2b_{11} & b_{21} & b_{12} & 0 \\ b_{12} & b_{11} + b_{22} & 0 & b_{12} \\ b_{21} & 0 & b_{11} + b_{22} & b_{21} \\ 0 & b_{21} & b_{12} & 2b_{22} \end{pmatrix}.$$

It can be verified that $|J| = 4(b_{11}b_{22} - b_{12}b_{21})(b_{11} + b_{22})^2$, so that $J$ is nonsingular precisely when $B$ is nonsingular and has nonzero trace. Let

$$B^0 = \begin{pmatrix} b_{11}^0 & b_{12}^0 \\ b_{21}^0 & b_{22}^0 \end{pmatrix}$$

be a solution of the equation $B^2 = A$ such that $b_{11}^0 + b_{22}^0 \neq 0$. Since $J$ is nonsingular at $(b_{11}^0, b_{12}^0, b_{21}^0, b_{22}^0)$, it follows from the implicit function theorem that $B^0$ is an isolated solution. From the eigendecomposition of $A$ indicated before, one can see that there are at least four square roots of $A$; the implicit function theorem then guarantees exactly four square roots with nonzero traces. These square roots of $A$ which have nonzero traces are referred to as functions of $A$ [1]; essentially, $B$ is a function of $A$ if $B$ can be expressed as a polynomial in $A$. Now if $b_{11} + b_{22} = 0$, then it follows from (1) that $a_{11} = a_{22} = \lambda^2$ for some $\lambda \in C$ and $a_{12} = a_{21} = 0$, i.e., $A$ is diagonal of the form $\lambda^2 I$. In this case the equation

$B^2 = A$ has a two-dimensional family of solutions given by

$$\left\{ \pm\begin{pmatrix} \sqrt{\lambda^2 - rs} & r \\ s & -\sqrt{\lambda^2 - rs} \end{pmatrix} : r, s \in C \right\}.$$

Note that when $r = s = \lambda\sin\theta$, we recover the one-parameter family $\lambda\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}$, $0 \le \theta < 2\pi$, described before. The following result provides conditions on the eigenstructure of $A$ which ensure the existence of square roots which are functions of $A$.

PROPOSITION 1 [1]. Let $A$ be nonsingular and let its $p$ elementary divisors be coprime, that is, each eigenvalue appears in only one Jordan block. Then $A$ has precisely $2^p$ square roots, each of which is a function of $A$.
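As a quick numerical aside, the infinite family of square roots of $\lambda^2 I$ exhibited above is easy to check directly. The following sketch (plain Python, no external libraries; the helper name `matmul2` is ours) squares a few members of the family and confirms each one yields $\lambda^2 I$:

```python
import math

def matmul2(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam = 3.0
for theta in (0.0, 0.7, 2.0, 5.1):
    # B is a member of the family lam * [[cos t, sin t], [sin t, -cos t]]
    B = [[lam * math.cos(theta), lam * math.sin(theta)],
         [lam * math.sin(theta), -lam * math.cos(theta)]]
    B2 = matmul2(B, B)
    # every member of the family squares to lam^2 * I
    assert all(abs(B2[i][j] - (lam ** 2 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))
```

Since distinct values of $\theta$ give distinct matrices $B$, this confirms that $\lambda^2 I$ has infinitely many square roots.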

Several computational methods for square roots of complex matrices have been reported in the literature. In [2], the Newton–Raphson method was used for computing the principal square root of a complex matrix. An accelerated algorithm for computing the positive definite square root of a positive definite matrix was presented in [3]. A matrix continued fraction method was presented in [4]. The matrix sign algorithm was developed in [5]. A Schur method for computing square roots was developed in [6]. Fast, stable methods for computing square roots were also presented in [7, 8]. It is noted that in almost all of the above methods either linear or quadratic convergence is obtained. In this paper, higher order convergent methods of order $r \ge 2$ will be derived. The essence of these methods is a process whereby a sequence of matrices converging in the limit to a square root of $A$ is generated. This process involves creating gaps between the magnitudes of the eigenvalues of different square roots of $A$ so that, for sufficiently high powers, the eigenvalues become decoupled. This is similar in principle to well-known methods such as those of Graeffe, Bernoulli, and the qd algorithm for solving polynomial equations, in that these methods are based on eigenvalue powering. For a survey of some of these methods the reader is referred to [9, 10] and the references therein.

Let $S$ be a set of commuting, and thus simultaneously diagonalizable, matrices. In the sequel, the notation $\lambda_i(X)$ denotes the $i$th eigenvalue of the square matrix $X \in S$ relative to a fixed similarity matrix which diagonalizes the set $S$. The notation $\sigma(A)$ denotes the set of eigenvalues of $A$. The symbol $R$ denotes the set of real numbers, and $\|A\|$ denotes any norm of the matrix $A$.

2. DERIVATION OF THE MAIN RESULTS

In the next theorem we will generate a sequence which converges to a square root of a square matrix.

THEOREM 2. Let $A \in C^{m \times m}$ be a nonsingular matrix. Let $r$ be a positive integer such that $r \ge 2$ and define $A_k$ and $B_k$ recursively as follows. Let

$$A_{k+1} = \sum_{l=0}^{\lfloor r/2 \rfloor} \binom{r}{2l} A_k^{r-2l} B_k^{2l} A^l \qquad (2)$$

and

$$B_{k+1} = \sum_{l=0}^{\lfloor (r-1)/2 \rfloor} \binom{r}{2l+1} A_k^{r-2l-1} B_k^{2l+1} A^l. \qquad (3)$$

Then there exists an $a \in C$ such that $B_k$ is nonsingular for all sufficiently large $k$. Set $X_k = B_k^{-1}A_k$; then with the initial values $A_0 = aI_m$ and $B_0 = I_m$ the sequence $X_k$ converges to a square root $W$ of $A$. Moreover,

$$X_{k+1} \pm W = B_{k+1}^{-1} B_k^{\,r} (X_k \pm W)^r, \qquad (4)$$

i.e., if the sequence $X_k$ converges, it is $r$th order convergent to $W$. Additionally,

$$\lim_{k \to \infty} 2^{\,r-1} A_{k+1}^{-1} A_k^{\,r} = I \quad\text{and}\quad \lim_{k \to \infty} 2^{\,r-1} B_{k+1}^{-1} B_k^{\,r} W^{\,r-1} = I.$$

Proof. Let $W$ be any square root of $A$, i.e., $W^2 = A$. We show by induction that

$$A_k \pm B_k W = (aI \pm W)^{r^k}. \qquad (5)$$

Clearly (5) holds for $k = 0$. Assume that (5) holds for the nonnegative integer $k$. Then

$$(aI \pm W)^{r^{k+1}} = (A_k \pm B_k W)^r = \sum_{l=0}^{\lfloor r/2 \rfloor} \binom{r}{2l} A_k^{r-2l} B_k^{2l} A^l \pm \sum_{l=0}^{\lfloor (r-1)/2 \rfloor} \binom{r}{2l+1} A_k^{r-2l-1} B_k^{2l+1} A^l \, W$$

$= A_{k+1} \pm B_{k+1} W$, where the last equality follows from (2) and (3). Hence (5) is true for the integer $k + 1$. This shows that (5) is true for each nonnegative integer $k$.

The nonsingularity of $A$ implies that there exists an $a \in C$ such that $|\lambda_j(aI + W)| > |\lambda_j(aI - W)|$ for $j = 1, \ldots, m$. From (5) we have $(aI + W)^{r^k} = A_k + B_k W$ and $(aI - W)^{r^k} = A_k - B_k W$. Solving the last two equations for $A_k$ and $B_k$ yields

$$A_k = \tfrac{1}{2}\bigl\{(aI + W)^{r^k} + (aI - W)^{r^k}\bigr\} \qquad (6)$$

and

$$B_k = \tfrac{1}{2}\bigl\{(aI + W)^{r^k} - (aI - W)^{r^k}\bigr\} W^{-1}. \qquad (7)$$

Note that the order of multiplication is immaterial here, since the choice $X_0 = aI$ implies that the elements of the sequence $\{X_k\}_{k=1}^{\infty}$ commute with each other. It should also be noted that for sufficiently large $k$ both $A_k$ and $B_k$ are invertible, since $|\lambda_j(aI + W)| > |\lambda_j(aI - W)|$, $j = 1, \ldots, m$. Thus multiplying both sides of (5) by $B_k^{-1}$ yields

$$B_k^{-1}A_k + W = B_k^{-1}(aI + W)^{r^k} = 2W\bigl\{I - \bigl[(aI + W)^{-1}(aI - W)\bigr]^{r^k}\bigr\}^{-1}$$

$$= 2W\Bigl\{I + \bigl[(aI + W)^{-1}(aI - W)\bigr]^{r^k} + \bigl[(aI + W)^{-1}(aI - W)\bigr]^{2r^k} + \cdots\Bigr\}.$$

Since $|\lambda_i(aI + W)| > |\lambda_i(aI - W)|$ for $i = 1, \ldots, m$, it follows from the last equation that

$$\lim_{k \to \infty} \bigl(B_k^{-1}A_k + W\bigr) = \lim_{k \to \infty} B_k^{-1}(aI + W)^{r^k} = 2W.$$

Therefore $\lim_{k \to \infty} B_k^{-1}A_k = W$. To prove (4), we have from the relation $A_{k+1} \pm B_{k+1}W = (A_k \pm B_k W)^r$ that

$$B_{k+1}^{-1}A_{k+1} \pm W = B_{k+1}^{-1}B_k^{\,r}\bigl(B_k^{-1}A_k \pm W\bigr)^r,$$

or equivalently

$$X_{k+1} \pm W = B_{k+1}^{-1}B_k^{\,r}(X_k \pm W)^r.$$

Finally, the last conclusion follows from (6), (7), and the commutativity of all the matrices involved, which give

$$2^{\,r-1}B_{k+1}^{-1}B_k^{\,r}W^{\,r-1} = \bigl\{(aI + W)^{r^{k+1}} - (aI - W)^{r^{k+1}}\bigr\}^{-1}\bigl\{(aI + W)^{r^k} - (aI - W)^{r^k}\bigr\}^{r}$$

$$= \Bigl\{I - \bigl[(aI + W)^{-1}(aI - W)\bigr]^{r^{k+1}}\Bigr\}^{-1}\Bigl\{I - \bigl[(aI + W)^{-1}(aI - W)\bigr]^{r^k}\Bigr\}^{r} \to I$$

as $k \to \infty$; the limit for $2^{\,r-1}A_{k+1}^{-1}A_k^{\,r}$ follows in the same way from (6). Q.E.D.
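In the scalar case ($m = 1$) the recursions (2) and (3) can be run directly in ordinary complex arithmetic. The sketch below (the function name `sqrt_power_method` is ours, not the paper's) implements them for an arbitrary order $r$ and returns $B_k^{-1}A_k \to W$:

```python
from math import comb

def sqrt_power_method(A, a=1.0, r=2, iters=6):
    """Scalar form of recursions (2)-(3): A_k and B_k collect the even and
    odd parts of the binomial expansion of (A_k + B_k*W)^r, and the ratio
    A_k / B_k converges to a square root W of A."""
    Ak, Bk = complex(a), complex(1.0)
    for _ in range(iters):
        An = sum(comb(r, 2 * l) * Ak ** (r - 2 * l) * Bk ** (2 * l) * A ** l
                 for l in range(r // 2 + 1))
        Bn = sum(comb(r, 2 * l + 1) * Ak ** (r - 2 * l - 1) * Bk ** (2 * l + 1) * A ** l
                 for l in range((r - 1) // 2 + 1))
        Ak, Bk = An, Bn
    return Ak / Bk

print(sqrt_power_method(9.0, a=1.0, r=2, iters=6))   # ~ (3+0j)
print(sqrt_power_method(-4.0, a=1j, r=3, iters=5))   # ~ 2j: a complex shift handles negative A
```

Note how quickly the ratio $|\lambda(aI - W)/\lambda(aI + W)|^{r^k}$ decays: for $A = 9$ and $a = 1$ this ratio is $1/2$, so six quadratic steps already drive the error to about $2^{-64}$.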

An algorithm analogous to that of Theorem 2 can be obtained by setting $X_k = B_k^{-1}A_k$, as shown next.

THEOREM 3. Let $A$ be a nonsingular matrix and let $r$ be a positive integer such that $r \ge 2$. Let

$$C_k = \sum_{l=0}^{\lfloor r/2 \rfloor} \binom{r}{2l} X_k^{r-2l} A^l, \qquad (8)$$

$$D_k = \sum_{l=0}^{\lfloor (r-1)/2 \rfloor} \binom{r}{2l+1} X_k^{r-2l-1} A^l, \qquad (9)$$

and set $X_{k+1} = D_k^{-1}C_k$. Then there exists an initial matrix $X_0 = aI \in C^{m \times m}$, $a \in C$, for which the sequence $\{X_k\}_{k=1}^{\infty}$ converges to a square root $W$ of $A$. Moreover,

$$X_{k+1} \pm W = D_k^{-1}(X_k \pm W)^r, \qquad (10)$$

i.e., if the sequence $X_k$ converges, it is $r$th order convergent to $W$.

Proof. Theorem 3 follows directly from Theorem 2 by setting $X_k = B_k^{-1}A_k$. Q.E.D.

When $r = 2$, Theorems 2 and 3 yield quadratically convergent algorithms, as follows.

COROLLARY 4. Let $A$ be an $m \times m$ nonsingular complex matrix and define the sequences $A_{k+1} = A_k^2 + B_k^2 A$ and $B_{k+1} = 2A_kB_k$, with $A_0 = aI$ and $B_0 = I$. Then for some $a \in C$, the sequence $B_k^{-1}A_k$ converges quadratically to $\sqrt{A}$. Moreover, if we set $X_k = B_k^{-1}A_k$, then the iteration $X_{k+1} = X_k - \frac{1}{2}X_k^{-1}(X_k^2 - A)$ with $X_0 = aI$ converges quadratically to some $\sqrt{A}$, and $X_{k+1} - \sqrt{A} = \frac{1}{2}X_k^{-1}(X_k - \sqrt{A})^2$.

Remark. The iteration $X_{k+1} = X_k - \frac{1}{2}X_k^{-1}(X_k^2 - A)$ of the above corollary is exactly the Newton iteration for solving $X^2 - A = 0$. Several variants of Newton's method and their implementations were presented in [2], where the matrix $A$ is assumed diagonalizable. Note that the algorithm of Corollary 4 and the Newton method are equivalent. However, the derivation of the above algorithm lends itself to the development of other methods whose convergence is of any prescribed order.
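For a concrete (hypothetical) $2 \times 2$ instance of Corollary 4, the sketch below runs the equivalent Newton form $X_{k+1} = \frac{1}{2}(X_k + X_k^{-1}A)$ with $X_0 = I$ on $A = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix}$, whose principal square root is $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$; all helper names are ours:

```python
def mat2_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat2_inv(X):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def newton_sqrt2(A, a=1.0, iters=12):
    """Corollary 4 in its Newton form X <- (X + X^{-1} A) / 2,
    started at X_0 = a*I so that every iterate commutes with A."""
    X = [[a, 0.0], [0.0, a]]
    for _ in range(iters):
        XiA = mat2_mul(mat2_inv(X), A)
        X = [[0.5 * (X[i][j] + XiA[i][j]) for j in range(2)] for i in range(2)]
    return X

R = newton_sqrt2([[5.0, 4.0], [4.0, 5.0]])
# R approximates [[2, 1], [1, 2]], and mat2_mul(R, R) recovers A
```

The commuting initial guess $X_0 = aI$ is essential here: with an arbitrary $X_0$ this simplified Newton form is not the full matrix Newton step discussed in [2].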

A cubically convergent algorithm can be derived by setting $r = 3$ in Theorems 2 and 3, as in the next result.

COROLLARY 5. Let $A$ be an $m \times m$ nonsingular complex matrix and define the sequences $A_{k+1} = A_k^3 + 3B_k^2A_kA$ and $B_{k+1} = 3A_k^2B_k + B_k^3A$, with $A_0 = aI$ and $B_0 = I$. Then for some $a \in C$, the sequence $B_k^{-1}A_k$

converges cubically to a square root of $A$. Moreover, if we set $X_k = B_k^{-1}A_k$, then the sequence satisfies the recursion $X_{k+1} = X_k - 2(3X_k^2 + A)^{-1}(X_k^2 - A)X_k$, which with the initial guess $X_0 = aI$ converges cubically to $\sqrt{A}$, and $X_{k+1} - \sqrt{A} = (3X_k^2 + A)^{-1}(X_k - \sqrt{A})^3$.
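In scalar form, Corollary 5's recursion and its error behavior are easy to observe numerically. The sketch below tracks the error against $\sqrt{2}$; the error essentially cubes at each step until it reaches machine precision:

```python
import math

A = 2.0
x = 1.0              # X_0 = a*I with a = 1
root = math.sqrt(A)
errors = []
for _ in range(4):
    # Corollary 5: X_{k+1} = X_k - 2(3X_k^2 + A)^{-1}(X_k^2 - A)X_k
    x = x - 2.0 * (x * x - A) * x / (3.0 * x * x + A)
    errors.append(abs(x - root))

# the error shrinks roughly like e -> e^3 / (3x^2 + A) per step
assert errors[-1] < 1e-12
```

With $x_0 = 1$ the successive errors are roughly $1.4 \times 10^{-2}$, $3.6 \times 10^{-7}$, and then below double-precision roundoff, which matches the cubic error formula above.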

3. ANALYSIS OF CONVERGENCE

To analyze the convergence of the sequence $\{X_k\}_{k=1}^{\infty}$, we formulate the iteration in Theorem 3 as a fixed point iteration, as established in the next result.

THEOREM 6. Assume there exists an $a \in C$ such that $|\lambda_i(aI - \sqrt{A})/\lambda_i(aI + \sqrt{A})| < 1$ for $i = 1, \ldots, m$. Then the sequence $X_k$ defined in Theorem 3 is $r$th order convergent, i.e.,

$$X_{k+1} - \sqrt{A} = O\bigl(\|X_k - \sqrt{A}\|^r\bigr),$$

or equivalently, there exists a constant $S \ge 0$ such that

$$\|X_{k+1} - \sqrt{A}\| \le S\|X_k - \sqrt{A}\|^r.$$

Proof. Let $Z \in C^{m \times m}$ be any square matrix commuting with $A$ and define $C(Z) = \sum_{l=0}^{\lfloor r/2 \rfloor}\binom{r}{2l}Z^{r-2l}A^l$ and $D(Z) = \sum_{l=0}^{\lfloor (r-1)/2 \rfloor}\binom{r}{2l+1}Z^{r-2l-1}A^l$. Then

$$C(Z) \pm D(Z)\sqrt{A} = (Z \pm \sqrt{A})^r.$$

Define $\Phi(Z) = D(Z)^{-1}C(Z)$. Then $\Phi: C^{m \times m} \to C^{m \times m}$ and

$$\Phi(\pm\sqrt{A}) = \Bigl\{\sum_{l=0}^{\lfloor (r-1)/2 \rfloor}\binom{r}{2l+1}(\pm\sqrt{A})^{r-2l-1}A^l\Bigr\}^{-1}\Bigl\{\sum_{l=0}^{\lfloor r/2 \rfloor}\binom{r}{2l}(\pm\sqrt{A})^{r-2l}A^l\Bigr\}$$

$$= \bigl\{2^{r-1}(\pm\sqrt{A})^{r-1}\bigr\}^{-1}\bigl\{2^{r-1}(\pm\sqrt{A})^{r}\bigr\} = \pm\sqrt{A}.$$

Therefore $\sqrt{A}$ and $-\sqrt{A}$ are fixed points of the function $\Phi$. Now, the function $\Phi$ can be written as

$$\Phi(Z) = \mp\sqrt{A} + D(Z)^{-1}\bigl\{C(Z) \pm D(Z)\sqrt{A}\bigr\} = \mp\sqrt{A} + D(Z)^{-1}(Z \pm \sqrt{A})^r,$$

hence $\Phi(Z) \pm \sqrt{A} = D(Z)^{-1}(Z \pm \sqrt{A})^r$. Note that when $Z$ is a scalar it can be shown that

$$\frac{d}{dZ}\Phi(Z) = rD(Z)^{-2}(Z^2 - A)^{r-1}.$$

Therefore, if $\xi^2 - A = 0$, then $\Phi(\xi) = \xi$ and $(d^l/dZ^l)\Phi(\xi) = 0$ for $l = 1, 2, \ldots, r - 1$. It follows that there is an $R > 0$, a neighborhood $S_R(\xi) = \{Z \in C^{m \times m} : \|Z - \xi\| < R\}$ of $\xi$, and a constant $K$, $0 \le K < 1$, such that

$$\|\Phi(Z) - \Phi(\xi)\| \le K\|Z - \xi\|$$

for all $Z \in \{Z \in C^{m \times m} : \|Z - \xi\| \le R\}$, i.e., $\Phi$ is a contractive mapping in $S_R(\xi)$. Hence for any $X_0 \in S_R(\xi)$ the generated sequence $X_{k+1} = \Phi(X_k)$, $k = 1, 2, \ldots$, has the properties $X_k \in S_R(\xi)$ and $\|X_k - \xi\| \le K^k \|X_0 - \xi\|$ for $k = 1, 2, \ldots$. This shows that the iteration $X_{k+1} = \Phi(X_k)$ converges to $\xi$ for any initial guess $X_0$ in $S_R(\xi)$ for which $X_0 A = A X_0$. To prove the last conclusion of the theorem, we have

$$C_k \pm D_k\sqrt{A} = (X_k \pm \sqrt{A})^r, \qquad (11)$$

where $C_k := C(X_k) = \sum_{l=0}^{\lfloor r/2 \rfloor}\binom{r}{2l}X_k^{r-2l}A^l$ and $D_k := D(X_k) = \sum_{l=0}^{\lfloor (r-1)/2 \rfloor}\binom{r}{2l+1}X_k^{r-2l-1}A^l$. Since $D(\sqrt{A}) = 2^{r-1}(\sqrt{A})^{r-1}$ is nonsingular, $D_k$ is nonsingular for all sufficiently large $k$. It follows that there exists a constant $S \ge 0$ such that $\|D_k^{-1}\| \le S$ for all sufficiently large $k$. Therefore $\|X_{k+1} - \sqrt{A}\| \le S\|X_k - \sqrt{A}\|^r$. This proves the $r$th order convergence of $X_k$. Q.E.D.
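The fixed point relation $\Phi(Z) \pm \sqrt{A} = D(Z)^{-1}(Z \pm \sqrt{A})^r$ is an exact algebraic identity, which can be checked numerically in the scalar case. The sketch below does so for $r = 3$, where $D(Z) = 3Z^2 + A$ and $\Phi(Z) = (Z^3 + 3ZA)/(3Z^2 + A)$:

```python
import math

A = 3.0
root = math.sqrt(A)

def phi(z):
    """Scalar Phi(Z) = D(Z)^{-1} C(Z) for r = 3, i.e. (Z^3 + 3ZA)/(3Z^2 + A)."""
    return (z ** 3 + 3.0 * z * A) / (3.0 * z * z + A)

z = 2.0
# exact identity for r = 3: Phi(z) - sqrt(A) = (z - sqrt(A))^3 / (3z^2 + A)
lhs = phi(z) - root
rhs = (z - root) ** 3 / (3.0 * z * z + A)
assert abs(lhs - rhs) < 1e-12

# iterating phi from z = 2 reaches sqrt(3) to machine precision in a few steps
x = 2.0
for _ in range(4):
    x = phi(x)
assert abs(x - root) < 1e-12
```

The identity makes the cubing of the error visible directly: each application of $\Phi$ raises the previous error to the third power and divides by $D(Z) \approx 4A$ near the fixed point.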

4. CONTINUED FRACTION EXPANSION

There are many formulas in the literature that approximate square roots of numbers and matrices using continued fraction expansions [4]. In this section, we express the results of Theorem 3 in continued fraction form.

THEOREM 7. Let $A$ be a nonsingular matrix and let $W$ be a square root of $A$. Then for each $a \in C$ for which $|\lambda_i(aI + W)| > |\lambda_i(aI - W)|$, $i = 1, \ldots, m$, $W$ has the representation

$$W = aI + (A - a^2I)\Bigl\{2aI + (A - a^2I)\bigl\{2aI + (A - a^2I)\{2aI + \cdots\}^{-1}\bigr\}^{-1}\Bigr\}^{-1}. \qquad (12)$$

It can easily be verified that the $r$th order truncation of the continued fraction (12) yields $D(aI)^{-1}C(aI)$, where $C(aI) = \sum_{l=0}^{\lfloor r/2 \rfloor}\binom{r}{2l}a^{r-2l}A^l$ and $D(aI) = \sum_{l=0}^{\lfloor (r-1)/2 \rfloor}\binom{r}{2l+1}a^{r-2l-1}A^l$. When $A$ is a scalar, formula (12) reduces to

$$\sqrt{A} = a + \cfrac{A - a^2}{2a + \cfrac{A - a^2}{2a + \cfrac{A - a^2}{2a + \cdots}}}\,, \qquad (13)$$

which in the case $a = 1$ becomes that of [4]. Note that when $A$ is positive, (13) converges for any $a \in C$ with nonzero real part. The free parameter $a$ provides some flexibility in choosing the initial guess, in that the closer $a$ is to $\sqrt{A}$, the faster the convergence.
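A depth-$n$ truncation of the scalar continued fraction (13) is straightforward to evaluate from the bottom up. In the sketch below (the helper name `cf_sqrt` is ours), each extra level multiplies the error by roughly $(\sqrt{A} - a)/(\sqrt{A} + a)$, so convergence is fastest when $a$ is close to $\sqrt{A}$:

```python
import math

def cf_sqrt(A, a, depth):
    """Evaluate the depth-level truncation of (13):
    sqrt(A) ~ a + (A - a^2) / (2a + (A - a^2) / (2a + ...))."""
    tail = 2.0 * a
    for _ in range(depth - 1):
        tail = 2.0 * a + (A - a * a) / tail
    return a + (A - a * a) / tail

for depth in (3, 6, 12):
    approx = cf_sqrt(2.0, 1.0, depth)
    print(depth, abs(approx - math.sqrt(2.0)))  # error shrinks with depth
```

For $A = 2$ and $a = 1$ this generates the classical convergents $3/2, 7/5, 17/12, \ldots$ of $\sqrt{2}$, with the error contracting by a factor of about $(\sqrt{2}-1)/(\sqrt{2}+1) \approx 0.17$ per level.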

Remark on the Choice of the Initial Guesses. In this remark we provide special cases where conditions on the initial matrix $X_0$ can be imposed to ensure convergence. Assume that no eigenvalue of $A$ lies on the negative real axis. Then each eigenvalue of $\sqrt{A}$ has nonzero real part. The initial guess $A_0 = aI$ with $a > 0$ ($a < 0$) forces the sequence $X_k$ to converge to a square root of $A$ whose eigenvalues have positive (negative) real part. However, if some of the eigenvalues of $\sqrt{A}$ are on the imaginary axis, then the choice $X_0 = (a + ib)I$, where $a > 0$ and $b > 0$, will lead to a sequence which converges to a square root of $A$ having eigenvalues with nonnegative real parts. These observations show in particular that if $A$ is positive (negative) definite, then the methods of Theorems 2 and 3 converge for any initial guess of the form $X_0 = aI$, $a \in R$ ($X_0 = iaI$, $a \in R$), where $i = \sqrt{-1}$. This is an improvement over the algorithm in [2], which is applicable only to matrices having no negative eigenvalues.
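The effect of a complex shift is visible even for scalars: Newton's iteration for $\sqrt{A}$ cannot converge for $A < 0$ when started from a real point, but does converge from a complex one. A minimal sketch (a scalar stand-in for the matrix statement above; the function name is ours):

```python
def newton_sqrt(A, x0, iters=60):
    """Newton's iteration x <- (x + A/x)/2 for a square root of the scalar A."""
    x = complex(x0)
    for _ in range(iters):
        x = 0.5 * (x + A / x)
    return x

# a real start on A = -4 keeps every iterate real, so the sequence can
# never reach the purely imaginary roots +-2j
assert newton_sqrt(-4.0, 1.0, iters=7).imag == 0.0

# a complex start with positive imaginary part converges to the root 2j
print(newton_sqrt(-4.0, 1 + 1j))   # ~ 2j
```

This mirrors the remark: the imaginary part of the shift breaks the symmetry between the two square roots of a negative number, and the iteration then locks onto the branch on the same side as the starting point.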

5. COMPUTATIONAL RESULTS

To demonstrate the performance of the algorithms proposed in this paper, we consider in this section some examples showing the behavior of some of these methods in finite-precision arithmetic. These computations were carried out with approximately seven decimal digits of accuracy. For the purpose of comparison we apply Theorem 3 to four examples. The notation $X_{k,r}$ will denote the computed square root using $k$ iterations of the $r$th order method. The error is measured in the Frobenius norm $\|X_{k,r}^2 - A\|_F$.

EXAMPLE 1. Consider the complex $3 \times 3$ matrix

$$A = \frac{1}{3}\begin{pmatrix} 8 + 15i & -1 - 3i & -4 - 9i \\ -1 - 3i & 5 + 9i & -1 - 3i \\ -4 - 9i & -1 - 3i & 8 + 15i \end{pmatrix}.$$

Applying five iterations of the algorithm of Theorem 3 with $r = 2$ and $X_0 = (1 + i)I$ yields

$$X_{5,2} = \begin{pmatrix} 1.938066 + 1.123146i & -0.2334078 - 0.2188987i & -0.6059740 - 0.4491569i \\ -0.2334079 - 0.2188985i & 1.565499 + 0.8928874i & -0.2334077 - 0.2188986i \\ -0.6059739 - 0.4491573i & -0.2334077 - 0.2188989i & 1.938066 + 1.123145i \end{pmatrix},$$

which agrees with the exact square root to five decimal places, i.e., $\|X_{5,2}^2 - A\|_F = O(10^{-6})$. Comparable accuracy can also be achieved using only four iterations of a third order method with the same initial guess, in which case we obtain $\|X_{4,3}^2 - A\|_F = O(10^{-6})$.

EXAMPLE 2. Consider the $4 \times 4$ matrix [2]

$$A = \begin{pmatrix} 5 & 4 & 1 & 1 \\ 4 & 5 & 1 & 1 \\ 1 & 1 & 4 & 2 \\ 1 & 1 & 2 & 4 \end{pmatrix}.$$

The eigenvalues of this matrix are $\lambda(A) = \{1, 2, 5, 10\}$ and its 2-norm condition number is $k_2(A) = 10$. When $X_0 = I$ and four iterations are used with $r = 3$, we obtain

$$X_{4,3} = \begin{pmatrix} 1.988520 & 0.9885167 & 0.1852414 & 0.1852417 \\ 0.9885170 & 1.988519 & 0.1852422 & 0.1852421 \\ 0.1852420 & 0.1852420 & 1.917762 & 0.5035480 \\ 0.1852418 & 0.1852418 & 0.5035481 & 1.917762 \end{pmatrix}$$

and $\|X_{4,3}^2 - A\|_F = O(10^{-6})$. We also note that $\|X_{4,2}^2 - A\|_F = O(10^{-3})$, while $\|X_{5,2}^2 - A\|_F = O(10^{-7})$.
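A general-purpose (if naive) version of the Theorem 3 iteration can be written with plain Python lists: build $C_k$ and $D_k$ from (8)–(9) and solve $D_k X_{k+1} = C_k$ by Gaussian elimination. The sketch below (all helper names are ours) reproduces the flavor of Example 2 with $r = 3$ and $X_0 = I$:

```python
from math import comb

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_axpy(c, X, Y):
    """Return c*X + Y elementwise."""
    n = len(X)
    return [[c * X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def solve(D, C):
    """Solve D * X = C by Gauss-Jordan elimination with partial pivoting."""
    n = len(D)
    M = [list(rd) + list(rc) for rd, rc in zip(D, C)]  # augmented [D | C]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for row in range(n):
            if row != col:
                f = M[row][col]
                M[row] = [u - f * v for u, v in zip(M[row], M[col])]
    return [row[n:] for row in M]

def sqrt_theorem3(A, a=1.0, r=3, iters=6):
    """Iterate X_{k+1} = D_k^{-1} C_k with C_k, D_k built as in (8)-(9)."""
    n = len(A)
    X = [[a if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        C = [[0.0] * n for _ in range(n)]
        D = [[0.0] * n for _ in range(n)]
        for l in range(r // 2 + 1):          # C_k = sum comb(r,2l) X^{r-2l} A^l
            P = identity(n)
            for _ in range(r - 2 * l):
                P = mat_mul(P, X)
            for _ in range(l):
                P = mat_mul(P, A)
            C = mat_axpy(float(comb(r, 2 * l)), P, C)
        for l in range((r - 1) // 2 + 1):    # D_k = sum comb(r,2l+1) X^{r-2l-1} A^l
            P = identity(n)
            for _ in range(r - 2 * l - 1):
                P = mat_mul(P, X)
            for _ in range(l):
                P = mat_mul(P, A)
            D = mat_axpy(float(comb(r, 2 * l + 1)), P, D)
        X = solve(D, C)
    return X

A = [[5.0, 4.0, 1.0, 1.0],
     [4.0, 5.0, 1.0, 1.0],
     [1.0, 1.0, 4.0, 2.0],
     [1.0, 1.0, 2.0, 4.0]]
X = sqrt_theorem3(A, a=1.0, r=3, iters=6)
# X[0][0] should be close to the value 1.98852 reported above for X_{4,3}
```

Six cubic iterations are more than enough here; the printed table in the paper used only four and already reached roughly single-precision accuracy.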

EXAMPLE 3. Consider the $4 \times 4$ near singular matrix

$$A = \begin{pmatrix} 0 & 0.07 & 0.27 & -0.33 \\ 1.31 & -0.36 & 1.21 & 0.4 \\ 1.06 & 2.86 & 1.49 & -1.34 \\ -2.64 & -1.84 & -0.24 & -2.01 \end{pmatrix}.$$

The eigenvalues of this matrix are $\lambda(A) = \{0.03, 3.03, -1.97 \pm i\}$. Applying the iteration of Theorem 3 with $r = 3$ and $X_0 = (1 + i)I$, we obtain

$$X_{7,3} = \begin{pmatrix} 0.2441971 - 0.0904018i & 0.1997281 + 1.3174040i & \cdots \\ 1.1816760 + 0.2569374i & 0.0106598 + 0.1509080i & \cdots \\ 1.3707720 - 0.6756389i & -1.9823040 + 0.3442460i & \cdots \\ -0.0851860 + 0.8455525i & -1.2489340 - 0.1964209i & \cdots \end{pmatrix}$$

with error $\|X_{7,3}^2 - A\|_F = O(10^{-4})$. However, when $r = 2$, similar accuracy can be obtained with the same $X_0$ as

$$X_{6,2} = \begin{pmatrix} 0.2437900 - 0.0904090i & 0.1997136 + 1.3175730i & \cdots \\ 1.1817800 + 0.2569094i & 0.0109557 + 0.1507968i & \cdots \\ 1.3708200 - 0.6754091i & -1.9823640 + 0.3442680i & \cdots \\ -0.0851482 + 0.8455746i & -1.2490360 - 0.1964408i & \cdots \end{pmatrix},$$

with the error measured in the Frobenius norm as $\|X_{6,2}^2 - A\|_F = O(10^{-4})$.

EXAMPLE 4. Consider the $3 \times 3$ matrix [2]

$$A = \begin{pmatrix} 1 & 1 & -2 \\ -1 & 2 & 1 \\ 0 & 1 & -1 \end{pmatrix},$$

which has the eigenvalues $\lambda(A) = \{-1, 1, 2\}$, i.e., this matrix has a negative eigenvalue. Thus the initial matrix $X_0 = aI$ should be chosen so that

the imaginary part of $a$ is nonzero. Using $X_0 = (1 + i)I$, we obtain

$$X_{9,3} = \begin{pmatrix} 1.0285987 - 0.1666667i & 0.4714045 - 0.3333333i & -1.0285954 + 1.1666667i \\ -0.4142135 - 0.0000000i & 1.4142135 + 0.0000000i & 0.4142135 + 0.0000000i \\ 0.0285954 - 0.1666667i & 0.4714045 - 0.3333333i & -0.0285954 + 1.1666666i \end{pmatrix},$$

with error $\|X_{5,2}^2 - A\|_F = O(10^{-10})$. If a method of order three is used, then $\|X_{3,3}^2 - A\|_F = O(10^{-8})$. In summary, some square roots of nonsingular complex matrices can be computed using only initial matrices of the form $X_0 = (a + ib)I$. The case $b \neq 0$ is required for matrices having negative eigenvalues.

6. CONCLUSION

A new set of algorithms for computing square roots of nonsingular complex matrices was developed. Given any integer $r \ge 2$, we presented a systematic way of deriving an $r$th order convergent square root algorithm. The convergence and its speed are largely affected by the ratios $R_i = |\lambda_i(X_0 - \sqrt{A})| / |\lambda_i(X_0 + \sqrt{A})|$, $i = 1, \ldots, m$: the smaller these ratios, the faster the convergence.

ACKNOWLEDGMENT

The author thanks the referees for their helpful remarks and suggestions which improved the quality of this work.

REFERENCES

1. N. J. Higham, Computing real square roots of a real matrix, Linear Algebra Appl. 88/89 (1987), 405–430.
2. N. J. Higham, Newton's method for the matrix square root, Math. Comp. 46, No. 174 (1986), 537–549.
3. E. D. Denman, Roots of real matrices, Linear Algebra Appl. 36 (1981), 133–139.
4. L. S. Shieh and N. Chahin, A computer-aided method for the factorization of matrix polynomials, Appl. Math. Comput. 2 (1976), 63–94.
5. E. D. Denman and A. N. Beavers, The matrix sign function and computations in systems, Appl. Math. Comput. 2 (1976), 63–94.
6. Å. Björck and S. Hammarling, A Schur method for the square root of a matrix, Linear Algebra Appl. 52/53 (1983), 127–140.
7. W. D. Hoskins and D. J. Walton, A fast method of computing the square root of a matrix, IEEE Trans. Automat. Control AC-23, No. 3 (1978), 494–495.
8. W. D. Hoskins and D. J. Walton, A fast, more stable method for computing the pth root of positive definite matrices, Linear Algebra Appl. 26 (1979), 139–163.
9. P. Henrici, "Applied and Computational Complex Analysis," Vol. 1, Wiley, New York, 1974.
10. A. S. Householder, "The Numerical Treatment of a Single Nonlinear Equation," McGraw–Hill, New York, 1970.
11. M. A. Hasan, Higher order convergent algorithms for computing nth roots of complex matrices, submitted for publication.