
International Journal of Robotics and Automation, Vol. 17, No. 4, 2002

COMPUTING EXPONENTIALS OF SKEW-SYMMETRIC MATRICES AND OF ORTHOGONAL MATRICES

J. Gallier∗ and D. Xu∗

Abstract

The authors show that there is a generalization of Rodrigues' formula for computing the exponential map exp: so(n) → SO(n) from skew-symmetric matrices to orthogonal matrices when n ≥ 4, and give a method for computing some determination of the (multivalued) function log: SO(n) → so(n). The key idea is the decomposition of a skew-symmetric n × n matrix B in terms of (unique) skew-symmetric matrices B_1, ..., B_p obtained from the diagonalization of B and satisfying some simple algebraic identities. A subproblem arising in computing log R, where R ∈ SO(n), is the problem of finding a skew-symmetric matrix B, given the matrix B^2, and knowing that B^2 has eigenvalues −1 and 0. The authors also consider the exponential map exp: se(n) → SE(n), where se(n) is the Lie algebra of the Lie group SE(n) of (affine) rigid motions. The authors show that there is a Rodrigues-like formula for computing this exponential map, and give a method for computing some determination of the (multivalued) function log: SE(n) → se(n). This yields a direct proof of the surjectivity of exp: se(n) → SE(n).

Key Words

Skew-symmetric matrices, exponentials, logarithms, rigid motions, interpolation

1. Introduction

Given a real skew-symmetric n × n matrix B, it is well known that R = e^B is a rotation matrix, where

    e^B = I_n + Σ_{k≥1} B^k / k!

is the exponential of B (for instance, see Chevalley [1], Marsden and Ratiu [2], or Warner [3]). Conversely, given any rotation matrix R ∈ SO(n), there is some skew-symmetric matrix B such that R = e^B. These two facts can be expressed by saying that the map exp: so(n) → SO(n) from the Lie algebra so(n) of skew-symmetric n × n matrices to the rotation group SO(n) is surjective (see Bröcker and tom Dieck [4]). The surjectivity of exp is an important property. Indeed, it implies the existence of a function log: SO(n) → so(n) (only locally a function; log is really a multivalued function), and this has interesting applications. For example, exp and log can be used for motion interpolation, as illustrated in Kim, M.-J., Kim, M.-S., and Shin [5, 6], and Park and Ravani [7, 8]. Motion interpolation and rational motions have also been investigated by Jüttler [9, 10], Jüttler and Wagner [11, 12], Horsch and Jüttler [13], and Röschel [14]. In its simplest form, the problem is as follows: given two rotation matrices R_1, R_2 ∈ SO(n), find a "natural" interpolating rotation R(t), where 0 ≤ t ≤ 1. Of course, it would be necessary to clarify what we mean by "natural," but note that we have the following solution:

    R(t) = exp((1 − t) log R_1 + t log R_2)

In theory, the problem is solved. However, it is still necessary to compute exp(B) and log R effectively.

When n = 2, a skew-symmetric matrix B can be written as B = θJ, where

    J = [ 0  −1 ]
        [ 1   0 ]

and it is easily shown that

    e^B = e^{θJ} = cos θ I_2 + sin θ J

Given R ∈ SO(2), we can find cos θ because tr(R) = 2 cos θ (where tr(R) denotes the trace of R). Thus, the problem is completely solved.

When n = 3, a real skew-symmetric matrix B is of the form

    B = [  0  −c   b ]
        [  c   0  −a ]
        [ −b   a   0 ]

and letting θ = sqrt(a^2 + b^2 + c^2), we have the well-known formula due to Rodrigues:

    e^B = I_3 + (sin θ / θ) B + ((1 − cos θ) / θ^2) B^2

with e^B = I_3 when B = 0 (for instance, see Marsden and Ratiu [2], McCarthy [15], or Murray, Li, and Sastry [16]). It turns out that it is more convenient to normalize B, that is, to write B = θB_1 (where B_1 = B/θ, assuming that θ ≠ 0), in which case the formula becomes

    e^{θB_1} = I_3 + sin θ B_1 + (1 − cos θ) B_1^2

Also, given R ∈ SO(3), we can find cos θ because tr(R) = 1 + 2 cos θ, and we can find B_1 by observing that

    (1/2)(R − R^T) = sin θ B_1

Actually, the above formula cannot be used when θ = 0 or θ = π, as sin θ = 0 in these cases. When θ = 0, we have R = I_3 and B_1 = 0, and when θ = π, we need to find B_1 such that

    B_1^2 = (1/2)(R − I_3)

As B_1 is a skew-symmetric 3 × 3 matrix, this amounts to solving some simple equations with three unknowns. Again, the problem is completely solved.

What about the cases where n ≥ 4? The reason why Rodrigues' formula can be derived is that

    B^3 = −θ^2 B

or, equivalently, B_1^3 = −B_1. Unfortunately, for n ≥ 4, given any non-null skew-symmetric n × n matrix B, it is generally false that B^3 = −θ^2 B, and the reasoning used in the 3D case does not apply.

In this article, we show that there is a generalization of Rodrigues' formula for computing the exponential map exp: so(n) → SO(n) when n ≥ 4, and we give a method for computing some determination of the (multivalued) function log: SO(n) → so(n). The key to the solution is that, given a skew-symmetric n × n matrix B, there are p unique skew-symmetric matrices B_1, ..., B_p such that B can be expressed as

    B = θ_1 B_1 + ··· + θ_p B_p

where {iθ_1, −iθ_1, ..., iθ_p, −iθ_p} is the set of distinct eigenvalues of B, with θ_i > 0, and where

    B_i B_j = B_j B_i = 0_n  (i ≠ j)
    B_i^3 = −B_i

This reduces the problem to the case of 3 × 3 matrices. We also consider the exponential map exp: se(n) → SE(n), where se(n) is the Lie algebra of the Lie group SE(n) of (affine) rigid motions. We show that there is a Rodrigues-like formula for computing this exponential map, and we give a method for computing some determination of the (multivalued) function log: SE(n) → se(n).

The general problem of computing the exponential of a matrix is discussed in Moler and Van Loan [17]; however, more general types of matrices are considered there. The problem of computing the logarithm and the exponential of a matrix is also investigated in [18, 19].

The article is organized as follows. In Section 2 we give a Rodrigues-like formula for computing exp: so(n) → SO(n). In Section 3 we show how to compute log: SO(4) → so(4) in the special case of SO(4), which is simpler. In Section 4 we show how to compute some determination of the (multivalued) function log: SO(n) → so(n) in general (n ≥ 4). In Section 5 we give a Rodrigues-like formula for computing exp: se(n) → SE(n). In Section 6 we show how to compute some determination of the (multivalued) function log: SE(n) → se(n). Our method yields a simple proof of the surjectivity of exp: se(n) → SE(n). In Section 7 we solve the problem of finding a skew-symmetric matrix B, given the matrix B^2, and knowing that B^2 has eigenvalues −1 and 0. Section 8 draws conclusions.

∗ Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA; e-mail: [email protected], [email protected] (paper no. 206-2550)

2. A Rodrigues-Like Formula for exp: so(n) → SO(n)

In this section, we give a Rodrigues-like formula showing how to compute the exponential e^B of a skew-symmetric n × n matrix B, where n ≥ 4. We also show the uniqueness of the matrices B_1, ..., B_p used in the decomposition of B mentioned in the introductory section. The following fairly well-known lemma plays a key role in obtaining the matrices B_1, ..., B_p (see Horn and Johnson [20], Corollary 2.5.14, or Bourbaki [21]).

Lemma 2.1. Given any skew-symmetric n × n matrix B (n ≥ 2), there is some orthogonal matrix P and some block diagonal matrix E such that

    B = P E P^T

with E of the form

    E = [ E_1                      ]
        [      ...                 ]
        [           E_m            ]
        [                0_{n−2m}  ]

where each block E_i is a real two-dimensional matrix of the form

    E_i = [  0   −θ_i ] = θ_i [ 0  −1 ]   with θ_i > 0
          [ θ_i    0  ]       [ 1   0 ]

Observe that the eigenvalues of B are ±iθ_j, or 0, reconfirming the well-known fact that the eigenvalues of a skew-symmetric matrix are purely imaginary, or null. We now prove the existence and uniqueness of the B_j's as well as the generalized Rodrigues' formula.

Theorem 2.2. Given any non-null skew-symmetric n × n matrix B, where n ≥ 3, if

    {iθ_1, −iθ_1, ..., iθ_p, −iθ_p}

is the set of distinct eigenvalues of B, where θ_j > 0 and each iθ_j (and −iθ_j) has multiplicity k_j ≥ 1, there are p unique skew-symmetric matrices B_1, ..., B_p such that

    B = θ_1 B_1 + ··· + θ_p B_p                  (1)
    B_i B_j = B_j B_i = 0_n  (i ≠ j)             (2)
    B_i^3 = −B_i                                 (3)

for all i, j with 1 ≤ i, j ≤ p, and 2p ≤ n. Furthermore,

    e^B = e^{θ_1 B_1 + ··· + θ_p B_p} = I_n + Σ_{i=1}^p (sin θ_i B_i + (1 − cos θ_i) B_i^2)

and {θ_1, ..., θ_p} is the set of the distinct positive square roots of the 2m positive eigenvalues of the symmetric matrix −(1/4)(B − B^T)^2, where m = k_1 + ··· + k_p.
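Theorem 2.2 suggests a concrete numerical recipe: read the θ_i off the eigenvalues of B, recover the B_i by solving the linear system formed by the odd powers B^{2k−1} = (−1)^{k−1} Σ_i θ_i^{2k−1} B_i (the same system used in the uniqueness argument), and then apply the generalized Rodrigues formula. A minimal sketch in Python/NumPy; the function names and tolerance handling are ours, not from the paper:

```python
import numpy as np

def skew_decompose(B, tol=1e-8, gap=1e-6):
    """Recover (theta_i, B_i) with B = sum_i theta_i B_i as in Theorem 2.2.

    The theta_i are the distinct positive imaginary parts of the eigenvalues
    of B; the B_i solve the linear system given by the odd powers of B:
        B^(2k+1) = (-1)^k * sum_i theta_i^(2k+1) B_i,   k = 0, ..., p-1.
    """
    imag = np.sort(np.linalg.eigvals(B).imag)
    thetas = []
    for t in imag:
        if t > tol and (not thetas or t - thetas[-1] > gap):
            thetas.append(float(t))
    p = len(thetas)
    M = np.array([[(-1.0) ** k * t ** (2 * k + 1) for t in thetas]
                  for k in range(p)])
    rhs = np.array([np.linalg.matrix_power(B, 2 * k + 1).ravel()
                    for k in range(p)])
    Bs = [row.reshape(B.shape) for row in np.linalg.solve(M, rhs)]
    return thetas, Bs

def expm_skew(B):
    """Generalized Rodrigues formula: e^B = I + sum(sin t B_i + (1 - cos t) B_i^2)."""
    E = np.eye(B.shape[0])
    if np.allclose(B, 0):
        return E
    thetas, Bs = skew_decompose(B)
    for t, Bi in zip(thetas, Bs):
        E += np.sin(t) * Bi + (1.0 - np.cos(t)) * (Bi @ Bi)
    return E
```

For a well-separated spectrum this agrees with the power series Σ_k B^k/k! to machine precision; clustered θ_i would need more careful grouping than the fixed tolerances used here.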

Proof. By Lemma 2.1, the matrix B can be written as B = P E P^T, where E is a block diagonal matrix consisting of m non-zero blocks of the form

    E_i = θ_i [ 0  −1 ]   with θ_i > 0
              [ 1   0 ]

If

    {iθ_1, −iθ_1, ..., iθ_p, −iθ_p}

is the set of distinct eigenvalues of B, where θ_j > 0, then for every j there is a non-empty set

    S_j = {i_1, ..., i_{k_j}}

of indices (in the set {1, ..., m}) corresponding to all the blocks E_i in which θ_j occurs. Let F_j be the matrix obtained by zeroing from E the blocks E_k, where k ∉ S_j. By factoring θ_j in F_j, we have

    F_j = θ_j G_j

and we let

    B_j = P G_j P^T

It is obvious by construction that the three equations (1)–(3) hold.

As B_i and B_j commute for all i, j, we have

    e^B = e^{θ_1 B_1 + ··· + θ_p B_p} = e^{θ_1 B_1} ··· e^{θ_p B_p}

However, using B_i^3 = −B_i as in the 3 × 3 case, we can show that

    e^{θ_i B_i} = I_n + sin θ_i B_i + (1 − cos θ_i) B_i^2

Indeed, B_i^3 = −B_i implies that

    B_i^{4k+j} = B_i^j  and  B_i^{4k+2+j} = −B_i^j   for j = 1, 2 and all k ≥ 0

and thus, we get

    e^{θ_i B_i} = I_n + Σ_{k≥1} θ_i^k B_i^k / k!
                = I_n + (θ_i/1! − θ_i^3/3! + θ_i^5/5! − ···) B_i + (θ_i^2/2! − θ_i^4/4! + θ_i^6/6! − ···) B_i^2
                = I_n + sin θ_i B_i + (1 − cos θ_i) B_i^2

Since

    B_i B_j = B_j B_i = 0_n  (i ≠ j)

we get

    e^B = Π_{i=1}^p e^{θ_i B_i} = Π_{i=1}^p (I_n + sin θ_i B_i + (1 − cos θ_i) B_i^2)
        = I_n + Σ_{i=1}^p (sin θ_i B_i + (1 − cos θ_i) B_i^2)

The matrix (1/4)(B − B^T)^2 is of the form P E^2 P^T, where

    E_i^2 = [ −θ_i^2     0    ]
            [    0    −θ_i^2  ]

Thus, the eigenvalues of −(1/4)(B − B^T)^2 are

    (θ_1^2, θ_1^2, ..., θ_m^2, θ_m^2, 0, ..., 0)   (n − 2m zeros)

and thus (θ_1, ..., θ_m) are the positive square roots of the eigenvalues of the symmetric matrix −(1/4)(B − B^T)^2.

We now prove the uniqueness of the B_j's. If we assume that matrices B_j's with the required properties exist, then using the properties of the B_j's, we get the system

    B = Σ_{i=1}^p θ_i B_i
    B^3 = −Σ_{i=1}^p θ_i^3 B_i
    B^5 = Σ_{i=1}^p θ_i^5 B_i                    (4)
    ...
    B^{2p−1} = (−1)^{p−1} Σ_{i=1}^p θ_i^{2p−1} B_i

The determinant of this system is

    δ_n = |  θ_1                   θ_2                  ···   θ_p                 |
          | −θ_1^3                −θ_2^3               ···  −θ_p^3                |
          |   .                     .                  ...    .                   |
          | (−1)^{p−1} θ_1^{2p−1}  (−1)^{p−1} θ_2^{2p−1} ··· (−1)^{p−1} θ_p^{2p−1} |

Observe that the above matrix is the product of the diagonal matrix

    diag(1, −1, 1, −1, ..., (−1)^{p−1})

by the matrix

    (Π_{i=1}^p θ_i) V(θ_1^2, ..., θ_p^2)

where V(θ_1^2, ..., θ_p^2) is a Vandermonde determinant. Therefore, the determinant δ_n can be immediately computed, and we get

    δ_n = (−1)^{p(p−1)/2} Π_{i=1}^p θ_i Π_{1≤i<j≤p} (θ_j^2 − θ_i^2)

As the θ_i are pairwise distinct and positive, δ_n ≠ 0, so the system has a unique solution for B_1, ..., B_p, which proves uniqueness.

Using the surjectivity of the exponential map exp: so(n) → SO(n), which easily follows from Lemma 2.1, Lemma 2.3, and the fact that if

    E_i = [  0   −θ_i ]
          [ θ_i    0  ]

then

    e^{E_i} = [ cos θ_i  −sin θ_i ]
              [ sin θ_i   cos θ_i ]

we obtain the following characterization of rotations in SO(n), where n ≥ 3:

Lemma 2.4. Given any rotation matrix R ∈ SO(n), where n ≥ 3, if

    {e^{iθ_1}, e^{−iθ_1}, ..., e^{iθ_p}, e^{−iθ_p}}

is the set of distinct eigenvalues of R different from 1, where 0 < θ_i ≤ π, there are p skew-symmetric matrices B_1, ..., B_p such that

    B_i B_j = B_j B_i = 0_n  (i ≠ j)
    B_i^3 = −B_i

for all i, j with 1 ≤ i, j ≤ p, and 2p ≤ n, and

    R = e^{θ_1 B_1 + ··· + θ_p B_p} = I_n + Σ_{i=1}^p (sin θ_i B_i + (1 − cos θ_i) B_i^2)

In particular, R can be written as R = P D P^T for some orthogonal matrix P and some block diagonal matrix

    D = [ D_1                      ]
        [      ...                 ]
        [           D_m            ]
        [                I_{n−2m}  ]

where the first m blocks D_i are of the form

    D_i = [ cos θ_i  −sin θ_i ]   with 0 < θ_i ≤ π
          [ sin θ_i   cos θ_i ]

3. Computing log: SO(4) → so(4)

Given a rotation matrix R ∈ SO(4), Lemma 2.4 yields

    R = e^{θ_1 B_1 + θ_2 B_2}

where B_1 and B_2 are 4 × 4 skew-symmetric matrices, and

    B_1 B_2 = B_2 B_1 = 0
    B_1^3 = −B_1
    B_2^3 = −B_2

The first case, in which iθ_1 has multiplicity 2, is analogous to the case of a rotation in SO(3). We can compute cos θ_1 easily because

    tr(R) = 4 cos θ_1

The case θ_1 = π requires computing B_1 from B_1^2. This subproblem is solved in Section 7.

In the second case, θ_1 ≠ θ_2, with 0 < θ_i ≤ π. In all cases, we know that cos θ_1 and cos θ_2 are double eigenvalues of (1/2)(R + R^T), but we can easily compute cos θ_1 + cos θ_2 and cos θ_1 cos θ_2, and cos θ_1 and cos θ_2 are the roots of a quadratic equation that will be found explicitly. Consider the system

    (1/2)(R − R^T) = sin θ_1 B_1 + sin θ_2 B_2
    (1/2)(R^2 − (R^T)^2) = sin 2θ_1 B_1 + sin 2θ_2 B_2

The determinant of the above system is

    2 sin θ_1 sin θ_2 (cos θ_2 − cos θ_1)

As long as sin θ_i ≠ 0 and 0 < θ_i < π for i = 1, 2, we have cos θ_2 ≠ cos θ_1, and the system has a unique solution for B_1 and B_2.

The properties of the B_i's immediately imply that

    R^2 = I_4 + sin 2θ_1 B_1 + sin 2θ_2 B_2 + (1 − cos 2θ_1) B_1^2 + (1 − cos 2θ_2) B_2^2
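When θ_1 ≠ θ_2 and sin θ_i ≠ 0, the 2 × 2 linear system above can be solved entrywise for B_1 and B_2. A hedged sketch, assuming θ_1 and θ_2 are already known (e.g., via Proposition 3.1); the function name is ours, not from the paper:

```python
import numpy as np

def so4_log_blocks(R, t1, t2):
    """Solve   (R - R^T)/2        = sin(t1) B1 + sin(t2) B2
               (R^2 - (R^T)^2)/2  = sin(2 t1) B1 + sin(2 t2) B2
    for B1, B2, assuming sin(ti) != 0 and cos(t1) != cos(t2)."""
    S1 = (R - R.T) / 2.0
    R2 = R @ R
    S2 = (R2 - R2.T) / 2.0
    M = np.array([[np.sin(t1), np.sin(t2)],
                  [np.sin(2 * t1), np.sin(2 * t2)]])
    sol = np.linalg.solve(M, np.array([S1.ravel(), S2.ravel()]))
    return sol[0].reshape(4, 4), sol[1].reshape(4, 4)
```

The skew-symmetric matrix B = θ_1 B_1 + θ_2 B_2 built from the result is then a logarithm of R.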

As B_1 and B_2 are skew-symmetric, we get

    (1/2)(R − R^T) = sin θ_1 B_1 + sin θ_2 B_2
    (1/2)(R^2 − (R^T)^2) = sin 2θ_1 B_1 + sin 2θ_2 B_2
    tr(R) = 2 cos θ_1 + 2 cos θ_2

We first look at the special cases in which sin θ_1 = 0 or sin θ_2 = 0. Assume that θ_1 = π and θ_2 ≠ π, the case where θ_1 ≠ π and θ_2 = π being similar. Then we get

    (1/2)(R − R^T) = sin θ_2 B_2

from which we can compute B_2. We can now compute B_1^2 from

    (1/2)(R + R^T) = I_4 + 2 B_1^2 + (1 − cos θ_2) B_2^2

Finally, we have to compute B_1 from B_1^2. This subproblem is solved in Section 7.

We may now assume that sin θ_i ≠ 0, for i = 1, 2. We show the following proposition:

Proposition 3.1. The numbers cos θ_1 and cos θ_2 are solutions of the equation x^2 − px + q = 0, where

    p = cos θ_1 + cos θ_2 = (1/2) tr(R)
    q = cos θ_1 cos θ_2 = (1/8) tr(R)^2 − (1/16) tr((R − R^T)^2) − 1

Proof. We know that

    (1/2)(R − R^T) = sin θ_1 B_1 + sin θ_2 B_2

and

    tr(B_1^2) = −2,   tr(B_2^2) = −2

Therefore, some algebra yields

    (1/4) tr((R − R^T)^2) = 2 cos^2 θ_1 + 2 cos^2 θ_2 − 4

As we also know that

    tr(R) = 2 cos θ_1 + 2 cos θ_2

we easily get the desired expression for p = cos θ_1 + cos θ_2 and q = cos θ_1 cos θ_2.

Note in passing that we also have

    cos^2 θ_1 cos^2 θ_2 = det((1/2)(R + R^T))

which is the product of the eigenvalues.

4. Computing log: SO(n) → so(n)

Given an orthogonal matrix R ∈ SO(n), we would like to find a logarithm of R, that is, some skew-symmetric matrix B such that R = e^B. By Theorem 2.2 and Lemma 2.4, we know that we can look for a matrix

    B = θ_1 B_1 + ··· + θ_p B_p

where

    {iθ_1, −iθ_1, ..., iθ_p, −iθ_p}

is the set of distinct eigenvalues of B, with 0 < θ_i ≤ π, and B_1, ..., B_p are skew-symmetric matrices such that

    B_i B_j = B_j B_i = 0_n  (i ≠ j)
    B_i^3 = −B_i

for all i, j with 1 ≤ i, j ≤ p, and 2p ≤ n. Then, we have

    R = e^{θ_1 B_1 + ··· + θ_p B_p} = I_n + Σ_{i=1}^p (sin θ_i B_i + (1 − cos θ_i) B_i^2)

As we observed earlier,

    {cos θ_1, ..., cos θ_p}

is the set of eigenvalues of the symmetric matrix (1/2)(R + R^T) that are different from 1, and it can therefore be computed by diagonalizing (1/2)(R + R^T). The question is, how can we compute B_1, ..., B_p? Since

    R = e^{θ_1 B_1 + ··· + θ_p B_p}

we get

    R^j = e^{jθ_1 B_1 + ··· + jθ_p B_p}

and thus

    R^j = I_n + Σ_{i=1}^p (sin jθ_i B_i + (1 − cos jθ_i) B_i^2)

Then, we get the system

    (1/2)(R − R^T) = Σ_{i=1}^p sin θ_i B_i
    (1/2)(R^2 − (R^T)^2) = Σ_{i=1}^p sin 2θ_i B_i
    (1/2)(R^3 − (R^T)^3) = Σ_{i=1}^p sin 3θ_i B_i
    ...
    (1/2)(R^p − (R^T)^p) = Σ_{i=1}^p sin pθ_i B_i
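The p × p system above is linear in the unknown matrices B_i once the θ_i are known, so a determination of log R can be computed directly. A sketch under the assumption that −1 is not an eigenvalue of R; the function name and tolerances are ours:

```python
import numpy as np

def log_so_n(R, tol=1e-8, gap=1e-6):
    """Logarithm of R in SO(n) via the system
        (1/2)(R^j - (R^T)^j) = sum_i sin(j t_i) B_i,   j = 1, ..., p,
    assuming -1 is not an eigenvalue of R (so all t_i < pi)."""
    ang = np.angle(np.linalg.eigvals(R))
    thetas = []
    for t in np.sort(ang[ang > tol]):
        if not thetas or t - thetas[-1] > gap:
            thetas.append(float(t))
    p = len(thetas)
    M = np.array([[np.sin((j + 1) * t) for t in thetas] for j in range(p)])
    rhs = []
    Rj = np.eye(R.shape[0])
    for _ in range(p):
        Rj = Rj @ R
        rhs.append(((Rj - Rj.T) / 2.0).ravel())
    Bs = np.linalg.solve(M, np.array(rhs))
    return sum(t * Bi.reshape(R.shape) for t, Bi in zip(thetas, Bs))
```

The result B = θ_1 B_1 + ··· + θ_p B_p is skew-symmetric and exponentiates back to R.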

As we will prove shortly, the determinant

    δ_p = | sin θ_1    sin θ_2    ···   sin θ_p  |
          | sin 2θ_1   sin 2θ_2   ···   sin 2θ_p |        (5)
          |    .          .       ...      .     |
          | sin pθ_1   sin pθ_2   ···   sin pθ_p |

of this system is given by the formula

    δ_p = 2^{p(p−1)/2} Π_{i=1}^p sin θ_i Π_{1≤i<j≤p} (cos θ_j − cos θ_i)

When 0 < θ_i < π for i = 1, ..., p, the determinant δ_p is non-null. On the other hand, −1 is an eigenvalue of R iff θ_j = π for some j. Without loss of generality, we may assume that θ_p = π iff −1 is an eigenvalue of R, and we get the following theorem.

Theorem 4.1. Given any rotation matrix R ∈ SO(n), where n ≥ 3, let

    {e^{iθ_1}, e^{−iθ_1}, ..., e^{iθ_p}, e^{−iθ_p}}

be the set of distinct eigenvalues of R different from 1, where 0 < θ_i ≤ π. Then, there are p skew-symmetric matrices B_1, ..., B_p such that

    B_i B_j = B_j B_i = 0_n  (i ≠ j)
    B_i^3 = −B_i

for all i, j, with 1 ≤ i, j ≤ p, and 2p ≤ n, and

    R = e^{θ_1 B_1 + ··· + θ_p B_p}

so that

    B = θ_1 B_1 + ··· + θ_p B_p

is a logarithm of R. Furthermore, if −1 is not an eigenvalue of R, the matrices B_1, ..., B_p are unique, and if −1 is an eigenvalue of R, the matrices B_1, ..., B_{p−1} are unique and the skew-symmetric square root of B_p^2 can be determined using the method of Section 7.

Proof. First, assume that −1 is not an eigenvalue of R, so that θ_p ≠ π. We observed earlier that the determinant of the system determining B_1, ..., B_p is δ_p, as given by (5). Thus, we need to compute δ_p.

From the identity

    (cos θ + i sin θ)^n = cos nθ + i sin nθ

we get

    sin nθ = sin θ ( C(n,1) cos^{n−1} θ − C(n,3) cos^{n−3} θ sin^2 θ + C(n,5) cos^{n−5} θ sin^4 θ − ··· )

As all the powers of sin θ in the sum are even, using the fact that cos^2 θ + sin^2 θ = 1, we can express the sum within the parentheses in terms of cos θ only, so that

    sin nθ = sin θ (a_{n−1} cos^{n−1} θ + a_{n−3} cos^{n−3} θ + ···)

Similarly,

    cos nθ = cos^n θ − C(n,2) cos^{n−2} θ sin^2 θ + C(n,4) cos^{n−4} θ sin^4 θ − ···

so that cos nθ can be expressed in terms of cos θ only, and we get

    cos nθ = b_n cos^n θ + b_{n−2} cos^{n−2} θ + ···

We claim that

    a_{n−1} = b_n = 2^{n−1}

This is easily shown by induction using the identities

    sin(n + 1)θ = sin nθ cos θ + cos nθ sin θ

and

    cos(n + 1)θ = cos nθ cos θ − sin nθ sin θ

Now, if we look at the determinant δ_p and express each sin jθ_k using

    sin jθ_k = sin θ_k (2^{j−1} cos^{j−1} θ_k + s_j(cos θ_k))

where s_j(X) is a polynomial of degree j − 3, we can factor sin θ_k from each column, and we get a determinant where the jth row is of the form

    ( 2^{j−1} cos^{j−1} θ_1 + s_j(cos θ_1)   ···   2^{j−1} cos^{j−1} θ_p + s_j(cos θ_p) )

and where the first row is

    ( 1   ···   1 )

Then, we can cancel all constant terms in rows 2, ..., p by subtracting some appropriate multiple of the first row; every term of degree 1 in rows 3, ..., p by subtracting some appropriate multiple of the second row; every term of degree 2 in rows 4, ..., p by subtracting some appropriate multiple of the third row; and so on, so that in the end we get the product of the Vandermonde determinant V(cos θ_1, ..., cos θ_p) by the determinant of the diagonal matrix

    diag(1, 2, 2^2, ..., 2^{p−1})

The result is indeed

    δ_p = 2^{p(p−1)/2} Π_{i=1}^p sin θ_i Π_{1≤i<j≤p} (cos θ_j − cos θ_i)

5. A Rodrigues-Like Formula for exp: se(n) → SE(n)

Definition 5.1. The set of affine maps ρ of R^n defined such that

    ρ(X) = RX + U

where R is a rotation matrix (R ∈ SO(n)) and U is some vector in R^n, is a group under composition called the group of direct affine isometries, or rigid motions, denoted as SE(n).

Every rigid motion can be represented by the (n + 1) × (n + 1) matrix

    [ R  U ]
    [ 0  1 ]

in the sense that

    [ ρ(X) ] = [ R  U ] [ X ]
    [  1   ]   [ 0  1 ] [ 1 ]

iff

    ρ(X) = RX + U

Definition 5.2. The vector space of real (n + 1) × (n + 1) matrices of the form

    Ω = [ B  U ]
        [ 0  0 ]

where B is a skew-symmetric matrix and U is some vector in R^n, is denoted by se(n).
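The homogeneous representation of Definition 5.1 makes composition of rigid motions a plain matrix product; a minimal sketch (the helper names are ours, for illustration only):

```python
import numpy as np

def rigid(R, U):
    """(n+1) x (n+1) homogeneous matrix representing rho(X) = R X + U."""
    n = R.shape[0]
    M = np.eye(n + 1)
    M[:n, :n] = R
    M[:n, n] = U
    return M

def apply_rigid(M, X):
    """Apply a rigid motion in homogeneous form to a point X in R^n."""
    n = len(X)
    return (M @ np.append(X, 1.0))[:n]
```

Composing two motions ρ_2 ∘ ρ_1 corresponds to the product rigid(R2, U2) @ rigid(R1, U1), whose action on X is R_2(R_1 X + U_1) + U_2.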

Theorem 5.4. Given any (n + 1) × (n + 1) matrix

    Ω = [ B  U ]
        [ 0  0 ]

in se(n), if

    {iθ_1, −iθ_1, ..., iθ_p, −iθ_p}

is the set of distinct eigenvalues of B, where θ_i > 0, there are p unique skew-symmetric matrices B_1, ..., B_p such that the three equations (1)–(3) hold. Furthermore,

    e^Ω = [ e^B  VU ]
          [ 0    1  ]

where

    e^B = I_n + Σ_{i=1}^p (sin θ_i B_i + (1 − cos θ_i) B_i^2)

and

    V = I_n + Σ_{i=1}^p ( ((1 − cos θ_i)/θ_i) B_i + ((θ_i − sin θ_i)/θ_i) B_i^2 )

Proof. The existence and uniqueness of B_1, ..., B_p and the formula for e^B come from Theorem 2.2. Since

    V = I_n + Σ_{k≥1} B^k / (k + 1)! = ∫_0^1 e^{tB} dt

we have

    V = ∫_0^1 ( I_n + Σ_{i=1}^p (sin tθ_i B_i + (1 − cos tθ_i) B_i^2) ) dt
      = [ t I_n + Σ_{i=1}^p ( (−cos tθ_i / θ_i) B_i + (t − sin tθ_i / θ_i) B_i^2 ) ]_0^1
      = I_n + Σ_{i=1}^p ( ((1 − cos θ_i)/θ_i) B_i + ((θ_i − sin θ_i)/θ_i) B_i^2 )

Remark. Given

    Ω = [ B  U ]
        [ 0  0 ]

where B = θ_1 B_1 + ··· + θ_p B_p, if we let

    Ω_i = [ B_i  U/θ_i ]
          [ 0     0    ]

then, using the fact that B_i^3 = −B_i and

    Ω_i^k = [ B_i^k  B_i^{k−1} U/θ_i ]
            [ 0      0               ]

it is easily verified that

    e^Ω = I_{n+1} + Ω + Σ_{i=1}^p ( (1 − cos θ_i) Ω_i^2 + (θ_i − sin θ_i) Ω_i^3 )

6. Computing log: SE(n) → se(n)

Given a matrix

    M = [ R  U ]
        [ 0  1 ]

of SE(n), because R is a rotation matrix, we know from Lemma 2.4 that if

    {e^{iθ_1}, e^{−iθ_1}, ..., e^{iθ_p}, e^{−iθ_p}}

is the set of distinct eigenvalues of R different from 1, where 0 < θ_i ≤ π, there are p skew-symmetric matrices B_1, ..., B_p such that

    B_i B_j = B_j B_i = 0_n  (i ≠ j)
    B_i^3 = −B_i

for all i, j with 1 ≤ i, j ≤ p, and 2p ≤ n, and furthermore,

    R = e^{θ_1 B_1 + ··· + θ_p B_p} = I_n + Σ_{i=1}^p (sin θ_i B_i + (1 − cos θ_i) B_i^2)

We can also compute B_1, ..., B_p from R, as shown in Section 4. Thus, if V is invertible, we have a method to compute a log of M. Using Theorem 5.4 we can prove that V is invertible. This yields a fairly direct proof of the surjectivity of the exponential map exp: se(n) → SE(n), and gives a method for computing some determination of the (multivalued) log function.

Theorem 6.1. The matrix

    V = I_n + Σ_{i=1}^p ( ((1 − cos θ_i)/θ_i) B_i + ((θ_i − sin θ_i)/θ_i) B_i^2 )

from Theorem 5.4 is invertible.

Proof. Let us assume that the inverse of V is of the form

    W = I_n + Σ_{i=1}^p (α_i B_i + β_i B_i^2)

Using the fact that B_i B_j = B_j B_i = 0_n for i ≠ j and B_i^3 = −B_i, the condition VW = I_n is expressed as

    I_n = I_n + Σ_{i=1}^p ( (sin θ_i α_i − (1 − cos θ_i) β_i + (1 − cos θ_i)) / θ_i ) B_i
              + Σ_{i=1}^p ( ((1 − cos θ_i) α_i + sin θ_i β_i + (θ_i − sin θ_i)) / θ_i ) B_i^2

Thus, we just have to solve the p systems of equations

    sin θ_i α_i − (1 − cos θ_i) β_i = cos θ_i − 1
    (1 − cos θ_i) α_i + sin θ_i β_i = sin θ_i − θ_i

Since the determinant of the above matrix is

    sin^2 θ_i + (1 − cos θ_i)^2 = 2(1 − cos θ_i)

and 0 < θ_i ≤ π, the matrix is invertible and the system has a unique solution. In fact, α_i and β_i are given by

    [ α_i ] = 1/(2(1 − cos θ_i)) [   sin θ_i        (1 − cos θ_i) ] [ cos θ_i − 1   ]
    [ β_i ]                      [ −(1 − cos θ_i)    sin θ_i      ] [ sin θ_i − θ_i ]

That is,

    α_i = −θ_i / 2
    β_i = 1 − θ_i sin θ_i / (2(1 − cos θ_i))

Therefore, the inverse of V is

    V^{−1} = I_n + Σ_{i=1}^p ( −(θ_i/2) B_i + (1 − θ_i sin θ_i / (2(1 − cos θ_i))) B_i^2 )
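The closed form for V from Theorem 5.4 and the inverse obtained in Theorem 6.1 can be cross-checked numerically; a sketch assuming the B_i are already in the normalized block form of Theorem 2.2 (function name ours):

```python
import numpy as np

def V_and_inverse(thetas, Bs):
    """V = I + sum( (1-cos t)/t Bi + (t-sin t)/t Bi^2 )   (Theorem 5.4)
    and its closed-form inverse
    W = I + sum( -(t/2) Bi + (1 - t sin t / (2(1-cos t))) Bi^2 )   (Theorem 6.1)."""
    n = Bs[0].shape[0]
    V = np.eye(n)
    W = np.eye(n)
    for t, Bi in zip(thetas, Bs):
        B2 = Bi @ Bi
        V += ((1 - np.cos(t)) / t) * Bi + ((t - np.sin(t)) / t) * B2
        W += (-t / 2) * Bi + (1 - t * np.sin(t) / (2 * (1 - np.cos(t)))) * B2
    return V, W
```

With B = Σ θ_i B_i, the matrix V also equals the series I + Σ_{k≥1} B^k/(k+1)!, which gives an independent check.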

Remark. The formula for V^{−1} is equivalent to the formula given in the Appendix of Murray, Li, and Sastry [16] in the special case of SE(3). This is because

    θ sin θ / (2(1 − cos θ)) = θ sin θ (1 + cos θ) / (2(1 − cos θ)(1 + cos θ)) = θ(1 + cos θ) / (2 sin θ)

and thus

    1 − θ sin θ / (2(1 − cos θ)) = (2 sin θ − θ(1 + cos θ)) / (2 sin θ)

which is the expression found in Murray, Li, and Sastry [16], except that our B_i's are normalized. Note that this expression is not well defined for θ = π. Our expression does not suffer from this problem.

7. A Method for Computing B Given B^2

As we saw in Section 4, in order to compute a logarithm of an orthogonal matrix, it may be necessary to compute a skew-symmetric matrix B given its square B^2. Actually, the eigenvalues of B are ±i, and this simplifies the problem. We need to solve the following problem: find a skew-symmetric matrix B such that A = B^2 is a given non-null symmetric matrix with eigenvalues −1 or 0, with an even number of −1's. It is slightly more convenient to look for a skew-symmetric B, given A = −B^2, as A is then a non-null symmetric matrix with eigenvalues +1 and 0, with an even number of +1's.

Since A is a symmetric matrix whose eigenvalues are known, the problem can be solved by diagonalizing A. Then, if A = PDP^T, with P orthogonal, as D has an even number of +1's, we form E from D by replacing every 2 × 2 identity block I_2 in D by

    J = [ 0  −1 ]
        [ 1   0 ]

and we let B = PEP^T. Since J^2 = −I_2, we get E^2 = −D, and then

    B^2 = PEP^T PEP^T = PE^2 P^T = −PDP^T = −A

Therefore, A = −B^2, as desired. In principle, the problem is solved. Actually, because the eigenvalues of A are special (+1 and 0), a simple method based on the Gram–Schmidt orthonormalization procedure can be designed, as we now explain.

As A = PDP^T, where D is a diagonal matrix consisting of 0's and 1's, we have A^2 = A. As a consequence, every non-null column U of A is an eigenvector of A for the eigenvalue 1, that is, AU = U. Thus, we use the following inductive method to diagonalize A.

If A = 0 (the null matrix), then B = 0. Otherwise, proceed as follows. Let (e_1, ..., e_n) be any basis of R^n, for instance, the canonical basis (where the ith entry of e_i is 1, and all other entries are 0). Let U_1 be any non-null column of A (for instance, the left-most non-null column). As U_1 is non-null, let i be the index of some non-null entry in U_1 (for instance, the least index i, or the least index such that the ith entry is maximum). We now form the new basis

    (U_1, e_1, ..., e_{i−1}, e_{i+1}, ..., e_n)

obtained from (e_1, ..., e_n) by replacing e_i by U_1 and reordering the vectors so that U_1 is now the first vector. This new basis is generally not orthonormal, and we apply Gram–Schmidt (or any of its variants, such as modified Gram–Schmidt; see Golub and Van Loan [22] or Trefethen and Bau [23]) to get an orthonormal basis

    (U_1', e_1', ..., e'_{i−1}, e'_{i+1}, ..., e_n')

This basis defines an orthogonal matrix Q_1, and we compute

    A_1 = Q_1^T A Q_1

As U_1' is just U_1 normalized to unit length, U_1' is an eigenvector of A for the eigenvalue 1, and A_1 is of the form

    A_1 = [ 1  0    ]
          [ 0  A_1' ]

We can now repeat the above procedure inductively on A_1', which is an (n − 1) × (n − 1) matrix. This will yield an orthogonal (n − 1) × (n − 1) matrix Q_2' such that

    D' = Q_2'^T A_1' Q_2'

where D' is a diagonal (n − 1) × (n − 1) matrix of 0's and 1's. Then

    A_1' = Q_2' D' Q_2'^T

and we form the orthogonal matrix

    Q_2 = [ 1  0    ]
          [ 0  Q_2' ]

and the diagonal matrix

    D = [ 1  0  ]
        [ 0  D' ]

so that A_1 = Q_2 D Q_2^T, and we finally get

    A = QDQ^T

where Q = Q_1 Q_2.

In forming the matrix E, instead of using the matrix

    J = [ 0  −1 ]
        [ 1   0 ]

we can use the matrix K = −J, since we also have K^2 = −I_2. This is the reason why B is not unique. In fact, if A has the eigenvalue 1 with multiplicity 2q, there are 2^q possibilities for B (recall that we are looking for B such that B^2 = −A, where A is a non-null symmetric matrix with eigenvalues +1 and 0, with an even number of +1's).

8. Conclusion

In this work, we have given a generalization of Rodrigues' formula for computing the exponential map exp: so(n) → SO(n) when n ≥ 4, and we have also given a method for computing some determination of the (multivalued) log function log: SO(n) → so(n). A subproblem arising in computing log R, where R ∈ SO(n), is the problem of finding a skew-symmetric matrix B, given the matrix B^2, and knowing that B^2 has eigenvalues −1 and 0. Technically, the key result is the decomposition of a skew-symmetric n × n matrix B in terms of some skew-symmetric matrices having some special properties. We also showed that there is a Rodrigues-like formula for computing the exponential map exp: se(n) → SE(n), and we gave a method for computing some determination of the (multivalued) function log: SE(n) → se(n). As a corollary we obtained a direct proof of the surjectivity of exp: se(n) → SE(n).

The method for computing log: SO(4) → so(4) has been implemented. It has applications to a locomotion problem, where the parameter space is modelled by R^4 (see Sun [25]). The problem of interpolating between two rotations R_1, R_2 ∈ SO(4) comes up naturally. Our methods can be used to perform motion interpolation in SO(n) or SE(n) for fairly large n, but we are unaware of practical applications for n ≥ 5. We are hoping that such problems will arise in the future, perhaps in robotics or even physics.

Acknowledgements

We thank Gilbert Strang for his comments. We also thank a referee of a previous version of this article and Kurt Reillag for suggesting some shorter proofs.

References

[1] C. Chevalley, Theory of Lie groups I, Princeton Mathematical Series, No. 8 (Princeton, NJ: Princeton University Press, 1946).
[2] J.E. Marsden & T.S. Ratiu, Introduction to mechanics and symmetry, TAM, vol. 17 (New York, NY: Springer-Verlag, 1994).
[3] F. Warner, Foundations of differentiable manifolds and Lie groups, GTM, No. 94 (New York, NY: Springer-Verlag, 1983).
[4] T. Bröcker & T. tom Dieck, Representations of compact Lie groups, GTM, vol. 98 (New York, NY: Springer-Verlag, 1985).
[5] M.-J. Kim, M.-S. Kim, & S.Y. Shin, A general construction scheme for unit quaternion curves with simple high order derivatives, Proceedings, SIGGRAPH Annual Conference Series, ACM, 1995, 369–376.
[6] M.-J. Kim, M.-S. Kim, & S.Y. Shin, A compact differential formula for the first derivative of a unit quaternion curve, Journal of Visualization and Computer Animation, 7, 1996, 43–57.
[7] F.C. Park & B. Ravani, Bézier curves on Riemannian manifolds and Lie groups with kinematic applications, Mechanism, Synthesis and Analysis, 70, 1994, 15–21.
[8] F.C. Park & B. Ravani, Smooth invariant interpolation of rotations, ACM Transactions on Graphics, 16, 1997, 277–295.
[9] B. Jüttler, Visualization of moving objects using dual quaternion curves, Computers and Graphics, 18(3), 1994, 315–326.
[10] B. Jüttler, An osculating motion with second order contact for spacial Euclidean motions, Mechanics of Machine Theory, 32(7), 1997, 843–853.
[11] B. Jüttler & M.G. Wagner, Computer-aided design with spacial rational B-spline motions, Journal of Mechanical Design, 118, 1996, 193–201.
[12] B. Jüttler & M.G. Wagner, Rational motion-based surface generation, Computer-Aided Design, 31, 1999, 202–213.
[13] T. Horsch & B. Jüttler, Cartesian spline interpolation for industrial robots, Computer-Aided Design, 30(3), 1998, 217–224.
[14] O. Röschel, Rational motion design—a survey, Computer-Aided Design, 30(3), 1998, 169–178.
[15] J.M. McCarthy, Introduction to theoretical kinematics (Cambridge, MA: MIT Press, 1990).
[16] R.M. Murray, Z.X. Li, & S.S. Sastry, A mathematical introduction to robotic manipulation (London, UK: CRC Press, 1994).
[17] C. Moler & C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Review, 20(4), 1978, 801–836.
[18] S.H. Cheng, N.J. Higham, C.S. Kenney, & A.J. Laub, Approximating the logarithm of a matrix to specified accuracy, SIAM Journal on Matrix Analysis and Applications, 22, 2001, 1112–1125.
[19] C.S. Kenney & A.J. Laub, A Schur–Fréchet algorithm for computing the logarithm and exponential of a matrix, SIAM Journal on Matrix Analysis and Applications, 18, 1998, 640–663.
[20] R.A. Horn & C.R. Johnson, Matrix analysis (Cambridge, UK: Cambridge University Press, 1990).
[21] N. Bourbaki, Algèbre, Chapitres 4–7, Éléments de Mathématiques (Paris: Masson, 1981).
[22] G.H. Golub & C.F. Van Loan, Matrix computations, 3rd ed. (Baltimore, MD: The Johns Hopkins University Press, 1996).
[23] L.N. Trefethen & D. Bau III, Numerical linear algebra (Philadelphia, PA: SIAM Publications, 1997).
[24] M. Berger, Géométrie 1 (Paris: Nathan, 1990). English edition: Geometry 1, Universitext (Springer-Verlag).
[25] H. Sun, Curved path human locomotion on uneven terrain, doctoral diss., University of Pennsylvania, Philadelphia, PA, 2000.

Biographies

Jean Gallier received his Ph.D. in computer science from the University of California, Los Angeles, in 1978. Dr. Gallier joined the University of Pennsylvania in 1978 as Assistant Professor in the Department of Computer and Information Science, where he has been a Professor since 1990. He has also had a secondary appointment in the Department of Mathematics since 1994. Dr. Gallier has worked in various areas, ranging from program correctness, automated theorem proving, incremental compilation, typed lambda calculi, and proof theory to geometric modelling, computer graphics, and algebraic geometry. The author of over 60 original papers, he has written three books: Logic for Computer Science (1986), Curves and Surfaces in Geometric Modeling (1999), and Geometric Methods and Applications (2000). He serves on the editorial board of several professional journals, including Theoretical Computer Science since 1992 and the Journal of Symbolic Computation since 1988.

Dianna Xu received her B.Sc. in computer science from Smith College, MA, in 1996. She then attended the University of Pennsylvania as a Ph.D. candidate in computer and information science, with a concentration on computer graphics, under the supervision of Dr. Jean Gallier. She is on track to receive her Ph.D. at the end of August 2002. Her dissertation topic is triangular spline surface design. She is also interested in research on curve and surface fitting, subdivision surfaces, mesh generation and optimization, geometric methods, and computer-aided geometric design in general. In addition, she has worked and published in related areas such as computational geometry, affine and projective geometry, and algebraic topology.
