
Head Pose Determination from One Image Using a Generic Model

Ikuko Shimizu(1,3), Zhengyou Zhang(2,3), Shigeru Akamatsu(3), Koichiro Deguchi(1)
(1) Faculty of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113, Japan
(2) INRIA, 2004 route des Lucioles, BP 93, F-06902 Sophia-Antipolis Cedex, France
(3) ATR, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
e-mail: [email protected]

Abstract

We present a new method for determining the pose of a human head from its 2D image. It does not use any artificial markers put on the face. The basic idea is to use a generic model of a human head, which accounts for variation in shape and facial expression. In particular, a set of 3D curves is used to model the contours of the eyes, lips, and eyebrows. A technique called Iterative Closest Curve matching (ICC) is proposed, which aims at recovering the pose by iteratively minimizing the distances between the projected model curves and their closest image curves. Because curves contain richer information (such as curvature and length) than points, ICC is both more robust and more efficient than the well-known iterative closest point matching technique (ICP). Furthermore, the image can be taken by a camera with unknown internal parameters, which can be recovered by our technique thanks to the 3D model. Preliminary experiments show that the proposed technique is promising and that an accurate pose estimate can be obtained from just one image with a generic head model.

1. Introduction

This paper deals with techniques for estimating the pose of a human head from its 2D image taken by a camera. Such techniques are useful for the realization of new man-machine interfaces. We present a new method for the accurate estimation of a head pose from only one 2D image using a 3D model of human heads. Thanks to a 3D model with characteristic curves, our method does not use any markers on the face and works with an arbitrary camera with unknown parameters.

Several methods have been proposed for head pose estimation which detect facial features and estimate the pose from the locations of these features using a 2D face model [1] or by template matching [3]. Jebara [7] tracked facial features in an image sequence to generate a 3D model of the face and estimate its pose.

We use 3D models of human heads in order to estimate a pose from only one 2D image. There are some difficulties with such 3D models: head shapes differ from one person to another and, furthermore, facial expressions may vary even for one person. Nevertheless, it is unrealistic to have 3D head models for all persons and for all possible facial expressions. To deal with this problem effectively, we use a generic model of the head, which is applicable to many persons and is able to account for the variety of facial expressions. Such a model is constructed from the results of intensive measurements on the heads of many people. With this 3D generic model, we suppose that an image of a head is the projection of this 3D generic model onto the image plane. The problem is then to estimate this transformation, which is composed of the rigid displacement of the head and a perspective projection.

Our strategy is to define edge curves on the 3D generic model in advance. For edge curves, we use the contours of the eyes, lips, eyebrows, and so on. They are caused by discontinuities of the reflectance and appear in the image independently of the head pose in 3D space. (We call these edges stable edges.) For each defined edge curve on the generic model, we search for its corresponding curve in the image. This is done by first extracting every edge from the image and then using a relaxation method.

After we have established the correspondences between the edge curves on the model and the edges in the image, we estimate the head pose. For this purpose, we develop the ICC (Iterative Closest Curve) method, which minimizes the distance between the curves on the model and the corresponding curves in the image. This ICC method is similar to the ICP (Iterative Closest Point) method [5][8], which minimizes the distance from points of a 3D model to the corresponding measured points of the object. Because a curve contains much richer information than a point, curve correspondences can be established more robustly and with less ambiguity; therefore, pose estimation based on curve correspondences is expected to be more accurate than that based on point correspondences.

The ICC method is an iterative algorithm and needs a reasonable initial guess. To obtain it, prior to applying the ICC method, we roughly compute the head pose and the camera parameters by using the correspondence of conics fitted to the stable edges. This computation is carried out analytically. Then, a more precise pose is estimated by the ICC method. In this step, in addition to the stable edges, we use variable edges, which are pieces of the occluding contours of the head, e.g. the contour of the face.

Our method is currently applied to a face area extracted from a natural image, or to a face image with a unicolor background. Many techniques have been reported in the literature to extract the face from a cluttered background.

2. Notation

The coordinates of a 3D point X = (X, Y, Z)^t in a world coordinate system and its image coordinates x = (u, v)^t are related by

    \lambda \tilde{x} = P \tilde{X},    (1)

where \lambda is an arbitrary scale factor, P is a 3x4 matrix called the perspective projection matrix, \tilde{X} = (X, Y, Z, 1)^t, and \tilde{x} = (u, v, 1)^t. The matrix P can be decomposed as

    P = A T.    (2)

The matrix A maps the coordinates of the 3D point to the image coordinates. The general matrix A can be written as

    A = [ \alpha_u   0         u_0   0
          0          \alpha_v  v_0   0
          0          0         1     0 ].    (3)

\alpha_u and \alpha_v are the products of the focal length with the horizontal and vertical scale factors, respectively. u_0 and v_0 are the coordinates of the principal point of the camera, i.e., the intersection between the optical axis and the image plane. For simplicity of computation, both u_0 and v_0 are assumed to be 0 in our case, because the principal point is usually at the center of the image.

The matrix T denotes the positional relationship between the world coordinate system and the camera coordinate system. T can be written as

    T = [ R    t
          0^t  1 ].    (4)

R is a 3x3 rotation matrix and t is a translation vector. Note that there are eight parameters to be estimated: two camera parameters \alpha_u and \alpha_v, three rotation parameters, and three translation parameters.

We use C^I_k (k = 1, ..., K) to denote the k-th stable curve in the image, and C^W_l(P) (l = 1, ..., L) the l-th stable curve of the model projected by P. Both C^I_k and C^W_l(P) are 2D curves. C^I_o is used to denote the contour of the face in the image, and C^W_o(P) is the contour of the face projected by P. x^I_{k,i} is the i-th 2D point belonging to the k-th curve C^I_k in the 2D image. X^W_{l,j} is the j-th 3D point belonging to the l-th curve of the 3D model, and x^W_{l,j}(P) is the corresponding 2D point of the l-th curve projected by P. X^W_{l,j} and x^W_{l,j}(P) are related by

    x^W_{l,j}(P) = (a_1/a_3, a_2/a_3)^t  with  (a_1, a_2, a_3)^t = P \tilde{X}^W_{l,j}.    (5)

3. Generic Model of a Human Head

We use a generic model of the human head which is able to take account of the shape differences between individuals and the changes of facial expression. This section explains this generic model.

3.1. Construction of the Generic Model

We represent the deformation of the 3D shape of a human head (i.e., shape differences and the changes of facial expression) by the mean \bar{X} and the variance V[X] of each point on the face. These values are calculated from the results of measuring the heads of many people. To do so, we need a method for sampling points consistently for all faces. That is, we need to know which point on a face corresponds to which point on another face. Many methods have been proposed for such a purpose and we can use any of them; we use the resampling method [4] developed in our laboratory. This method uses several feature points (such as the corners of the eyes, the vertex of the nose, and so on) as reference points. Using these reference points, the shape of a face is segmented into several regions, and each region is then resampled. We choose the sample points using this method.

3.2. Edge Extraction in the Model

As mentioned earlier, we use two types of edges: stable edges and variable edges. The stable edges are extracted beforehand from the 2D image taken at the same time as the acquisition of the 3D data of a head. They are the contours of the eyes, lips, and eyebrows. We obtain their corresponding curves on the head by back-projecting them onto the 3D model. The variable edges, which are occluding contours and depend on the head pose and camera parameters, are extracted whenever these parameters change. Figure 1 shows an example of images of the generic model with stable and variable edges. It shows that the stable edges (i.e., the eyes and lips) do not change under a change of the pose, while the variable edges (i.e., the contour of the face) change whenever the pose changes.

Figure 1. A generic model of a head. In all poses, the stable edges such as the eyes and lips do not change. The variable edges change because they are occluding contours.

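As a concrete illustration of the camera model above, the following NumPy sketch builds P = A T as in eqs. (2)-(4) and projects a 3D point as in eq. (5). All numeric values and helper names are ours, chosen only for illustration:

```python
import numpy as np

def projection_matrix(alpha_u, alpha_v, R, t, u0=0.0, v0=0.0):
    """Build P = A T (eqs. (2)-(4)); u0 = v0 = 0 as assumed in the paper."""
    A = np.array([[alpha_u, 0.0, u0, 0.0],
                  [0.0, alpha_v, v0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return A @ T

def project(P, X):
    """Eq. (5): a = P X~, image point (a1/a3, a2/a3)."""
    a = P @ np.append(X, 1.0)
    return a[:2] / a[2]

# Hypothetical example: identity rotation, head 5 units along the optical axis.
P = projection_matrix(800.0, 800.0, np.eye(3), np.array([0.0, 0.0, 5.0]))
print(project(P, np.array([1.0, 2.0, 10.0])))  # -> approximately [53.333, 106.667]
```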
4. Definition of the Distance Between Curves

Here we define the distance between curves. It is the basis of the ICC method, which we will present in a later section, and it is also used for finding corresponding curves.

The squared distance between a 2D curve in the image and the projection of a curve of the 3D model is defined by

    d(C^I_k, C^W_l(P)) = (1 / N^I_k) \sum_{x^I_{k,i} \in C^I_k} \min_{x^W_{l,j}(P) \in C^W_l(P)} d_m(x^I_{k,i}, x^W_{l,j}(P)),    (6)

where N^I_k is the number of points in C^I_k and d_m(x^I_{k,i}, x^W_{l,j}(P)) is the squared Mahalanobis distance:

    d_m(x^I_{k,i}, x^W_{l,j}(P)) = (x^I_{k,i} - x^W_{l,j}(P))^t (M^{kl}_{ij})^{-1} (x^I_{k,i} - x^W_{l,j}(P)),    (7)

where

    M^{kl}_{ij} = (\partial x^W_{l,j}(P) / \partial X^W_{l,j}) V[X^W_{l,j}] (\partial x^W_{l,j}(P) / \partial X^W_{l,j})^t.    (8)

It is possible to give other definitions of the distance between curves. Ours is based on the following assumptions:

- When, for an edge on the 3D model, the corresponding edge in the image is found, the projected model curve contains the image curve.
- The generic model is sampled at a higher resolution than the image.
- The variance V[X] of each point can be different and anisotropic.

5. Finding Corresponding Curves by Relaxation

In this section, we explain the method for finding correspondences between 3D model curves and 2D image curves. This is done by matching 2D image curves C^I_k and model curves C^W_l(P_o) projected by P_o, where P_o is an arbitrary projection. We assume that all of the eyes and lips are seen in the image. Therefore, the edges of a 2D image are expected to include the stable edges. However, they also include noisy edges caused by illumination, measurement errors, and so on. Consequently, there are some correspondence ambiguities. We use relaxation techniques to resolve these ambiguities.

First, we find candidates for corresponding curves using the similarity of curvature. The curvature of a curve is not preserved under projection. However, because we assume the pose estimate P_o is reasonable, the curvature of the same curve should be similar. After finding candidates, we resolve the ambiguities by the relaxation method.

5.1. Finding Candidates for Corresponding Curves

Both the image edges and the projected model edges are segmented into equi-curvature curves. Candidates for corresponding pairs are found by evaluating the similarity of curvature.

The similarity of curvature s(k, l) is defined as

    s(k, l) = 1.0 / (1.0 + |c(C^I_k) - c(C^W_l(P_o))|),    (9)

where c(C) is the curvature of curve C. s(k, l) has the following properties: (i) when two curves have exactly the same curvature, s(k, l) equals 1, and (ii) as the difference of curvature between the two curves becomes larger, s(k, l) becomes smaller.

If the value of s(k, l) is higher than a threshold, the pair of curves (C^I_k, C^W_l(P_o)) is selected as a candidate pair.

5.2. Calculating the Strength of Match

If (C^I_k, C^W_l(P_o)) is a correct pair, many of the remaining model curves C^W_n have corresponding curves C^I_m such that the position of C^I_m relative to C^I_k is similar to that of C^W_n(P_o) relative to C^W_l(P_o). We define the strength of match S_M for the pair (C^I_k, C^W_l(P_o)) in a way similar to the one for point pairs used in [10].

5.3. Updating Corresponding Pairs of Curves

The strategy we use for updating corresponding pairs is called the "some-winners-take-all" strategy [10]. Consider the corresponding pairs having the highest strength of match for both the image and the model. These pairs are called potential matches and are denoted by {P_i}. For {P_i}, two tables T_SM and T_UA are constructed. T_SM saves the matching strength S_M of each P_i, sorted in decreasing order. T_UA saves the value of U_A, which describes unambiguity and is defined as

    U_A = 1 - S_M(2) / S_M(1),    (10)

where S_M(1) is the S_M of P_i and S_M(2) is the S_M of the second best candidate among the pairs which include the curves forming P_i. T_UA is also sorted in decreasing order.

Pairs are selected as "correct" matches if they are among the first 50 percent of pairs in T_SM and the first q percent of pairs in T_UA. Using this method, the pairs which are matched well and are unambiguous are selected.

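The spirit of this selection can be sketched as follows. This is a toy illustration with our own names and values: curvatures are scalars, the strengths S_M are given directly rather than computed as in [10], and a pair is kept when it ranks in the top fraction q of both tables:

```python
import numpy as np

def curvature_similarity(c_img, c_model):
    """Eq. (9): s = 1 / (1 + |c(C_I) - c(C_W)|); equals 1 for identical curvature."""
    return 1.0 / (1.0 + abs(c_img - c_model))

def select_matches(strengths, q=0.5):
    """'Some-winners-take-all': keep pairs ranked in the top fraction q both
    by strength of match S_M and by unambiguity U_A (eq. (10))."""
    rows = []
    for pair, s_m in strengths.items():
        # second-best strength among pairs sharing a curve with `pair`
        rivals = [s for p, s in strengths.items()
                  if p != pair and (p[0] == pair[0] or p[1] == pair[1])]
        s2 = max(rivals, default=0.0)
        rows.append((pair, s_m, 1.0 - s2 / s_m))
    n = max(1, int(q * len(rows)))
    top_sm = {r[0] for r in sorted(rows, key=lambda r: -r[1])[:n]}
    top_ua = {r[0] for r in sorted(rows, key=lambda r: -r[2])[:n]}
    return top_sm & top_ua

# Hypothetical strengths for three candidate (image, model) curve pairs.
strengths = {("I1", "W1"): 0.9, ("I1", "W2"): 0.2, ("I2", "W2"): 0.8}
print(select_matches(strengths))  # -> {('I1', 'W1')}
```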
curvature. Consider a coordinate system in which the face plane co-

z =

k; l The similarity of curvature s is de®ned as incides with 0. We call such a coordinate system the

plane coordinate system. I

W The 3D coordinates of a point projected to the face plane

sk; l= : = : + jcC   cC P j; 

1 0 1 0 o 9

k l

X in the world coordinate system and the coordinates of the

t

x ;y ; 

cC  C p

where is the curvature of curve . point p 0 in the plane coordinate system are related

k; l

s has the following properties:(i) when two curves by

0 1 1 1 0 0

k; l

have exactly the same curvature, s equals 1, and (ii) as

x x p

the difference of the curvature between two curves becomes p

B C C C B B

R t

y y

p p

sk; l p p

B C C C B

larger, becomes smaller. Ä B

X = = T ;

p (11)

@ A A A @ @

k; l

If the value of s is higher than the threshold, the pair 0 0

W I

P  ; C C 0

of curves o is selected as the candidate pair. 1 1 1

l k

T H where p means the positional relationship between the world We select the best one among all possible values of

coordinate system and the plane coordinate system. by evaluating H . The method for evaluation is descried in From equations (1) and (11), we have appendix B.

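Numerically, combining the projection with the plane transform of eq. (11) collapses to a 3x3 mapping, since the third column multiplies z_p = 0. The following sketch (all matrix values hypothetical, helper names ours) checks this:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Hypothetical camera: A T as in eqs. (2)-(4), with u0 = v0 = 0.
A = np.array([[800.0, 0, 0, 0], [0, 750.0, 0, 0], [0, 0, 1, 0]])
T = np.eye(4); T[:3, :3] = rot_x(0.2); T[:3, 3] = [0.1, -0.2, 6.0]
P = A @ T

# Hypothetical plane placement T_p (eq. (11)).
Tp = np.eye(4); Tp[:3, :3] = rot_x(-0.4); Tp[:3, 3] = [0.0, 0.3, 1.0]

M = P @ Tp               # 3x4: plane coordinates -> image
H = M[:, [0, 1, 3]]      # z_p = 0, so the third column drops out

xp, yp = 0.05, -0.02
full = P @ Tp @ np.array([xp, yp, 0.0, 1.0])   # via eqs. (1) and (11)
via_H = H @ np.array([xp, yp, 1.0])            # via the 3x3 matrix
print(np.allclose(full, via_H))  # -> True
```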
From equations (1) and (11), we have

    \tilde{x} \simeq H (x_p, y_p, 1)^t,    (12)

where \simeq denotes equality up to a scale factor and H is a 3x3 matrix given, up to an unknown scalar \lambda, by

    H = \lambda [ \alpha_u r'_{11}   \alpha_u r'_{12}   \alpha_u t'_1
                  \alpha_v r'_{21}   \alpha_v r'_{22}   \alpha_v t'_2
                  r'_{31}            r'_{32}            t'_3 ],    (13)

where r'_{ij} is the (i, j)-th component of R' = R R_p and t'_i is the i-th component of t' = R t_p + t.

6.2. Intersection and Bi-tangent of Co-planar Conics

A conic in a 2D space is the set of points x that satisfy

    \tilde{x}^t Q \tilde{x} = 0,    (14)

where Q is a 3x3 symmetric matrix. We fit a conic to the edge points of the right eye, the left eye, and the lips by the gradient-weighted least-squares fitting described in [9].

An intersection \tilde{m} of two conics Q_1 and Q_2 satisfies the following simultaneous equations:

    \tilde{m}^t Q_1 \tilde{m} = 0  and  \tilde{m}^t Q_2 \tilde{m} = 0.    (15)

Denoting a bi-tangent line of the two conics Q_1 and Q_2 by \tilde{l}^t \tilde{x} = 0, \tilde{l} satisfies the following simultaneous equations [2]:

    \tilde{l}^t Q_1^{-1} \tilde{l} = 0  and  \tilde{l}^t Q_2^{-1} \tilde{l} = 0.    (16)

\tilde{m} and \tilde{l} are obtained by solving quartic equations analytically.

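The conic representation of eq. (14) can be illustrated with a small fitting sketch. Note this uses plain algebraic least squares as a simplification; the paper itself uses the gradient-weighted fit of [9]:

```python
import numpy as np

def fit_conic(pts):
    """Fit q = (a, b, c, d, e, f) with a x^2 + b x y + c y^2 + d x + e y + f = 0
    by algebraic least squares (null vector of the design matrix)."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # right singular vector of the smallest singular value

def conic_matrix(q):
    """Symmetric 3x3 Q of eq. (14): x~^t Q x~ = 0."""
    a, b, c, d, e, f = q
    return np.array([[a, b / 2, d / 2],
                     [b / 2, c, e / 2],
                     [d / 2, e / 2, f]])

# Sample points on the unit circle x^2 + y^2 - 1 = 0.
t = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
Q = conic_matrix(fit_conic(np.column_stack([np.cos(t), np.sin(t)])))

# A held-out point on the same circle satisfies the fitted conic equation.
held_out = np.array([np.cos(1.234), np.sin(1.234), 1.0])
print(abs(held_out @ Q @ held_out))  # -> ~0
```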
6.3. Combinations of the Correspondence

There are no real intersections for these pairs of conics. Therefore, the solutions of the quartic equation are two complex-conjugate pairs. In the complex case, there are eight possibilities for corresponding the four points of the image to the four points of the model, because conjugate pairs project to conjugate pairs under a real projection [2]. On the other hand, because all of the bi-tangent lines are real in this case, there are only four possibilities of correspondence. Therefore, there are 32 possible combinations for each pair of conics, and when we use three pairs of conics, the number of all possible combinations is 32^3 = 32768.

We reduce the number of combinations as follows. Because only two possibilities remain in our case for a single pair of conics (the true one and the upside-down one), we select two combinations for each pair of conics. Then, using these selected combinations for the three pairs of conics, all possible values of H are calculated by the linear least squares described in appendix A. The number of values of H to evaluate is thus much reduced, to 32 + 32 + 32 + 2^3 = 104.

We select the best one among all possible values of H by evaluating the criterion e(H). The method for this evaluation is described in appendix B. From equation (13), the unknown parameters are then obtained using the components of H (see appendix C). This is the initial guess for the refinement process.

7. Refinement of the Head Pose by ICC (Iterative Closest Curve) Method

In this section, we explain the method for refining the head pose and camera parameters using the initial guess obtained by the method described in the previous section. We employ the ICC method, which minimizes the distance between corresponding curves.

We use the correspondences of two types of edges in this process: stable ones and variable ones. The correspondences of the stable edge curves have been established by the method described in section 5. The variable edges of the generic model, e.g. the contour of the face, must be re-extracted whenever the parameters are updated, because these curves vary with the parameters. However, the correspondence of the contour of the face is known.

Once the correspondences of the curves are established, the squared Mahalanobis distance between corresponding curves is minimized. We minimize the value of the function J:

    J = \sum_{(k,l)} d(C^I_k, C^W_l(P)) + d(C^I_o, C^W_o(P))    (17)
      = \sum_{(k,l)} (1 / N^I_k) \sum_{x^I_{k,i} \in C^I_k} \min_{x^W_{l,j}(P) \in C^W_l(P)} d_m(x^I_{k,i}, x^W_{l,j}(P))
        + (1 / N^I_o) \sum_{x^I_{o,i} \in C^I_o} \min_{x^W_{o,j}(P) \in C^W_o(P)} d_m(x^I_{o,i}, x^W_{o,j}(P)).    (18)

We minimize the value of J to find P by iterating the following two steps:

- For each image point x^I_{k,i} of each corresponding curve pair (C^I_k, C^W_l(P)), the point x^W_{l,j} which minimizes d_m(x^I_{k,i}, x^W_{l,j}(P)) is found.
- P is updated to minimize J by the Levenberg-Marquardt algorithm.

P includes the head pose and camera parameters as in equation (2). We directly estimate the eight parameters, i.e., three rotation parameters, three translation parameters, and two camera parameters, instead of each component of P.

7.1. Non-linear Minimization of the Distance Between Curves

From equation (2), P is decomposed into a perspective projection and a rigid displacement. Non-linear minimization under the constraints of the rotation matrix is complicated. Therefore, we rewrite the rigid displacement part by using a 3D vector q as

    T \tilde{X} = R X + t    (19)
                = X + (2 / (1 + q^t q)) (q \times (q \times X) + q \times X) + t.    (20)

The direction of q is equal to the rotation axis and the norm of q is equal to tan(\theta / 2), where \theta is the rotation angle. Using this representation, because the three components of q are independent, the minimization becomes much simpler.

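The vector form of the rigid displacement in eq. (20) can be checked numerically; a minimal sketch (helper names ours):

```python
import numpy as np

def rigid_displacement(q, t, X):
    """Eq. (20): T X~ = X + 2/(1 + q.q) (q x (q x X) + q x X) + t,
    with q = tan(theta/2) * unit rotation axis."""
    w = 2.0 / (1.0 + q @ q)
    return X + w * (np.cross(q, np.cross(q, X)) + np.cross(q, X)) + t

# Rotation by theta = 0.5 rad about the z-axis, no translation:
theta = 0.5
q = np.array([0.0, 0.0, np.tan(theta / 2)])
x = rigid_displacement(q, np.zeros(3), np.array([1.0, 0.0, 0.0]))
print(x)  # -> [cos(0.5), sin(0.5), 0]
```

The result agrees with the classical Rodrigues formula, since 2 tan(\theta/2) / (1 + tan^2(\theta/2)) = sin \theta and 2 tan^2(\theta/2) / (1 + tan^2(\theta/2)) = 1 - cos \theta.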
Figure 2. (a) Extracted edges in an image of one woman's face, and (b) edge curves of the eyes, lips, and eyebrows extracted by the correspondence between the model and the image.

Figure 3. Edges and conics of the eyes and lips and the result of the rough estimation using conics. (a) Edges of a woman's face and co-planar conics. (b) The result of the rough estimation using the conics of (a). The conics of the image are plotted in black and the projections of the model conics are plotted in red.

8. Experimental Result

We show in this section some preliminary results obtained with the proposed technique.

Figure 1 shows the model edges constructed from measurements of 36 women's heads, all with no facial expression. Figure 2(a) shows the edges of an image of one woman, extracted by the method described in [6]. Figure 2(b) shows the extracted stable edge curves, i.e., the contours of the eyes, lips, and eyebrows. These edges are extracted by establishing the correspondence between model edges and image edges as described in section 5.

Figure 3(a) shows the co-planar conics fitted to the contours of the eyes and lips in the image shown in figure 2(a). Figure 3(b) shows the result of the rough estimation: the conics of the model are plotted in red and the conics of the image are plotted in black.

The head pose and camera parameters of the image shown in Fig. 2(a) were then estimated. Figure 4 shows the projection of the generic model by the estimated parameters. The pose of the head shown in Fig. 2(a) and that of Fig. 4 are almost the same.

Figure 4. The result of the head pose estimation using ICC.

9. Conclusion

Head pose determination is very important for many applications such as human-computer interfaces and video conferencing. In this paper, we have proposed a new method for accurately estimating a head pose from only one image. To deal with the shape variation of heads among individuals and with different facial expressions, we use a generic 3D model of the human head, which was built through statistical analysis of range data of many heads. In particular, we use a set of 3D curves to model the contours of the eyes, lips, and eyebrows.

We have proposed the iterative closest curve matching (ICC) method, which estimates the pose directly by iteratively minimizing the squared Mahalanobis distance between the projected model curves and the corresponding curves in the image. The curve correspondences are established by a relaxation technique. Because a curve contains much richer information than a point, curve correspondences can be established more robustly and with less ambiguity; therefore, pose estimation based on ICC is believed to be more accurate than that based on the well-known ICP.

Furthermore, our technique does not assume that the internal parameters of the camera are known. This provides more flexibility in practice because an uncalibrated camera can be used. The unknown parameters are recovered by our technique thanks to the generic 3D model.

Preliminary experimental results show that (i) an accurate head pose can be estimated by our method using the generic model, and (ii) this generic model can deal with the shape differences between individuals. The accuracy of the pose estimation depends tightly on whether the image curves can be successfully extracted. More experiments need to be carried out for different facial expressions and for cluttered backgrounds.

We believe that the ICC method is useful not only for 3D-2D pose estimation but also for 2D-2D or 3D-3D pose estimation.

Acknowledgment: We thank K. Isono for his help in the presentation of experimental data.

References

[1] A. Lanitis, C. J. Taylor, and T. F. Cootes. Automatic Interpretation and Coding of Face Images Using Flexible Models. IEEE Trans. PAMI, 19(7):743-756, 1997.
[2] C. A. Rothwell, A. Zisserman, C. I. Marinos, D. A. Forsyth, and J. L. Mundy. Relative Motion and Pose from Arbitrary Plane Curves. Image and Vision Computing, 10(4):250-262, 1992.
[3] D. J. Beymer. Face Recognition Under Varying Pose. In CVPR'94, pages 756-761, 1994.
[4] K. Isono and S. Akamatsu. A Representation for 3D Faces with Better Feature Correspondence for Image Generation using PCA. Technical Report HIP96-17, IEICE, 1996.
[5] P. J. Besl and N. D. McKay. A Method for Registration of 3-D Shapes. IEEE Trans. PAMI, 14(2):239-256, 1992.
[6] R. Deriche. Using Canny's Criteria to Derive a Recursively Implemented Optimal Edge Detector. IJCV, 1(2):167-187, 1987.
[7] T. S. Jebara and A. Pentland. Parametrized Structure from Motion for 3D Adaptive Feedback Tracking of Faces. In CVPR'97, pages 144-150, 1997.
[8] Z. Zhang. Iterative Point Matching for Registration of Free-Form Curves and Surfaces. IJCV, 13(2):119-152, 1994.
[9] Z. Zhang. Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting. Image and Vision Computing, 15:59-76, 1997.
[10] Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong. A Robust Technique for Matching Two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry. Artificial Intelligence Journal, 78:87-119, 1995.

A. Linear Estimation of H

Assume the image point (x, y) and the object point (x_p, y_p) are a corresponding pair. We rewrite the components of H as

    H = [ a  b  c
          d  e  f
          g  h  1 ].    (21)

By eliminating the scale factor in equation (12), we get

    a x_p + b y_p + c - g x_p x - h y_p x = x,    (22)

and

    d x_p + e y_p + f - g x_p y - h y_p y = y.    (23)

From equations (22) and (23), the components of H are calculated by the linear least-squares algorithm.

B. Eliminating Ambiguous Solutions for H

We select the best correspondence combination as the one which minimizes a criterion function. If H is correct, a conic Q^P on the face plane and the corresponding image conic Q^I satisfy

    Q^P = \lambda^2 H^t Q^I H.    (24)

Using this relation, the criterion function e(H) is defined as [2]:

    e(H) = I_{ab3}(Q'^P_1, Q^P_1)^2 + I_{ab4}(Q'^P_1, Q^P_1)^2 + I_{ab3}(Q'^P_2, Q^P_2)^2 + I_{ab4}(Q'^P_2, Q^P_2)^2,    (25)

where

    Q'^P_m = H^t Q^I_m H,    (26)

and

    I_{ab3}(A, B) = trace[ ((1/det A) A)^{-1} ((1/det B) B) ],    (27)
    I_{ab4}(A, B) = trace[ ((1/det B) B)^{-1} ((1/det A) A) ].    (28)

C. Decomposition of H

From equation (13), the head pose and camera parameters are determined using the components of H.

Because R' is a rotation matrix, we have

    r'^2_{11} + r'^2_{21} + r'^2_{31} = 1,    (29)
    r'^2_{12} + r'^2_{22} + r'^2_{32} = 1,    (30)
    r'_{11} r'_{12} + r'_{21} r'_{22} + r'_{31} r'_{32} = 0.    (31)

We use h_{ij} to denote the (i, j)-th component of H. From equations (13) and (31), we have

    h_{11} h_{12} / \alpha_u^2 + h_{21} h_{22} / \alpha_v^2 + h_{31} h_{32} = 0.    (32)

From equations (29) and (30), we also have

    h_{11}^2 / \alpha_u^2 + h_{21}^2 / \alpha_v^2 + h_{31}^2 = \lambda^2,    (33)
    h_{12}^2 / \alpha_u^2 + h_{22}^2 / \alpha_v^2 + h_{32}^2 = \lambda^2.    (34)

Then, by eliminating \lambda^2, we have

    (h_{11}^2 - h_{12}^2) / \alpha_u^2 + (h_{21}^2 - h_{22}^2) / \alpha_v^2 + (h_{31}^2 - h_{32}^2) = 0.    (35)

Let \beta_u = 1/\alpha_u^2 and \beta_v = 1/\alpha_v^2. From equations (32) and (35), we have

    \beta_u = [ h_{21} h_{22} (h_{31}^2 - h_{32}^2) - h_{31} h_{32} (h_{21}^2 - h_{22}^2) ] / d,    (36)
    \beta_v = [ h_{31} h_{32} (h_{11}^2 - h_{12}^2) - h_{11} h_{12} (h_{31}^2 - h_{32}^2) ] / d,    (37)

where

    d = h_{11} h_{12} (h_{21}^2 - h_{22}^2) - h_{21} h_{22} (h_{11}^2 - h_{12}^2).    (38)

Once \alpha_u and \alpha_v are estimated, we can compute \lambda using equation (33) or (34). The pose parameters are then given by

    r'_{11} = h_{11} / (\lambda \alpha_u),  r'_{21} = h_{21} / (\lambda \alpha_v),  r'_{31} = h_{31} / \lambda,    (39)
    r'_{12} = h_{12} / (\lambda \alpha_u),  r'_{22} = h_{22} / (\lambda \alpha_v),  r'_{32} = h_{32} / \lambda,    (40)
    t'_1 = h_{13} / (\lambda \alpha_u),  t'_2 = h_{23} / (\lambda \alpha_v),  t'_3 = h_{33} / \lambda.    (41)

r'_{i3} (i = 1, ..., 3) can then be easily computed using the orthogonality of the rotation matrix.
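As a sanity check of the decomposition, the following sketch builds H from eq. (13) with hypothetical ground-truth parameters (the rotation below simply plays the role of R') and recovers the camera scale factors via eqs. (36)-(38):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical ground truth.
alpha_u, alpha_v, lam = 800.0, 750.0, 0.01
Rp = rot_z(0.3) @ rot_x(0.5)       # plays the role of R' = R R_p
tp = np.array([0.2, -0.1, 5.0])    # plays the role of t' = R t_p + t

# Eq. (13): H = lam * [au r'11, au r'12, au t'1; av r'21, ...; r'31, r'32, t'3].
h = lam * np.array([
    [alpha_u * Rp[0, 0], alpha_u * Rp[0, 1], alpha_u * tp[0]],
    [alpha_v * Rp[1, 0], alpha_v * Rp[1, 1], alpha_v * tp[1]],
    [Rp[2, 0], Rp[2, 1], tp[2]]])

# Eqs. (32) and (35) as two linear equations in beta_u, beta_v; solve by
# Cramer's rule, i.e. eqs. (36)-(38).
A1, A2, A3 = h[0, 0] * h[0, 1], h[1, 0] * h[1, 1], h[2, 0] * h[2, 1]
B1 = h[0, 0]**2 - h[0, 1]**2
B2 = h[1, 0]**2 - h[1, 1]**2
B3 = h[2, 0]**2 - h[2, 1]**2
d = A1 * B2 - A2 * B1                 # eq. (38)
beta_u = (A2 * B3 - A3 * B2) / d      # eq. (36)
beta_v = (A3 * B1 - A1 * B3) / d      # eq. (37)
print(1 / np.sqrt(beta_u), 1 / np.sqrt(beta_v))  # -> 800.0 750.0
```

Note that the unknown overall scale lam cancels in the ratios, which is why it need not be known before applying eqs. (36)-(38).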