

Appendix A Diagonalization of an S-Matrix

Consider the standard basis B₂ = (e₁, e₂) in a two-dimensional vector space V₂ over the complex numbers. In V₂, denote a complex symmetric invertible S-matrix S as

S = ( s₁  s₂ ; s₂  s₃ ). (A.1)

For any matrix M in V₂, let Mᵗ and M† denote respectively the transpose and the adjoint of M. We will prove the following.

Theorem. Let S be as given in (A.1), and let μ₁² and μ₂² be the eigenvalues of the positive-definite matrix S†S. Then there exists in V₂ (i) for μ₁² ≠ μ₂², a unitary matrix A such that the "unitary congruent transformation" AᵗSA reduces S to a real diagonal matrix S′ as

AᵗSA = S′ = ( μ₁  0 ; 0  μ₂ ); (A.2)

(ii) for μ₁² = μ₂² ≡ μ², a unitary matrix B such that BᵗSB reduces S to a real diagonal matrix S″ as

BᵗSB = S″ = μ ( 1  0 ; 0  −1 ). (A.3)

We observe that the unitary congruent transformation does for complex symmetric matrices what the unitary (similarity) transformation does for normal matrices.

Proof. There are two cases.

Case (i): μ₁² ≠ μ₂²

For the positive-definite matrix S†S, there exists a unitary matrix U such that

U†(S†S)U = ( μ₁²  0 ; 0  μ₂² ), μ₁² ≠ μ₂², (A.4)

where U† is the adjoint of U. Since S is complex symmetric and U is unitary, the left side of (A.4) can be decomposed as

U†(S†S)U = (U†S*U*)(UᵗSU), (A.5)

where "*" stands for complex conjugation. Let us denote

W = UᵗSU. (A.6)

W is a "unitary congruent transform" of S. It is complex symmetric and invertible because S is. In terms of W, (A.4) is expressed as

W*W = ( μ₁²  0 ; 0  μ₂² ), (A.7)

which shows that W is hermitian-orthogonal in the principal-axis basis whose basis elements in V₂ are the normalized eigenvectors of S†S. Since W is complex symmetric, so is W*. Therefore, taking the transpose of (A.7), it follows that

WW* = (W*W)ᵗ = W*W. (A.8)

Whence

W†W = WW†, (A.9)

showing that the unitary congruent transform W of S is normal. Let

W = ( a  b ; b  c ), ac − b² ≠ 0. (A.10)

Since W is nonsingular and W† = W*, we get from (A.7)–(A.9)

W⁻¹ = (W*W)⁻¹ W* = ( a*/μ₁²  b*/μ₁² ; b*/μ₂²  c*/μ₂² ). (A.11)

Since W is complex symmetric, so is W⁻¹, and since μ₁² ≠ μ₂², it follows from (A.11) that

b* = 0 ⟹ b = 0. (A.12)

Therefore, W in (A.10) takes the form

W = ( a  0 ; 0  c ), ac ≠ 0. (A.13)

Thus (A.7) can be expressed as

( |a|²  0 ; 0  |c|² ) = ( μ₁²  0 ; 0  μ₂² ). (A.14)

Whence

a = μ₁ e^{iα}, c = μ₂ e^{iβ}, (A.15)

where α and β are arbitrary in [0, 2π). We now write (A.13) as

W = UᵗSU = ( μ₁  0 ; 0  μ₂ ) ( e^{iα}  0 ; 0  e^{iβ} ). (A.16)

Let

A = UF, (A.17)

where U is as given by (A.4) and F is a unitary matrix of the form

F = ( e^{−iα/2}  0 ; 0  e^{−iβ/2} ). (A.18)

We form a new unitary congruent transform W' as

W′ = AᵗSA = Fᵗ(UᵗSU)F = FWF, (A.19)

where we used the fact that F is complex symmetric. Since both W and F are diagonal, they commute with each other. Therefore, (A.19) yields a real diagonal matrix as

W′ = FWF = WF² = ( μ₁  0 ; 0  μ₂ ), (A.20)

which is (A.2). Case (i) is proved.
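The case (i) construction can be checked numerically. The sketch below uses an illustrative, randomly drawn S (not data from the text), so the generic condition μ₁² ≠ μ₂² of case (i) holds:

```python
import numpy as np

# Sketch of case (i): reduce a complex symmetric S to real diagonal form
# by a unitary congruent transformation.  S is an illustrative random draw.
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
S = M + M.T                              # complex symmetric: S^t = S

# (A.4): unitary U whose columns are normalized eigenvectors of S†S
mu2, U = np.linalg.eigh(S.conj().T @ S)  # eigenvalues mu_i^2 > 0

# (A.6): the unitary congruent transform W = U^t S U; per (A.13)-(A.16)
# it is diagonal, with entries mu_1 e^{i alpha}, mu_2 e^{i beta}
W = U.T @ S @ U

# (A.17)-(A.18): A = U F with F = diag(e^{-i alpha/2}, e^{-i beta/2})
F = np.diag(np.exp(-0.5j * np.angle(np.diag(W))))
A = U @ F

S_prime = A.T @ S @ A                    # (A.2): diag(mu_1, mu_2), real
```

Here `S_prime` agrees with diag(μ₁, μ₂) up to machine precision, and A is unitary.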

Case (ii): μ₁² = μ₂² ≡ μ²

We assume without essential loss of generality that s₁ ≠ 0, s₂ ≠ 0, and s₂ is real. Under these conditions, it is easy to show that μ₁² = μ₂² ≡ μ² leads to

(A.21)

(A.22) and

S†S = μ² I, (A.23)

where I denotes the identity matrix in V₂. Let

T = (1/μ) S = ( α  β ; β  −α* ), (A.24)

where α = s₁/μ, β = s₂/μ, and β is real. Now, it is easily seen that T is normal. In fact, it is unitary and complex symmetric. We need the following

Lemma. For a matrix T as given by (A.24), there exists an orthogonal and involutory matrix V with det V = −1 such that

VᵗTV = V⁻¹TV = VTV = ( ν  0 ; 0  −ν* ), (A.25)

where Vᵗ and V⁻¹ denote respectively the transpose and the inverse of V, and ν is a complex number.

Proof. Let

Tx = λx, (A.26)

where λ is an eigenvalue of T. We find that

λ₁ = ½[(α − α*) + d], λ₂ = ½[(α − α*) − d], (A.27)

d = √((α + α*)² + 4β²) > 0, (A.28)

and

x₁ = (1/N) ( p ; 1 ), for λ₁, (A.29)

x₂ = (1/N) ( 1 ; −p ), for λ₂, (A.30)

p = (1/2β)[(α + α*) + d], (A.31)

N = √(1 + p²). (A.32)

We note that the eigenvectors Xi are real-valued. The desired unitary matrix V is

V = (1/N) ( p  1 ; 1  −p ), (A.33)

which is orthogonal and involutory with det V = −1, as is easily seen. From (A.24) and (A.33) we get

VᵗTV = V⁻¹TV = VTV = (1/N²) ( αp² + 2βp − α*   p(α + α*) + β(1 − p²) ; p(α + α*) + β(1 − p²)   −(α*p² + 2βp − α) ). (A.34)

It is easy to show that p(α + α*) + β(1 − p²) = 0, so that with

ν = (1/N²)[αp² + 2βp − α*], (A.35)

the lemma is proved. Since S = μT from (A.24), (A.25) can be written as

VᵗSV = μ ( ν  0 ; 0  −ν* ), (A.36)

which is a unitary congruent transform of S.
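The lemma and (A.36) can be checked numerically. In the sketch below, μ, α, β are illustrative values (not from the text) satisfying the case (ii) assumptions |α|² + β² = 1 with β real:

```python
import numpy as np

# Numerical sketch of the lemma and (A.36): S = mu*T with T unitary
# and complex symmetric; mu, alpha, beta are illustrative values.
mu, beta = 2.0, 0.6
alpha = 0.8 * np.exp(0.4j)                               # |alpha|^2 + beta^2 = 1
T = np.array([[alpha, beta], [beta, -np.conj(alpha)]])   # (A.24)
S = mu * T

d = np.sqrt((2 * alpha.real) ** 2 + 4 * beta**2)         # (A.28)
p = (2 * alpha.real + d) / (2 * beta)                    # (A.31)
N = np.sqrt(1 + p**2)                                    # (A.32)
V = np.array([[p, 1.0], [1.0, -p]]) / N                  # (A.33)

nu = (alpha * p**2 + 2 * beta * p - np.conj(alpha)) / N**2   # (A.35)
VSV = V.T @ S @ V                        # (A.36): mu * diag(nu, -nu*)
```

One finds that V is orthogonal and involutory with det V = −1, and that |ν| = 1, consistent with (A.39).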

Now,

V⁻¹(S†S)V = μ² V⁻¹V = μ² I. (A.37)

Also, by the fact that V is orthogonal and S is complex symmetric, we can write

V⁻¹(S†S)V = (VᵗSV)*(VᵗSV) = μ² |ν|² I. (A.38)

Comparison of (A.37) and (A.38) yields

|ν|² = 1, (A.39)

whence

ν = e^{iψ}, (A.40)

where ψ is arbitrary in [0, 2π). (A.36) is then expressed as

VᵗSV = μ ( e^{iψ}  0 ; 0  −e^{−iψ} ). (A.41)

Let

B = VG, (A.42)

where G is a unitary matrix of the form

G = ( e^{−iψ/2}  0 ; 0  e^{iψ/2} ). (A.43)

Evidently, G is complex symmetric as well. Since the diagonal matrices G and VᵗSV commute, it follows that

BᵗSB = Gᵗ(VᵗSV)G = G(VᵗSV)G = (VᵗSV)G² = μ ( 1  0 ; 0  −1 ), (A.44)

which is (A.3). The proof of the theorem is complete.

Appendix B A Deficient System of Equations

A large number of physical problems involving integral equations, differential equations, the Helmholtz integral representation, the Franz integral representations, the Cauchy integral, the Plemelj formulas, linear regression, etc., are all reducible to solving deficient linear systems of equations of the form

Ax = b, (B.1)

where A is an n × m complex rectangular matrix, b is a given n-dimensional complex vector, and x is an m-dimensional complex vector which we seek. In fact, every linear transformation can be represented by a matrix. x resides in the domain space Vm (or the row-vector space) of A, and b in the range space Vn (or the column-vector space) of A. (B.1) always has a solution in the sense which will be explained later, and can be solved by numerical techniques such as matrix decomposition methods or the gradient projection method (cf. A. Albert [1] or K. Atkinson [2]). C. Lanczos [3], on whose work this appendix is based, guides us to the behind-the-scenes story of pseudoinversion by means of a single essential mathematical notion: the solution of an equation such as (B.1) depends fundamentally on the solutions of the related eigenequations, and the number of nonzero eigenvalues of the related eigenoperators determines the nature of the solution of the equation. This appendix is a short summary on general deficient systems of equations. It should be pointed out that both Lanczos' exposition and the Moore-Penrose theory of the pseudoinverse are essentially different ways of stating the well-known algebraic relation that the rank of A plus the nullity of A equals m in the mapping A : Vm (row-vector space) → Vn (column-vector space). The essential features of Lanczos' argument can be exhibited in the special case of an n × n hermitian matrix; this case will be discussed first, and the knowledge gained there will then be extended to the general case of the n × m rectangular matrix.


Case of an Hermitian Matrix

Let S be an n × n hermitian matrix with reference to a prescribed basis in an n-dimensional complex vector space Vn. We wish to solve a linear system of equations

Sx = b. (B.2)

We know from the hermiticity of S that there exists a principal-axis basis in Vn, with reference to which (B.2) becomes uncoupled as

Dx′ = b′, (B.3)

where

D = diag(λ₁, λ₂, …, λₙ). (B.4)

The diagonal entries λᵢ are the eigenvalues of S, and the new vectors x′ and b′ are related to the old ones by

x′ = T†x, (B.5)
b′ = T†b, (B.6)

where T is an n-dimensional unitary matrix whose n columns are made of the normalized eigenvectors belonging to the eigenvalues λᵢ of S, and T† is the adjoint of T. Since T†T = TT† = Iₙ, where Iₙ is the n × n identity matrix, (B.5) and (B.6) may be expressed as x = Tx′, b = Tb′. On substituting these results into (B.2), we obtain STx′ = Tb′. Whence

T†STx′ = b′.

Comparing the last result with (B.3), we get D = T†ST. Whence

S = TDT†. (B.7)

We have decomposed in (B.7) the hermitian matrix S in terms of the unitary matrix T and the diagonal matrix D. While this decomposition per se does not solve (B.2), it will help us see what can go wrong in solving (B.2) and thereby suggest a method for obtaining the best possible solution. To find a solution x in (B.2), we need to invert S. However, S may not be invertible. To determine when and why it may not be invertible, let us look at the decomposition (B.7). We know that T is invertible because it is unitary. If S is not invertible, therefore, it must be the diagonal matrix D which is not invertible. D will fail to be invertible only if one or more of the eigenvalues of S is zero. We see here that the solution of (B.2) is intrinsically linked to the eigenvalues of the related eigenoperator S. Suppose that there are p < n nonzero eigenvalues of S. In this case, the diagonal matrix D in (B.4) takes the form

D = diag(λ₁, …, λ_p, 0, …, 0). (B.8)

The decomposition in (B.7) still holds, of course. But it no longer helps us solve (B.2), because S is not invertible due to D. What is wrong is the specific form of the decomposition in (B.7). Therefore, a new form of decomposition is needed. First we form a p × p diagonal matrix D_p out of D by discarding the zero eigenvalues of S. Let

D_p = diag(λ₁, …, λ_p). (B.9)

Evidently, D_p is invertible. Next, we form out of the n × n unitary matrix T an n × p rectangular matrix using as columns the eigenvectors belonging to the p nonzero eigenvalues of S. Let it be T_p. Now, S can always be decomposed in the form

S = T_p D_p T_p†, (B.10)

as we can verify directly. Comparison of (B.10) and (B.7) reveals a startling fact about the matrix S: some of the rows and columns in the n × n matrices T and D can be dispensed with, since they do not participate in the mapping of S. In other words, there is a certain subspace in the n-dimensional vector space Vn which the matrix S completely ignores. No amount of mathematical tricks can alter this behavior of S. The pseudoinversion procedures are developed on the basis of this mathematical fact. Traditionally, one says that (B.2) has no solution. In the decomposition of S in the form of (B.7), D was the trouble-maker. In the decomposition given in (B.10), it is now the n × p rectangular matrix T_p that causes trouble. We know that an n-dimensional unitary matrix U satisfies the two-way relationship U†U = Iₙ and UU† = Iₙ, where Iₙ denotes the n × n identity matrix. T_p, of course, is not a unitary matrix; it satisfies the following relationships:

T_p† T_p = I_p (B.11)

and

T_p T_p† ≠ Iₙ, (B.12)

where I_p is the p-dimensional identity matrix. Thus, even though T_p is not a unitary matrix, it does satisfy the one-way relationship (B.11), "striving to attain" unitarity. Because of it, we may regard T_p as a semi-unitary matrix. Thus the trouble-maker T_p is not quite unitary, but merely semi-unitary. If it were unitary, we could have defined the inverse (T_p D_p T_p†)⁻¹ = T_p D_p⁻¹ T_p†. Tempted by this line of thought, we tentatively define for S in (B.10) an n × n matrix B of the form

B = T_p D_p⁻¹ T_p†. (B.13)

Now,

SB = T_p D_p T_p† T_p D_p⁻¹ T_p† = T_p T_p† (B.14)

and

BS = T_p T_p†. (B.15)

That is, S and B commute. If SB = BS = Iₙ held, the matrix B defined in (B.13) would be the inverse of S. But B is not an inverse of S (we already know that S has no inverse), because SB = BS = T_pT_p† ≠ Iₙ. Yet, we multiply (B.2) by B and get

x = Bb (B.16)

and call it a solution of (B.2) in the sense that, with x given in (B.16), |Sx − b|² is the smallest attainable. The matrix B in (B.13), which would be called a pseudoinverse of S in the theory of the Moore-Penrose pseudoinverse, is called the natural inverse of S by C. Lanczos. We will now describe the above development in more concrete terms. The composite matrix SB is n × n and acts on elements of the n-dimensional vector space Vn. The rectangular matrix T_p maps the p-dimensional vector space V_p to the n-dimensional vector space Vn. Let η ∈ V_p and ξ(p) ∈ Vn such that

ξ(p) = T_p η. (B.17)

We let SB act on ξ(p):

SB ξ(p) = T_p T_p† T_p η = T_p η = ξ(p). (B.18)

Let us call V_p, the domain space of the n × p rectangular matrix operator T_p, the "T_p-subspace". Then (B.18) shows that SB maps the T_p-subspace into itself. Applying S to both sides of (B.16), we get

Sx = b = SBb. (B.19)

In view of (B.18), we conclude that SBb = b is satisfied if the prescribed vector b in (B.2) belongs to the T_p-subspace. In other words, x in (B.16) is a solution of (B.2) if the prescribed vector b in (B.2) has no nonzero projection outside the T_p-subspace, namely in the nullspace of S. This requirement is the well-known compatibility condition. The number of nonzero eigenvalues of S, i.e., p in the present case, is the rank of S. What if b has a nonzero projection in the nullspace of S, i.e., outside the T_p-subspace? If it does, the matrix operator B defined in (B.13) blots out, or annihilates, that part of b, so that the compatibility condition is automatically satisfied. This means that x = Bb in (B.16) is the closest solution we can ever attain, in the sense that the distance |Sx − b| is the shortest. The part of b which B blots out corresponds to the nullspace of S, {x ∈ Vn | Sx = 0}. When the inhomogeneous term b in (B.2) does not meet the compatibility condition, by having a nonzero projection in the nullspace of S, the solution x given by (B.16) is not unique, since any vector in the nullspace can be added to x. We have seen that if one or more of the eigenvalues of S is zero, the linear system of equations given in (B.2) is deficient in the sense that there exists a nonempty nullspace about which S is totally incapable of providing any information, and no amount of mathematical tricks can alter this deficiency. Therefore, we might as well do what (B.2) can possibly do. The solution (B.16) is precisely that. Hence B is the "natural inverse" of S. The fact that the matrix operator S can activate only the T_p-subspace within the n-dimensional vector space Vn is pictorially depicted in Fig. B.1.

Figure B.1: The shaded part is the nullspace of S, which the matrix operator S completely ignores.
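The natural inverse (B.13) and the least-squares property of (B.16) can be illustrated numerically; the singular hermitian matrix and the right-hand side below are illustrative examples, not from the text:

```python
import numpy as np

# Sketch of the natural inverse (B.13) for a singular hermitian S.
S = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])              # hermitian, rank p = 2 < n = 3
lam, T = np.linalg.eigh(S)                   # (B.7): S = T D T†
keep = np.abs(lam) > 1e-12                   # discard the zero eigenvalues
Tp = T[:, keep]                              # semi-unitary n x p matrix T_p
Dp = np.diag(lam[keep])                      # invertible D_p  (B.9)

B = Tp @ np.linalg.inv(Dp) @ Tp.conj().T     # natural inverse  (B.13)

b = np.array([1.0, 0.0, 3.0])                # b has a nullspace component
x = B @ b                                    # (B.16): minimizes |Sx - b|
```

Here B coincides with the Moore-Penrose pseudoinverse, and x with the least-squares solution returned by `np.linalg.lstsq`.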

Case of an n x m Matrix

Having dealt with the special case (B.2), we now return to the general case (B.1), where A is an n × m rectangular matrix. In conjunction with equation (B.1), we introduce its adjoint equation


Figure B.2: An (m + n) x (m + n) matrix S is constructed so that it is hermitian.

A†y = c, (B.20)

where A†, the adjoint of A, is an m × n rectangular matrix, c is a prescribed m-dimensional vector, and y is an n-dimensional vector which we seek. The adjoint matrix operator A† is, loosely speaking, a "backward" operator with respect to the "forward" operator A and is always defined, regardless of whether A is invertible or not. In fact, the domain space and the range space of A are respectively the range space and the domain space of A†. One may wonder why the adjoint equation (B.20) has to be considered for solving (B.1). There is a simple reason for it: the nature of the solution x of (B.1) cannot be completely determined by the forward map A alone; for it the backward map, i.e. the adjoint map A†, must be considered at the same time. The algebraic alternative and the Fredholm alternative discussed in Chapter 8 bear this out. We incorporate (B.1) and (B.20) into a single equation in an (n + m)-dimensional vector space as depicted in Fig. B.2. Observe that the (m + n) × (m + n) matrix S is hermitian. Let

Sz = r, (B.21)

where the (m + n)-dimensional vector z is made of x and y in (B.1) and (B.20) respectively, the m-dimensional vector x filling the first m component slots of the vector z and the n-dimensional vector y the remaining n component slots. Similarly, the (n + m)-dimensional vector r is made of b and c. In the (n + m)-dimensional vector space we consider two hermitian matrices A†A and AA†, which are respectively m × m and n × n square matrices, and for them the eigenequations

A†A v = λ² v, (B.22)

AA† u = λ² u. (B.23)

As can be easily shown, A†A and AA† have the same rank and share the same nonzero eigenvalues, so that if the m × m matrix A†A has p (≤ n, m) nonzero eigenvalues, so does the n × n matrix AA†. (B.22) and (B.23) are the related eigenequations for (B.1), the nature of whose solution, as will be shown presently, is determined by the number of nonzero eigenvalues of these related eigenequations. We first construct a p × p diagonal matrix D_p with the positive square roots λ_l of the p nonzero eigenvalues of A†A (and of AA†) as its diagonal entries. Next, we take the p eigenvectors belonging to the p nonzero eigenvalues of A†A and form an m × p rectangular matrix M_p with these eigenvectors as its columns. Similarly, we form an n × p rectangular matrix W_p with the eigenvectors of AA† belonging to its p nonzero eigenvalues. As in the hermitian case, it is easy to show that M_p and W_p are semi-unitary:

M_p† M_p = I_p, (B.24)
M_p M_p† ≠ I_m; (B.25)

W_p† W_p = I_p, (B.26)
W_p W_p† ≠ Iₙ, (B.27)

where I_l denotes the l-dimensional identity matrix. Now the n × m rectangular matrix A in (B.1) is decomposed in the form

A = W_p D_p M_p†, (B.28)

and its natural inverse (i.e., pseudoinverse) is

B = M_p D_p⁻¹ W_p†. (B.29)

Notice that the two semi-unitary matrices M_p and W_p are needed in decomposing the n × m rectangular matrix A, unlike in the case of the n × n square matrix. The best solution we can obtain for (B.1) is then given by

x = Bb, (B.30)

where B is as given by (B.29). The matrix A in (B.1) maps an element of Vm to an element of Vn. However, it is activated only in a p-dimensional subspace of each of the two vector spaces. This subspace is determined by the p nonzero eigenvalues of the related eigenvalue problems for the "two-way maps" A†A and AA† given in (B.22) and (B.23). The matrix A completely ignores the subspaces which lie outside the p-dimensional subspaces of Vm and Vn, and it is only between these p-dimensional subspaces, in which the related two-way eigenequations have nonzero eigenvalues, that A acts as a one-to-one map. We write (B.22) and (B.23) more explicitly as

A†A vₗ = λₗ² vₗ, (B.31)

AA† uₗ = λₗ² uₗ, (B.32)

where l = 1, 2, 3, …, p ≤ n, m. Then the decomposition (B.28) implies that

A = Σ_{l=1}^{p} λₗ uₗ vₗ†. (B.33)

The formula (B.33) is commonly called the singular value decomposition. In a real vector space, the adjoint sign "†" is replaced by the transposition sign "ᵗ".
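A numerical sketch of (B.28)–(B.33) for an example rectangular A of deficient rank. Here W_p is realized as W_p = A M_p D_p⁻¹ (equivalently uₗ = A vₗ / λₗ), which is one standard way to carry out the construction:

```python
import numpy as np

# Sketch of (B.28)-(B.33) for an n x m matrix A of deficient rank p;
# A is illustrative example data (n = 4, m = 3, rank 2).
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 3)) @ np.diag([3.0, 1.0, 0.0])

lam2, M = np.linalg.eigh(A.conj().T @ A)     # (B.22): A†A v = lambda^2 v
keep = lam2 > 1e-12                          # p nonzero eigenvalues
Mp = M[:, keep]                              # m x p semi-unitary M_p
Dp = np.diag(np.sqrt(lam2[keep]))            # singular values lambda_l
Wp = A @ Mp @ np.linalg.inv(Dp)              # u_l = A v_l / lambda_l (B.23)

B = Mp @ np.linalg.inv(Dp) @ Wp.conj().T     # natural inverse  (B.29)
```

The reconstruction A = W_p D_p M_p† is (B.28), and B agrees with `np.linalg.pinv(A)`.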

References

B.1 A. Albert, Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, NY, 1972.

B.2 K. E. Atkinson, An Introduction to Numerical Analysis, John Wiley and Sons, Inc., New York, NY, 1978.

B.3 C. Lanczos, Linear Differential Operators, Van Nostrand, New York, NY, 1962.

Appendix C Reflection-coefficient Matrix

The S-matrix for the scattered wave in the far zone in the bistatic scattering geometry is a linear integral transformation of the reflection-coefficient matrix, as described in Chapter 3. In order to establish this integral transformation relationship analytically, it is necessary to introduce a set of bases as the chain of the physical process of electromagnetic scattering proceeds. We first set a fixed Cartesian basis B_g = (x̂, ŷ, ẑ) with origin at a point chosen on the surface A of the scatterer (cf. Fig. 3.1, Chapter 3). The positions of the transmitter and the receiver in the exterior region D_e far away from the scatterer are denoted respectively by T = (r₁, θ₁, φ₁) and R = (r₂, θ₂, φ₂) with reference to B_g. Without loss of generality, it will be assumed that 0 < θ₁, θ₂ < π/2. With reference to the basis B_g, we define the direction of propagation of the incident wave, n̂₁, as

n̂₁ = x̂ sin(π − θ₁) cos(π + φ₁) + ŷ sin(π − θ₁) sin(π + φ₁) + ẑ cos(π − θ₁)
   = −(x̂ sinθ₁ cosφ₁ + ŷ sinθ₁ sinφ₁ + ẑ cosθ₁). (C.1)

It is directed from the transmitter position T in D_e to the origin of the basis B_g on A. Similarly, we define the direction of propagation of the scattered wave at the receiver position by

n̂₂ = x̂ sinθ₂ cosφ₂ + ŷ sinθ₂ sinφ₂ + ẑ cosθ₂, (C.2)

which is directed from the origin of B_g on A to the receiver position R in D_e. The reflection-coefficient matrix depends on the definitions of n̂₁ and n̂₂ as well as on other bases which we will describe shortly. We set at the transmitter position T a two-dimensional transmitter polarization basis B_T = (ĥ₁, v̂₁), ĥ₁ × v̂₁ = n̂₁, where

ĥ₁ = (ẑ × n̂₁)/|ẑ × n̂₁|, (C.3)


v̂₁ = n̂₁ × ĥ₁. (C.4)

The set (ĥ₁, v̂₁, n̂₁) forms a right-handed triplet. Since 0 < θ₁ < π/2, |ẑ × n̂₁| = sinθ₁ > 0. The definition of ĥ₁ is based on this assumption. Similarly, at the receiver position we set a two-dimensional receiver basis B_R = (ĥ₂, v̂₂), ĥ₂ × v̂₂ = n̂₂, where

ĥ₂ = (ẑ × n̂₂)/|ẑ × n̂₂|, (C.5)

v̂₂ = n̂₂ × ĥ₂. (C.6)

The set (ĥ₂, v̂₂, n̂₂) forms a right-handed triplet, and under the assumption that 0 < θ₂ < π/2, |ẑ × n̂₂| = sinθ₂ > 0. The polarizations of the transmitted and the received waves are necessarily referenced to the transmitter and the receiver polarization bases, respectively. Let r′ = (x′, y′, z′) be a point on the surface A, where z′ = z′(x′, y′) is the height of the surface A at r′ with reference to the basis B_g. At r′ ∈ A, the outwardly drawn unit normal vector is denoted by

n̂′ = (n̂′ · ẑ)(−x̂ z′ₓ − ŷ z′_y + ẑ), (C.7)

where

z′ₓ = ∂z′(x′, y′)/∂x′, z′_y = ∂z′(x′, y′)/∂y′, (C.8)

(n̂′ · ẑ) = 1/√(1 + (z′ₓ)² + (z′_y)²).

At r′ ∈ A we set a local right-handed triplet (n̂₁, t̂, ŝ), where

t̂ = (n̂₁ × n̂′)/|n̂₁ × n̂′|, (C.9)

ŝ = n̂₁ × t̂. (C.10)

The t̂-axis lies in the tangential plane at r′ ∈ A, and the ŝ-axis lies in the plane of incidence, which is defined by n̂₁ and n̂′ at r′ ∈ A. The Fresnel reflection coefficients R⊥ and R∥ at r′ ∈ A are determined with reference to the (n̂₁, t̂, ŝ) basis.

Electromagnetic scattering can be described with reference to the four bases introduced above. Specifically, the S-matrix at the receiver position R = (r₂, θ₂, φ₂) in the far zone in D_e and the reflection-coefficient matrix at a point r′ ∈ A, which are referenced to the polarization bases B_T and B_R, are related to each other by (cf. (3.101) in Section 3.3, Chapter 3)

(C.11)

The elements of the reflection-coefficient matrix γ_lm can be expressed as

γ_{h₂h₁} = (ik/4π)[ −(1 + R⊥) P_{h₂h₁} (t̂ · ĥ₁) + (1 − R∥) Q_{h₂h₁} (ŝ · ĥ₁) ] + (ik/4π) R_{h₂h₁}, (C.12)

γ_{h₂v₁} = (ik/4π)[ −(1 + R⊥) P_{h₂v₁} (t̂ · v̂₁) + (1 − R∥) Q_{h₂v₁} (ŝ · v̂₁) ] + (ik/4π) R_{h₂v₁}, (C.13)

γ_{v₂h₁} = (ik/4π)[ −(1 + R⊥) P_{v₂h₁} (t̂ · ĥ₁) + (1 − R∥) Q_{v₂h₁} (ŝ · ĥ₁) ] + (ik/4π) R_{v₂h₁}, (C.14)

γ_{v₂v₁} = (ik/4π)[ −(1 + R⊥) P_{v₂v₁} (t̂ · v̂₁) + (1 − R∥) Q_{v₂v₁} (ŝ · v̂₁) ] + (ik/4π) R_{v₂v₁}, (C.15)

where

P_{h₂h₁} = ĥ₂ · (n̂′ × ŝ) + v̂₂ · (n̂′ × t̂),
Q_{h₂h₁} = ĥ₂ · (n̂′ × t̂) − v̂₂ · (n̂′ × ŝ), (C.16)
R_{h₂h₁} = 2 ĥ₂ · (n̂′ × v̂₁);

P_{h₂v₁} = P_{h₂h₁},
Q_{h₂v₁} = Q_{h₂h₁}, (C.17)
R_{h₂v₁} = −2 ĥ₂ · (n̂′ × ĥ₁);

P_{v₂h₁} = −Q_{h₂h₁},
Q_{v₂h₁} = P_{h₂h₁}, (C.18)
R_{v₂h₁} = 2 v̂₂ · (n̂′ × v̂₁);

P_{v₂v₁} = P_{v₂h₁},
Q_{v₂v₁} = Q_{v₂h₁}, (C.19)
R_{v₂v₁} = −2 v̂₂ · (n̂′ × ĥ₁).

If the scatterer is highly conducting, then 1 + R⊥ ≈ 0 and 1 − R∥ ≈ 0, so that the terms inside the square brackets for γ_lm are in general small compared to the dominant terms R_lm. γ_lm will be expressed in terms of the geometric factors (θ₁, φ₁), (θ₂, φ₂) and the slopes z′ₓ, z′_y.

ĥ₁ = x̂ sinφ₁ − ŷ cosφ₁, (C.20)
v̂₁ = −x̂ cosθ₁ cosφ₁ − ŷ cosθ₁ sinφ₁ + ẑ sinθ₁; (C.21)
ĥ₂ = −x̂ sinφ₂ + ŷ cosφ₂, (C.22)
v̂₂ = −x̂ cosθ₂ cosφ₂ − ŷ cosθ₂ sinφ₂ + ẑ sinθ₂. (C.23)

When the receiver position collapses onto the transmitter position (i.e., θ₂ = θ₁, φ₂ = φ₁ in backscatter), ĥ₂ = −ĥ₁ and v̂₂ = v̂₁; that is, the receiver polarization basis B_R does not coincide with the transmitter polarization basis B_T in the limit (r₂, θ₂, φ₂) → (r₁, θ₁, φ₁). This, of course, is the consequence of the definitions of n̂₁ and n̂₂ in (C.1) and (C.2) and of ĥᵢ, v̂ᵢ, i = 1, 2 in (C.3)–(C.6). As will be seen, this fact brings a certain undesirable effect on both the S-matrix and the reflection-coefficient matrix. From (C.1) and (C.7) we obtain

n̂₁ × n̂′ = −(n̂′ · ẑ) sinθ₁ { x̂ (sinφ₁ + z′_y cotθ₁) − ŷ (cosφ₁ + z′ₓ cotθ₁) + ẑ (z′ₓ sinφ₁ − z′_y cosφ₁) }, (C.24)

and

|n̂₁ × n̂′|² = (n̂′ · ẑ)² sin²θ₁ { 1 + 2z′ₓ cotθ₁ cosφ₁ + 2z′_y cotθ₁ sinφ₁ − 2z′ₓ z′_y sinφ₁ cosφ₁ + (z′ₓ)²(cot²θ₁ + sin²φ₁) + (z′_y)²(cot²θ₁ + cos²φ₁) }. (C.25)

From (C.9) and (C.24) we find that

t̂ = −[(n̂′ · ẑ) sinθ₁ / |n̂₁ × n̂′|] { x̂ (sinφ₁ + z′_y cotθ₁) − ŷ (cosφ₁ + z′ₓ cotθ₁) + ẑ (z′ₓ sinφ₁ − z′_y cosφ₁) }. (C.26)

Similarly, from (C.10) and (C.26),

ŝ = [(n̂′ · ẑ) sinθ₁ / |n̂₁ × n̂′|] { x̂ [cosθ₁ cosφ₁ + z′ₓ (cscθ₁ − sinθ₁ cos²φ₁) − z′_y sinθ₁ sinφ₁ cosφ₁] + ŷ [cosθ₁ sinφ₁ − z′ₓ sinθ₁ sinφ₁ cosφ₁ + z′_y (cscθ₁ − sinθ₁ sin²φ₁)] − ẑ [sinθ₁ + z′ₓ cosθ₁ cosφ₁ + z′_y cosθ₁ sinφ₁] }. (C.27)
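The closed forms (C.26)–(C.27) can be verified against the definitions (C.9)–(C.10); in the sketch below the angle and the slopes are arbitrary test values:

```python
import numpy as np

# Check of (C.26)-(C.27) against t = n1 x n' / |n1 x n'| and s = n1 x t.
th1, ph1 = 0.6, 0.9
zx, zy = 0.20, -0.15

n1 = -np.array([np.sin(th1) * np.cos(ph1),
                np.sin(th1) * np.sin(ph1),
                np.cos(th1)])                       # (C.1)
nz = 1.0 / np.sqrt(1.0 + zx**2 + zy**2)             # (n'.z), (C.8)
npr = nz * np.array([-zx, -zy, 1.0])                # unit normal n' (C.7)

t_def = np.cross(n1, npr) / np.linalg.norm(np.cross(n1, npr))   # (C.9)
s_def = np.cross(n1, t_def)                                      # (C.10)

cot, csc = np.cos(th1) / np.sin(th1), 1.0 / np.sin(th1)
fac = nz * np.sin(th1) / np.linalg.norm(np.cross(n1, npr))
t_cf = -fac * np.array([np.sin(ph1) + zy * cot,
                        -(np.cos(ph1) + zx * cot),
                        zx * np.sin(ph1) - zy * np.cos(ph1)])    # (C.26)
s_cf = fac * np.array([
    np.cos(th1) * np.cos(ph1)
        + zx * (csc - np.sin(th1) * np.cos(ph1)**2)
        - zy * np.sin(th1) * np.sin(ph1) * np.cos(ph1),
    np.cos(th1) * np.sin(ph1)
        - zx * np.sin(th1) * np.sin(ph1) * np.cos(ph1)
        + zy * (csc - np.sin(th1) * np.sin(ph1)**2),
    -(np.sin(th1) + zx * np.cos(th1) * np.cos(ph1)
                  + zy * np.cos(th1) * np.sin(ph1))])            # (C.27)
```

Both closed forms agree with the direct cross products to machine precision.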

We now find the various factors for P_lm, Q_lm and R_lm.

t̂ · ĥ₁ = −[(n̂′ · ẑ) sinθ₁ / |n̂₁ × n̂′|] [1 + cotθ₁ (z′ₓ cosφ₁ + z′_y sinφ₁)], (C.28)

ŝ · ĥ₁ = [(n̂′ · ẑ) sinθ₁ / |n̂₁ × n̂′|] (z′ₓ cscθ₁ sinφ₁ − z′_y cscθ₁ cosφ₁). (C.29)

From (C.7), (C.26), and (C.27) we find

n̂′ × t̂ = −[(n̂′ · ẑ)² sinθ₁ / |n̂₁ × n̂′|] { x̂ [cosφ₁ + z′ₓ cotθ₁ − z′ₓ z′_y sinφ₁ + (z′_y)² cosφ₁] + ŷ [sinφ₁ + z′_y cotθ₁ − z′ₓ z′_y cosφ₁ + (z′ₓ)² sinφ₁] + ẑ [z′ₓ cosφ₁ + z′_y sinφ₁ + (z′ₓ)² cotθ₁ + (z′_y)² cotθ₁] }, (C.30)

n̂′ × ŝ = −[(n̂′ · ẑ)² sinθ₁ / |n̂₁ × n̂′|] { x̂ [cosθ₁ sinφ₁ − z′ₓ sinθ₁ sinφ₁ cosφ₁ + z′_y (cscθ₁ − sinθ₁ − sinθ₁ sin²φ₁) − z′ₓ z′_y cosθ₁ cosφ₁ − (z′_y)² cosθ₁ sinφ₁] − ŷ [cosθ₁ cosφ₁ + z′ₓ (cscθ₁ − sinθ₁ − sinθ₁ cos²φ₁) − z′_y sinθ₁ sinφ₁ cosφ₁ − z′ₓ z′_y cosθ₁ sinφ₁ − (z′ₓ)² cosθ₁ cosφ₁] + ẑ [z′ₓ cosθ₁ sinφ₁ − z′_y cosθ₁ cosφ₁ + z′ₓ z′_y sinθ₁ cos 2φ₁ − (z′ₓ)² sinθ₁ sinφ₁ cosφ₁ + (z′_y)² sinθ₁ sinφ₁ cosφ₁] }, (C.31)

ĥ₂ · (n̂′ × ŝ) = [(n̂′ · ẑ)² sinθ₁ / |n̂₁ × n̂′|] { cosθ₁ cos(φ₁ − φ₂) + z′ₓ [cotθ₁ cosθ₁ cosφ₂ − sinθ₁ cosφ₁ cos(φ₁ − φ₂)] + z′_y [cotθ₁ cosθ₁ sinφ₂ − sinθ₁ sinφ₁ cos(φ₁ − φ₂)] − z′ₓ z′_y cosθ₁ sin(φ₁ + φ₂) − (z′ₓ)² cosθ₁ cosφ₁ cosφ₂ − (z′_y)² cosθ₁ sinφ₁ sinφ₂ }, (C.32)

v̂₂ · (n̂′ × ŝ) = [(n̂′ · ẑ)² sinθ₁ / |n̂₁ × n̂′|] { cosθ₁ cosθ₂ sin(φ₁ − φ₂) − z′ₓ [sinθ₁ cosθ₂ cosφ₁ sin(φ₁ − φ₂) + cosθ₁ sinθ₂ sinφ₁ + cosθ₁ cosθ₂ cotθ₁ sinφ₂] − z′_y [sinθ₁ cosθ₂ sinφ₁ sin(φ₁ − φ₂) − cosθ₁ sinθ₂ cosφ₁ − cosθ₁ cosθ₂ cotθ₁ cosφ₂] − z′ₓ z′_y [sinθ₁ sinθ₂ cos 2φ₁ + cosθ₁ cosθ₂ cos(φ₁ + φ₂)] + (z′ₓ)² sinθ₁ cosφ₁ (sinθ₂ sinφ₁ + cotθ₁ cosθ₂ sinφ₂) − (z′_y)² sinθ₁ sinφ₁ (sinθ₂ cosφ₁ + cotθ₁ cosθ₂ cosφ₂) }, (C.33)

ĥ₂ · (n̂′ × t̂) = [(n̂′ · ẑ)² sinθ₁ / |n̂₁ × n̂′|] { −sin(φ₁ − φ₂) + z′ₓ cotθ₁ sinφ₂ − z′_y cotθ₁ cosφ₂ + z′ₓ z′_y cos(φ₁ + φ₂) − (z′ₓ)² sinφ₁ cosφ₂ + (z′_y)² cosφ₁ sinφ₂ }, (C.34)

v̂₂ · (n̂′ × t̂) = [(n̂′ · ẑ)² sinθ₁ / |n̂₁ × n̂′|] { cosθ₂ cos(φ₁ − φ₂) + z′ₓ (cotθ₁ cosθ₂ cosφ₂ − sinθ₂ cosφ₁) + z′_y (cotθ₁ cosθ₂ sinφ₂ − sinθ₂ sinφ₁) − z′ₓ z′_y cosθ₂ sin(φ₁ + φ₂) − (z′ₓ)² (cotθ₁ sinθ₂ − cosθ₂ sinφ₁ sinφ₂) − (z′_y)² (cotθ₁ sinθ₂ − cosθ₂ cosφ₁ cosφ₂) }. (C.35)

For the dominant terms R_lm,

n̂′ × ĥ₁ = (n̂′ · ẑ) [ x̂ cosφ₁ + ŷ sinφ₁ + ẑ (z′ₓ cosφ₁ + z′_y sinφ₁) ], (C.36)

n̂′ × v̂₁ = (n̂′ · ẑ) { x̂ (cosθ₁ sinφ₁ − z′_y sinθ₁) − ŷ (cosθ₁ cosφ₁ − z′ₓ sinθ₁) + ẑ cosθ₁ (z′ₓ sinφ₁ − z′_y cosφ₁) }, (C.37)

ĥ₂ · (n̂′ × ĥ₁) = (n̂′ · ẑ) sin(φ₁ − φ₂), (C.38)

v̂₂ · (n̂′ × ĥ₁) = −(n̂′ · ẑ) { cosθ₂ cos(φ₁ − φ₂) − z′ₓ sinθ₂ cosφ₁ − z′_y sinθ₂ sinφ₁ }, (C.39)

ĥ₂ · (n̂′ × v̂₁) = (n̂′ · ẑ) { −cosθ₁ cos(φ₁ − φ₂) + z′ₓ sinθ₁ cosφ₂ + z′_y sinθ₁ sinφ₂ }, (C.40)

v̂₂ · (n̂′ × v̂₁) = (n̂′ · ẑ) { −cosθ₁ cosθ₂ sin(φ₁ − φ₂) − z′ₓ (sinθ₁ cosθ₂ sinφ₂ − cosθ₁ sinθ₂ sinφ₁) + z′_y (sinθ₁ cosθ₂ cosφ₂ − cosθ₁ sinθ₂ cosφ₁) }. (C.41)
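The dominant-term factors (C.38)–(C.41) are simple enough to check numerically from the basis definitions (C.1)–(C.7); all angles and slopes below are arbitrary test values:

```python
import numpy as np

# Numerical check of (C.38)-(C.41) against the basis definitions.
th1, ph1, th2, ph2 = 0.5, 1.1, 0.8, 0.3
zx, zy = 0.2, -0.1
s1, c1 = np.sin(th1), np.cos(th1)
s2, c2 = np.sin(th2), np.cos(th2)

zh = np.array([0.0, 0.0, 1.0])
n1 = -np.array([s1 * np.cos(ph1), s1 * np.sin(ph1), c1])      # (C.1)
n2 = np.array([s2 * np.cos(ph2), s2 * np.sin(ph2), c2])       # (C.2)
h1 = np.cross(zh, n1) / np.linalg.norm(np.cross(zh, n1))      # (C.3)
v1 = np.cross(n1, h1)                                          # (C.4)
h2 = np.cross(zh, n2) / np.linalg.norm(np.cross(zh, n2))      # (C.5)
v2 = np.cross(n2, h2)                                          # (C.6)
nz = 1.0 / np.sqrt(1.0 + zx**2 + zy**2)
npr = nz * np.array([-zx, -zy, 1.0])                           # (C.7)

dphi = ph1 - ph2
c38 = nz * np.sin(dphi)                                        # (C.38)
c39 = -nz * (c2 * np.cos(dphi) - zx * s2 * np.cos(ph1)
                               - zy * s2 * np.sin(ph1))        # (C.39)
c40 = nz * (-c1 * np.cos(dphi) + zx * s1 * np.cos(ph2)
                               + zy * s1 * np.sin(ph2))        # (C.40)
c41 = nz * (-c1 * c2 * np.sin(dphi)
            - zx * (s1 * c2 * np.sin(ph2) - c1 * s2 * np.sin(ph1))
            + zy * (s1 * c2 * np.cos(ph2) - c1 * s2 * np.cos(ph1)))  # (C.41)
```

Each closed form agrees with the corresponding direct dot product ĥ₂·(n̂′×ĥ₁), v̂₂·(n̂′×ĥ₁), ĥ₂·(n̂′×v̂₁), v̂₂·(n̂′×v̂₁) exactly, since these factors are exactly linear in the slopes.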

We will now obtain the elements of the reflection-coefficient matrix γ_lm linearized in the slopes z′ₓ and z′_y for a moderately rough surface A. Then,

(n̂′ · ẑ) ≈ 1, |n̂₁ × n̂′|⁻² ≈ (1/sin²θ₁)(1 − 2z′ₓ cotθ₁ cosφ₁ − 2z′_y cotθ₁ sinφ₁). (C.42)

Also, P_lm and Q_lm are linearized in z′ₓ and z′_y. For example,

P_{h₂h₁} = ĥ₂ · (n̂′ × ŝ) + v̂₂ · (n̂′ × t̂) ≈ [sinθ₁ / |n̂₁ × n̂′|] { (cosθ₁ + cosθ₂) cos(φ₁ − φ₂) + z′ₓ [ (cosθ₁ + cosθ₂) cotθ₁ cosφ₂ − cosφ₁ (sinθ₂ + sinθ₁ cos(φ₁ − φ₂)) ] + z′_y [ (cosθ₁ + cosθ₂) cotθ₁ sinφ₂ − sinφ₁ (sinθ₂ + sinθ₁ cos(φ₁ − φ₂)) ] }, (C.43)

Q_{h₂h₁} = ĥ₂ · (n̂′ × t̂) − v̂₂ · (n̂′ × ŝ) ≈ [sinθ₁ / |n̂₁ × n̂′|] { −(1 + cosθ₁ cosθ₂) sin(φ₁ − φ₂) + z′ₓ [ (1 + cosθ₁ cosθ₂) cotθ₁ sinφ₂ + cosθ₁ sinθ₂ sinφ₁ + sinθ₁ cosθ₂ cosφ₁ sin(φ₁ − φ₂) ] + z′_y [ −(1 + cosθ₁ cosθ₂) cotθ₁ cosφ₂ − cosθ₁ sinθ₂ cosφ₁ + sinθ₁ cosθ₂ sinφ₁ sin(φ₁ − φ₂) ] }. (C.44)

The approximate expressions for γ_lm, linearized in z′ₓ and z′_y, then take the form

γ_lm = (ik/4π)(A_lm + z′ₓ B_lm + z′_y C_lm) + (ik/4π)(a_lm + z′ₓ b_lm + z′_y c_lm), (C.45)

for l = h₂, v₂; m = h₁, v₁, where

A_{h₂h₁} = (1 + R⊥)(cosθ₁ + cosθ₂) cos(φ₁ − φ₂), (C.46)

B_{h₂h₁} = (1 + R⊥) { (cosθ₁ + cosθ₂) cotθ₁ [cosφ₂ − cosφ₁ cos(φ₁ − φ₂)] − cosφ₁ [sinθ₂ + sinθ₁ cos(φ₁ − φ₂)] } − (1 − R∥)(1 + cosθ₁ cosθ₂) cscθ₁ sinφ₁ sin(φ₁ − φ₂), (C.47)

C_{h₂h₁} = (1 + R⊥) { (cosθ₁ + cosθ₂) cotθ₁ [sinφ₂ − sinφ₁ cos(φ₁ − φ₂)] − sinφ₁ [sinθ₂ + sinθ₁ cos(φ₁ − φ₂)] } + (1 − R∥)(1 + cosθ₁ cosθ₂) cscθ₁ cosφ₁ sin(φ₁ − φ₂); (C.48)

a_{h₂h₁} = −2 cosθ₁ cos(φ₁ − φ₂), (C.49)
b_{h₂h₁} = 2 sinθ₁ cosφ₂, (C.50)
c_{h₂h₁} = 2 sinθ₁ sinφ₂; (C.51)

A_{h₂v₁} = −(1 − R∥)(1 + cosθ₁ cosθ₂) sin(φ₁ − φ₂), (C.52)

B_{h₂v₁} = (1 + R⊥)(cosθ₁ + cosθ₂) cscθ₁ sinφ₁ cos(φ₁ − φ₂) − (1 − R∥) { (1 + cosθ₁ cosθ₂) cotθ₁ [sin…] … }, (C.53)

C_{h₂v₁} = (1 + R⊥)(cosθ₁ + cosθ₂) cscθ₁ cos… ; (C.54)

a_{h₂v₁} = −2 sin(φ₁ − φ₂), (C.55)

b_{h₂v₁} = 0, (C.56)

c_{h₂v₁} = 0; (C.57)

A_{v₂h₁} = −(1 + R⊥)(1 + cosθ₁ cosθ₂) sin(φ₁ − φ₂), (C.58)

B_{v₂h₁} = −(1 + R⊥) { (1 + cosθ₁ cosθ₂) cotθ₁ [sin…] … } + (1 − R∥)(cosθ₁ + cosθ₂) cscθ₁ sin… , (C.59)

C_{v₂h₁} = (1 + R⊥) { −(1 + cosθ₁ cosθ₂) cotθ₁ [cos…] … } … ; (C.60)

a_{v₂h₁} = −2 cosθ₁ cosθ₂ sin(φ₁ − φ₂), (C.61)

b_{v₂h₁} = −2 (sinθ₁ cosθ₂ sinφ₂ − cosθ₁ sinθ₂ sinφ₁), (C.62)

c_{v₂h₁} = 2 (sinθ₁ cosθ₂ cosφ₂ − cosθ₁ sinθ₂ cosφ₁); (C.63)

A_{v₂v₁} = −(1 − R∥)(cosθ₁ + cosθ₂) cos(φ₁ − φ₂), (C.64)

B_{v₂v₁} = (1 + R⊥)(1 + cosθ₁ cosθ₂) cscθ₁ sinφ₁ sin(φ₁ − φ₂) − (1 − R∥) { (cosθ₁ + cosθ₂) cotθ₁ [cosφ₂ − cosφ₁ cos(φ₁ − φ₂)] − cosφ₁ [sinθ₂ + sinθ₁ cos(φ₁ − φ₂)] }, (C.65)

C_{v₂v₁} = (1 + R⊥)(1 + cosθ₁ cosθ₂) cscθ₁ cosφ₁ sin(φ₁ − φ₂) + (1 − R∥) { (cosθ₁ + cosθ₂) cotθ₁ [sinφ₂ − sinφ₁ cos(φ₁ − φ₂)] − sinφ₁ [sinθ₂ + sinθ₁ cos(φ₁ − φ₂)] }; (C.66)

a_{v₂v₁} = 2 cosθ₂ cos(φ₁ − φ₂),

b_{v₂v₁} = −2 sinθ₂ cosφ₁, (C.67)

c_{v₂v₁} = −2 sinθ₂ sinφ₁. (C.68)

In backscatter we set θ₁ = θ₂ ≡ θ and φ₁ = φ₂ ≡ φ. Then,

A_{h₂h₁} = 2(1 + R⊥) cosθ, (C.69)
B_{h₂h₁} = −2(1 + R⊥) sinθ cosφ, (C.70)
C_{h₂h₁} = −2(1 + R⊥) sinθ sinφ; (C.71)

a_{h₂h₁} = −2 cosθ, (C.72)
b_{h₂h₁} = 2 sinθ cosφ, (C.73)
c_{h₂h₁} = 2 sinθ sinφ; (C.74)

A_{h₂v₁} = 0, (C.75)
B_{h₂v₁} = 2(1 + R⊥) cotθ sinφ − 2(1 − R∥) cotθ sinφ, (C.76)
C_{h₂v₁} = 2(1 + R⊥) cotθ cosφ − 2(1 − R∥) cotθ cosφ; (C.77)

a_{h₂v₁} = 0, (C.78)

b_{h₂v₁} = 0, (C.79)

c_{h₂v₁} = 0; (C.80)

A_{v₂h₁} = 0, (C.81)

B_{v₂h₁} = −2(1 + R⊥) cotθ sinφ + 2(1 − R∥) cotθ sinφ = −B_{h₂v₁}, (C.82)

C_{v₂h₁} = −2(1 + R⊥) cotθ cosφ + 2(1 − R∥) cotθ cosφ = −C_{h₂v₁}; (C.83)

a_{v₂h₁} = 0, (C.84)

b_{v₂h₁} = 0, (C.85)

c_{v₂h₁} = 0; (C.86)

A_{v₂v₁} = −2(1 − R∥) cosθ, (C.87)

B_{v₂v₁} = 2(1 − R∥) sinθ cosφ, (C.88)

C_{v₂v₁} = −2(1 − R∥) sinθ sinφ; (C.89)

a_{v₂v₁} = 2 cosθ, (C.90)

b_{v₂v₁} = −2 sinθ cosφ, (C.91)

c_{v₂v₁} = −2 sinθ sinφ. (C.92)

Appendix D Statistical Averages

In this appendix we will prove, for (5.55) in Section 5.2, Chapter 5, that

⟨u₂ u₃ e^{−ikq_z u₁}⟩ = ∫₋∞^∞ du₁ ∫₋∞^∞ du₂ ∫₋∞^∞ du₃ u₂ u₃ f(u₁, u₂, u₃) e^{−ikq_z u₁} = (ρ₂₃ − ρ₁₂ ρ₁₃ k²q_z²) e^{−(1/2) k²q_z² ρ₁₁} (D.1)

(D.2) with

and is a three-dimensional real-valued random vector.

f(u₁, u₂, u₃) = (2π)^{−3/2} Δ^{−1/2} exp[−Q(u)/2] (D.3)

is the probability density function of the normal distribution for the random variables u_i, for which the covariance matrix Λ has the form

A = ( ρ11  ρ12  ρ13
      ρ21  ρ22  ρ23
      ρ31  ρ32  ρ33 ),   (D.4)


ρ_ij = ⟨u_i u_j⟩,   (D.5)

Δ = det A,   (D.6)

and (−1)^{i+j} M_ij denotes the cofactor corresponding to the minor M_ij of ρ_ij, i, j = 1, 2, 3. ρ_ii = ⟨u_i²⟩ denotes the variance and √ρ_ii the standard deviation. It is assumed in (D.1) that ⟨u_i⟩ = 0. The covariance matrix A is real and positive-definite. The quadratic form

Q(u) = uᵀ A⁻¹ u = (1/Δ) Σ_{l=1}^{3} Σ_{m=1}^{3} (−1)^{l+m} M_lm u_l u_m   (D.7)

can always be reduced to a sum of three squared terms, because the positive-definite matrix A⁻¹ can always be diagonalized by an orthogonal (congruent) transformation. Because of the factor u2 u3 in the integrand of (D.1), however, we will use a less elegant but more straightforward method to evaluate the integral. Since the procedure is lengthy and tedious, it is shown in detail for the convenience of the reader.
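The diagonalization remark can be illustrated with a small numerical sketch (ours, with an arbitrary positive-definite matrix; numpy is assumed available): an orthogonal eigenvector matrix V of A⁻¹ turns Q(u) = uᵀA⁻¹u into a sum of squared terms in v = Vᵀu.

```python
import numpy as np

# arbitrary real symmetric positive-definite covariance matrix (illustrative)
A = np.array([[2.0, 0.6, 0.3],
              [0.6, 1.5, 0.4],
              [0.3, 0.4, 1.0]])
Ainv = np.linalg.inv(A)

# for a symmetric matrix, eigh returns real eigenvalues and an orthogonal V
w, V = np.linalg.eigh(Ainv)

# the orthogonal (congruent) transform of A^{-1} is diagonal, so
# Q(u) = u^T A^{-1} u = sum_k w_k v_k^2 with v = V^T u
D = V.T @ Ainv @ V
```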

D.1 One-dimensional Case

For the one-dimensional random variable u1, the probability density function (D.3) reduces to

f(u1) = (1/√(2π ρ11)) exp[−u1²/(2ρ11)].   (D.8)

We want to evaluate ⟨e^{−ikq_z u1}⟩, where kq_z is a real constant:

⟨e^{−ikq_z u1}⟩ = ∫_{−∞}^{∞} du1 f(u1) e^{−ikq_z u1}.   (D.9)

Taylor-expanding e^{−ikq_z u1},

⟨e^{−ikq_z u1}⟩ = Σ_{m=0}^{∞} ((−ikq_z)^m / m!) ∫_{−∞}^{∞} du1 u1^m f(u1).   (D.10)

The interchange of the summation and the integration in (D.10) is permissible because the series converges uniformly. The integral in (D.10) vanishes for all odd m. Therefore,

⟨e^{−ikq_z u1}⟩ = Σ_{m=0}^{∞} ((−ikq_z)^{2m} / (2m)!) ∫_{−∞}^{∞} du1 u1^{2m} f(u1).   (D.11)

Let

J_m ≡ ∫_{−∞}^{∞} du1 u1^{2m} e^{−u1²/(2ρ11)}.   (D.12)

Then the integral in (D.11) is

∫_{−∞}^{∞} du1 u1^{2m} f(u1) = (1/√(2π ρ11)) J_m.   (D.13)

Let

x = u1/√(2ρ11).   (D.14)

Then

J_m = (2ρ11)^{m+1/2} · 2 ∫_0^{∞} dx x^{2m} e^{−x²} = (2ρ11)^{m+1/2} Γ(m + 1/2),   (D.15)

where Γ(m) = (m − 1)! denotes the gamma function. Substituting (D.15) into (D.11), we get

⟨e^{−ikq_z u1}⟩ = (1/√π) Σ_{m=0}^{∞} ((−k²q_z²)^m / (2m)!) 2^m ρ11^m Γ(m + 1/2).   (D.16)

But

Γ(m + 1/2) = (m − 1/2) Γ(m − 1/2) = (m − 1/2)(m − 3/2) Γ(m − 3/2)
  = ··· = (m − 1/2)(m − 3/2) ··· (3/2)(1/2) Γ(1/2)
  = (1/2^m)(2m − 1)(2m − 3) ··· 5 · 3 · 1 √π
  = (2m)! √π / (2^m 2^m m!).   (D.17)

Using (D.17) in (D.16), we finally obtain

⟨e^{−ikq_z u1}⟩ = Σ_{m=0}^{∞} (1/m!) (−k²q_z² ρ11/2)^m = exp(−k²q_z² ρ11/2).   (D.18)

(D.18) is the desired mean value of e^{−ikq_z u1}, where u1 is a random variable and ρ11 is the variance of u1.
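The closed form (D.18) can be checked by direct numerical quadrature of (D.9). The following sketch (ours) applies the trapezoid rule over ±10 standard deviations, with arbitrary values of kq_z and ρ11:

```python
import cmath
import math

def gauss_char(kqz, rho11, half_width=10.0, n=4000):
    """Trapezoid-rule quadrature of <exp(-i k q_z u1)> for u1 ~ N(0, rho11)."""
    s = math.sqrt(rho11)
    a, h = -half_width * s, 2 * half_width * s / n
    total = 0j
    for m in range(n + 1):
        u = a + m * h
        pdf = math.exp(-u * u / (2 * rho11)) / math.sqrt(2 * math.pi * rho11)
        w = 0.5 if m in (0, n) else 1.0
        total += w * pdf * cmath.exp(-1j * kqz * u)
    return total * h

kqz, rho11 = 1.3, 0.7
numeric = gauss_char(kqz, rho11)
closed = math.exp(-kqz ** 2 * rho11 / 2)   # (D.18)
```

Because the integrand decays like a Gaussian, the quadrature agrees with (D.18) essentially to machine precision.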

D.2 Two-dimensional Case

For a two-dimensional random vector u,

u = (u1, u_j)ᵀ,   (D.19)

the corresponding probability density function is obtained from (D.3) in the form

f(u1, u_j) = (1/(2π √Δ)) exp[−Q(u)/2],   (D.20)

where

Q(u) = (1/Δ) Σ_{l=1}^{2} Σ_{m=1}^{2} (−1)^{l+m} M_lm u_l u_m,   (D.21)

A = ( ρ11  ρ1j
      ρj1  ρjj ),   j ∈ {2, 3, 4, 5},   (D.22)

Δ = | ρ11  ρ1j |
    | ρ1j  ρjj |,   (D.23)

ρ11 = ⟨u1²⟩,  ρ1j = ⟨u1 u_j⟩,  ρjj = ⟨u_j²⟩.   (D.24)

Of the two random variables u1 and u_j, u1 represents a fixed random variable, while u_j represents any one of the several random variables that arise in evaluating the radar cross section of the rough surface. The quadratic form Q(u) in (D.21) is

Q(u) = (1/Δ)(ρjj u1² − 2ρ1j u1 u_j + ρ11 u_j²).   (D.25)

We now evaluate the mean value of u_j e^{−ikq_z u1}:

⟨u_j e^{−ikq_z u1}⟩ = ∫_{−∞}^{∞} du1 ∫_{−∞}^{∞} du_j u_j f(u1, u_j) e^{−ikq_z u1},   (D.26)

where

E(u) = ikq_z u1 + (1/2) Q(u) = (1/(2Δ)) [M11 u1² − 2u1(M1j u_j − ikq_z Δ) + M_jj u_j²].   (D.27)

(D.27) must be put into the form of a sum of squared terms in u1 and u_j, so that the integral (D.26) can be carried out by repeated use of integrals of the form

∫_{−∞}^{∞} dx x^{2n} e^{−x²} = Γ(n + 1/2),   n = 0, 1, ···.   (D.28)

Therefore, we need to complete the square in u1 and u_j in E(u).

But Δ + M²_{1j} = M11 M_jj. Therefore,

(D.29)

where we used M_jj = ρ11. We have now completed the square in E(u) for u1 and u_j. With this, (D.26) can be computed easily.

For

(D.31) let

(D.32)

Then

(D.33) so that

(D.34)

Let

(D.35)

Then

I2 = ∫_{−∞}^{∞} du_j u_j exp[−(1/(2M11))(u_j + ikq_z M1j)²]
  = √(2M11) ∫_{−∞}^{∞} ds2 [√(2M11) s2 − ikq_z M1j] e^{−s2²}
  = −ikq_z M1j √(2π M11) = −ikq_z ρ1j √(2π M11),   (D.36)

because ∫_{−∞}^{∞} ds s e^{−s²} = 0 and ∫_{−∞}^{∞} ds e^{−s²} = √π.

Substituting (D.36) into (D.34), we finally obtain

⟨u_j e^{−ikq_z u1}⟩ = −ikq_z ρ1j e^{−(1/2)k²q_z² ρ11}   (D.37)

for j = 1, 2, ···.
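(D.37) can likewise be checked against brute-force quadrature of the two-dimensional integral (D.26). The sketch below is ours (numpy assumed available; the covariance values are arbitrary but positive-definite):

```python
import numpy as np

def mean_uj_exp(kqz, r11, r1j, rjj, half=8.0, n=601):
    """Grid quadrature of <u_j exp(-i k q_z u1)> for a zero-mean
    bivariate normal with r11 = <u1^2>, r1j = <u1 u_j>, rjj = <u_j^2>."""
    det = r11 * rjj - r1j ** 2
    u1 = np.linspace(-half * np.sqrt(r11), half * np.sqrt(r11), n)
    uj = np.linspace(-half * np.sqrt(rjj), half * np.sqrt(rjj), n)
    h1, hj = u1[1] - u1[0], uj[1] - uj[0]
    U1, Uj = np.meshgrid(u1, uj, indexing="ij")
    Q = (rjj * U1 ** 2 - 2 * r1j * U1 * Uj + r11 * Uj ** 2) / det   # cf. (D.25)
    f = np.exp(-Q / 2) / (2 * np.pi * np.sqrt(det))                 # cf. (D.20)
    integrand = Uj * f * np.exp(-1j * kqz * U1)
    return integrand.sum() * h1 * hj

kqz, r11, r1j, rjj = 1.0, 0.8, 0.3, 0.6
numeric = mean_uj_exp(kqz, r11, r1j, rjj)
closed = -1j * kqz * r1j * np.exp(-kqz ** 2 * r11 / 2)              # (D.37)
```

The mean is purely imaginary, as (D.37) requires, since u_j is odd under (u1, u_j) → (−u1, −u_j) while the pdf is even.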

D.3 Three-dimensional Case

For a three-dimensional random vector u,

u = (u1, u2, u3)ᵀ,

we evaluate the mean value of u_i u_j e^{−ikq_z u1}:

I_ij = ⟨u_i u_j e^{−ikq_z u1}⟩ = ∫_{−∞}^{∞} du1 ∫_{−∞}^{∞} du_i u_i ∫_{−∞}^{∞} du_j u_j f(u1, u_i, u_j) e^{−ikq_z u1}.   (D.38)

For this we first evaluate

(D.39)

where

E(u) = ikq_z u1 + (1/2) Q(u),   (D.40)

f(u1, u2, u3) = (1/((√(2π))³ √Δ)) exp[−Q(u)/2],   (D.41)

Q(u) = (1/Δ) Σ_{l=1}^{3} Σ_{m=1}^{3} (−1)^{l+m} M_lm u_l u_m = (u1, u2, u3) A⁻¹ (u1, u2, u3)ᵀ,   (D.42)

A = ( ρ11  ρ12  ρ13
      ρ21  ρ22  ρ23
      ρ31  ρ32  ρ33 ),   ρ_ij = ρ_ji = ⟨u_i u_j⟩,   (D.43)

Δ = det A.   (D.44)

M_lm = M_ml holds for l, m = 1, 2, 3 because ρ_lm = ρ_ml. Now

E(u) = (1/(2Δ)) [M11 u1² + 2u1(−M12 u2 + M13 u3 + ikq_z Δ)]
  + (1/(2Δ)) (M22 u2² − 2M23 u2 u3) + (1/(2Δ)) M33 u3²
  ≡ Q1 + Q2 + Q3.   (D.45)

We start with Q1 to complete the square for u1.

Q1 = (1/(2Δ)) [M11 u1² + 2u1(−M12 u2 + M13 u3 + ikq_z Δ)]
  = (1/(2Δ)) (√M11 u1 + (−M12 u2 + M13 u3 + ikq_z Δ)/√M11)²
  − (1/(2M11 Δ)) (−M12 u2 + M13 u3 + ikq_z Δ)².   (D.46)

Next, to complete the square for u2, we combine Q2 with the second term on the right side of (D.46). Thus,

Q2 − (1/(2M11 Δ)) (−M12 u2 + M13 u3 + ikq_z Δ)²
  = (1/(2Δ)) (M22 u2² − 2M23 u2 u3) − (1/(2M11 Δ)) (−M12 u2 + M13 u3 + ikq_z Δ)²
  = (1/(2M11 Δ)) [u2²(M11 M22 − M12²) − 2u2(M11 M23 − M12 M13) u3 + 2ikq_z M12 u2 Δ]
  − (1/(2M11 Δ)) (M13 u3 + ikq_z Δ)².   (D.47)

But

M11 M22 − M12² = ρ33 Δ   (D.48)

and

M11 M23 − M12 M13 = ρ23 Δ.   (D.49)

Therefore,

Q2 − (1/(2M11 Δ)) (−M12 u2 + M13 u3 + ikq_z Δ)²
  = (1/(2M11)) [ρ33 u2² − 2u2(ρ23 u3 − ikq_z M12)] − (1/(2M11 Δ)) (M13 u3 + ikq_z Δ)²
  = (1/(2M11)) (√ρ33 u2 − (ρ23 u3 − ikq_z M12)/√ρ33)²
  − (1/(2M11 Δ)) [(M13 u3 + ikq_z Δ)² + (ρ23 u3 − ikq_z M12)² Δ/ρ33].   (D.50)

Finally, to complete the square for u3, we combine Q3 with the second term on the right side of (D.50). Thus

Q3 − (1/(2M11 Δ)) [(M13 u3 + ikq_z Δ)² + (ρ23 u3 − ikq_z M12)² Δ/ρ33]
  = (1/(2M11 Δ)) {(M11 M33 − M13² − ρ23² Δ/ρ33) u3² − 2ikq_z Δ [M13 − (ρ23/ρ33) M12] u3 + k²q_z² (Δ² + Δ M12²/ρ33)}.   (D.51)

We observe that

M11 M33 − M13² = ρ22 Δ,   (D.52)

M13 − (ρ23/ρ33) M12 = −(ρ13/ρ33) M11,   (D.53)

Δ + M12²/ρ33 = (M11/ρ33)(ρ11 ρ33 − ρ13²),   (D.54)

and

M11 M33 − M13² − (ρ23²/ρ33) Δ = (Δ/ρ33)(ρ22 ρ33 − ρ23²) = (M11/ρ33) Δ.   (D.55)

Using (D.52)–(D.55) in (D.51), we get

Q3 − (1/(2M11 Δ)) [(M13 u3 + ikq_z Δ)² + (ρ23 u3 − ikq_z M12)² Δ/ρ33]
  = (1/(2ρ33)) (u3² + 2ikq_z ρ13 u3) + (k²q_z²/(2ρ33)) (ρ11 ρ33 − ρ13²)
  = (1/(2ρ33)) (u3² + 2ikq_z ρ13 u3 − k²q_z² ρ13²) + (k²q_z²/(2ρ33)) ρ11 ρ33
  = (1/(2ρ33)) (u3 + ikq_z ρ13)² + (k²q_z²/2) ρ11.   (D.56)

We have thus succeeded in putting E(u) into the desired form of squares in u1, u2, and u3. From (D.46), (D.50), and (D.56) we get

(D.57)
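The completing-the-square chain (D.45)–(D.56), and the cofactor identities (D.48), (D.49), and (D.52) used along the way, can be spot-checked numerically: the direct form E(u) = ikq_z u1 + Q(u)/2 must equal the sum of the three squares plus k²q_z²ρ11/2. All numerical values in this sketch (ours) are arbitrary; the covariance entries just need to form a positive-definite matrix:

```python
import math

# covariance entries (arbitrary, but positive-definite)
r11, r12, r13 = 1.0, 0.4, 0.2
r22, r23, r33 = 0.9, 0.3, 0.7

# minors of the covariance matrix and its determinant
M11 = r22 * r33 - r23 ** 2
M12 = r12 * r33 - r23 * r13
M13 = r12 * r23 - r22 * r13
M22 = r11 * r33 - r13 ** 2
M23 = r11 * r23 - r12 * r13
M33 = r11 * r22 - r12 ** 2
Delta = r11 * M11 - r12 * M12 + r13 * M13

kq = 0.9                       # k q_z
u1, u2, u3 = 0.7, -0.4, 1.2    # arbitrary test point

# direct form: E(u) = i k q_z u1 + Q(u)/2, with Q(u) from (D.42)
Q = (M11 * u1 ** 2 - 2 * M12 * u1 * u2 + 2 * M13 * u1 * u3
     + M22 * u2 ** 2 - 2 * M23 * u2 * u3 + M33 * u3 ** 2) / Delta
E_direct = 1j * kq * u1 + Q / 2

# sum-of-squares form read off from (D.46), (D.50), and (D.56)
S1 = (math.sqrt(M11) * u1
      + (-M12 * u2 + M13 * u3 + 1j * kq * Delta) / math.sqrt(M11)) ** 2 / (2 * Delta)
S2 = (math.sqrt(r33) * u2
      - (r23 * u3 - 1j * kq * M12) / math.sqrt(r33)) ** 2 / (2 * M11)
S3 = (u3 + 1j * kq * r13) ** 2 / (2 * r33)
E_squares = S1 + S2 + S3 + kq ** 2 * r11 / 2
```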

The integrals for u1, u2, and u3 in (D.58) can be evaluated with the aid of (D.13) and (D.15). Thus, the first integral, for u1, is

(D.59) where

For the second integral, for u2,

(D.60) we set

(D.61) so that

because

Substituting (D.59) and (D.62) into (D.58), we obtain

Let

(D.64)

Then

e^{(k²q_z²/2)ρ11} I23 = (1/√π)(ρ23/ρ33) [2ρ33 Γ(1 + 1/2) − k²q_z² ρ13² Γ(1/2)] − (1/√π)(k²q_z² M12 ρ13/ρ33) Γ(1/2)
  = (ρ23/ρ33)(ρ33 − k²q_z² ρ13²) − (k²q_z²/ρ33) ρ13 M12
  = ρ23 − (k²q_z²/ρ33)(ρ23 ρ13 + M12) ρ13.   (D.65)

Since M12 + ρ23 ρ13 = ρ12 ρ33, we finally obtain

I23 = (ρ23 − k²q_z² ρ12 ρ13) e^{−(k²q_z²/2) ρ11}.   (D.66)

From (D.66) we deduce

I_ij = ⟨u_i u_j e^{−ikq_z u1}⟩ = (ρ_ij − k²q_z² ρ_{1i} ρ_{1j}) e^{−(k²q_z²/2) ρ11}   (D.67)

for i and j = 1, 2, 3.

Appendix E

The Cauchy Integral and Potential Functions

Consider in the complex plane an open subset D_i enclosed by a smooth boundary curve L. The region exterior to L is D_e. Let φ(t), t ∈ L, be a real-valued analytic function. Then the Cauchy integral on φ(t) is

φ(z) = W(x, y) + iV(x, y) = { (1/(2πi)) ∮_L dt φ(t)/(t − z)   for z (= x + iy) ∈ D_i;
                              0   for z ∈ D_e,   (E.1)

where 1/(t − z) is the Cauchy kernel and z ∈ D_i ∪ D_e is a fixed point. (E.1) states that the analytic function φ(t) on the boundary L is mapped into itself everywhere in D_i and vanishes in D_e. This remarkable property is attributable to the Cauchy kernel, which also plays a central role in singular integrals of Carleman type (cf. F. Tricomi [E.2]) and in a class of Hilbert and Riemann problems (cf. N. Muskhelishvili [E.1]). The main purpose of this appendix is to show that the Cauchy integral (E.1) is made up of a double-layer potential, denoted by W(x, y), and a single-layer potential, denoted by V(x, y), in two dimensions. Geometric relations for the quantities involved in the Cauchy kernel are sketched in Fig. E.1. We connect the fixed point z in D_i to a variable point t on L by the distance vector ρ. The unit normal to L at t is denoted by n̂ and is drawn into the interior region D_i enclosed by L. The position of t on L is also denoted by the arc length s measured from a fixed point on L. We begin by expressing the Cauchy kernel in polar coordinates and then decompose dt/(t − z) into its real and imaginary parts. Let

t − z = ρ e^{iθ},   (E.2)

where


Figure E.1: Geometric relations for various quantities involved in the Cauchy kernel 1/(t − z).

ρ = |t − z|,   (E.3)
θ = arg(t − z).   (E.4)

Then

log(t − z) = log ρ + iθ.   (E.5)

Differentiation of (E.5) with respect to t, holding z constant, yields

dt/(t − z) = dρ/ρ + i dθ,   (E.6)

and substitution of (E.6) into (E.1) yields

φ(z) = W(x, y) + iV(x, y) = (1/(2π)) ∮_L dθ φ(s) − (i/(2π)) ∮_L (dρ/ρ) φ(s),   (E.7)

where t is replaced by s. Now,

dθ = (dθ/ds) ds = (∂ log ρ/∂n) ds   (E.8)

as a Cauchy–Riemann relation. Use of (E.8) in the first integral in (E.7) yields

W(x, y) = ∮_L ds K_d(ρ) φ(s),   (E.9)

where

K_d(ρ) = (1/(2π)) ∂ log ρ/∂n   (E.10)

is the kernel of a double-layer potential in two dimensions. We have shown that the real part of the Cauchy integral in (E.1) is a double-layer potential with density function φ(s), s ∈ L. Next we integrate the second integral in (E.7). Thanks to the fact that the density function φ(s) is differentiable, because it is analytic, and that the boundary curve L is smooth and closed, we obtain

V(x, y) = −∮_L ds g₀(ρ) dφ(s)/ds,   (E.11)

where

g₀(ρ) = (1/(2π)) log ρ   (E.12)

cp(p) = t ds [Kd(p)cp(S) - go(p) o~~s) ]. (E.13) Comparison of (E.13) with (E.7) shows that the only difference between the Green's second identity in two-dimension and the Cauchy integral for cp(s) is that the single-layer potential in the former has the density function -ocp(s)jon, which is a normal derivative of cp(s), whereas the single-layer potential in the latter has the density function -idcp(s)jds, which is a tan• gential derivative of cp( s)' apart from the complex number i. Except for this 376 The Cauchy Integral and Potential Functions difference, both formulas are remarkably similar as integral representations for cp(p). We now consider the limit values of the Cauchy integral (E.1) as we approach a point to E L from the exterior side denoted by "+" and from the interior side denoted by "-" of the curve L. For that purpose we rewrite (E.1) in the form

φ(z) = φ(t0) (1/(2πi)) ∮_L dt/(t − z) + (1/(2πi)) ∮_L dt [φ(t) − φ(t0)]/(t − z)
  = φ(t0) + (1/(2πi)) ∮_L dt [φ(t) − φ(t0)]/(t − z),   z ∈ D_i,   (E.14)

where we used

∮_L dt/(t − z) = { 2πi   for z ∈ D_i;
                   0   for z ∈ D_e.   (E.15)

For z ∈ D_e, (E.14) is replaced by the second integral alone:

φ(z) = (1/(2πi)) ∮_L dt [φ(t) − φ(t0)]/(t − z).   (E.16)

We now let z ∈ D_e → L and z ∈ D_i → L in (E.14) and (E.16). We get

φ⁺(t0) = (1/2) φ(t0) + (1/(2πi)) P.V. ∮_L dt φ(t)/(t − t0),   (E.17)

φ⁻(t0) = −(1/2) φ(t0) + (1/(2πi)) P.V. ∮_L dt φ(t)/(t − t0),   (E.18)

where P.V. stands for the Cauchy principal value. From (E.17) and (E.18) we obtain the Plemelj formulas

φ⁺(t0) − φ⁻(t0) = φ(t0),   (E.19)

(1/2)[φ⁺(t0) + φ⁻(t0)] = (1/(2πi)) P.V. ∮_L dt φ(t)/(t − t0).   (E.20)

(E.19) represents the jump discontinuity which the Cauchy integral (E.1) suffers upon crossing the closed boundary L, and (E.20) gives the average boundary limit value (or the direct value, in the parlance of potential theory) of φ⁺(t0) and φ⁻(t0).
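The interior/exterior behavior in (E.1) and (E.15), which underlies the Plemelj formulas, can be checked numerically on the unit circle. The sketch below is ours: it takes analytic boundary data φ(t) = t² and discretizes the contour with the trapezoid rule, which converges geometrically for periodic analytic integrands.

```python
import cmath
import math

def cauchy_integral(phi, z, n=2000):
    """(1/2 pi i) times the integral of phi(t)/(t - z) over |t| = 1,
    evaluated with the trapezoid rule."""
    total = 0j
    for m in range(n):
        th = 2 * math.pi * m / n
        t = cmath.exp(1j * th)
        dt = 1j * t * (2 * math.pi / n)   # dt = i e^{i theta} d theta
        total += phi(t) / (t - z) * dt
    return total / (2j * math.pi)

phi = lambda t: t * t                   # analytic boundary data
z_in, z_out = 0.3 + 0.2j, 1.7 - 0.5j
inside = cauchy_integral(phi, z_in)     # reproduces phi(z) in D_i
outside = cauchy_integral(phi, z_out)   # vanishes in D_e
```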

By taking the real and imaginary parts of the Plemelj formulas, we get

(E.21)

(E.22)

where ρ0 = |t − t0| and s denotes the arc length of t ∈ L. It should be noted that the integrals in (E.21) and (E.22) are no longer Cauchy principal-value integrals but ordinary Riemann integrals, since

(∂/∂n) log(1/ρ0) ds

is the solid angle subtended by the elementary arc ds at t ∈ L and hence is bounded for the smooth boundary curve L. (E.21) and (E.22) are the limiting values of the double-layer potential on the boundary, which we derived in Section 7.3.2, Chapter 7; they show very clearly the jump discontinuity which a double-layer potential suffers upon crossing the boundary L.

References

E.1 N. I. Muskhelishvili, Singular Integral Equations, Noordhoff N.V., Groningen, Holland, 1958.

E.2 F. G. Tricomi, Integral Equations, Interscience Publishers, New York, NY, 1957.

Appendix F

Decomposition of a Plane Wave

The decomposition of a plane wave field into an incoming and an outgoing spherical wave field, given in (10.10), Chapter 10, is derived here. We begin with the well-known expansion of a plane wave field

e^{i k_i · r} = e^{i k k̂_i · r} = Σ_{l=0}^{∞} i^l (2l + 1) j_l(kr) P_l(cos γ),   (F.1)

where j_l denotes the spherical Bessel function of order l, P_l the Legendre function of order l, and cos γ = k̂_i · r̂ = k̂_i · k̂ (k̂ ≡ r̂). Since

P_l(cos γ) = (4π/(2l + 1)) Σ_{m=−l}^{l} Y_l^m(k̂_i) Y_l^{m*}(k̂),   (F.2)

where Y_l^m is the spherical harmonic and the superscript "*" denotes the complex conjugate (cf., for example, A. Messiah [F.1]), (F.1) may be written as

e^{i k_i · r} = 4π Σ_{l=0}^{∞} Σ_{m=−l}^{l} i^l j_l(kr) Y_l^m(k̂_i) Y_l^{m*}(k̂).   (F.3)
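The expansion (F.1) can be verified numerically at a moderate argument. The sketch below is ours: it generates j_l by downward (Miller) recurrence normalized with j_0 = sin x / x, generates P_l by the standard three-term recurrence, and sums the series. The values of kr, cos γ, and the cutoff l_max are arbitrary.

```python
import cmath
import math

def spherical_jn(lmax, x):
    """j_0..j_lmax via downward (Miller) recurrence, normalized with j_0."""
    top = lmax + 20
    jj = [0.0] * (top + 2)
    jj[top] = 1e-30                      # arbitrary small seed
    for l in range(top, 0, -1):
        jj[l - 1] = (2 * l + 1) / x * jj[l] - jj[l + 1]
    scale = (math.sin(x) / x) / jj[0]
    return [jj[l] * scale for l in range(lmax + 1)]

def legendre(lmax, c):
    """P_0..P_lmax via (l+1) P_{l+1} = (2l+1) c P_l - l P_{l-1}."""
    p = [1.0, c]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * c * p[l] - l * p[l - 1]) / (l + 1))
    return p[: lmax + 1]

kr, cosg, lmax = 5.0, 0.37, 30
j = spherical_jn(lmax, kr)
P = legendre(lmax, cosg)
partial = sum((1j) ** l * (2 * l + 1) * j[l] * P[l] for l in range(lmax + 1))
exact = cmath.exp(1j * kr * cosg)
```

Terms with l appreciably larger than kr decay super-exponentially, so a cutoff a little above kr already reproduces the plane wave to high accuracy.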

For kr → ∞, the asymptotic form of j_l(kr) is

j_l(kr) = (1/2)[h_l^{(1)}(kr) + h_l^{(2)}(kr)] → (1/(2kr)) [(−i)^{l+1} e^{ikr} + i^{l+1} e^{−ikr}],   (F.4)

where h_l^{(1)} and h_l^{(2)} are the spherical Hankel functions of the first and the second kind of order l, respectively. Thus


lim_{kr→∞} e^{i k_i · r} ≈ (2π/(kr)) [−i e^{ikr} Σ_{l=0}^{∞} Σ_{m=−l}^{l} Y_l^m(k̂_i) Y_l^{m*}(k̂)
  + i e^{−ikr} Σ_{l=0}^{∞} Σ_{m=−l}^{l} (−1)^l Y_l^m(k̂_i) Y_l^{m*}(k̂)].   (F.5)

Now {Y_l^m} form a complete orthonormal set, with the completeness relations

Σ_{l=0}^{∞} Σ_{m=−l}^{l} Y_l^m(k̂_i) Y_l^{m*}(k̂) = δ(k̂_i − k̂)   (F.6)

and

Σ_{l=0}^{∞} Σ_{m=−l}^{l} (−1)^l Y_l^m(k̂_i) Y_l^{m*}(k̂) = δ(k̂_i + k̂).   (F.7)

Use of (F.6) and (F.7) in (F.5) yields the desired decomposition in the form

lim_{kr→∞} e^{i k_i · r} = (2πi/k) [δ(k̂_i + k̂) e^{−ikr}/r − δ(k̂_i − k̂) e^{ikr}/r].   (F.8)
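The asymptotic form (F.4) used in this derivation can be sanity-checked for a single order; for l = 2 the spherical Bessel function has the closed form j_2(x) = (3/x³ − 1/x) sin x − (3/x²) cos x, and the right side of (F.4) reduces to −sin x/x. The value of x below is an arbitrary large argument (sketch ours):

```python
import cmath
import math

x, l = 100.0, 2   # large argument kr and a sample order

# closed form of j_2
j2 = (3 / x ** 3 - 1 / x) * math.sin(x) - (3 / x ** 2) * math.cos(x)

# right side of (F.4): (1/2x)[(-i)^{l+1} e^{ix} + i^{l+1} e^{-ix}]
asym = ((-1j) ** (l + 1) * cmath.exp(1j * x)
        + 1j ** (l + 1) * cmath.exp(-1j * x)) / (2 * x)
```

The agreement is O(1/x²), consistent with the next term of the asymptotic series.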

References

F.1 A. Messiah, Quantum Mechanics, Volume I, John Wiley and Sons, Inc., New York, NY, 1968. See pp. 494–497, Appendix B.

Walker, J., 124 wave admittance, 12 wave impedance, 12 weakly inhomogeneous medium, 37 weakly singualr, 201 Weierstrass approximation, 224 well-posed selfadjoint boundary value problem, 245 Werner, P., 200, 212, 229, 246, 249 Weyl, H. 235 Widder, D. V., 208 Wilcox, C. H., 41 Wolf, E., 63, 77, 83, 125, 130 for In(ka) and H~l) (ka),239, 309