<<

1 Complex-Valued Matrix Differentiation:

Techniques and Key Results

Are Hjørungnes, Senior Member, IEEE, and David Gesbert, Senior Member, IEEE.

Abstract

A systematic theory is introduced for finding the of complex-valued matrix functions with respect

to a complex-valued matrix and the complex conjugate of this variable. In the framework introduced,

the differential of the complex-valued matrix is used to identify the derivatives of this function. Matrix

differentiation results are derived and summarized in tables which can be exploited in a wide range of signal

processing related situations. The Hessian matrix of a scalar complex function is also defined, and it is shown how

it can be obtained from the second-order differential of the function. The methods given are general such that many

other results can be derived using the framework introduced, although only important cases are treated.

Keywords: Complex differentials, non-analytical complex functions, complex matrix derivatives, complex Hessian.

I. INTRODUCTION In many engineering problems, the unknown parameters are complex-valued vectors and matrices and, often, the task of the system designer is to find the values of these complex parameters which optimize a chosen criterion function. Examples of areas where these types of optimization problems might appear, can be in telecommunications, where digital filters can contain complex-valued coefficients [1], electric circuits [2], control theory [3], adaptive

filters [4], acoustics, optics, mechanical vibrating systems, heat conduction, fluid flow, and electrostatics [5]. For solving this kind of optimization problems, one approach is to find necessary conditions for optimality. When a scalar real-valued function depends on a complex-valued matrix parameter, the necessary conditions for optimality

Corresponding author: Are Hjørungnes is with UniK - University Graduate Center, University of Oslo, Instituttveien 25, P. O. Box 70,

N-2027 Kjeller, Norway, email: [email protected].

David Gesbert is with Mobile Communications Department, Eurécom Institute, 2229 Route des Crêtes, BP 193, F-06904 Sophia Antipolis

Cédex, France, email: [email protected].

This work was supported by the Research Council of Norway and the French Ministry of Foreign Affairs through the Aurora project entitled "Optimization of Broadband Wireless Communications Network". 2 can be found by either setting the of the function with respect to the complex-valued matrix parameter or its complex conjugate to zero. Differentiation results are well-known for certain classes of functions, e.g., quadratic functions, but can be tricky for others. This paper provides the tools for finding derivatives in a systematic way.

In an effort to build adaptive optimization algorithms, it will also be shown that the direction of maximum rate of change of a real-valued scalar function, with respect to the complex-valued matrix parameter, is given by the derivative of the function with respect to the complex conjugate of the complex-valued input matrix parameter. Of course, this is a generalization of a well-known result for scalar functions of vector variables. A general framework is introduced here showing how to find the derivative of complex-valued scalar-, vector-, or matrix functions with respect to the complex-valued input parameter matrix and its complex conjugate. The main contribution of this paper lies in the proposed approach for how to obtain derivatives in a way that is both simple and systematic, based on the so-called differential of the function. In this paper, it is assumed that the functions are differentiable with respect to the complex-valued parameter matrix and its complex conjugate, and it will be seen that these two parameter matrices should be treated as independent when finding the derivatives, as is classical for scalar variables.

It is also shown how the Hessian matrix, i.e., second-order derivate, of a complex scalar function can be calculated.

The proposed theory is useful when solving numerous problems which involve optimization when the unknown parameter is a complex-valued matrix.

The problem at hand has been treated for real-valued matrix variables in [6], [7], [8], [9], [10]. Four additional references that give a brief treatment of the case of real-valued scalar functions which depend complex-valued vectors are Appendix B of [11], Appendix 2.B in [12], Subsection 2.3.10 of [13], and the article [14]. The article [15] serves as an introduction to this area for complex-valued scalar functions with complex-valued argument vectors.

Results on complex differentiation theory is given in [16], [17] for differentiation with respect to complex-valued scalars and vectors, however, the more general matrix case is not considered. Examples of problems where the unknown matrix is a complex-valued matrix are wide ranging including precoding of MIMO systems [18], linear equalization design [19], array signal processing [20] to only cite a few.

Some of the most relevant applications to signal and communication problems are presented here, with key results being highlighted and other illustrative examples are listed in tables. The complex Hessian matrix is defined for 3 TABLE I CLASSIFICATION OF FUNCTIONS. ∗ ∗ N×1 ∗ N×Q

Function type Scalar variables z, z ∈ C Vector variables z, z ∈ C Matrix variables Z, Z ∈ C

¡ ¡ ¡ ∗ ∗ ∗ Scalar function f ∈ C f z, z f z, z f Z , Z

N×1 N×1 N×Q N×Q

f : C × C → C f : C × C → C f : C × C → C

¡ ¡ ¡ M×1 ∗ ∗ ∗ Vector function f ∈ C f z, z f z, z f Z , Z

M×1 N×1 N×1 M×1 N×Q N×Q M×1

f : C × C → C f : C × C → C f : C × C → C

¡ ¡ ¡ M×P ∗ ∗ ∗ Matrix function F ∈ C F z, z F z, z F Z , Z

M×P N×1 N×1 M×P N×Q N×Q M×P F : C × C → C F : C × C → C F : C × C → C complex scalar functions, and it is shown how it can be obtained from the second-order differential of the functions.

Hessian matrices can be used to check if a stationary is a minimum, maximum, or a saddle point.

The rest of this paper is organized as follows: Section II contains a discussion on the differences between analytical functions, which are usually studied in mathematical courses for engineers, and non-analytical functions, which are often encountered when dealing with engineering problems of complex variables. In Section III, the complex differential is introduced, and based on this differential, the definition of the derivatives of complex-valued matrix function with respect to the complex-valued matrix argument and its complex conjugate is given in Section IV. The key procedure showing how the derivatives can be found from the differential of a function is also presented in

Section IV. Section V contains important results such as the , equivalent conditions for finding stationary points, and in which direction the function has the maximum rate of change. In Section VI, several key results are placed in tables and some results are derived for various cases with high relevance for signal processing and communication problems. The Hessian matrix of scalar complex-valued functions dependent on complex matrices is defined in Section VII, and it is shown how it can be obtained. Section VIII contains some conclusions. Some of the proofs are given in the appendices.

Notation: The following conventions are always used in this article: Scalar quantities (variables z or functions f) are denoted by lowercase symbols, vector quantities (variables z or functions f) are denoted by lowercase boldface symbols, and matrix quantities (variables Z or functions F ) are denoted by capital boldface symbols. The types of functions used throughout this paper are classified in Table I. From the table, it is seen that all the functions √ depend on a complex variable and the complex conjugate of the same variable. Let j = −1 be the imaginary unit. Let the symbol z = x + jy denote a complex variable, where the real and imaginary parts are denoted by x and y, respectively. The real and imaginary operators return the real and imaginary parts of the input matrix, 4 respectively. These operators are denoted by Re{·} and Im{·}.IfZ ∈ C N×Q is a complex-valued1 matrix, then

Z =Re{Z} + j Im {Z}, and Z∗ =Re{Z}−j Im {Z}, where Re {Z}∈RN×Q, Im {Z}∈RN×Q, and the operator (·)∗ denotes complex conjugate of the matrix it is applied to. The real and imaginary operators can be

1 ∗ 1 ∗ expressed as Re {Z} = 2 (Z + Z ) and Im {Z} = 2j (Z − Z ).

II. ANALYTICAL VERSUS NON-ANALYTICAL FUNCTIONS Mathematical courses on complex functions for engineers often only involve the analysis of analytical functions:

Definition 1: Let D ⊆ C be the domain of definition of the function f : D → C. The function f, which depends f(z + Δz) − f(z) on a complex scalar variable z,isananalytical function in the domain D [5] if lim exists Δz→0 Δz for all z ∈ D.

If f(z) satisfies the Cauchy-Riemann equations [5], then it is analytical. A function that is complex differentiable is also named analytic, holomorphic,orregular. The Cauchy-Riemann condition can be formulated as one equation

∂ ∗ as ∂z∗ f =0, where the derivative of f with respect to z and z, respectively, is found by treating the variable z

∗ ∂ ∂ ∗ and z as a constant, i.e., ∂z∗ z =0and ∂z z =0, see also Definition 2 where this kind of partial derivatives will

∂ be defined with more details. From ∂z∗ f =0, it is seen that any analytical function f is not dependent on the variable z∗. This can also be seen from Theorem 1, page 804 in [5], which states that any analytical function f(z) can be written as an infinite power series2 with non-negative exponents of the complex variable z, and this power series is called the Taylor series. The series does not contain any terms that depend on z ∗. The derivative of a complex-valued function in mathematical courses of for engineers is often defined only for analytical functions. However, in many engineering problems, the functions of interest are often not analytic, since they are often real-valued functions. Conversely, if a function is only dependent on z, like an analytical function, and not implicitly or explicitly dependent on z∗, then this function cannot in general be real-valued, since the function can only be real if the imaginary parts of z can be eliminated, and this is handled by the terms that depend on z∗. An alternative treatment of how to find the derivative of real functions dependent on complex variables than the one used for analytical function is needed. In this article, a systematic theory is provided for finding the derivative of non-analytical functions.

1R and C are the sets of the real and complex numbers, respectively. ∞ 2 n A power series in the variable z ∈ C is an infinite sum of the form an (z − z0) , where an,z0 ∈ C [5]. n=0 5

A. Euclidean Distance

In engineering problems, the squared Euclidean distance is often used. Let f(z)=|z| 2 = zz∗. If the traditional definition of the derivative given in Definition 1 is used, then the function f is not differentiable because:

2 2 ∗ ∗ ∗ f(z0 + Δz) − f(z0) |z0 + Δz| −|z0| (Δz) z + z0(Δz) + Δz(Δz) lim = lim = lim 0 . (1) Δz→0 Δz Δz→0 Δz Δz→0 Δz

This does not exist because different values are found depending on how Δz is approaching 0. Let Δz =

∗ 2 j(Δy)z0 −jz0Δy+(Δy) Δx + jΔy. Firstly, let Δz approach zero such that Δx =0, then the last fraction in (1) is jΔy =

∗ ∗ z0 − z0 − jΔy, which approaches z0 − z0 = −2j Im{z0} when Δy → 0. Secondly, let Δz approach zero such that

∗ 2 (Δx)z0 +z0Δx+(Δx) ∗ ∗ Δy =0, then the last fraction in (1) is Δx = z0 + z0 + Δx, which approaches z0 + z0 =2Re{z0} when Δx → 0. For any non-zero z0 the following holds: Re{z0} = −j Im{z0}, and therefore, the function f(z)=|z|2 is not differentiable when using the commonly encountered definition given in Definition 1.

There are two alternative ways [13] to find the derivative of a scalar real-valued function f ∈ R with respect to the complex-valued matrix parameter Z ∈ CN×Q. The standard way is to rewrite f as a function of the real X and imaginary parts Y of the complex variable Z, and then to find the derivatives of the rewritten function with respect to these two independent real variables, X and Y , separately. Notice that NQ independent complex parameters in Z correspond to 2NQ independent real parameters in X and Y . In this paper, we bring the reader’s attention onto an alternative approach that can lead to a simpler derivation. In this approach, one treats the differentials of the variables variables Z and Z ∗ as independent, in the way that will be shown by Lemma 1, see also [15]. It will be shown later, that both the derivative of f with respect to Z and Z ∗ can be identified by the differential of f.

B. Formal Partial Derivatives

∗ Definition 2: The derivatives with respect to z and z of f(z0) are called the formal partial derivatives of f at z0 ∈ C. Let z = x + jy, where x, y ∈ R, then the formal derivatives, or Wirtinger derivatives [21], are defined as: ∂ 1 ∂ ∂ ∂ 1 ∂ ∂ f(z0)  f(z0) − j f(z0) and f(z0)  f(z0)+j f(z0) . (2) ∂z 2 ∂x ∂y ∂z∗ 2 ∂x ∂y

∂ ∂ ∗ Alternatively, when finding ∂zf(z0) and ∂z∗ f(z0), the parameters z and z , respectively, are treated as independent variables, see [15] for more details. 6

∂ ∂ From Definition 2, it follows that ∂xf(z0) and ∂yf(z0) can be found as: ∂ ∂ ∂ ∂ ∂ ∂ f(z0)= f(z0)+ f(z0) and f(z0)=j f(z0) − f(z0) . (3) ∂x ∂z ∂z∗ ∂y ∂z ∂z∗

If f is dependent on several variables, Definition 2 can be extended. Later in this article, it will be shown how the derivatives, with respect to the complex-valued matrix parameter and its complex conjugate, of all the function types given in Table I can be identified from the differentials of these functions.

Example 1: Let f : C × C → R be f(z,z∗)=zz∗. f is differentiable3 with respect to both variables z and

∗ ∂ ∗ ∗ ∂ ∗ ∗ z , and ∂z f(z,z )=z and ∂z∗ f(z,z )=z. When the complex variable z and its complex conjugate twin z are treated as independent variables [15] when finding the derivatives of f, then f is differentiable in both of these variables. However, as shown earlier in this section, the same function is not differentiable when studying analytical functions defined according to Definition 1.

III. COMPLEX DIFFERENTIALS The differential has the same size as the matrix it is applied to. The differential can be found component-wise, that is, (dZ)k,l = d (Z)k,l. A procedure that can often be used for finding the differentials of a complex matrix function4 F (Z0, Z1) is to calculate the difference

F (Z0 + dZ0, Z1 + dZ1) − F (Z0, Z1)=First-order(dZ0,dZ1)+Higher-order(dZ0,dZ1), (4) where First-order(·, ·) returns the terms that depend on either dZ 0 or dZ1 of the first order, and Higher-order(·, ·) returns the terms that depend on the higher order terms of dZ 0 and dZ1. The differential is then given by

First-order(·, ·), i.e., the first order term of F (Z0 +dZ0, Z1 +dZ1)−F (Z0, Z1). As an example, let F (Z 0, Z1)=

Z0Z1. Then the difference in (4) can be developed and readily expressed as: F (Z 0+dZ0, Z1+dZ1)−F (Z0, Z1)=

Z0dZ1 +(dZ0)Z1 +(dZ0)(dZ1). The differential of Z0Z1 can then be identified as all the first-order terms on either dZ0 or dZ1 as dZ0Z1 = Z0dZ1 +(dZ0)Z1.

Definition 3: The Moore-Penrose inverse of Z ∈ CN×Q is denoted by Z+ ∈ CQ×N , and it is defined through the following four relations [22]:

H ZZ+ = ZZ+, (5)

3Here, Definition 2 is used for finding the formal partial derivatives. 4The indexes are chosen to start with 0 everywhere in this article. 7

H Z+Z = Z+Z, (6)

ZZ+Z = Z, (7)

Z+ZZ+ = Z+, (8) where the operator (·)H is the Hermitian operator, or the complex conjugate transpose.

Let ⊗ and  denote the Kronecker and Hadamard product [23], respectively. Some of the most important rules on complex differentials are listed in Table II, assuming A, B, and a to be constants, and Z, Z 0, and Z1 to be complex- valued matrix variables. The vectorization operator vec(·) stacks the columns vectors of the argument matrix into a long column vector in chronological order [23]. The differentiation rule of the reshaping operator reshape(·) in

Table II is valid for any linear reshaping5 operator reshape(·) of the matrix, and examples of such operators are the transpose (·)T or vec(·). Some of the basic differential results in Table II can be derived by means of (4), and others can be derived by generalizing some of the results found in [7], [9] to the complex differential case.

Differential of the Moore-Penrose Inverse: The differential of the real-valued Moore-Penrose inverse can be found in in [7], [8], but the complex-valued version is derived here.

Proposition 1: Let Z ∈ CN×Q, then

+ + + + + H H + + H + H + dZ = −Z (dZ)Z + Z (Z ) (dZ ) IN − ZZ + IQ − Z Z (dZ )(Z ) Z . (9)

The proof of Proposition 1 can be found in Appendix I.

From Table II, the following four equalities follows dZ = d Re {Z} + jd Im {Z}, Z∗ = d Re {Z}−jd Im {Z},

1 ∗ 1 ∗ Re {Z} = 2 (dZ + dZ ), and Im {Z} = 2j (dZ − dZ ).

The following two lemmas are used to identify the first- and second-order derivatives later in the article. The real variables Re {Z} and Im {Z} are independent of each other and hence are their differentials. Although the complex variables Z and Z ∗ are related, their differentials are linearly independent in the following way:

N×Q M×NQ ∗ N×Q Lemma 1: Let Z ∈ C and let Ai ∈ C .IfA0d vec(Z)+A1d vec(Z )=0M×1 for all dZ ∈ C , then Ai = 0M×NQ for i ∈{0, 1}.

5The output of the reshape operator has the same number of elements as the input, but the shape of the output might be different, so reshape(·) performs a reshaping of its input argument. 8

TABLE II IMPORTANT RESULTS FOR COMPLEX DIFFERENTIALS.

Function Differential

A 0

aZ adZ

AZB A(dZ )B

Z 0 + Z 1 dZ 0 + dZ 1

Tr {Z } Tr {dZ }

Z 0Z 1 (dZ 0)Z 1 + Z 0(dZ 1)

Z 0 ⊗ Z 1 (dZ 0) ⊗ Z 1 + Z 0 ⊗ (dZ 1)

Z 0  Z 1 (dZ 0)  Z 1 + Z 0  (dZ 1)

−1 −1 −1

Z −Z (dZ)Z Ó Ò −1

det(Z) det(Z)Tr Z dZ Ó Ò −1 ln(det(Z )) Tr Z dZ

reshape(Z ) reshape(dZ)

∗ ∗ Z (dZ )

H H

Z (dZ )

   + + + + + H H  + + H + H + Z −Z (dZ )Z + Z (Z ) (dZ ) I N − ZZ + I Q − Z Z (dZ )(Z ) Z

The proof of Lemma 1 can be found in Appendix II. N×Q NQ×NQ T T ∗ Lemma 2: Let Z ∈ C and let Bi ∈ C .If d vec (Z) B0d vec(Z)+ d vec (Z ) B1d vec(Z)+ T ∗ ∗ N×Q T T d vec (Z ) B2d vec(Z )=0for all dZ ∈ C , then B0 = −B0 , B1 = 0NQ×NQ, and B2 = −B2 .

The proof of Lemma 2 can be found in Appendix III.

IV. DEFINITION OF THE DERIVATIVE WITH RESPECT TO COMPLEX-VALUED MATRICES The most general definition of the derivative is given here from which the definitions for less general cases follow and they will later be given in an identification table which shows how the derivatives can be obtained from the differential of the function.

Definition 4: Let F : CN×Q × CN×Q → CM×P . Then the derivative of the matrix function F (Z, Z ∗) ∈ CM×P

N×Q ∗ M×P with respect to Z ∈ C is denoted DZF , and the derivative of the matrix function F (Z, Z ) ∈ C with

∗ N×Q respect to Z ∈ C is denoted DZ∗ F and the size of both these derivatives is MP × NQ. The derivatives

DZF and DZ∗ F are defined by the following differential expression:

∗ d vec(F )=(DZF ) d vec(Z)+(DZ∗ F ) d vec(Z ). (10)

∗ ∗ ∗ DZF (Z, Z ) and DZ∗ F (Z, Z ) are called the Jacobian matrices of F with respect to Z and Z , respectively.

Remark 1: Definition 4 is a generalization of Definition 1, page 173 in [7] to include complex matrices. In [7], several alternative definitions of the derivative of real-valued functions with respect to a matrix are discussed, and 9 TABLE III IDENTIFICATION TABLE. ∗ ∗ ∗

Function type Differential Derivative with respect to z, z,orZ Derivative with respect to z , z ,orZ Size of derivatives

¡ ¡ ¡ ∗ ∗ ∗ ∗

f z, z df = a0dz + a1dz Dz f z, z = a0 Dz∗ f z, z = a1 1 × 1

¡ ¡ ¡ ∗ ∗ ∗ ∗

f z, z df = a0dz + a1dz Dz f z, z = a0 Dz∗ f z, z = a1 1 × N

¡ ¡ ¡ ∗ T T ∗ ∗ T ∗ T

f Z , Z df =vec (A0)d vec(Z )+vec (A1)d vec(Z ) DZ f Z, Z =vec (A0) DZ∗ f Z, Z =vec (A1) 1 × NQ

Ò Ó

¡ ¡ ¡ f Z , Z ∗ df AT dZ AT dZ ∗ ∂ f Z , Z ∗ A ∂ f Z, Z ∗ A N × Q

=Tr 0 + 1 ∂Z = 0 ∂Z∗ = 1

¡ ¡ ¡ ∗ ∗ ∗ ∗

f z, z df = b0dz + b1dz Dz f z, z = b0 Dz∗ f z, z = b1 M × 1

¡ ¡ ¡ ∗ ∗ ∗ ∗

f z, z df = B 0dz + B 1dz Dz f z, z = B 0 Dz∗ f z, z = B 1 M × N

¡ ¡ ¡ ∗ ∗ ∗ ∗

f Z , Z df = β0d vec(Z )+β1d vec(Z ) DZ f Z , Z = β0 DZ∗ f Z , Z = β1 M × NQ

¡ ¡ ¡ ∗ ∗ ∗ ∗

F z, z d vec(F )=c0dz + c1dz Dz F z, z = c0 Dz∗ F z, z = c1 MP × 1

¡ ¡ ¡ ∗ ∗ ∗ ∗

F z, z d vec(F )=C 0dz + C 1dz Dz F z, z = C 0 Dz∗ F z, z = C 1 MP × N

¡ ¡ ¡ ∗ ∗ ∗ ∗ F Z , Z d vec(F )=ζ0d vec(Z )+ζ 1d vec(Z ) DZ F Z , Z = ζ 0 DZ∗ F Z , Z = ζ1 MP × NQ it is concluded that the definition that matches Definition 4 is the only reasonable definition. Definition 4 is also a generalization of the definition used in [15] for complex-valued vectors to the case of complex-valued matrices.

Table III shows how the derivatives of the different function types in Table I can be identified from the differentials of these functions.6 To show the uniqueness of the representation in (10), we subtract the differential in (10) from the

∗ ∗ ∗ corresponding differential in Table III to get (ζ0 −DZF (Z, Z )) d vec(Z)+(ζ1 −DZ∗ F (Z, Z )) d vec(Z )=

0MP×1. The derivatives in the last line of Table III then follow by using Lemma 1 on this equation. Table III is an extension of the corresponding table given in [7], valid in the real variable case. In Table III, z ∈ C, z ∈ C N×1,

N×Q M×1 M×P 1×1 1×N N×Q M×1 Z ∈ C , f ∈ C, f ∈ C , and F ∈ C . Furthermore, ai ∈ C , ai ∈ C , Ai ∈ C , bi ∈ C ,

M×N M×NQ MP×1 MP×N MP×NQ Bi ∈ C , βi ∈ C , ci ∈ C , Ci ∈ C , ζi ∈ C , and each of these might be a function of z, z, Z, z∗, z∗,orZ∗.

f CN×1 × CN×1 → CM×1 ∂ f z z∗ ∂ f z z∗ Definition 5: Let : . The partial derivatives ∂zT ( , ) and ∂zH ( , ) of size

M × N are defined as ⎡ ⎤ ⎡ ⎤ ∂ ∂ ∂ ∂ ⎢ ∂z f0 ··· ∂z f0 ⎥ ⎢ ∂z∗ f0 ··· ∂z∗ f0 ⎥ ⎢ 0 N−1 ⎥ ⎢ 0 N−1 ⎥ ⎢ ⎥ ⎢ ⎥ ∂ f z z∗ ⎢ . . ⎥ ∂ f z z∗ ⎢ . . ⎥ T ( , )=⎢ . . ⎥ , H ( , )=⎢ . . ⎥ , (11) ∂z ⎢ ⎥ ∂z ⎢ ⎥ ⎣ ⎦ ⎣ ⎦ ∂ ∂ ∂ ∂ fM−1 ··· fM−1 ∗ fM−1 ··· ∗ fM−1 ∂z0 ∂zN−1 ∂z0 ∂zN−1 where zi and fi is component number i of the vectors z and f, respectively.

6 f (Z, Z∗) ∂ f (Z, Z∗) For the functions of the type two alternative definitions for the derivatives are given. The notation ∂Z and ∂ f (Z, Z∗) ∂Z ∗ will be defined in Subsection VI-C. 10

∂ ∂ f D f f D ∗ f Notice that ∂zT = z and ∂zH = z . Using the notation in Definition 5, the derivatives of the function F (Z, Z∗), in Definition 4, are:

∗ ∗ ∂ vec(F (Z, Z )) DZF (Z, Z )= , (12) ∂ vecT (Z)

∗ ∗ ∂ vec(F (Z, Z )) DZ∗ F (Z, Z )= . (13) ∂ vecT (Z∗)

This is a generalization of the real-valued matrix variable case treated in [7] to the complex-valued matrix variable case. (12) and (13) show how the all the MPNQ partial derivatives of all the components of F with respect to all the components of Z and Z ∗ are arranged when using the notation introduced in Definition 5.

Key result: Finding the derivative of the complex-valued matrix function F with respect to the complex-valued matrices Z and Z∗ can be achieved using the following three-step procedure:

1) Compute the differential d vec(F ).

2) Manipulate the expression into the form given (10).

3) Read out the result using Table III.

For less general function types, see Table I, a similar procedure can be used.

V. F UNDAMENTAL RESULTS In this section, some useful theorems are presented that exploit the theory introduced earlier. All of these results are important when solving practical optimization problems involving differentiation with respect to a complex- valued matrix. These results include the chain rule, conditions for finding stationary points for a real scalar function dependent on complex-valued matrices, and in which direction the same type of function has the minimum or maximum rate of change. A comment will be made on how this result should be used in the steepest decent method [24]. For certain types of functions, the same procedure used for the real-valued matrix case [7] can be used, and this result is stated in a theorem.

1) Chain Rule: One big advantage of the way the derivative is defined in Definition 4 compared to other definitions of the derivative of F (Z, Z ∗) is that the chain rule is valid. The chain rule is now formulated.

N×Q N×Q M×P Theorem 1: Let (S0,S1) ⊆ C ×C , and let F : S0 ×S1 → C be differentiable with respect to both

∗ M×P M×P its first and second argument at an interior point (Z, Z ) in the set S0 ×S1. Let T0 ×T1 ⊆ C ×C be such 11

∗ ∗ ∗ ∗ R×S that (F (Z, Z ), F (Z, Z )) ∈ T0×T1 for all (Z, Z ) ∈ S0×S1. Assume that G : T0×T1 → C is differentiable

∗ ∗ ∗ R×S at an interior point (F (Z, Z ), F (Z, Z )) ∈ T0 × T1. Define the composite function H : S0 × S1 → C by

∗ ∗ ∗ ∗ H (Z, Z )  G (F (Z, Z ), F (Z, Z )). The derivatives DZH and DZ∗ H are obtained through:

∗ DZH =(DF G)(DZF )+(DF ∗ G)(DZF ) , (14)

∗ DZ∗ H =(DF G)(DZ∗ F )+(DF ∗ G)(DZ∗ F ) . (15)

Proof: From Definition 4, it follows that:

∗ d vec(H)=d vec(G)=(DF G) d vec(F )+(DF ∗ G) d vec(F ). (16)

The differentials of vec(F ) and vec(F ∗) are given by:

∗ d vec(F )=(DZF ) d vec(Z)+(DZ∗ F ) d vec(Z ), (17)

∗ ∗ ∗ ∗ d vec(F )=(DZF ) d vec(Z)+(DZ∗ F ) d vec(Z ). (18)

By substituting the results from (17) and (18), into (16), and then using the definition of the derivatives with respect to Z and Z∗ the theorem follows.

2) Stationary Points: The next theorem presents conditions for finding stationary points of f(Z, Z ∗) ∈ R.

Theorem 2: Let f : CN×Q × CN×Q → R. A stationary point7 of the function f(Z, Z∗)=g(X, Y ), where g : RN×Q × RN×Q → R and Z = X + jY is then found by one of the following three equivalent conditions: 8

DXg(X, Y )=01×NQ ∧DY g(X, Y )=01×NQ, (19)

∗ DZf(Z, Z )=01×NQ, (20) or

∗ DZ∗ f(Z, Z )=01×NQ. (21)

Proof: In optimization theory [7], a stationary point is a defined as point where the derivatives with respect to all the independent variables vanish. Since Re{Z} = X and Im{Z} = Y contain only independent variables, (19)

7Notice that a stationary point can be a local minimum, a local maximum, or a saddle point. 8In (19), the symbol ∧ means that both of the equations stated in (19) must be satisfied at the same time. 12 gives a stationary point by definition. By using the chain rule in Theorem 1, on both sides of f(Z, Z ∗)=g(X, Y ) and taking the derivative with respect to X and Y , the following two equations are obtained:

∗ (DZf)(DXZ)+(DZ∗ f)(DXZ )=DXg, (22)

∗ (DZf)(DY Z)+(DZ∗ f)(DY Z )=DY g. (23)

∗ ∗ From Table II, it follows that DXZ = DXZ = INQ and DY Z = −DY Z = jINQ. If these results are inserted into (22) and (23), these two equations can be formulated into block matrix form in the following way: ⎡ ⎤ ⎡ ⎤ ⎡ ⎤

⎢ DXg ⎥ ⎢ 11⎥ ⎢ DZf ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎣ ⎦ = ⎣ ⎦ ⎣ ⎦ . (24) DY g j −j DZ∗ f

This equation is equivalent to the following matrix equation: ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 1 j ⎢ DZf ⎥ ⎢ − ⎥ ⎢ DXg ⎥ ⎢ ⎥ ⎢ 2 2 ⎥ ⎢ ⎥ ⎣ ⎦ = ⎣ ⎦ ⎣ ⎦ . (25) 1 j DZ∗ f 2 2 DY g

1×NQ 1×NQ Since DXg ∈ R and DY g ∈ R , it is seen from (25), that (19), (20), and (21) are equivalent.

(24) and (25) are multi-variable generalizations of (2) and (3).

3) Direction of Extremal Rate of Change: The next theorem states how to find the maximum and minimum rate of change of f(Z, Z ∗) ∈ R.

Theorem 3: Let f : CN×Q ×CN×Q → R. The directions where the function f have the maximum and minimum

∗ T ∗ T rate of change with respect to vec(Z) are given by [DZ∗ f(Z, Z )] and − [DZ∗ f(Z, Z )] , respectively.

∗ Proof: Since f ∈ R, df can be written in the following two ways df =(DZf) d vec(Z)+(DZ∗ f) d vec(Z ),

∗ ∗ ∗ ∗ ∗ and df = df =(DZf) d vec(Z )+(DZ∗ f) d vec(Z), where df = df since f ∈ R. Subtracting the two different

∗ ∗ expressions of df from each other and then applying Lemma 1, gives that D Z∗ f =(DZf) and DZf =(DZ∗ f) .

∗ From these results, it follows that: df =(DZf) d vec(Z)+((DZf) d vec(Z)) =2Re{(DZf) d vec(Z)} =

∗ K×1 2Re{(DZ∗ f) d vec(Z)}. Let ai ∈ C , where i ∈{0, 1}. Then ⎡ ⎤ ⎡ ⎤ ⎢ Re {a0} ⎥ ⎢ Re {a1} ⎥ H ⎢ ⎥ ⎢ ⎥ Re a0 a1 = ⎣ ⎦ , ⎣ ⎦ , (26) Im {a0} Im {a1} 13 where ·, · is the ordinary Euclidean inner product [25] between real vectors in R 2K×1. Using this ⎡ ⎤ ⎡ ⎤ T ⎢ Re (DZ∗ f) ⎥ ⎢ Re {d vec(Z)} ⎥ ⎢ ⎥ ⎢ ⎥ df =2 ⎣ ⎦ , ⎣ ⎦ . (27) T Im (DZ∗ f) Im {d vec(Z)}

T Cauchy-Schwartz inequality gives that the maximum value of df occurs when d vec(Z)=α (D Z∗ f) for α>0

T and from this, it follows that the minimum rate of change occurs when d vec(Z)=−β (DZ∗ f) , for β>0.

∗ T It is now explained why [DZf(Z, Z )] it not the direction of maximum change of rate of the function f. Let K×1 K×1 T T g : C × C → R be g (a0, a1)=2Re a0 a1 .IfK =2and a0 =[1,j] , then g (a0, a0)=0despite the

T fact that [1,j] = 02×1. Therefore, the function g is not an inner product and Cauchy-Schwartz inequality is not

∗ T valid for this function. By examining the proof of Theorem 3, it is seen that the reason why [DZf(Z, Z )] it not the direction of maximum change of rate, is that the function g is not an inner product.

4) Steepest Descent Method: If a real-valued function f is being optimized with respect to the parameter Z by means of the steepest decent method, it follows from Theorem 3 that the updating term must be proportional to

∗ ∗ DZ∗ f(Z, Z ) and not DZf(Z, Z ). The update equation for optimizing the real-valued function in Theorem 3 by

T T ∗ means of the steepest decent method, can be expressed as vec (Zk+1)=vec (Zk)+μDZ∗ f(Zk, Zk), where μ is a real positive constant if it is a maximization problem or a real negative constant if it is a minimization problem,

N×Q T ∗ and Zk ∈ C is the value of the matrix after k iterations. The size of vec (Zk) and DZ∗ f(Zk, Zk) is 1× NQ.

∗ 2 ∗ ∗ Example 2: Let f : C × C → R be f(z,z )=|z| = zz . The partial derivatives are Dz∗ f(z,z )=z and

∗ ∗ Dzf(z,z )=z . This result is in accordance with the intuitive picture for optimizing f, i.e., it is more efficient to

∗ ∗ ∗ maximize f in the direction of Dz∗ f(z,z )=z than in the direction of Dzf(z,z )=z when standing at z.

5) Complex Case versus Real Case: In order to show that the consistence with the real-valued case given in [7], the following theorem shows how the complex-valued case is an extension of [7].

N×Q N×Q M×P N×Q M×P Theorem 4: Let F : C × C → C and G : C → C .IfF (Z0, Z1)=G(Z0), then

∗ DZF (Z, Z )=DZG(Z) can be obtained by the procedure given in [7] for finding the derivative of the function

∗ G and DZ∗ F (Z, Z )=0MP×NQ.

∗ ∗ If F (Z0, Z1)=G(Z1), then DZF (Z, Z )=0MP×NQ, and DZ∗ F (Z, Z )=DZG(Z)|Z=Z∗ can be obtained by the procedure given in [7]. 14

TABLE IV ∗

DERIVATIVES OF FUNCTIONS OF THE TYPE f (z, z )

¡ ¡ ¡ ∗ ∗ ∗ f z, z Differential df Dz f z, z Dz∗ f z, z

T T T T a z = z a a dz a 01×N

T ∗ H T ∗ T

a z = z a a dz 01×N a

   T T  T T T z Az z A + A dz z A + A 01×N

H H T T ∗ H T T

z Az z Adz + z A dz z A z A

   H ∗ H  T ∗ H T z Az z A + A dz 01×N z A + A

Proof: Assume that F (Z0, Z1)=G(Z0) where Z0 and Z1 have independent differentials. Applying the vec(·)

F D F Z D F Z and the to this equation, leads to: d vec ( )=( Z0 ) d vec ( 0)+( Z1 ) d vec ( 1)=

∗ D G Z Z Z Z Z D ∗ F 0 ( Z0 ) d vec ( 0). By setting 0 = and 1 = in Lemma 1, it is seen that Z = MP×NQ and

∗ DZF (Z, Z )=DZG(Z). Since the last equation only has one matrix variable Z, the same techniques as given in [7] can be used. The first part of the theorem is then proved, and the second part can be shown similarly.

VI. DEVELOPMENT OF DERIVATIVE FORMULAS A. Derivative of f(z,z∗)

The case of scalar function of scalar variables is treated in the literature for one input variable, see for example [5],

∗ ∗ ∗ [26]. If the variables z and z are treated as independent variables, the derivatives Dzf(z,z ) and Dz∗ f(z,z ) can be found as for scalar functions having two independent variables, see Example 1.

B. Derivative of f(z, z∗)

Let f : CN×1 × CN×1 → C be f (z, z∗)=zH Az. This type of function is frequently used in the context of

filter optimization and in array signal processing [20]. For instance, a frequently occurring example is that of output power zH Az optimization where z is the filter coefficients and A is the input covariance matrix. The differential of f is df =(dzH )Az + zH Adz = zH Adz + zT AT dz∗, where Table II was used. From df , the derivatives of

H ∗ ∗ H ∗ T T z Az with respect to z and z follow from Table III, and are Dzf (z, z )=z A and Dz∗ f (z, z )=z A , see Table IV. The other selected examples are given in Table IV and they can be derived similarly. Some of the results included in Table IV can be found in [15].

C. Derivative of f(Z, Z∗)

∗ ∂ ∂ For functions of the type f (Z, Z ), it is common to arrange the partial derivatives f and ∗ f in an ∂zk,l ∂zk,l

∗ ∗ alternative way [7] than in the expressions DZf (Z, Z ) and DZ∗ f (Z, Z ). The notation for the alternative way 15

∂ ∂ of organizing all the partial derivatives is ∂Z f and ∂Z∗ f. In this alternative way, the partial derivatives of the elements of the matrix Z ∈ CN×Q are arranged as: ⎡ ⎤ ⎡ ⎤ ∂ ∂ ∂ ∂ ⎢ ∂z f ··· ∂z f ⎥ ⎢ ∂z∗ f ··· ∂z∗ f ⎥ ⎢ 0,0 0,Q−1 ⎥ ⎢ 0,0 0,Q−1 ⎥ ⎢ ⎥ ⎢ ⎥ ∂ ⎢ . . ⎥ ∂ ⎢ . . ⎥ f = ⎢ . . ⎥ , ∗ f = ⎢ . . ⎥ . (28) ∂Z ⎢ . . ⎥ ∂Z ⎢ . . ⎥ ⎣ ⎦ ⎣ ⎦ ∂ ∂ ∂ ∂ f ··· f ∗ f ··· ∗ f ∂zN−1,0 ∂zN−1,Q−1 ∂zN−1,0 ∂zN−1,Q−1 ∂ ∂ ∗ ∂Z f and ∂Z∗ f are called the gradient of f with respect to Z and Z , respectively. (28) generalizes to the complex case of one of the ways to define the derivative of real scalar functions with respect to real matrices in [7]. The way of arranging the partial derivatives in (28) is different than than the way given in (12) and (13). T T ∗ T T ∗ N×Q If df =vec (A0)d vec(Z)+vec (A1)d vec(Z )=Tr A0 dZ + A1 dZ , where Ai, Z ∈ C , then it can

∂ ∂ ∗ be shown that ∂Z f = A0 and ∂Z∗ f = A1, where the matrices A0 and A1 depend on Z and Z in general.

∂ ∂ ∗ This result is included in Table III. The size of ∂Z f and ∂Z∗ f is N × Q, while the size of DZf (Z, Z ) and

∗ DZ∗ f (Z, Z ) is 1×NQ, so these two ways of organizing the partial derivatives are different. It can be shown, that ∗ T ∂ ∗ ∗ T ∂ ∗ DZf (Z, Z )=vec ∂Z f (Z, Z ) , and DZ∗ f (Z, Z )=vec ∂Z∗ f (Z, Z ) . The steepest decent method can

∂ ∗ be formulated as Zk+1 = Zk + μ ∂Z∗ f(Zk, Zk). Key examples of derivatives are developed in the text below, while others useful results are stated in Table V.

1) Determinant Related Problems: Objective functions that depend on the determinant appear in several parts of signal processing related problems, e.g., in the capacity of wireless multiple-input multiple-output (MIMO) communication systems [19], in an upper bound for the pair-wise error probability [18], [27], and for the exact symbol error rate (SER) for precoded orthogonal space-time block-code (OSTBC) systems [28].

Let f : CN×Q × CQ×N → C be f(Z, Z∗)=det(P + ZZH ), where P ∈ CN×N is independent of Z and Z∗. df is found by the rules in Table II as df =det(P + ZZH )Tr (P + ZZH )−1d(ZZH ) =det(P + −1 ZZH )Tr ZH P + ZZH dZ + ZT (P + ZZH )−T dZ∗ . From this, the derivatives with respect to Z and Z∗ of det P + ZZH can be found, but since these results are relatively long, they are not included in Table V. 16

TABLE V ∗

DERIVATIVES OF FUNCTIONS OF THE TYPE f (Z, Z ) ¡ f Z , Z ∗ df ∂ f ∂ f Differential ∂Z ∂Z∗

Tr{Z } Tr {I N dZ } I N 0N×N © ∗ ¨ ∗ Tr{Z } Tr IN dZ 0N×N I N

T

Tr{AZ} Tr {AdZ} A 0N×Q

 Ó T Ò T T T T T T Tr{ZA0Z A1} Tr A0Z A1 + A0 Z A1 dZ A1 ZA0 + A1ZA0 0N×Q T T T T T T

Tr{ZA0ZA1} Tr {(A0ZA1 + A1ZA0) dZ } A1 Z A0 + A0 Z A1 0N×Q Ó H Ò H T T T ∗ T ∗ T

Tr{ZA0Z A1} Tr A0Z A1dZ + A0 Z A1 dZ A1 Z A0 A1ZA0 © ∗ ¨ ∗ ∗ T H T T T T

Tr{ZA0Z A1} Tr A0Z A1dZ + A1ZA0dZ A1 Z A0 A0 Z A1

Ó     −1 Ò −1 −1 T −1 T T −1

Tr{AZ } − Tr Z AZ dZ − Z A Z 0N×N

Ó   p Ò p−1 T p−1

Tr{Z } p Tr Z dZ p Z 0N×N

Ó   Ò −1 T −1

det(Z ) det(Z )Tr Z dZ det(Z ) Z 0N×N

Ó   Ò −1 T T T T −1 T

det(A0ZA1) det(A0ZA1)Tr A1 (A0ZA1) A0dZ det(A0ZA1)A0 A1 Z A0 A1 0N×Q

Ó   2 2 Ò −1 2 T −1

det(Z ) 2det (Z)Tr Z dZ 2det (Z ) Z 0N×N

   T T T  T −1 T T −1

det(ZZ ) 2det(ZZ )Tr Z ZZ dZ 2det(ZZ ) ZZ Z 0N×Q

Ó   ∗ ∗ Ò ∗ ∗ −1 ∗ −1 ∗ ∗ H T −1 H ∗ T H T −1

det(ZZ ) det(ZZ )Tr Z (ZZ ) dZ +(ZZ ) Z dZ det(ZZ )(Z Z ) Z det(ZZ )Z Z Z

   H H H H −1 T  ∗ T −1 ∗ H ∗ T −1 ∗ H H −1

det(ZZ ) det(ZZ )Tr Z (ZZ ) dZ + Z Z Z dZ det(ZZ )(Z Z ) Z det(ZZ ) ZZ Z

Ó   p p Ò −1 p T −1

det(Z ) p det (Z)Tr Z dZ p det (Z ) Z 0N×N µ H ´ H ∗ T v0 (dZ)u0 u0v0 v0 u0 λ(Z ) =Tr dZ 0N×N vH u vH u vH u

0 0 0 0 0 0 µ T ∗ ∗ ´ ∗ T H ∗ v0 (dZ )u0 u0 v0 ∗ v0u0 λ (Z ) =Tr dZ 0N×N vT u∗ vT u∗ vT u∗ 0 0 0 0 0 0

N×N N×1 2) Eigenvalue Related Problems: Let λ0 be a simple eigenvalue9 of Z0 ∈ C , and let u0 ∈ C be the

N×N N×N N×1 normalized corresponding eigenvector, such that Z 0u0 = λ0u0. Let λ : C → C and u : C → C be defined for all Z in some neighborhood of Z 0 such that:

H Zu(Z)=λ(Z)u(Z), u0 u(Z)=1,λ(Z0)=λ0, u(Z0)=u0. (29)

N×1 H H Let the normalized left eigenvector of Z 0 corresponding to λ0 be denoted v0 ∈ C , i.e., v0 Z0 = λ0v0 ,or

H ∗ equivalently Z0 v0 = λ0v0. In order to find the differential of λ(Z) at Z = Z0, take the differential of both sides of the first expression in (29) evaluated at Z = Z 0:

(dZ) u0 + Z0du =(dλ) u0 + λ0du. (30)

H H H H Pre-multiplying (30) by v0 gives: v0 (dZ) u0 =(dλ) v0 u0. From Lemma 6.3.10 in [22], v0 u0 =0 , and hence vH Z u u vH 0 (d ) 0 0 0 Z dλ = H =Tr H d . (31) v0 u0 v0 u0

9 N×N The matrix Z0 ∈ C has in general N different complex eigenvalues. However, the roots of the characteristic equation, i.e., the eigenvalues need not to be distinct. The number of times an eigenvalue appears is equal to its multiplicity. If one eigenvalue appear only once, it is called a simple eigenvalue [22]. 17

This result is included in Table V, and it will be used later when the derivatives of the eigenvector u and the

∗ Hessian of λ are found. The differential of λ at Z0 is also included in Table V. These results are derived in [7].

D. Derivative of f(z,z∗)

In engineering, the objective function is often a real scalar, and this might be a very complicated function in the parameter matrix. To be able to find the derivative in a simple way, the chain rule in Theorem 1 might be very useful, and then functions of the types f and F might occur.

∗ ∗ ∗ ∗ ∗ Let f(z,z )=af(z,z ), then df = adf = a (Dzf(z,z )) dz + a (Dz∗ f(z,z )) dz , where df follows from

∗ ∗ Table III. From this equation, it follows that Dzf = aDzf(z,z ) and Dz∗ f = aDz∗ f(z,z ). The derivatives of the vector functions az and azz∗ follow from these expressions.

E. Derivative of f(z, z∗)

First a useful results is introduced. In order to extract the vec(·) of an inner matrix from the vec(·) of a multiple matrix product, the following formula [23] is very useful:

vec (ABC)= CT ⊗ A vec (B) . (32)

Let f : CN×1 × CN×1 → CM×1 be given by f(z, z∗)=F (z, z∗)a, where F : CN×1 × CN×1 → CM×P . Then ∗ T T ∗ ∗ ∗ df = d vec(F (z, z )a)= a ⊗ IM d vec(F )= a ⊗ IM [(DzF (z, z )) dz +(Dz∗ F (z, z )) dz ], where

(32) and Table III were utilized. From df, the derivatives of f with respect to z and z ∗ follow.

F. Derivative of f(Z, Z∗)

1) Eigenvector Related Problems: First a lemma is stated.

Lemma 3: Let A ∈ CN×Q, then the following equalities holds:

R (A)=R A+A , C (A)=C AA+ , rank (A)=rank A+A =rank AA+ (33) where C(A), R(A), and N (A) are the symbols used for the column, row, and null space of A ∈ C N×Q, respectively.10

The proof of Lemma 3 can be found in Appendix IV.

10C(A)={w ∈ CN×1|w = Az, for some z ∈ CQ×1}, R(A)={w ∈ C1×Q|w = zA, for some z ∈ C1×N }, and N (A)={z ∈

Q×1 C |Az = 0N×1}. 18

The differential of the eigenvector u(Z) is now found at Z = Z 0. The derivation here is similar to the one in [7], where the same result for du at Z = Z 0 was derived, however, more details are included here. Let Y 0 = λ0IN −Z0, H H v0 (dZ)u0 u0v0 then it follows from (30), that Y 0du =(dZ) u0 − (dλ) u0 =(dZ) u0 − H u0 = IN − H (dZ) u0, v0 u0 v0 u0 + where (31) was utilized. Pre-multiplying Y 0du with Y 0 results in: u vH Y +Y u Y + I − 0 0 Z u 0 0d = 0 N H (d ) 0. (34) v0 u0

The operator dim(·) returns the dimension of the . Since λ0 is a simple eigenvalue, dim (N (Y 0)) =

1 [5]. Hence, it follows from 0.4.4 (g) on page 13 in [7] that rank(Y 0)=N − dim (N (Y 0)) = N − 1. From

+ + Y 0u0 = 0N×1, it follows from (71), see Appendix V, that u0 Y 0 = 01×N . It can be shown by direct insertion

+ H H + in Definition 3 of the Moore-Penrose inverse that u0 = u0 . From these results it follows that u0 Y 0 = 01×N .

+ H H + Set C0 = Y 0 Y 0 + u0u0 , then it can be shown from the two facts u0 Y 0 = 01×N and Y 0u0 = 0N×1 that

2 + C0 = C0, i.e., C0 is idempotent. It can be shown by the direct use of Definition 3, that the matrix Y 0 Y 0 is also idempotent. By the use of Lemma 10.1.1 and Corollary 10.2.2, in [8], it is found that rank (C 0)=Tr{C0} = + H + Tr Y 0 Y 0 +Tr u0u0 =rank Y 0 Y 0 +1=rank(Y 0)+1=N, where the third equation of (33) was used. From Lemma 10.1.1 and Corollary 10.2.2, in [8], and rank (C 0)=N, it follows that C0 = IN . Using the

H differential operator on both sides of the second equation in (29), yields u0 du =0. Using these results, it follows that:

+ H H Y 0 Y 0du = IN − u0u0 du = du − u0u0 du = du. (35)

(34) and (35) lead to: u vH u I − Z + I − 0 0 Z u d =(λ0 N 0) N H (d ) 0. (36) v0 u0

From (36), it is possible to find the differential of u(Z) evaluated at Z 0 with respect to the matrix Z: u vH u u I − Z + I − 0 0 Z u d =vec(d )=vec (λ0 N 0) N H (d ) 0 v0 u0 u vH uT ⊗ I − Z + I − 0 0 Z = 0 (λ0 N 0) N H d vec ( ) , (37) v0 u0 H T + u0v0 where (32) was used. From (37), it follows that DZu = u0 ⊗ (λ0IN − Z0) IN − H . The differential v0 u0 and the derivative of u∗ follow by the use of Table II, and the results found here. 19

The left eigenvector function v : CN×N → CN×1 with the argument Z ∈ CN×N , denoted v(Z), is defined

H H H through the following four relations: v (Z)Z = λ(Z)v , v0 v(Z)=1, λ(Z0)=λ0, and v(Z0)=v0. The differential of v(Z) at Z = Z0 can be found, using a procedure similar to the one used earlier in this subsection H H H u0v0 + for finding du at Z = Z0, giving dv = v0 (dZ) IN − H (λ0IN − Z0) . v0 u0

G. Derivative of F (z,z∗)

Examples of functions of the type F (z,z∗) are Az, Azz∗, and Af(z,z∗), where A ∈ CM×P . These functions can be differentiated by finding the differentials of the scalar functions z, zz∗, and f(z,z∗).

H. Derivative of F (z, z∗)

Let F : CN×1 × CN×1 → CN×N be F (z, z∗)=zzH . The differential of the F is dF =(dz)zH + zdzH . And

∗ H ∗ ∗ from this equation d vec(F )=[z ⊗ IN ] d vec(z)+[IN ⊗ z] d vec(z )=[z ⊗ IN ] dz +[IN ⊗ z] dz . Hence,

∗ H ∗ the derivatives of F (z, z )=zz are given by DzF = z ⊗ IN , and Dz∗ F = IN ⊗ z.

I. Derivative of F (Z, Z∗)

1) Kronecker Product Related Problems: Examples of objective functions which depends on the Kronecker product of the unknown complex-valued matrix is found in [18], [28].

Now, the commutation matrix is introduced:

Definition 6: Let A ∈ CN×Q, then there exists a permutation matrix that connects the vectors vec (A) and T vec A . The commutation matrix KN,Q has size NQ× NQ, and it defines the connection between vec (A) and T T vec A in the following way: KN,Q vec(A)=vec(A ).

T −1 The matrix KQ,N is a permutation matrix and it can be shown [7] that K Q,N = KQ,N = KN,Q. The following

N ×Q result is Theorem 3.9 in [7]: Let Ai ∈ C i i where i ∈{0, 1}, then the reason for why the commutation matrix

K A ⊗ A A ⊗ A K received its name can be seen from the following identity: N1,N0 ( 0 1)=( 1 0) Q1,Q0 .

N ×Q N ×Q N N ×Q Q N ×Q Let F : C 0 0 × C 1 1 → C 0 1 0 1 be given by F (Z 0, Z1)=Z0 ⊗ Z1, where Zi ∈ C i i . The differential of this function follows from Table II: dF =(dZ0) ⊗ Z1 + Z0 ⊗ dZ1. Applying the vec(·) operator to dF yields: d vec(F )=vec((dZ0) ⊗ Z1)+vec(Z0 ⊗ dZ1). From (A ⊗ B)(C ⊗ D)=AC ⊗ BD and 20

Theorem 3.10 in [7], it follows that

Z ⊗ Z I ⊗ K ⊗ I Z ⊗ Z vec ((d 0) 1)=( Q0 Q1,N0 N1 )[(d vec( 0)) vec( 1)]

I ⊗ K ⊗ I I Z ⊗ Z =( Q0 Q1,N0 N1 )[( N0Q0 d vec( 0)) (vec( 1)1)]

I ⊗ K ⊗ I I ⊗ Z Z ⊗ =( Q0 Q1,N0 N1 )[( N0Q0 vec( 1)) (d vec( 0) 1)]

I ⊗ K ⊗ I I ⊗ Z Z =( Q0 Q1,N0 N1 )[ N0Q0 vec( 1)] d vec( 0), (38) and in a similar way it follows that

Z ⊗ Z I ⊗ K ⊗ I Z ⊗ Z vec ( 0 d 1)=( Q0 Q1,N0 N1 )[vec( 0) d vec( 1)]

I ⊗ K ⊗ I Z ⊗ I Z =( Q0 Q1,N0 N1 )[vec( 0) N1Q1 ] d vec( 1). (39)

Inserting the result from (38) and (39) into d vec(F ) gives:

F I ⊗ K ⊗ I I ⊗ Z Z d vec( )=( Q0 Q1,N0 N1 )[ N0Q0 vec( 1)] d vec( 0)

I ⊗ K ⊗ I Z ⊗ I Z +( Q0 Q1,N0 N1 )[vec( 0) N1Q1 ] d vec( 1). (40)

A Z B Z A Z  I ⊗ K ⊗ I I ⊗ Z B Z Define the matrices ( 1) and ( 0) by ( 1) ( Q0 Q1,N0 N1 )[ N0Q0 vec( 1)], and ( 0)=

I ⊗ K ⊗ I Z ⊗ I F Z Z Z ⊗ Z ( Q0 Q1,N0 N1 )[vec( 0) N1Q1 ]. It is then possible to rewrite the differential of ( 0, 1)= 0 1 as d vec(F )=A(Z1)d vec(Z0)+B(Z0)d vec(Z1). From d vec(F ), the differentials and derivatives of Z ⊗ Z,

Z ⊗ Z∗, and Z∗ ⊗ Z∗ can be derived and these results are included in Table VI. In the table, diag(·) returns the square diagonal matrix with the input column vector elements on the main diagonal [22] and zeros elsewhere.

2) Moore-Penrose Inverse Related Problems: In pseudo-inverse matrix based receiver design, the Moore-

Penrose inverse might appear [19]. This is applicable for MIMO, CDMA, and OFDM systems.

Let F : CN×Q × CN×Q → CQ×N be given by F (Z, Z ∗)=Z+, where Z ∈ CN×Q. The reason for including both variables Z and Z ∗ in this function definition is that the differential of Z+, see Table II, depends on both dZ and dZ∗. Using the vec(·) operator on the differential of the Moore-Penrose inverse in Table II, in addition to

(32) and Definition 6, result in:

+ T + + T T + + H ∗ d vec(F )=− Z ⊗ Z d vec(Z)+ IN − Z Z ⊗ Z Z KN,Qd vec(Z ) 21

TABLE VI ∗

DERIVATIVES OF FUNCTIONS OF THE TYPE F (Z, Z )

¡ ¡ ¡ ∗ ∗ ∗ F Z , Z Differential d vec(F ) DZ F Z, Z DZ∗ F Z , Z

Z I NQd vec(Z ) INQ 0NQ×NQ

T Z K N,Qd vec(Z ) K N,Q 0NQ×NQ

∗ ∗ Z INQd vec(Z ) 0NQ×NQ I NQ

H ∗

Z K N,Qd vec(Z ) 0NQ×NQ K N,Q

    ZZT I K Z ⊗ I d Z I K Z ⊗ I 0

N2 + N,N ( N ) vec( ) N2 + N,N ( N ) N2×NQ

      Z T Z I K I ⊗ Z T d Z I K I ⊗ Z T 0

Q2 + Q,Q Q vec( ) Q2 + Q,Q Q Q2×NQ ¡ H ∗ ∗ ∗

ZZ Z ⊗ I N d vec(Z )+K N,N (Z ⊗ IN ) d vec(Z ) Z ⊗ I N K N,N (Z ⊗ I N )

  Z −1 − Z T −1 ⊗ Z −1 d Z − Z T −1 ⊗ Z −1 0 ( ) vec( ) ( ) N2×N2

p p

   

Z p Z T p−i ⊗ Z i−1 d Z Z T p−i ⊗ Z i−1 0 ( ) vec( ) ( ) N2×N2 i=1 i=1 Z ⊗ Z A Z B Z d Z A Z B Z 0 ( ( )+ ( )) vec( ) ( )+ ( ) N2Q2×NQ ∗ ∗ ∗ ∗ Z ⊗ Z A(Z )d vec(Z)+B(Z )d vec(Z ) A(Z ) B(Z )

Z ∗ ⊗ Z ∗ A Z ∗ B Z ∗ d Z ∗ 0 A Z ∗ B Z ∗ ( ( )+ ( )) vec( ) N2Q2×NQ ( )+ ( )

Z  Z 2diag(vec(Z))d vec(Z) 2 diag(vec(Z )) 0NQ×NQ

∗ ∗ ∗ ∗ Z  Z diag(vec(Z ))d vec(Z )+diag(vec(Z))d vec(Z ) diag(vec(Z )) diag(vec(Z ))

∗ ∗ ∗ ∗ ∗

Z  Z 2diag(vec(Z ))d vec(Z ) 0NQ×NQ 2 diag(vec(Z )) 

∞ k  ∞ k

   

1 T k−i i 1 T k−i i exp(Z) Z ⊗ Z d vec(Z ) Z ⊗ Z 0 2 2 k k N ×N

k=0 ( +1)! i=0 k=0 ( +1)! i=0

  

∞ k  ∞ k

   

¡ ∗ 1 H k−i ∗ i ∗ 1 H k−i ∗ i exp Z Z ⊗ (Z ) d vec(Z ) 0 2 2 Z ⊗ (Z ) k N ×N k k=0 ( +1)! i=0 k=0 ( +1)! i=0

∞ k ∞ k

     

¡ ¡

H 1 ∗ k−i H i ∗ 1 ∗ k−i H i exp Z Z ⊗ (Z ) K N,N d vec(Z ) 0 2 2 Z ⊗ (Z ) K N,N k N ×N k k=0 ( +1)! i=0 k=0 ( +1)! i=0

+ T + ∗ + ∗ + Z Z ⊗ IQ − Z Z KN,Qd vec(Z ). (41) + T T + + H + T + ∗ + This leads to DZ∗ F = IN − Z Z ⊗ Z Z + Z Z ⊗ IQ − Z Z KN,Q and + T + + −1 ∗ DZF = − Z ⊗ Z .IfZ is invertible, then the derivative of Z = Z with respect to Z is equal to the zero matrix and the derivative of Z+ = Z−1 with respect to Z can be found from Table VI. Since the expressions associated with the differential of the Moore-Penrose inverse is so long, it is not included in Table VI.

VII. HESSIAN MATRIX FORMULAS In the previous sections, we have proposed tools for finding first-order derivatives that can be of help to find stationary points in optimization problems. To identify the nature of these stationary points (minimum, maximum, or saddle point) or to study the stability of iterative algorithms, the second-order derivatives are useful. This can be done by examining the Hessian matrix.

The Hessian matrix of a function is a matrix containing the second-order derivatives of the function. In this section, the Hessian matrix is defined and it is shown how it can be obtained. Only the case of scalar functions f ∈ C is treated, since this is the case of interest in most practical situations. However, the results can be extended to the vector- and matrix-functions as well. The way the Hessian is defined here follows in the same lines as given 22 in [7], treating the real case. A complex version of Newton’s recursion formula is derived in [29], [30], and there the topic of Hessian matrices is briefly treated for real scalar functions. Therefore, the complex Hessian might be used to accelerate the convergence of iterative optimization algorithms.

A. Identification of the Complex Hessian Matrices

When dealing with the Hessian matrix, it is the second-order differential that has to be calculated in order to identify the Hessian matrix. Neither of the matrices dZ nor dZ ∗ is a function of Z or Z∗ and, hence, their differentials are the zero matrix. Mathematically, this can be formulated as:

2 ∗ 2 ∗ d Z = d (dZ)=0N×Q = d (dZ )=d Z . (42)

T T H H If f ∈ C, then d2f = d (df ) = d2f T = d2f and if f ∈ R, then d2f = d (df ) = d2f H = d2f.

The Hessian depends on two variables such that the notation must include which variable the Hessian is calculated

Z Z H with respect to. If the Hessian is calculated with respect to 0 and 1, the Hessian is denoted Z0,Z1 f. The

H following definition is used for Z0,Z1 f and it is an extension of the definition in [7], to complex scalar functions.

T Z ∈ CNi×Qi ∈{ } H ∈ CN1Q1×N0Q0 H  D D Definition 7: Let i , i 0, 1 , then Z0,Z1 f and Z0,Z1 f Z0 ( Z1 f) .

Later, the following proposition is important for showing the symmetry of the complex Hessian matrix.

Proposition 2: Let f : CN×Q × CN×Q → C. Assumed that the function f(Z, Z ∗) is twice differentiable with respect to all of the variables inside Z and Z ∗, when these variables are treated as independent variables. Then [7]

∂2 ∂2 ∂2 ∂2 ∂2 ∂2 f = f, ∗ ∗ f = ∗ ∗ f, and ∗ f = ∗ f, where m, k ∈{0, 1,...,N−1} ∂zk,l∂zm,n ∂zm,n∂zk,l ∂zk,l∂zm,n ∂zm,n∂zk,l ∂zk,l∂zm,n ∂zm,n∂zk,l and n, l ∈{0, 1,...,Q− 1}.

Let pi = Niki + li where i ∈{0, 1}, ki ∈{0, 1,...,Qi − 1} and li ∈{0, 1,...,Ni − 1}. As a consequence of

H Definition 7 and (12), it follows that element number (p0,p1) of Z0,Z1 f is given by: T T ∂ ∂ ∂ ∂ (HZ ,Z f) = DZ (DZ f) = f = f 0 1 p0,p1 0 1 T T p0,p1 Z0 Z1 Z0 Z1 ∂ vec ( ) ∂ vec ( ) ∂ (vec( ))p1 ∂ (vec( ))p0 p0,p1

∂ ∂ ∂2f = f = . (43) Z0 Z1 Z0 Z1 ∂ (vec( ))N1k1+l1 ∂ (vec( ))N0k0+l0 ∂ ( )l1,k1 ∂ ( )l0,k0

And as an immediate consequence of (43) and Proposition 2, it follows that for twice differentiable functions f:

T T T (HZ,Zf) = HZ,Zf, (HZ∗,Z∗ f) = HZ∗,Z∗ f, (HZ,Z∗ f) = HZ∗,Zf. (44) 23

In order to find an identification equation for the Hessians of the function f with respect to all the possible combinations of the variables Z and Z ∗, an appropriate form of the expression d2f is required. This expression is now derived. From (10), the first-order differential of the function f can be written as df =(DZf)d vec(Z)+

∗ 1×NQ 1×NQ (DZ∗ f)d vec(Z ), where DZf ∈ C and DZ∗ f ∈ C can be expressed by the use of (10) as: T T T ∗ (dDZf) = DZ (DZf) d vec(Z)+ DZ∗ (DZf) d vec(Z ), (45)

T T T ∗ (dDZ∗ f) = DZ (DZ∗ f) d vec(Z)+ DZ∗ (DZ∗ f) d vec(Z ). (46)

From (45) and (46) it follows that:

T T T T T ∗ T dDZf = d vec (Z) DZ (DZf) + d vec (Z ) DZ∗ (DZf) , (47)

T T T T T ∗ T dDZ∗ f = d vec (Z) DZ (DZ∗ f) + d vec (Z ) DZ∗ (DZ∗ f) . (48) d2f is found by applying the differential operator on df and then utilizing the results from (42), (47), and (48):

2 ∗ d f =(dDZf) d vec(Z)+(dDZ∗ f)d vec(Z )

T T T T T ∗ T = d vec (Z) DZ (DZf) d vec(Z)+ d vec (Z ) DZ∗ (DZf) d vec(Z)

T T T T ∗ T ∗ T ∗ + d vec (Z) DZ (DZ∗ f) d vec(Z )+ d vec (Z ) DZ∗ (DZ∗ f) d vec(Z )

T T T T ∗ = d vec (Z) DZ (DZf) d vec(Z)+ d vec (Z) DZ∗ (DZf) d vec(Z )

T ∗ T T ∗ T ∗ + d vec (Z ) DZ (DZ∗ f) d vec(Z)+ d vec (Z ) DZ∗ (DZ∗ f) d vec(Z ) ⎡ ⎤ ⎡ ⎤ T T ⎢ DZ (DZ∗ f) , DZ∗ (DZ∗ f) ⎥ ⎢ d vec(Z) ⎥ T ∗ T ⎢ ⎥ ⎢ ⎥ = d vec (Z ),dvec (Z) ⎣ ⎦ ⎣ ⎦ . (49) T T ∗ DZ (DZf) , DZ∗ (DZf) d vec(Z )

From Definition 7, it follows that d2f can be rewritten as: ⎡ ⎤ ⎡ ⎤

⎢ HZ,Z∗ f, HZ∗,Z∗ f ⎥ ⎢ d vec(Z) ⎥ 2 T ∗ T ⎢ ⎥ ⎢ ⎥ d f = d vec (Z ),dvec (Z) ⎣ ⎦ ⎣ ⎦ . (50) ∗ HZ,Zf, HZ∗,Zf d vec(Z )

Assume that it is possible to find an expression of d2f in the following form

2 T ∗ T ∗ ∗ d f = d vec (Z ) A0,0d vec(Z)+ d vec (Z ) A0,1d vec(Z ) 24

T T ∗ + d vec (Z) A1,0d vec(Z)+ d vec (Z) A1,1d vec(Z ) ⎡ ⎤ ⎡ ⎤

⎢ A0,0, A0,1 ⎥ ⎢ d vec(Z) ⎥ T ∗ T ⎢ ⎥ ⎢ ⎥ = d vec (Z ),dvec (Z) ⎣ ⎦ ⎣ ⎦ , (51) ∗ A1,0, A1,1 d vec(Z )

NQ×NQ ∗ ∗ where Ak,l ∈ C , k, l ∈{0, 1}, can possibly be dependent on Z and Z but not on d vec(Z) and d vec(Z ).

The four complex Hessian matrices in (50) can now be identified from Ak,l in (51) in the following way: Subtract the second-order differentials in (50) from (51), and then use (44) together with Lemma 2 to get

T T T T A0,0 + A1,1 A0,0 + A1,1 A1,0 + A1,0 A0,1 + A0,1 HZ∗,Zf = , HZ,Z∗ f = , HZ,Zf = , HZ∗,Z∗ f = . (52) 2 2 2 2

The complex Hessian matrices of the scalar function f can be computed using a three-step procedure:

1) Compute d2f.

2) Manipulate d2f into the form given in (51).

3) Use (52) to identify the four Hessian matrices.

In order to check convexity of f, the middle matrix on the right hand side of (50) must be positive definite.

B. Examples of Calculation of the Complex Hessian Matrices

1) Complex Hessian of f(z, z∗): Let f : CN×1 × CN×1 → C be defined as: f(z, z∗)=zH Φz. The second- 2 H order differential of f is d f =2 dz Φdz. From this, Ak,l in (51) are A0,0 =2Φ, A0,1 = A1,0 = A1,1 =

T 0N×N . (52) gives the four complex Hessian matrices HZ,Zf = 0N×N , HZ∗,Z∗ f = 0N×N , HZ∗,Zf = Φ , and

HZ,Z∗ f = Φ.

2) Complex Hessian of f(Z, Z ∗): See the discussion around (29) for an introduction to the eigenvalue problem.

2 d λ(Z) is now found at Z = Z0. The derivation here is similar to the one in [7], where the same result for

2 2 d λ was found. Applying the differential operator to both sides of (30), results in: 2(dZ)(du)+Z 0d u = 2 2 H 2 d λ u0 +2(dλ) du + λ0d u. Left-multiplying this equation by v0 , and solving for d λ gives H H H H v0 (dZ)u0 H v0 (dZ)u0v0 H 2v0 dZ − IN H du 2 v0 dZ − H du 2v (dZ − IN dλ) du v0 u0 v0 u0 d2λ = 0 = = vH u vH u vH u 0 0 0 0 0 0 H H H u0v0 + u0v0 2v0 (dZ) IN − H (λ0IN − Z0) IN − H (dZ) u0 v0 u0 v0 u0 = H , (53) v0 u0 25 where (31) and (36) were utilized. d2λ can be reformulated by means of Theorem 2.3 in [7] in the following way: u vH u vH 2 2 u vH Z I − 0 0 I − Z + I − 0 0 Z d λ = H Tr 0 0 (d ) N H (λ0 N 0) N H (d ) v u0 v u0 v u0 0 0 0 u vH u vH 2 T Z I − 0 0 I − Z + I − 0 0 ⊗ v∗uT ZT = H d vec ( ) N H (λ0 N 0) N H 0 0 d vec v u0 v u0 v u0 0 0 0 u vH u vH 2 T Z I − 0 0 I − Z + I − 0 0 ⊗ v∗uT K Z = H d vec ( ) N H (λ0 N 0) N H 0 0 N,Nd vec ( ) . (54) v0 u0 v0 u0 v0 u0

From this expression, it is possible to identify the four complex Hessian matrices by means of (51) and (52). Let f : CN×Q × CN×Q → C be given by f(Z, Z ∗)=Tr ZAZH , where Z ∈ CN×Q and A ∈ CQ×Q. From T ∗ T Theorem 2.3 in [7], f can written f =vec (Z ) A ⊗ IN vec(Z). By applying the differential operator twice 2 T ∗ T to the latest expression of f, it follows that d f =2 d vec (Z ) A ⊗ IN d vec(Z). From this expression, it is possible to identify the four complex Hessian matrices by means of (51) and (52).

VIII. CONCLUSIONS An introduction is given to a set of very powerful tools that can be used to systematically find the derivative of complex-valued matrix functions that are dependent on complex-valued matrices. The key idea is to go through the complex differential of the function and to treat the differential of the complex variable and its complex conjugate as independent. This general framework can be used in many optimization problems that depend on complex parameters. Many results are given in tabular form.

APPENDIX I

PROOF OF PROPOSITION 1 Proof: (8) leads to dZ+ = dZ+ZZ+ =(dZ+Z)Z++Z+ZdZ+.IfZdZ+ is found from dZZ+ =(dZ)Z++

ZdZ+, and inserted in the expression for dZ +, then it is found that:

dZ+ =(dZ+Z)Z+ + Z+(dZZ+ − (dZ)Z+)=(dZ+Z)Z+ + Z+dZZ+ − Z+(dZ)Z+. (55)

It is seen from (55), that it remains to express dZ +Z and dZZ+ in terms of dZ and dZ ∗. Firstly, dZ+Z is handled:

dZ+Z = dZ+ZZ+Z =(dZ+Z)Z+Z + Z+Z(dZ+Z)=(Z+Z(dZ+Z))H + Z+Z(dZ+Z). (56)

The expression Z(dZ +Z) can be found from dZ = dZZ+Z =(dZ)Z+Z + Z(dZ+Z), and it is given by

+ + + Z(dZ Z)=dZ − (dZ)Z Z =(dZ)(IQ − Z Z). If this expression is inserted into (56), it is found that: 26 + + + H + + + H + H + + dZ Z =(Z (dZ)(IQ−Z Z)) +Z (dZ)(IQ−Z Z)=(IQ−Z Z) dZ (Z ) +Z (dZ)(IQ−Z Z). + + + + H H + Secondly, it can be shown in a similar manner that: dZZ =(IN −ZZ )(dZ)Z +(Z ) dZ (IN −ZZ ).

If the expressions for dZ +Z and dZZ+ are inserted into (55), (9) is obtained.

APPENDIX II

PROOF OF LEMMA 1

Proof: Let A_i ∈ C^{M×NQ}, i ∈ {0, 1}, be an arbitrary complex-valued function of Z ∈ C^{N×Q} and Z^* ∈ C^{N×Q}.

From Table II, it follows that d vec(Z) = d vec(Re{Z}) + j d vec(Im{Z}) and d vec(Z^*) = d vec(Re{Z}) − j d vec(Im{Z}). If these two expressions are substituted into A_0 d vec(Z) + A_1 d vec(Z^*) = 0_{M×1}, then it follows that A_0 (d vec(Re{Z}) + j d vec(Im{Z})) + A_1 (d vec(Re{Z}) − j d vec(Im{Z})) = 0_{M×1}. The last expression is equivalent to (A_0 + A_1) d vec(Re{Z}) + j (A_0 − A_1) d vec(Im{Z}) = 0_{M×1}. Since the differentials d Re{Z} and d Im{Z} are independent, so are d vec(Re{Z}) and d vec(Im{Z}). Therefore, A_0 + A_1 = 0_{M×NQ} and A_0 − A_1 = 0_{M×NQ}, and from this it follows that A_0 = A_1 = 0_{M×NQ}.

APPENDIX III

PROOF OF LEMMA 2

First, three auxiliary results are stated and proven:

Lemma 4: Let A, B ∈ C^{N×N}. Then z^T A z = z^T B z for all z ∈ C^{N×1} is equivalent to A + A^T = B + B^T.

Proof: Let (A)_{k,l} = a_{k,l} and (B)_{k,l} = b_{k,l}. Assume first that z^T A z = z^T B z for all z ∈ C^{N×1}, and set z = e_k, where k ∈ {0, 1, ..., N−1}. Then e_k^T A e_k = e_k^T B e_k gives a_{k,k} = b_{k,k} for all k ∈ {0, 1, ..., N−1}. Setting z = e_k + e_l leads to (e_k + e_l)^T A (e_k + e_l) = (e_k + e_l)^T B (e_k + e_l), which results in a_{k,k} + a_{l,l} + a_{k,l} + a_{l,k} = b_{k,k} + b_{l,l} + b_{k,l} + b_{l,k}. Eliminating equal terms gives a_{k,l} + a_{l,k} = b_{k,l} + b_{l,k}, which can be written A + A^T = B + B^T. Assuming conversely that A + A^T = B + B^T, it follows that z^T A z = (1/2)(z^T A z + z^T A^T z) = (1/2) z^T (A + A^T) z = (1/2) z^T (B + B^T) z = (1/2)(z^T B z + z^T B^T z) = z^T B z, for all z ∈ C^{N×1}.

Corollary 1: Let A ∈ C^{N×N}. Then z^T A z = 0 for all z ∈ C^{N×1} is equivalent to A^T = −A.

Proof: Setting B = 0_{N×N} in Lemma 4 yields the corollary.

Lemma 5: Let A, B ∈ C^{N×N}. Then z^H A z = z^H B z for all z ∈ C^{N×1} is equivalent to A = B.

Proof: Let (A)_{k,l} = a_{k,l} and (B)_{k,l} = b_{k,l}. Assume first that z^H A z = z^H B z for all z ∈ C^{N×1}, and set z = e_k, where k ∈ {0, 1, ..., N−1}. This gives a_{k,k} = b_{k,k} for all k ∈ {0, 1, ..., N−1}. In the same way as in the proof of Lemma 4, setting z = e_k + e_l leads to A + A^T = B + B^T. Next, set z = e_k + j e_l; then similar manipulations give A − A^T = B − B^T. The equations A + A^T = B + B^T and A − A^T = B − B^T imply that A = B.

Conversely, if A = B, then it follows directly that z^H A z = z^H B z for all z ∈ C^{N×1}.
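The probe vectors e_k, e_k + e_l, and e_k + j e_l used in these proofs also give a constructive way to recover A from the values of the form z^H A z, which makes the uniqueness statement of Lemma 5 concrete. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
q = lambda z: z.conj() @ A @ z          # the form z^H A z
I = np.eye(N)

R = np.zeros((N, N), dtype=complex)
for k in range(N):
    R[k, k] = q(I[k])                   # probe z = e_k gives a_kk
for k in range(N):
    for l in range(k + 1, N):
        s = q(I[k] + I[l]) - R[k, k] - R[l, l]        # = a_kl + a_lk
        d = q(I[k] + 1j * I[l]) - R[k, k] - R[l, l]   # = j(a_kl - a_lk)
        R[k, l] = (s - 1j * d) / 2
        R[l, k] = (s + 1j * d) / 2
print(np.allclose(R, A))   # True: the probes determine A uniquely
```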

Now, the proof of Lemma 2 is given:

Proof: Substitution of d vec(Z) = d vec(Re{Z}) + j d vec(Im{Z}) and d vec(Z^*) = d vec(Re{Z}) − j d vec(Im{Z}) into the second-order differential expression given in the lemma leads to:

d vec^T(Re{Z}) [B_0 + B_1 + B_2] d vec(Re{Z}) + d vec^T(Im{Z}) [−B_0 + B_1 − B_2] d vec(Im{Z})
+ d vec^T(Re{Z}) [ j(B_0 + B_0^T) + j(B_1 − B_1^T) − j(B_2 + B_2^T) ] d vec(Im{Z}) = 0.   (57)

(57) is valid for all dZ, and the differentials d vec(Re{Z}) and d vec(Im{Z}) are independent. If d vec(Im{Z}) is set to the zero vector, then it follows from (57) and Corollary 1 that:

B_0 + B_1 + B_2 = −B_0^T − B_1^T − B_2^T.   (58)

In the same way, setting d vec(Re{Z}) to the zero vector, it follows from (57) and Corollary 1 that:

−B_0 + B_1 − B_2 = B_0^T − B_1^T + B_2^T.   (59)

Because of the skew symmetry in (58) and (59) and the linear independence of d vec(Re{Z}) and d vec(Im{Z}), it follows from (57) and Corollary 1 that:

(B_0 + B_0^T) + (B_1 − B_1^T) − (B_2 + B_2^T) = 0_{NQ×NQ}.   (60)

T T T (58), (59), and (60) lead to B0 = −B0 , B1 = −B1 , and B2 = −B2 . Since the matrices B0 and B2 are skew, T T ∗ T ∗ ∗ Corollary 1 reduces d vec (Z) B0d vec(Z)+ d vec (Z ) B1d vec(Z)+ d vec (Z ) B2d vec(Z )=0to T ∗ d vec (Z ) B1d vec(Z)=0. Then Lemma 5 results in B 1 = 0NQ×NQ.

APPENDIX IV

PROOF OF LEMMA 3

Proof: Let the symbol ⊆ mean "is a subset of". From (7) and the definition of R(A), it follows that R(A) = { w ∈ C^{1×Q} | w = z A A^+ A, for some z ∈ C^{1×N} } ⊆ R(A^+A). From the definition of R(A^+A), it follows that R(A^+A) = { w ∈ C^{1×Q} | w = z A^+A, for some z ∈ C^{1×Q} } ⊆ R(A). From R(A) ⊆ R(A^+A) and R(A^+A) ⊆ R(A), the first equality of (33) follows. From (7) and the definition of C(A), it follows that C(A) = { w ∈ C^{N×1} | w = A A^+ A z, for some z ∈ C^{Q×1} } ⊆ C(AA^+). From the definition of C(AA^+), it follows that C(AA^+) = { w ∈ C^{N×1} | w = A A^+ z, for some z ∈ C^{N×1} } ⊆ C(A). From C(A) ⊆ C(AA^+) and C(AA^+) ⊆ C(A), the second equality of (33) follows. The third equation of (33) is a direct consequence of the first two equations of (33).
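As a small numerical illustration of (33): X^+X and XX^+ are the orthogonal projectors onto the row and column space of X, so the claimed equalities of spaces can be spot-checked by comparing projectors (the rank-r factorization below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, Q, r = 5, 4, 2
A = (rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r))) @ \
    (rng.standard_normal((r, Q)) + 1j * rng.standard_normal((r, Q)))   # rank r
pinv = np.linalg.pinv

row = lambda X: pinv(X) @ X        # projector onto the row space of X
col = lambda X: X @ pinv(X)        # projector onto the column space of X
print(np.allclose(row(A), row(pinv(A) @ A)))   # R(A) = R(A^+ A)
print(np.allclose(col(A), col(A @ pinv(A))))   # C(A) = C(A A^+)
```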

APPENDIX V

LEMMA SHOWING PROPERTIES OF THE MOORE-PENROSE INVERSE

Lemma 6: Let A ∈ C^{N×Q} and B ∈ C^{Q×R}. Then the following properties are valid for the Moore-Penrose inverse:

A^+ = A^{−1}, if A is non-singular,   (61)

(A^+)^+ = A,   (62)

(A^H)^+ = (A^+)^H,   (63)

A^H = A^H A A^+ = A^+ A A^H,   (64)

A^+ = A^H (A^+)^H A^+ = A^+ (A^+)^H A^H,   (65)

(A^H A)^+ = A^+ (A^+)^H,   (66)

(A A^H)^+ = (A^+)^H A^+,   (67)

A^+ = (A^H A)^+ A^H = A^H (A A^H)^+,   (68)

A^+ = (A^H A)^{−1} A^H, if A has full column rank,   (69)

A^+ = A^H (A A^H)^{−1}, if A has full row rank,   (70)

AB = 0_{N×R} ⇔ B^+ A^+ = 0_{R×N}.   (71)
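Before turning to the proof, several of these properties can be spot-checked numerically with NumPy's pinv; the rank-deficient A and the construction of B with AB = 0 below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
N, Q, R, r = 5, 4, 3, 2
A = (rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r))) @ \
    (rng.standard_normal((r, Q)) + 1j * rng.standard_normal((r, Q)))   # rank-deficient A
pinv = np.linalg.pinv
H = lambda X: X.conj().T

print(np.allclose(pinv(pinv(A)), A))                       # (62)
print(np.allclose(pinv(H(A)), H(pinv(A))))                 # (63)
print(np.allclose(H(A), H(A) @ A @ pinv(A)))               # (64), first part
print(np.allclose(pinv(A), H(A) @ H(pinv(A)) @ pinv(A)))   # (65), first part
print(np.allclose(pinv(H(A) @ A), pinv(A) @ H(pinv(A))))   # (66)
print(np.allclose(pinv(A), pinv(H(A) @ A) @ H(A)))         # (68), first part

# (71): build B with columns in the null space of A, so that A B = 0.
B = (np.eye(Q) - pinv(A) @ A) @ (rng.standard_normal((Q, R))
                                 + 1j * rng.standard_normal((Q, R)))
print(np.allclose(A @ B, 0), np.allclose(pinv(B) @ pinv(A), 0))   # True True
```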

Proof: (61), (62), and (63) can be proved by direct insertion into Definition 3 of the Moore-Penrose inverse.

The first part of (64) can be proved as follows: A^H = A^H (A^H)^+ A^H = A^H (A^+)^H A^H = A^H (A A^+)^H = A^H A A^+, where the result from (63) was used. The second part of (64) can be proved in a similar way: A^H = A^H (A^H)^+ A^H = A^H (A^+)^H A^H = (A^+ A)^H A^H = A^+ A A^H. The first part of (65) can be shown by A^+ = A^+ A A^+ = (A^+ A)^H A^+ = A^H (A^+)^H A^+. The second part of (65) follows from A^+ = A^+ A A^+ = A^+ (A A^+)^H = A^+ (A^+)^H A^H.

(66) and (67) can be proved by using the results from (64) and (65) in the definition of the Moore-Penrose inverse.

(68) follows from (65), (66), and (67). (69) and (70) follow from (61), (68), and rank(A) = rank(A^H A) = rank(A A^H), which is taken from 0.4.6 in [22].

Now, (71) will be shown. Firstly, it is shown that AB = 0_{N×R} implies B^+ A^+ = 0_{R×N}. Assume that AB = 0_{N×R}.

From (68), it follows that:

B^+ A^+ = (B^H B)^+ B^H A^H (A A^H)^+.   (72)

H H + + AB = 0N×R leads to B A = 0R×N , and then (72) yields B A = 0R×N . Secondly, it will be shown that

+ + + + B A = 0R×N implies AB = 0N×R. Assume that B A = 0R×N . Using the implication just proved, i.e., if

+ + CD = 0M×P , then D C = 0P ×M , where M and P are integers given by the size of the matrices C and D, + + + + gives A B = 0N×R, and then the desired result follows from (62).

REFERENCES

[1] A. Paulraj, R. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications. Cambridge, UK: Cambridge University Press, 2003.
[2] F. J. González-Vázquez, “The differentiation of functions of conjugate complex variables: Application to power network analysis,” IEEE Trans. Education, vol. 31, no. 4, pp. 286–291, Nov. 1988.
[3] S. Alexander, “A derivation of the complex fast Kalman algorithm,” IEEE Trans. Acoust., Speech, Signal Process., vol. 32, no. 6, pp. 1230–1232, Dec. 1984.
[4] A. I. Hanna and D. P. Mandic, “A fully adaptive normalized nonlinear gradient descent algorithm for complex-valued nonlinear adaptive filters,” IEEE Trans. Signal Process., vol. 51, no. 10, pp. 2540–2549, Oct. 2003.
[5] E. Kreyszig, Advanced Engineering Mathematics, 7th ed. New York, USA: John Wiley & Sons, Inc., 1993.
[6] J. W. Brewer, “Kronecker products and matrix calculus in system theory,” IEEE Trans. Circuits Syst., vol. CAS-25, no. 9, pp. 772–781, Sept. 1978.
[7] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics. Essex, England: John Wiley & Sons, Inc., 1988.
[8] D. A. Harville, Matrix Algebra from a Statistician’s Perspective. Springer-Verlag, 1997, corrected second printing, 1999.
[9] T. P. Minka. (December 28, 2000) Old and new matrix algebra useful for statistics. [Online]. Available: http://research.microsoft.com/~minka/papers/matrix/
[10] K. Kreutz-Delgado, “Real vector derivatives and gradients,” Dept. of Electrical and Computer Engineering, UC San Diego, Tech. Rep. Course Lecture Supplement No. ECE275A, December 5, 2005, http://dsp.ucsd.edu/~kreutz/PEI05.html.
[11] S. Haykin, Adaptive Filter Theory, 4th ed. Englewood Cliffs, New Jersey, USA: Prentice Hall, 1991.
[12] A. H. Sayed, Fundamentals of Adaptive Filtering. IEEE Computer Society Press, 2003.
[13] M. H. Hayes, Statistical Digital Signal Processing and Modeling. John Wiley & Sons, Inc., 1996.
[14] A. van den Bos, “Complex gradient and Hessian,” IEE Proc., Vision, Image and Signal Process., vol. 141, no. 6, pp. 380–383, Dec. 1994.
[15] D. H. Brandwood, “A complex gradient operator and its application in adaptive array theory,” IEE Proc., Parts F and H, vol. 130, no. 1, pp. 11–16, Feb. 1983.
[16] M. Brookes. Matrix reference manual. [Online]. Available: http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html
[17] K. Kreutz-Delgado, “The complex gradient operator and the CR-calculus,” Dept. of Electrical and Computer Engineering, UC San Diego, Tech. Rep. Course Lecture Supplement No. ECE275A, Sept.–Dec. 2005, http://dsp.ucsd.edu/~kreutz/PEI05.html.
[18] G. Jöngren, M. Skoglund, and B. Ottersten, “Combining beamforming and orthogonal space-time block coding,” IEEE Trans. Inform. Theory, vol. 48, no. 3, pp. 611–627, Mar. 2002.
[19] D. Gesbert, M. Shafi, D. Shiu, P. J. Smith, and A. Naguib, “From theory to practice: An overview of MIMO space-time coded wireless systems,” IEEE J. Select. Areas Commun., vol. 21, no. 3, pp. 281–302, Apr. 2003.
[20] D. H. Johnson and D. A. Dudgeon, Array Signal Processing: Concepts and Techniques. Prentice-Hall, Inc., 1993.
[21] W. Wirtinger, “Zur formalen Theorie der Funktionen von mehr komplexen Veränderlichen,” Mathematische Annalen, vol. 97, pp. 357–375, 1927.
[22] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, UK: Cambridge University Press, 1985, reprinted 1999.
[23] ——, Topics in Matrix Analysis. Cambridge, UK: Cambridge University Press, 1991, reprinted 1999.
[24] D. G. Luenberger, Introduction to Linear and Nonlinear Programming. Reading, Massachusetts: Addison–Wesley, 1973.
[25] N. Young, An Introduction to Hilbert Space. Cambridge, UK: Cambridge University Press, 1990, reprinted edition.
[26] C. H. Edwards and D. E. Penney, Calculus and Analytic Geometry, 2nd ed. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1986.
[27] H. Sampath and A. Paulraj, “Linear precoding for space-time coded systems with known fading correlations,” IEEE Commun. Lett., vol. 6, no. 6, pp. 239–241, June 2002.
[28] A. Hjørungnes and D. Gesbert, “Precoding of orthogonal space-time block codes in arbitrarily correlated MIMO channels: Iterative and closed-form solutions,” IEEE Trans. Wireless Commun., 2005, accepted 11.10.2005.
[29] T. J. Abatzoglou, J. M. Mendel, and G. A. Harada, “The constrained total least squares technique and its applications to harmonic superresolution,” IEEE Trans. Signal Process., vol. 39, no. 5, pp. 1070–1087, May 1991.
[30] G. Yan and H. Fan, “A Newton-like algorithm for complex variables with applications in blind equalization,” IEEE Trans. Signal Process., vol. 48, no. 2, pp. 553–556, Feb. 2000.