
Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2007, Article ID 13659, 8 pages
doi:10.1155/2007/13659

Research Article

A Hub Matrix Theory and Applications to Wireless Communications

H. T. Kung^1 and B. W. Suter^{1,2}

1 Harvard School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
2 US Air Force Research Laboratory, Rome, NY 13440, USA

Received 24 July 2006; Accepted 22 January 2007

Recommended by Sharon Gannot

This paper considers communications and network systems whose properties are characterized by the gaps of the leading eigenvalues of A^H A for a matrix A. It is shown that a sufficient and necessary condition for a large eigen-gap is that A is a "hub" matrix in the sense that it has dominant columns. Some applications of this hub theory in multiple-input and multiple-output (MIMO) wireless systems are presented.

Copyright © 2007 H. T. Kung and B. W. Suter. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

There are many communications and network systems whose properties are characterized by the eigenstructure of a matrix of the form A^H A, also known as the Gram matrix of A, where A is a matrix with real or complex entries. For example, for a communications system, A could be a channel matrix, usually denoted H. The capacity of such a system is related to the eigenvalues of H^H H [1]. In the area of web page ranking, with entries of A representing hyperlinks, Kleinberg [2] shows that eigenvectors corresponding to the largest eigenvalues of A^T A give the rankings of the most useful (authority) or popular (hub) web pages. Using a reputation system that parallels Kleinberg's work, Kung and Wu [3] developed an eigenvector-based peer-to-peer (P2P) network user reputation ranking in order to provide services to P2P users based on past contributions (reputation) and to avoid "freeloaders." Furthermore, the rate of convergence in the iterative computation of reputations is determined by the gap of the leading two eigenvalues of A^H A.

The recognition that the eigenstructure of A^H A determines the properties of these communications and network systems motivates the work of this paper. We will develop a theoretical framework, called a hub matrix theory, which allows us to predict the eigenstructure of A^H A by examining A directly. We will prove sufficient and necessary conditions for the existence of a large gap between the largest and the second largest eigenvalues of A^H A. Finally, we apply the "hub" theory and our mathematical results to multiple-input and multiple-output (MIMO) wireless systems.

2. HUB MATRIX THEORY

It is instructive to conduct a thought experiment on a computation process before we introduce our hub matrix theory. The process iteratively computes the values for a set of variables, which, for example, could be beamforming weights in a beamforming communication system. Figure 1 depicts an example of this process: variable X uses and contributes to variables U2 and U4, variable Y uses and contributes to variables U3 and U5, and variable Z uses and contributes to all variables U1, ..., U6. We say variable Z is a "hub" in the sense that the variables involved in Z's computation constitute a superset of those involved in the computation of any other variable. The dominance is illustrated graphically in Figure 1.

We can describe the computation process in matrix notation. Let

    A = [ 0 0 1
          1 0 1
          0 1 1
          1 0 1
          0 1 1
          0 0 1 ].    (1)
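As a concrete check, the use-and-contribute process just described can be simulated numerically. The following sketch is ours, not part of the original paper; it assumes NumPy and builds the matrix A of (1), then repeats the cycle in which the variables contribute to and recompute from U1, ..., U6, normalizing at each pass:

```python
import numpy as np

# Matrix A of (1): rows correspond to U1, ..., U6 and columns to X, Y, Z.
# Column j has a 1 in row i when the j-th variable uses/contributes to Ui.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

# The non-hub columns (X, Y) are orthogonal to each other, and the
# hub column (Z) has the largest Euclidean norm.
assert A[:, 0] @ A[:, 1] == 0
assert np.linalg.norm(A[:, 2]) > max(np.linalg.norm(A[:, 0]),
                                     np.linalg.norm(A[:, 1]))

x = np.ones(3)                 # initial values of (X, Y, Z)
for _ in range(50):
    u = A @ x                  # X, Y, Z contribute to U1, ..., U6
    x = A.T @ u                # X, Y, Z recompute their values from the Ui
    x /= np.linalg.norm(x)     # normalize to keep the iteration bounded

print(x)                       # the hub variable Z carries the largest weight
```

This is exactly a power iteration on S = A^T A, so x converges to the eigenvector associated with the largest eigenvalue of S, in which the hub variable Z dominates.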
This process performs two steps alternately (cf. Figure 1):

(1) X, Y, and Z contribute to the variables in their respective regions;
(2) X, Y, and Z compute their values using the variables in their respective regions.

The first step (1) is (U1, U2, ..., U6)^T ← A (X, Y, Z)^T and the next step (2) is (X, Y, Z)^T ← A^T (U1, U2, ..., U6)^T. Thus, the computational process performs the iteration (X, Y, Z)^T ← S (X, Y, Z)^T, where S is defined as follows:

    S = A^T A = [ 2 0 2
                  0 2 2
                  2 2 6 ].    (2)

Note that an arrowhead matrix S, as defined below, has emerged. Furthermore, note that matrix A exhibits the hub property of Z in Figure 1 in view of the fact that the last column of A consists of all 1's, whereas the other columns consist of only a few 1's.

Definition 1 (arrowhead matrix). Let S ∈ C^{m×m} be a given Hermitian matrix. S is called an arrowhead matrix if

    S = [ D    c
          c^H  b ],    (3)

where D = diag(d^(1), ..., d^(m−1)) ∈ R^{(m−1)×(m−1)} is a real diagonal matrix, c = (c^(1), ..., c^(m−1)) ∈ C^{m−1} is a complex vector, and b ∈ R is a real number.

The eigenvalues of an arbitrary square matrix are invariant under similarity transformations. Therefore, we can, with no loss of generality, arrange the diagonal elements of D to be ordered so that d^(i) ≤ d^(i+1) for i = 1, ..., m − 2. For details concerning arrowhead matrices, see, for example, [4].

Definition 2 (hub matrix). A matrix A ∈ C^{n×m} is called a candidate-hub matrix if m − 1 of its columns are orthogonal to each other with respect to the Euclidean inner product. If, in addition, the remaining column has its Euclidean norm greater than or equal to that of any other column, then the matrix A is called a hub matrix and this remaining column is called the hub column. We are normally interested in hub matrices where the hub column has much larger magnitude than the other columns. (As we show later in Theorems 4 and 10, in this case the corresponding arrowhead matrices will have large eigen-gaps.)

Figure 1: Graphical representation of hub concept.

In this paper, we study the eigenvalues of S = A^H A, where A is a hub matrix. Since the eigenvalues of S are invariant under similarity transformations of S, we can permute the columns of the hub matrix so that its last column is the hub column without loss of generality. For the rest of this paper, we will denote the columns of a hub matrix A by a1, ..., am, and assume that columns a1, ..., a_{m−1} are orthogonal to each other, that is, a_i^H a_j = 0 for i ≠ j and i, j = 1, ..., m − 1, and that column a_m is the hub column. The matrix A introduced in the context of the graphical model of Figure 1 is such a hub matrix.

In Section 4, we will relax the orthogonality condition of a hub matrix by introducing the notion of hub and arrowhead dominant matrices.

Theorem 1. Let A ∈ C^{n×m} and let S ∈ C^{m×m} be the Gram matrix of A, that is, S = A^H A. S is an arrowhead matrix if and only if A is a candidate-hub matrix.

Proof. Suppose A is a candidate-hub matrix. Since S = A^H A, the entries of S are s^(i,j) = a_i^H a_j for i, j = 1, ..., m. By Definition 2 of a candidate-hub matrix, the nonhub columns of A are orthogonal, that is, a_i^H a_j = 0 for i ≠ j and i, j = 1, ..., m − 1. Since S is Hermitian, the transpose of the last column is the complex conjugate of the last row, and the diagonal elements of S are real numbers. Therefore, S = A^H A is an arrowhead matrix by Definition 1.

Suppose S = A^H A is an arrowhead matrix. Note that the components of the matrix S of Definition 1 can be represented in terms of the inner products of the columns of A, that is, b = a_m^H a_m, d^(i) = a_i^H a_i, and c^(i) = a_i^H a_m for i = 1, ..., m − 1. Since S is an arrowhead matrix, all other off-diagonal entries of S, s^(i,j) = a_i^H a_j for i ≠ j and i, j = 1, ..., m − 1, are zero. Thus, a_i^H a_j = 0 if i ≠ j and i, j = 1, ..., m − 1. So A is a candidate-hub matrix by Definition 2.

Before proving our main result in Theorem 4, we first restate some well-known results which will be needed for the proof.

Theorem 2 (interlacing eigenvalues theorem for bordered matrices). Let U ∈ C^{(m−1)×(m−1)} be a given Hermitian matrix, let y ∈ C^{m−1} be a given vector, and let a ∈ R be a given real number. Let V ∈ C^{m×m} be the Hermitian matrix obtained by bordering U with y and a as follows:

    V = [ U    y
          y^H  a ].    (4)

Let the eigenvalues of V and U be denoted by {λ_i} and {μ_i}, respectively, and assume that they have been arranged in increasing order, that is, λ_1 ≤ ··· ≤ λ_m and μ_1 ≤ ··· ≤ μ_{m−1}. Then

    λ_1 ≤ μ_1 ≤ λ_2 ≤ ··· ≤ λ_{m−1} ≤ μ_{m−1} ≤ λ_m.    (5)

Proof. See [5, page 189].

Definition 3 (majorizing vectors). Let α ∈ R^m and β ∈ R^m be given vectors. If we arrange the entries of α and β in increasing order, that is, α^(1) ≤ ··· ≤ α^(m) and β^(1) ≤ ··· ≤ β^(m), then vector β is said to majorize vector α if

    Σ_{i=1}^{k} β^(i) ≥ Σ_{i=1}^{k} α^(i)  for k = 1, ..., m,    (6)

with equality for k = m.

This result implies that d^(m−1) + b ≥ λ_1 + λ_m ≥ λ_m. By noting that d^(m−2) ≤ λ_{m−1}, we have

    EigenGap_1(S) = λ_m / λ_{m−1} ≤ (d^(m−1) + b) / d^(m−2) = (‖a_{m−1}‖² + ‖a_m‖²) / ‖a_{m−2}‖².
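The interlacing inequality (5) and the eigen-gap bound can be checked numerically on the example matrix S of (2). The following NumPy sketch is ours, not from the paper; it takes the 2×2 diagonal block of S as the bordered matrix U (the D of Definition 1) and compares the eigenvalue sequences with a small floating-point tolerance:

```python
import numpy as np

# S = A^T A for the hub matrix A of (1); cf. (2). Its 2x2 diagonal
# block plays the role of D, and b = 6 is the arrowhead tip.
S = np.array([[2., 0., 2.],
              [0., 2., 2.],
              [2., 2., 6.]])

lam = np.linalg.eigvalsh(S)          # eigenvalues of S, ascending: lambda_i
mu = np.linalg.eigvalsh(S[:2, :2])   # eigenvalues of the block D: mu_i

# Interlacing (5): lambda_1 <= mu_1 <= lambda_2 <= mu_2 <= lambda_3,
# checked up to floating-point round-off.
eps = 1e-9
assert lam[0] <= mu[0] + eps
assert mu[0] <= lam[1] + eps
assert lam[1] <= mu[1] + eps
assert mu[1] <= lam[2] + eps

# Eigen-gap bound: lambda_m / lambda_{m-1} <= (d^(m-1) + b) / d^(m-2).
gap = lam[2] / lam[1]
bound = (S[1, 1] + S[2, 2]) / S[0, 0]
print(gap, bound)
assert gap <= bound
```

For this example the eigenvalues of S are 4 − 2√3, 2, and 4 + 2√3, so the gap λ₃/λ₂ ≈ 3.73 indeed stays below the bound (2 + 6)/2 = 4.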