
Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2007, Article ID 13659, 8 pages
doi:10.1155/2007/13659

Research Article

A Hub Matrix Theory and Applications to Wireless Communications

H. T. Kung¹ and B. W. Suter¹,²

¹ Harvard School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
² US Air Force Research Laboratory, Rome, NY 13440, USA

Received 24 July 2006; Accepted 22 January 2007

Recommended by Sharon Gannot

This paper considers communications and network systems whose properties are characterized by the gaps of the leading eigenvalues of A^H A for a matrix A. It is shown that a sufficient and necessary condition for a large eigengap is that A is a "hub" matrix in the sense that it has dominant columns. Some applications of this hub matrix theory in multiple-input and multiple-output (MIMO) wireless systems are presented.

Copyright © 2007 H. T. Kung and B. W. Suter. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

There are many communications and network systems whose properties are characterized by the eigenstructure of a matrix of the form A^H A, also known as the Gram matrix of A, where A is a matrix with real or complex entries. For example, for a communications system, A could be a channel matrix, usually denoted H. The capacity of such a system is related to the eigenvalues of H^H H [1]. In the area of web page ranking, with entries of A representing hyperlinks, Kleinberg [2] shows that eigenvectors corresponding to the largest eigenvalues of A^T A give the rankings of the most useful (authority) or popular (hub) web pages. Using a reputation system that parallels Kleinberg's work, Kung and Wu [3] developed an eigenvector-based peer-to-peer (P2P) network user reputation ranking in order to provide services to P2P users based on past contributions (reputation) and to avoid "freeloaders." Furthermore, the rate of convergence in the iterative computation of reputations is determined by the gap between the leading two eigenvalues of A^H A.

The recognition that the eigenstructure of A^H A determines the properties of these communications and network systems motivates the work of this paper. We will develop a theoretical framework, called a hub matrix theory, which allows us to predict the eigenstructure of A^H A by examining A directly. We will prove sufficient and necessary conditions for the existence of a large gap between the largest and the second largest eigenvalues of A^H A. Finally, we apply the hub theory and our mathematical results to multiple-input and multiple-output (MIMO) wireless systems.

2. HUB MATRIX THEORY

It is instructive to conduct a thought experiment on a computation process before we introduce our hub matrix theory. The process iteratively computes the values for a set of variables, which for example could be beamforming weights in a beamforming communication system. Figure 1 depicts an example of this process: variable X uses and contributes to variables U2 and U4, variable Y uses and contributes to variables U3 and U5, and variable Z uses and contributes to all variables U1, ..., U6. We say variable Z is a "hub" in the sense that the variables involved in Z's computation constitute a superset of those involved in the computation of any other variable. The dominance is illustrated graphically in Figure 1.

[Figure 1: Graphical representation of hub concept.]

We can describe the computation process in matrix notation. Let

        ⎛ 0 0 1 ⎞
        ⎜ 1 0 1 ⎟
        ⎜ 0 1 1 ⎟
    A = ⎜ 1 0 1 ⎟ .   (1)
        ⎜ 0 1 1 ⎟
        ⎝ 0 0 1 ⎠

This process performs two steps alternately (cf. Figure 1):

(1) X, Y, and Z contribute to the variables in their respective regions;
(2) X, Y, and Z compute their values using the variables in their respective regions.

The first step (1) is (U1, U2, ..., U6)^T ← A (X, Y, Z)^T, and the next step (2) is (X, Y, Z)^T ← A^T (U1, U2, ..., U6)^T. Thus, the computational process performs the iteration (X, Y, Z)^T ← S (X, Y, Z)^T, where S is defined as follows:

                ⎛ 2 0 2 ⎞
    S = A^T A = ⎜ 0 2 2 ⎟ .   (2)
                ⎝ 2 2 6 ⎠

Note that an arrowhead matrix S, as defined below, has emerged. Furthermore, note that matrix A exhibits the hub property of Z in Figure 1 in view of the fact that the last column of A consists of all 1's, whereas the other columns consist of only a few 1's.
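To make the thought experiment concrete, here is a minimal numpy sketch (ours, not part of the original paper; the variable names are illustrative). It builds the matrix A of (1), forms S = A^T A of (2), and runs the two-step iteration, which behaves as a power iteration: the iterate converges to the eigenvector of S associated with its largest eigenvalue.

```python
import numpy as np

# Matrix A from (1): columns correspond to X, Y, Z; rows to U1..U6.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

S = A.T @ A          # Gram matrix; equals the arrowhead matrix of (2)
print(S)             # [[2 0 2], [0 2 2], [2 2 6]]

# Two-step iteration with normalization to keep the iterate bounded.
v = np.ones(3)
for _ in range(50):
    u = A @ v        # step (1): X, Y, Z contribute to U1..U6
    v = A.T @ u      # step (2): X, Y, Z recompute their values from U1..U6
    v /= np.linalg.norm(v)

# v now approximates the dominant eigenvector of S.
w, V = np.linalg.eigh(S)
print(v, V[:, -1])   # same direction, up to sign
```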

Definition 1 (arrowhead matrix). Let S ∈ C^{m×m} be a given Hermitian matrix. S is called an arrowhead matrix if

    S = ⎛ D    c ⎞
        ⎝ c^H  b ⎠ ,   (3)

where D = diag(d^(1), ..., d^(m-1)) ∈ R^{(m-1)×(m-1)} is a real diagonal matrix, c = (c^(1), ..., c^(m-1)) ∈ C^{m-1} is a complex vector, and b ∈ R is a real number.

The eigenvalues of an arbitrary matrix are invariant under similarity transformations. Therefore, we can with no loss of generality arrange the diagonal elements of D so that d^(i) ≤ d^(i+1) for i = 1, ..., m - 2. For details concerning arrowhead matrices, see for example [4].

Definition 2 (hub matrix). A matrix A ∈ C^{n×m} is called a candidate-hub matrix if m - 1 of its columns are orthogonal to each other with respect to the Euclidean inner product. If in addition the remaining column has its Euclidean norm greater than or equal to that of any other column, then the matrix A is called a hub matrix and this remaining column is called the hub column. We are normally interested in hub matrices where the hub column has much larger magnitude than the other columns. (As we show later in Theorems 4 and 10, in this case the corresponding arrowhead matrices will have large eigengaps.)

In this paper, we study the eigenvalues of S = A^H A, where A is a hub matrix. Since the eigenvalues of S are invariant under similarity transformations of S, we can permute the columns of the hub matrix so that its last column is the hub column without loss of generality. For the rest of this paper, we will denote the columns of a hub matrix A by a_1, ..., a_m, and assume that the columns a_1, ..., a_{m-1} are orthogonal to each other, that is, a_i^H a_j = 0 for i ≠ j and i, j = 1, ..., m - 1, and that column a_m is the hub column. The matrix A introduced in the context of the graphical model of Figure 1 is such a hub matrix.

In Section 4, we will relax the orthogonality condition of a hub matrix by introducing the notions of hub dominant and arrowhead dominant matrices.

Theorem 1. Let A ∈ C^{n×m} and let S ∈ C^{m×m} be the Gram matrix of A, that is, S = A^H A. Then S is an arrowhead matrix if and only if A is a candidate-hub matrix.

Proof. Suppose A is a candidate-hub matrix. Since S = A^H A, the entries of S are s^(i,j) = a_i^H a_j for i, j = 1, ..., m. By Definition 2 of a candidate-hub matrix, the nonhub columns of A are orthogonal, that is, a_i^H a_j = 0 for i ≠ j and i, j = 1, ..., m - 1. Since S is Hermitian, the transpose of the last column is the complex conjugate of the last row, and the diagonal elements of S are real numbers. Therefore, S = A^H A is an arrowhead matrix by Definition 1.

Suppose S = A^H A is an arrowhead matrix. Note that the components of the matrix S of Definition 1 can be represented in terms of the inner products of the columns of A, that is, b = a_m^H a_m, d^(i) = a_i^H a_i, and c^(i) = a_i^H a_m for i = 1, ..., m - 1. Since S is an arrowhead matrix, all other off-diagonal entries of S, s^(i,j) = a_i^H a_j for i ≠ j and i, j = 1, ..., m - 1, are zero. Thus, a_i^H a_j = 0 if i ≠ j and i, j = 1, ..., m - 1. So, A is a candidate-hub matrix by Definition 2.
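Theorem 1 is easy to check numerically. The following sketch (ours; the predicate names and tolerance are illustrative) tests the two equivalent conditions on the example of (1) and (2): the nonhub columns of A are mutually orthogonal exactly when S = A^H A is zero in every off-diagonal position outside its last row and column.

```python
import numpy as np

def is_candidate_hub(A, tol=1e-12):
    """True if the first m-1 columns of A are mutually orthogonal."""
    G = A[:, :-1].conj().T @ A[:, :-1]        # Gram matrix of the nonhub columns
    off = G - np.diag(np.diag(G))
    return np.max(np.abs(off)) < tol

def is_arrowhead(S, tol=1e-12):
    """True if S is Hermitian and its leading (m-1)x(m-1) block is diagonal."""
    D = S[:-1, :-1]
    off = D - np.diag(np.diag(D))
    return np.allclose(S, S.conj().T) and np.max(np.abs(off)) < tol

A = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1],
              [1, 0, 1], [0, 1, 1], [0, 0, 1]], dtype=float)
S = A.T @ A
print(is_candidate_hub(A), is_arrowhead(S))   # True True, as Theorem 1 predicts
```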

Before proving our main result in Theorem 4, we first restate some well-known results which will be needed for the proof.

Theorem 2 (interlacing eigenvalues theorem for bordered matrices). Let U ∈ C^{(m-1)×(m-1)} be a given Hermitian matrix, let y ∈ C^{m-1} be a given vector, and let a ∈ R be a given real number. Let V ∈ C^{m×m} be the Hermitian matrix obtained by bordering U with y and a as follows:

    V = ⎛ U    y ⎞
        ⎝ y^H  a ⎠ .   (4)

Let the eigenvalues of V and U be denoted by {λ_i} and {μ_i}, respectively, and assume that they have been arranged in increasing order, that is, λ_1 ≤ ··· ≤ λ_m and μ_1 ≤ ··· ≤ μ_{m-1}. Then

    λ_1 ≤ μ_1 ≤ λ_2 ≤ ··· ≤ λ_{m-1} ≤ μ_{m-1} ≤ λ_m.   (5)

Proof. See [5, page 189].

Definition 3 (majorizing vectors). Let α ∈ R^m and β ∈ R^m be given vectors. If we arrange the entries of α and β in increasing order, that is, α^(1) ≤ ··· ≤ α^(m) and β^(1) ≤ ··· ≤ β^(m), then vector β is said to majorize vector α if

    Σ_{i=1}^{k} β^(i) ≥ Σ_{i=1}^{k} α^(i)   for k = 1, ..., m,   (6)

with equality for k = m.

For details concerning majorizing vectors, see [5, pages 192-198]. The following theorem provides an important property expressed in terms of vector majorizing.

Theorem 3 (Schur-Horn theorem). Let V ∈ C^{m×m} be Hermitian. The vector of diagonal entries of V majorizes the vector of eigenvalues of V.

Proof. See [5, page 193].

Definition 4 (hub-gap). Let A ∈ C^{n×m} be a matrix with its columns denoted by a_1, ..., a_m with 0 < ‖a_1‖_2 ≤ ··· ≤ ‖a_m‖_2. For i = 1, ..., m - 1, the ith hub-gap of A is defined to be

    HubGap_i(A) = ‖a_{m-(i-1)}‖_2^2 / ‖a_{m-i}‖_2^2.   (7)

Definition 5 (eigengap). Let S ∈ C^{m×m} be a Hermitian matrix with its real eigenvalues denoted by λ_1, ..., λ_m with λ_1 ≤ ··· ≤ λ_m. For i = 1, ..., m - 1, the ith eigengap of S is defined to be

    EigenGap_i(S) = λ_{m-(i-1)} / λ_{m-i}.   (8)

Theorem 4. Let A ∈ C^{n×m} be a hub matrix with its columns denoted by a_1, ..., a_m and 0 < ‖a_1‖_2^2 ≤ ··· ≤ ‖a_m‖_2^2. Let S = A^H A ∈ C^{m×m} be the corresponding arrowhead matrix with its eigenvalues denoted by λ_1, ..., λ_m with 0 ≤ λ_1 ≤ ··· ≤ λ_m. Then

    HubGap_1(A) ≤ EigenGap_1(S) ≤ [HubGap_1(A) + 1] · HubGap_2(A).   (9)

Proof. Let T be the matrix formed from S by deleting its last row and column. This means that T is a diagonal matrix with diagonal elements ‖a_i‖_2^2 for i = 1, ..., m - 1. By Theorem 2, the eigenvalues of S interlace those of T, that is, λ_1 ≤ ‖a_1‖_2^2 ≤ λ_2 ≤ ··· ≤ λ_{m-1} ≤ ‖a_{m-1}‖_2^2 ≤ λ_m. Thus, λ_{m-1} is a lower bound for ‖a_{m-1}‖_2^2. By Theorem 3, the vector of diagonal values of S majorizes the vector of eigenvalues of S, that is, Σ_{i=1}^{k} d^(i) ≥ Σ_{i=1}^{k} λ_i for k = 1, ..., m - 1 and Σ_{i=1}^{m-1} d^(i) + b = Σ_{i=1}^{m} λ_i. So, b ≤ λ_m. Since b = ‖a_m‖_2^2, λ_m is an upper bound for ‖a_m‖_2^2. Hence, ‖a_m‖_2^2 / ‖a_{m-1}‖_2^2 ≤ λ_m / λ_{m-1}, or HubGap_1(A) ≤ EigenGap_1(S).

Again, by using Theorems 2 and 3, we have Σ_{i=1}^{m-1} d^(i) + b = Σ_{i=1}^{m} λ_i and λ_1 ≤ d^(1) ≤ λ_2 ≤ d^(2) ≤ λ_3 ≤ ··· ≤ d^(m-2) ≤ λ_{m-1} ≤ d^(m-1) ≤ λ_m, and, as such,

    d^(1) + ··· + d^(m-2) + d^(m-1) + b = λ_1 + λ_2 + ··· + λ_{m-1} + λ_m ≥ λ_1 + d^(1) + ··· + d^(m-2) + λ_m.   (10)

This result implies that d^(m-1) + b ≥ λ_1 + λ_m ≥ λ_m. By noting that d^(m-2) ≤ λ_{m-1}, we have

    EigenGap_1(S) = λ_m / λ_{m-1} ≤ (d^(m-1) + b) / d^(m-2)
                  = (‖a_{m-1}‖_2^2 + ‖a_m‖_2^2) / ‖a_{m-2}‖_2^2
                  = ‖a_{m-1}‖_2^2 / ‖a_{m-2}‖_2^2 + (‖a_m‖_2^2 / ‖a_{m-1}‖_2^2) · (‖a_{m-1}‖_2^2 / ‖a_{m-2}‖_2^2)
                  = [HubGap_1(A) + 1] · HubGap_2(A).   (11)

By Theorem 4, we have the following result, where the notation "≫" means "much larger than."

Corollary 1. Let A ∈ C^{n×m} be a matrix with its columns a_1, ..., a_m satisfying 0 < ‖a_1‖_2^2 ≤ ··· ≤ ‖a_{m-1}‖_2^2 ≤ ‖a_m‖_2^2. Let S = A^H A ∈ C^{m×m} with its eigenvalues λ_1, ..., λ_m satisfying 0 ≤ λ_1 ≤ ··· ≤ λ_m. The following holds:

(1) if A is a hub matrix with ‖a_m‖_2 ≫ ‖a_{m-1}‖_2, then S is an arrowhead matrix with λ_m ≫ λ_{m-1}; and
(2) if S is an arrowhead matrix with λ_m ≫ λ_{m-1}, then A is a hub matrix with ‖a_m‖_2 ≫ ‖a_{m-1}‖_2 or ‖a_{m-1}‖_2 ≫ ‖a_{m-2}‖_2 or both.
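As a numerical sanity check on Theorem 4 (our sketch, not from the paper; the construction is illustrative), the following code builds a random hub matrix, computes HubGap_1, HubGap_2, and EigenGap_1, and confirms the two-sided bound (9).

```python
import numpy as np
rng = np.random.default_rng(0)

# Random hub matrix: m-1 orthogonal columns plus a dominant hub column.
n, m = 8, 4
Q, _ = np.linalg.qr(rng.standard_normal((n, m - 1)))    # orthonormal nonhub columns
A = np.hstack([Q * np.array([1.0, 1.5, 2.0]),           # scaled, still orthogonal
               5.0 * rng.standard_normal((n, 1))])      # hub column with large norm

norms2 = np.sort(np.linalg.norm(A, axis=0) ** 2)        # ||a_1||^2 <= ... <= ||a_m||^2
lam = np.sort(np.linalg.eigvalsh(A.conj().T @ A))       # eigenvalues of S = A^H A

hubgap1 = norms2[-1] / norms2[-2]
hubgap2 = norms2[-2] / norms2[-3]
eigengap1 = lam[-1] / lam[-2]
assert hubgap1 <= eigengap1 <= (hubgap1 + 1) * hubgap2  # inequality (9)
print(hubgap1, eigengap1, (hubgap1 + 1) * hubgap2)
```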

3. MIMO COMMUNICATIONS APPLICATION

A multiple-input multiple-output (MIMO) system with M_t transmit antennas and M_r receive antennas is depicted in Figure 2 [6, 7]. Assume the MIMO channel is modeled by the M_r × M_t channel propagation matrix H = (h_ij). The input-output relationship, given a transmitted symbol s, for this system is given by

    x = s z^H H w + z^H n.   (12)

The vectors w and z in the equation are called the beamforming and combining vectors, respectively, and they will be chosen to maximize the signal-to-noise ratio (SNR). We will model the noise vector n as having entries which are independent and identically distributed (i.i.d.) random variables with complex Gaussian distribution CN(0, 1). Without loss of generality, assume the average power of the transmit signal equals one, that is, E|s|^2 = 1. For the beamforming system described here, the signal-to-noise ratio γ after combining at the receiver is given by

    γ = |z^H H w|^2 / ‖z‖_2^2.   (13)

Without loss of generality, assume ‖z‖_2 = 1. With this assumption, the SNR becomes

    γ = |z^H H w|^2.   (14)

[Figure 2: MIMO block diagram (see [6, datapath portion of Figure 1]).]
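To fix the notation, here is a small simulation sketch of the input-output relationship (12) and the SNR expression (14) (ours; the dimensions and the random seed are arbitrary choices).

```python
import numpy as np
rng = np.random.default_rng(1)

Mr, Mt = 4, 3
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

w = np.ones(Mt) / np.sqrt(Mt)    # unit-norm beamforming vector
z = np.ones(Mr) / np.sqrt(Mr)    # unit-norm combining vector, ||z||_2 = 1
s = 1.0                          # unit-power symbol, E|s|^2 = 1
n = (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr)) / np.sqrt(2)  # CN(0,1)

x = s * (z.conj() @ H @ w) + z.conj() @ n    # received scalar, equation (12)
gamma = np.abs(z.conj() @ H @ w) ** 2        # SNR after combining, equation (14)
print(x, gamma)
```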

3.1. Maximum ratio combining

A receiver where z maximizes γ for a given w is known as a maximum ratio combining (MRC) receiver in the literature. By the Cauchy-Bunyakovskii-Schwarz inequality (see, e.g., [8, page 272]), we have

    |z^H H w|^2 ≤ ‖z‖_2^2 ‖H w‖_2^2.   (15)

Since we already assume ‖z‖_2 = 1,

    |z^H H w|^2 ≤ ‖H w‖_2^2.   (16)

Moreover, since in MRC we desire to maximize the SNR, we must choose z to be

    z_MRC = H w / ‖H w‖_2,   (17)

which implies that the SNR for MRC is

    γ^MRC = ‖H w‖_2^2.   (18)

3.2. Selection diversity transmission, generalized subset selection, and combined SDT/MRC and GSS/MRC

For a selection diversity transmission (SDT) [9] system, only the antenna that yields the largest SNR is selected for transmission at any instant of time. This means

    w = (δ_{1,f(1)}, ..., δ_{M_t,f(1)})^T,   (19)

where the Kronecker impulse δ_{i,j} is defined as δ_{i,j} = 1 if i = j and δ_{i,j} = 0 if i ≠ j, and f(1) represents the value of the index x that maximizes Σ_i |h_{i,x}|^2. Thus, the SNR for the combined SDT/MRC communications system is

    γ^SDT/MRC = ‖h_{f(1)}‖_2^2.   (20)

By definition, a generalized subset selection (GSS) [10] system powers those k transmitters which yield the top k SNR values at the receiver for some k > 1. That is, if f(1), f(2), ..., f(k) stand for the indices of these transmitters, then w_{f(i)} = 1/√k for i = 1, ..., k, and all other entries of w are zero. It follows that, for the combined GSS/MRC communications system, the SNR gain is given by

    γ^GSS/MRC = ‖ (1/√k) Σ_{i=1}^{k} h_{f(i)} ‖_2^2.   (21)

In the limiting case when k = M_t, GSS becomes equal gain transmission (EGT) [6, 7], which requires all M_t transmitters to be equally powered, that is, w_{f(i)} = 1/√M_t for i = 1, ..., M_t. Then, for the combined EGT/MRC communications system, the SNR gain takes the expression

    γ^EGT/MRC = ‖ (1/√M_t) Σ_{i=1}^{M_t} h_{f(i)} ‖_2^2.   (22)

3.3. Maximum ratio transmission and combined MRT/MRC

Suppose there are no constraints placed on the form of the vector w. Let us reexamine the expression of the SNR gain γ^MRC. Note that

    γ^MRC = ‖H w‖_2^2 = (H w)^H (H w) = w^H (H^H H) w.   (23)

With the assumption that ‖w‖_2 = 1, the above expression is maximized under maximum ratio transmission (MRT) [9] (see, e.g., [5, page 295]), that is, when

    w = w_m,   (24)

where w_m is the normalized eigenvector corresponding to the largest eigenvalue λ_m of H^H H. Thus, for an MRT/MRC system, we have

    γ^MRT/MRC = λ_m.   (25)
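The closed forms (18), (20), (21), and (25) translate directly into code. The sketch below (ours; the setup is illustrative) computes the SDT/MRC, GSS/MRC, and MRT/MRC SNR gains for one random channel; the last assertion reflects that single-antenna selection can never beat the unconstrained MRT optimum.

```python
import numpy as np
rng = np.random.default_rng(2)

Mr, Mt, k = 4, 6, 2
H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)

col_power = np.linalg.norm(H, axis=0) ** 2        # sum_i |h_{i,x}|^2 per antenna x
order = np.argsort(col_power)[::-1]               # antennas sorted by SNR, best first

gamma_sdt = col_power[order[0]]                   # (20): best single antenna
h_top = H[:, order[:k]]
gamma_gss = np.linalg.norm(h_top.sum(axis=1) / np.sqrt(k)) ** 2   # (21)
gamma_mrt = np.linalg.eigvalsh(H.conj().T @ H)[-1]                # (25): largest eigenvalue

print(gamma_sdt, gamma_gss, gamma_mrt)
assert gamma_sdt <= gamma_mrt   # lambda_m dominates every diagonal entry of H^H H
```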

3.4. Performance comparison between SDT/MRC and MRT/MRC

Theorem 5. Let H ∈ C^{n×m} be a hub matrix with its columns denoted by h_1, ..., h_m and 0 < ‖h_1‖_2^2 ≤ ··· ≤ ‖h_{m-1}‖_2^2 ≤ ‖h_m‖_2^2. Let γ^SDT/MRC and γ^MRT/MRC be the SNR gains for SDT/MRC and MRT/MRC, respectively. Then

    HubGap_1(H) / (HubGap_1(H) + 1) ≤ γ^SDT/MRC / γ^MRT/MRC ≤ 1.   (26)

Proof. We note that the matrix A in the hub matrix theory of Section 2 corresponds to the matrix H here, and the column a_i of A corresponds to the column h_i of H for i = 1, ..., m. From the proof of Theorem 4, we note b = ‖a_m‖_2^2 ≤ λ_m, or ‖h_m‖_2^2 ≤ λ_m. It follows that

    γ^SDT/MRC / γ^MRT/MRC ≤ 1.   (27)

To derive a lower bound for γ^SDT/MRC / γ^MRT/MRC, we note from the proof of Theorem 4 that λ_m ≤ d^(m-1) + b. This means that

    γ^MRT/MRC ≤ ‖a_{m-1}‖_2^2 + ‖a_m‖_2^2 = ‖h_{m-1}‖_2^2 + ‖h_m‖_2^2.   (28)

Thus

    γ^SDT/MRC / γ^MRT/MRC ≥ ‖h_m‖_2^2 / (‖h_{m-1}‖_2^2 + ‖h_m‖_2^2) = HubGap_1(H) / (HubGap_1(H) + 1).   (29)

The inequality γ^SDT/MRC / γ^MRT/MRC ≤ 1 in Theorem 5 reflects the fact that in the SDT/MRC system, w is chosen to be a particular unit vector rather than an optimal choice. The other inequality of Theorem 5, HubGap_1(H)/(HubGap_1(H) + 1) ≤ γ^SDT/MRC / γ^MRT/MRC, implies that the SNR for SDT/MRC approaches that for MRT/MRC when H is a hub matrix with a dominant hub column. More precisely, we have the following result.

Corollary 2. Let H ∈ C^{n×m} be a hub matrix with its columns denoted by h_1, ..., h_m and 0 < ‖h_1‖_2^2 ≤ ··· ≤ ‖h_m‖_2^2. Let γ^SDT/MRC and γ^MRT/MRC be the SNR for SDT/MRC and MRT/MRC, respectively. Then, as HubGap_1(H) increases, γ^MRT/MRC / γ^SDT/MRC approaches one at a rate of at least HubGap_1(H)/(HubGap_1(H) + 1).

3.5. GSS-MRT/MRC and performance comparison with MRT/MRC

Using an analysis similar to the one above, we can derive performance bounds for a recently discovered communication system that incorporates antenna selection with MRT on the transmission side while applying MRC on the receiver side [11, 12]. This approach will be called GSS-MRT/MRC here. Given a GSS scheme that powers those k transmitters which yield the top k highest SNR values, a GSS-MRT/MRC system is defined to be an MRT/MRC system applied to these k transmitters. Let f(1), f(2), ..., f(k) be the indices of these k transmitters, and H̃ the matrix formed by the columns h_{f(i)} of H for i = 1, ..., k. It is easy to see that the SNR for GSS-MRT/MRC is

    γ^GSS-MRT/MRC = λ̃_m,   (30)

where λ̃_m is the largest eigenvalue of H̃^H H̃.

Theorem 6. Let H ∈ C^{n×m} be a hub matrix with its columns denoted by h_1, ..., h_m and 0 < ‖h_1‖_2^2 ≤ ··· ≤ ‖h_{m-1}‖_2^2 ≤ ‖h_m‖_2^2. Let γ^GSS-MRT/MRC and γ^MRT/MRC be the SNR values for GSS-MRT/MRC and MRT/MRC, respectively. Then

    HubGap_1(H) / (HubGap_1(H) + 1) ≤ γ^GSS-MRT/MRC / γ^MRT/MRC ≤ (HubGap_1(H) + 1) / HubGap_1(H).   (31)

Proof. Since 0 < ‖h_1‖_2^2 ≤ ··· ≤ ‖h_{m-1}‖_2^2 ≤ ‖h_m‖_2^2, H̃ consists of the last k columns of H. Moreover, since H is a hub matrix, so is H̃. From the proof of Theorem 4, we note that both λ_m and λ̃_m are bounded above by ‖h_{m-1}‖_2^2 + ‖h_m‖_2^2 and below by ‖h_m‖_2^2. It follows that

    HubGap_1(H) / (HubGap_1(H) + 1) = ‖h_m‖_2^2 / (‖h_{m-1}‖_2^2 + ‖h_m‖_2^2)
    ≤ γ^GSS-MRT/MRC / γ^MRT/MRC = λ̃_m / λ_m
    ≤ (‖h_{m-1}‖_2^2 + ‖h_m‖_2^2) / ‖h_m‖_2^2 = (HubGap_1(H) + 1) / HubGap_1(H).   (32)
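A quick numerical check of the bounds in Theorems 5 and 6 (our sketch, not part of the paper). It builds a hub channel matrix, evaluates both SNR ratios, and verifies them against the hub-gap bounds (26) and (31).

```python
import numpy as np
rng = np.random.default_rng(3)

# Hub channel: orthogonal nonhub columns plus one dominant column.
n, m, k = 8, 5, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, m - 1)))
H = np.hstack([Q * np.linspace(1.0, 2.0, m - 1), 6.0 * rng.standard_normal((n, 1))])

p = np.linalg.norm(H, axis=0) ** 2
H = H[:, np.argsort(p)]                                 # sort columns so the hub is last
p = np.sort(p)

hubgap1 = p[-1] / p[-2]
g_mrt = np.linalg.eigvalsh(H.conj().T @ H)[-1]          # gamma^MRT/MRC = lambda_m
g_sdt = p[-1]                                           # gamma^SDT/MRC = ||h_m||^2
Ht = H[:, -k:]                                          # top-k columns for GSS-MRT/MRC
g_gss = np.linalg.eigvalsh(Ht.conj().T @ Ht)[-1]        # gamma^GSS-MRT/MRC

r_sdt = g_sdt / g_mrt
r_gss = g_gss / g_mrt
assert hubgap1 / (hubgap1 + 1) <= r_sdt <= 1                        # Theorem 5
assert hubgap1 / (hubgap1 + 1) <= r_gss <= (hubgap1 + 1) / hubgap1  # Theorem 6
print(r_sdt, r_gss)
```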

3.6. Diversity selection with partitions, DSP-MRT/MRC, and performance bounds

Suppose that the transmitters are partitioned into multiple transmission partitions. We define diversity selection with partitions (DSP) to be the transmission scheme in which, in each transmission partition, only the transmitter with the largest SNR is powered. Note that SDT discussed above is a special case of DSP when there is only one partition consisting of all transmitters.

Let k be the number of partitions, and f(1), f(2), ..., f(k) the indices of the powered transmitters. A DSP-MRT/MRC system is defined to be an MRT/MRC system applied to these k transmitters. Define H̃ to be the matrix formed by the columns h_{f(i)} of H for i = 1, ..., k. Then the SNR for DSP-MRT/MRC is

    γ^DSP-MRT/MRC = λ̃_m,   (33)

where λ̃_m is the largest eigenvalue of H̃^H H̃.

Note that in general the powered transmitters for DSP are not the same as those for GSS. This is because a transmitter that yields the highest SNR among the transmitters in one of the k partitions may not be among the transmitters that yield the top k highest SNR values among all transmitters. Nevertheless, when H is a hub matrix with 0 < ‖h_1‖_2^2 ≤ ··· ≤ ‖h_{m-1}‖_2^2 ≤ ‖h_m‖_2^2, we can bound λ̃_m for DSP-MRT/MRC in a manner similar to how we bound λ̃_m for GSS-MRT/MRC. That is, for DSP-MRT/MRC, λ̃_m is bounded above by ‖h_k‖_2^2 + ‖h_m‖_2^2 and below by ‖h_m‖_2^2, where h_k is the second largest column of H̃ in magnitude. Note that ‖h_k‖_2^2 ≤ ‖h_{m-1}‖_2^2, since the second largest column of H̃ in magnitude cannot be larger than that of H. We have the following result similar to that of Theorem 6.

Theorem 7. Let H ∈ C^{n×m} be a hub matrix with its columns denoted by h_1, ..., h_m and 0 < ‖h_1‖_2^2 ≤ ··· ≤ ‖h_{m-1}‖_2^2 ≤ ‖h_m‖_2^2. Let γ^DSP-MRT/MRC and γ^MRT/MRC be the SNR for DSP-MRT/MRC and MRT/MRC, respectively. Then

    HubGap_1(H) / (HubGap_1(H) + 1) ≤ γ^DSP-MRT/MRC / γ^MRT/MRC ≤ (HubGap_1(H) + 1) / HubGap_1(H).   (34)

Theorems 6 and 7 imply that when HubGap_1(H) becomes large, the SNR values of both GSS-MRT/MRC and DSP-MRT/MRC approach that of MRT/MRC.

4. HUB DOMINANT MATRIX THEORY

We generalize the hub matrix theory presented above to situations when the matrix A (or H) exhibits a "near" hub property. In order to relax the definition of orthogonality of a set of vectors, we use the notion of a frame.

Definition 6 (frame). A set of distinct vectors {f_1, ..., f_n} is said to be a frame if there exist positive constants ξ and ϑ, called frame bounds, such that

    ξ ‖f_j‖^2 ≤ Σ_{i=1}^{n} |f_i^H f_j| ≤ ϑ ‖f_j‖^2   for j = 1, ..., n.   (35)

Note that if ξ = ϑ = 1, then the set of vectors {f_1, ..., f_n} is orthogonal. Here we use frames to bound the nonorthogonality of a collection of vectors, while the usual use of frames is to quantify the redundancy in a representation (see, e.g., [13]).

Definition 7 (hub dominant matrix). A matrix A ∈ C^{n×m} is called a candidate-hub-dominant matrix if m - 1 of its columns form a frame with frame bounds ξ = 1 and ϑ = 2, that is, ‖a_j‖^2 ≤ Σ_{i=1}^{m-1} |a_i^H a_j| ≤ 2 ‖a_j‖^2 for j = 1, ..., m - 1. If in addition the remaining column has its Euclidean norm greater than or equal to that of any other column, then the matrix A is called a hub-dominant matrix and the remaining column is called the hub column.

We next generalize the definition of an arrowhead matrix to an arrowhead dominant matrix, where the matrix D of Definition 1 goes from being a diagonal matrix to a diagonally dominant matrix.

Definition 8 (diagonally dominant matrix). Let E ∈ C^{m×m} be a given Hermitian matrix. E is said to be diagonally dominant if, for each row, the magnitude of the diagonal entry is greater than or equal to the row sum of the magnitudes of all off-diagonal entries, that is,

    |e^(i,i)| ≥ Σ_{j=1, j≠i}^{m} |e^(i,j)|   for i = 1, ..., m.   (36)

For more information on diagonally dominant matrices, see for example [5, page 349].

Definition 9 (arrowhead dominant matrix). Let S ∈ C^{m×m} be a given Hermitian matrix. S is called an arrowhead dominant matrix if

    S = ⎛ D    c ⎞
        ⎝ c^H  b ⎠ ,   (37)

where D ∈ C^{(m-1)×(m-1)} is a diagonally dominant matrix, c = (c^(1), ..., c^(m-1)) ∈ C^{m-1} is a complex vector, and b ∈ R is a real number.

Similar to Theorem 1, we have the following theorem.

Theorem 8. Let A ∈ C^{n×m} and let S ∈ C^{m×m} be the Gram matrix of A, that is, S = A^H A. Then S is an arrowhead dominant matrix if and only if A is a candidate-hub-dominant matrix.

Proof. Suppose A is a candidate-hub-dominant matrix. Since S = A^H A, the entries of S can be expressed as s^(i,j) = a_i^H a_j for i, j = 1, ..., m. By Definition 7 of a hub-dominant matrix, the nonhub columns of A form a frame with frame bounds ξ = 1 and ϑ = 2, that is, ‖a_j‖^2 ≤ Σ_{i=1}^{m-1} |a_i^H a_j| ≤ 2 ‖a_j‖^2 for j = 1, ..., m - 1. Since ‖a_j‖^2 = |a_j^H a_j|, it follows that |a_i^H a_i| ≥ Σ_{j=1, j≠i}^{m-1} |a_i^H a_j| for i = 1, ..., m - 1, which is the diagonal dominance condition on the submatrix D of S. Since S is Hermitian, the transpose of the last column is the complex conjugate of the last row, and the diagonal elements of S are real numbers. Therefore, S = A^H A is an arrowhead dominant matrix in accordance with Definition 9.

Suppose S = A^H A is an arrowhead dominant matrix. Note that the components of the S matrix of Definition 9 can be represented in terms of the columns of A. Thus b = a_m^H a_m and c^(i) = a_i^H a_m for i = 1, ..., m - 1. Since |a_j^H a_j| = ‖a_j‖^2, the diagonal dominance condition, |a_i^H a_i| ≥ Σ_{j=1, j≠i}^{m-1} |a_i^H a_j| for i = 1, ..., m - 1, implies that ‖a_j‖^2 ≤ Σ_{i=1}^{m-1} |a_i^H a_j| ≤ 2 ‖a_j‖^2 for j = 1, ..., m - 1. So, A is a candidate-hub-dominant matrix by Definition 7.

Before proceeding to our results in Theorem 10, we will first restate a well-known result which will be needed for the proof.

Theorem 9 (monotonicity theorem). Let G, H ∈ C^{m×m} be Hermitian. Assume H is positive semidefinite and that the eigenvalues of G and G + H are arranged in increasing order, that is, λ_1(G) ≤ ··· ≤ λ_m(G) and λ_1(G + H) ≤ ··· ≤ λ_m(G + H). Then λ_k(G) ≤ λ_k(G + H) for k = 1, ..., m.

Proof. See [5, page 182].
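Theorem 8, like Theorem 1, can be checked directly. The sketch below (ours; the predicate names and tolerance are illustrative) tests the frame condition of Definition 7 and the diagonal-dominance condition of Definition 9 on a small, nearly orthogonal example; by Theorem 8 the two predicates must agree for Gram matrices.

```python
import numpy as np

def is_candidate_hub_dominant(A, tol=1e-12):
    """Frame condition of Definition 7 on the nonhub columns of A."""
    B = A[:, :-1]
    G = np.abs(B.conj().T @ B)                 # |a_i^H a_j|, i, j = 1..m-1
    sums = G.sum(axis=0)                       # sum_i |a_i^H a_j| per column j
    norms2 = np.linalg.norm(B, axis=0) ** 2
    return np.all(sums >= norms2 - tol) and np.all(sums <= 2 * norms2 + tol)

def is_arrowhead_dominant(S, tol=1e-12):
    """Diagonal dominance of the leading (m-1)x(m-1) block of S."""
    D = np.abs(S[:-1, :-1])
    off = D.sum(axis=1) - np.diag(D)
    return np.all(np.diag(D) >= off - tol)

# Nearly orthogonal nonhub columns plus a dominant hub column.
A = np.array([[1.0, 0.1, 3.0],
              [0.1, 1.0, 3.0],
              [0.0, 0.1, 3.0]])
S = A.T @ A
print(is_candidate_hub_dominant(A), is_arrowhead_dominant(S))   # both True
```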

Theorem 10. Let A ∈ C^{n×m} be a hub-dominant matrix with its columns denoted by a_1, ..., a_m with 0 < ‖a_1‖_2 ≤ ··· ≤ ‖a_{m-1}‖_2 ≤ ‖a_m‖_2. Let S = A^H A ∈ C^{m×m} be the corresponding arrowhead dominant matrix with its eigenvalues denoted by λ_1, ..., λ_m with λ_1 ≤ ··· ≤ λ_m. Let d^(i) and σ^(i) denote the diagonal entry and the sum of magnitudes of the off-diagonal entries, respectively, in row i of S for i = 1, ..., m. Then

(a) HubGap_1(A)/2 ≤ EigenGap_1(S), and
(b) EigenGap_1(S) = λ_m/λ_{m-1} ≤ (d^(m-1) + b + Σ_{i=1}^{m-2} σ^(i)) / (d^(m-2) - σ^(m-2)).

Proof. Let T be the matrix formed from S by deleting its last row and column. This means that T is a diagonally dominant matrix. Let the eigenvalues of T be {μ_i} with μ_1 ≤ ··· ≤ μ_{m-1}. Then, by Theorem 2, we have λ_1 ≤ μ_1 ≤ λ_2 ≤ ··· ≤ λ_{m-1} ≤ μ_{m-1} ≤ λ_m. Applying Gershgorin's theorem to T and noting that T is diagonally dominant with d^(m-1) being its largest diagonal entry, we have μ_{m-1} ≤ 2 d^(m-1). Thus λ_{m-1} ≤ 2 d^(m-1) = 2 ‖a_{m-1}‖_2^2. As observed in the proof of Theorem 4, λ_m ≥ b = ‖a_m‖_2^2. Therefore, ‖a_m‖_2^2 / (2 ‖a_{m-1}‖_2^2) ≤ λ_m / λ_{m-1}, or HubGap_1(A)/2 ≤ EigenGap_1(S).

Let E be the matrix formed from T with its diagonal entries replaced by the corresponding off-diagonal row sums, and let T̃ = T - E. Since T is a diagonally dominant matrix, T̃ is a diagonal matrix with nonnegative diagonal entries. Let the diagonal entries of T̃ be {d̃^(i)}. Then d̃^(i) = d^(i) - σ^(i). Assume that d̃^(1) ≤ ··· ≤ d̃^(m-1). Since E is a symmetric diagonally dominant matrix with positive diagonal entries, it is a positive semidefinite matrix. Since T = T̃ + E, by Theorem 9 we have μ_i ≥ d̃^(i) for i = 1, ..., m - 1. Let

    S = ⎛ D    c ⎞
        ⎝ c^H  b ⎠   (38)

in accordance with Definition 9. By Theorem 3, we have Σ_{i=1}^{m-1} d^(i) + b = Σ_{i=1}^{m} λ_i. Thus, by noting λ_1 ≤ μ_1 ≤ λ_2 ≤ ··· ≤ λ_{m-1} ≤ μ_{m-1} ≤ λ_m, we have

    d^(1) + d^(2) + ··· + d^(m-1) + b = λ_1 + λ_2 + ··· + λ_m ≥ λ_1 + μ_1 + ··· + μ_{m-2} + λ_m ≥ λ_1 + d̃^(1) + ··· + d̃^(m-2) + λ_m.   (39)

This implies that d^(m-1) + b + Σ_{i=1}^{m-2} σ^(i) ≥ λ_1 + λ_m ≥ λ_m. Since d^(m-2) - σ^(m-2) = d̃^(m-2) ≤ μ_{m-2} ≤ λ_{m-1}, we have

    EigenGap_1(S) = λ_m / λ_{m-1} ≤ (d^(m-1) + b + Σ_{i=1}^{m-2} σ^(i)) / (d^(m-2) - σ^(m-2)).   (40)

Note that if there exist positive numbers p and q, with q < 1, such that (1 - q) d^(m-2) ≥ σ^(m-2) and

    p (d^(m-1) + b) ≥ Σ_{i=1}^{m-2} σ^(i),   (41)

then the inequality (b) in Theorem 10 implies

    λ_m / λ_{m-1} ≤ r · (d^(m-1) + b) / d^(m-2),   (42)

where r = (1 + p)/q. As in the end of the proof of Theorem 4, it follows that

    EigenGap_1(S) ≤ r · [HubGap_1(A) + 1] · HubGap_2(A).   (43)

This together with (a) in Theorem 10 gives the following result.

Corollary 3. Let A ∈ C^{n×m} be a matrix with its columns a_1, ..., a_m satisfying 0 < ‖a_1‖_2^2 ≤ ··· ≤ ‖a_{m-1}‖_2^2 ≤ ‖a_m‖_2^2. Let S = A^H A ∈ C^{m×m} be a Hermitian matrix with its eigenvalues λ_1, ..., λ_m satisfying 0 ≤ λ_1 ≤ ··· ≤ λ_m. The following holds:

(1) if A is a hub-dominant matrix with ‖a_m‖_2 ≫ ‖a_{m-1}‖_2, then S is an arrowhead dominant matrix with λ_m ≫ λ_{m-1}; and
(2) if S is an arrowhead dominant matrix with λ_m ≫ λ_{m-1}, and if p (d^(m-1) + b) ≥ Σ_{i=1}^{m-2} σ^(i) and (1 - q) d^(m-2) ≥ σ^(m-2) for some positive numbers p and q with q < 1, then A is a hub-dominant matrix with ‖a_m‖_2 ≫ ‖a_{m-1}‖_2 or ‖a_{m-1}‖_2 ≫ ‖a_{m-2}‖_2 or both.

Sometimes, especially for large-dimensional matrices, it is desirable to relax the notion of diagonal dominance. This can be done using arguments analogous to those given above (see, e.g., [14]), and such extensions represent an open research problem for the future.
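As a numerical illustration of Theorem 10 (our sketch, not from the paper), the code below checks bound (a) and bound (b) on a small hub-dominant matrix. One assumption on our part: following the construction of E and T̃ in the proof, σ^(i) is computed over the leading (m-1)×(m-1) block of S rather than over the full row.

```python
import numpy as np

# A hub-dominant matrix: nearly orthogonal nonhub columns, dominant last column.
A = np.array([[1.0, 0.2, 0.1, 4.0],
              [0.2, 1.0, 0.2, 4.0],
              [0.1, 0.2, 1.0, 4.0],
              [0.0, 0.1, 0.2, 4.0]])
S = A.T @ A
lam = np.sort(np.linalg.eigvalsh(S))

norms2 = np.sort(np.linalg.norm(A, axis=0) ** 2)
hubgap1 = norms2[-1] / norms2[-2]
eigengap1 = lam[-1] / lam[-2]

d = np.diag(S)[:-1]                      # d^(i) for the leading block rows
b = S[-1, -1]
T = np.abs(S[:-1, :-1])
sigma = T.sum(axis=1) - np.diag(T)       # off-diagonal row sums of the leading block

upper = (d[-1] + b + sigma[:-1].sum()) / (d[-2] - sigma[-2])
assert hubgap1 / 2 <= eigengap1          # Theorem 10(a)
assert eigengap1 <= upper                # Theorem 10(b), sigma over the leading block
print(hubgap1 / 2, eigengap1, upper)
```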

in accordance with Definition 9.ByTheorem 3,wehave This paper has presented a hub matrix theory and applied it m−1 d(i) b = m λ λ ≤ μ ≤ λ ≤ i=1 + i=1 m. Thus, by noting 1 1 2 to beamforming MIMO communications systems. The fact ···≤λ ≤ μ ≤ λ m−1 m−1 m,wehave that the performance of the MIMO beamforming scheme is d(1) d(2) ··· d(m−1) b critically related to the gap between the two largest eigenval- + + + + ues of the channel propagation matrix is well known, but this = λ1 + λ2 + ···+ λm ≥ λ1 + μ1 + ···+ μm−2 + λm paper reported for the first time how to obtain this insight di- rectly from the structure of the matrix, that is, its hub prop- (1) (m−2) ≥ λ1 + d + ···+ d + λm. erties. We believe that numerous communications systems (39) might be well described within the formalism of hub matri- ces. As an example, one can consider the problem of nonco- d(m−1) b m−2 σ(i) ≥ λ λ ≥ λ This implies that + + i=1 1 + m m. Since operative beamforming in a wireless sensor network, where m− (m−2) (m−2) ( 2) several source (transmitting) nodes communicate with a des- d − σ = d ≤ μm−2 ≤ λm−1,wehave tination node, but only one source node is located in the λ d(m−1) b m−2 σ(i) vicinity of the destination node and presents a direct line-of- S = m ≤ + + i=1 . EigenGap1( ) (m−2) (m−2) (40) sight to the destination node. Extending the hub matrix for- λm−1 d − σ malism to other types of matrices (e.g., matrices with a clus- ter of dominant columns) represents an interesting open re- search problem. The contributions reported in this paper can Note that if there exist positive numbers p and q,with m− m− be extended further to treat the more general class of block q<1, such that (1 − q)d( 2) ≥ σ( 2) and arrowhead and hub dominant matrices that enable the anal- m−2 ysis and design of algorithms and protocols in areas such as p d(m−1) + b ≥ σ(i), (41) distributed beamforming and power control in wireless ad- i=1 hoc networks. By relaxing the diagonal-matrix condition, in 8 EURASIP Journal on Advances in Signal Processing the definition of an arrowhead matrix, with a block diagonal [12] A. F. Molisch, M. Z. Win, and J. H. Winter, “Reduced- condition, and enabling groups of columns to be correlated complexity transmit/receive-diversity systems,” IEEE Transac- or uncorrelated (orthogonal/nonorthogonal) in the defini- tions on Signal Processing, vol. 51, no. 11, pp. 2729–2738, 2003. tion of block dominant hub matrices, a much larger spec- [13] R. M. Young, An Introduction to Nonharmonic Fourier Series, trum of applications could be treated within the proposed Academic Press, New York, NY, USA, 1980. framework. [14] N. Sebe, “Diagonal dominance and integrity,” in Proceedings of the 35th IEEE Conference on Decision and Control, vol. 2, pp. 1904–1909, Kobe, Japan, December 1996. ACKNOWLEDGMENTS H. T. Kung received his B.S. degree from The authors wish to acknowledge discussions that occurred National Tsing Hua University (Taiwan), between the authors and Dr. Michael Gans. These discus- and Ph.D. degree from Carnegie Mellon sions significantly improved the quality of the paper. In ad- University. He is currently William H. Gates dition, the authors wish to thank the reviewers for their Professor of computer science and electrical thoughtful comments and insightful observations. This re- engineering at Harvard University. In 1999 search was supported in part by the Air Force Office of Sci- he started a joint Ph.D. 
program with col- entific Research under Contract FA8750-05-1-0035 and by leagues at the Harvard Business School on the Information Directorate of the Air Force Research Labo- information, technology, and management, ratory and in part by NSF Grant no.ACI-0330244. and cochaired this Harvard program from 1999 to 2006. Prior to joining Harvard in 1992, Dr. Kung taught at Carnegie Mellon, pioneered the concept of systolic array process- REFERENCES ing, and led large research teams on the design and development of novel computers and networks. Dr. Kung has pursued a variety [1]D.TseandP.Viswanath,Fundamentals of Wireless Communi- of research interests over his career, including complexity theory, cation, Cambridge University Press, Cambridge, UK, 2005. database systems, VLSI design, parallel computing, computer net- [2] J. M. Kleinberg, “Authoritative sources in a hyperlinked envi- works, network security, wireless communications, and networking ronment,” in Proceedings of the 9th Annual ACM-SIAM Sym- of unmanned aerial systems. He maintains a strong linkage with posium on Discrete Algorithms, pp. 668–677, San Francisco, industry and has served as a Consultant and Board Member to nu- Calif, USA, January 1998. merous companies. Dr. Kung’s professional honors include Mem- [3]H.T.KungandC.-H.Wu,“Differentiated admission for peer- ber of the National Academy of Engineering in USA and Member to-peer systems: incentivizing peers to contribute their re- of the Academia Sinica in Taiwan. sources,” in Workshop on Economics of Peer-to-Peer Systems, B. W. Suter received the B.S. and M.S. Berkeley, Calif, USA, June 2003. degrees in electrical engineering in 1972 [4] D. P. O’Leary and G. W. Stewart, “Computing the eigenvalues andthePh.D.degreeincomputerscience and eigenvectors of symmetric arrowhead matrices,” Journal of in 1988, all from the University of South Computational Physics, vol. 90, no. 2, pp. 497–505, 1990. Florida, Tampa, FLa. Since 1998, he has [5] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge been with the Information Directorate of University Press, Cambridge, UK, 1985. the Air Force Research Laboratory, Rome, [6] D. J. Love and R. W. Heath Jr., “Equal gain transmission in NY, where he is the Founding Director of multiple-input multiple-output wireless systems,” IEEE Trans- the Center for Integrated Transmission and actions on Communications, vol. 51, no. 7, pp. 1102–1110, Exploitation. Dr. Suter has authored over a 2003. hundred publications and the author of the book Multirate and [7]D.J.LoveandR.W.HeathJr.,“Correctionsto“Equalgain Wavelet Signal Processing (Academic Press, 1998). His research in- transmission in multiple-input multiple-output wireless sys- terests include multiscale signal and image processing, cross layer tems”,” IEEE Transactions on Communications,vol.51,no.9, optimization, networking of unmanned aerial systems, and wireless p. 1613, 2003. communications. His professional background includes industrial [8] C. D. Meyer, Matrix Analysis and Applied , experience with Honeywell Inc., St. Petersburg, FLa, and with Lit- SIAM, Philadelphia, Pa, USA, 2000. ton Industries, Woodland Hills, Calif, and academic experience at [9] C.-H. Tse, K.-W. Yip, and T.-S. Ng, “Performance tradeoffs the University of Alabama, Birmingham, Aa and at the Air Force In- between maximum ratio transmission and switched-transmit stitute of Technology, Wright-Patterson AFB, Oio. Dr. 
Suter’s hon- diversity,” in Proceedings of the 11th IEEE International Sym- ors include Air Force Research Laboratory Fellow, the Arthur S. posium on Personal, Indoor and Mobile Radio Communications Flemming Award: Science Category, and the General Ronald W. (PIMRC ’00), vol. 2, pp. 1485–1489, London, UK, September Yates Award for Excellence in Technology Transfer. He served as an 2000. Associate Editor of the IEEE Transactions on Signal Processing. Dr. [10]D.J.Love,R.W.HeathJr.,andT.Strohmer,“Grassman- Suter is a Member of Tau Beta Pi and Eta Kappa Nu. nian beamforming for multiple-input multiple-output wire- less systems,” IEEE Transactions on Information Theory, vol. 49, no. 10, pp. 2735–2747, 2003. [11] C. Murthy and B. D. Rao, “On antenna selection with max- imum ratio transmission,” in Conference Record of the 37th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 228–232, Pacific Grove, Calif, USA, November 2003.