IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 65, NO. 20, OCTOBER 15, 2017

Uncertainty Principles and Sparse Eigenvectors of Graphs

Oguzhan Teke, Student Member, IEEE, and P. P. Vaidyanathan, Fellow, IEEE

Abstract—Analysis of signals defined over graphs has been of interest in recent years. In this regard, many concepts from classical signal processing theory have been extended to the graph case, including uncertainty principles that study the concentration of a signal on a graph and in its graph Fourier basis (GFB). This paper advances a new way to formulate the uncertainty principle for signals defined over graphs, by using a nonlocal measure based on the notion of sparsity. To be specific, the total number of nonzero elements of a graph signal and its corresponding graph Fourier transform (GFT) is considered. A theoretical lower bound for this total number is derived, and it is shown that a nonzero graph signal and its GFT cannot be arbitrarily sparse simultaneously. When the graph has repeated eigenvalues, the GFB is not unique. Since the derived lower bound depends on the selected GFB, a method that constructs a GFB with the minimal uncertainty bound is provided. In order to find signals that achieve the derived lower bound (i.e., signals that are the most compact on the graph and in the GFB), sparse eigenvectors of the graph are investigated. It is shown that a connected graph has a 2-sparse eigenvector (of the graph Laplacian) when there exist two nodes with the same neighbors. In this case, the uncertainty bound is very low, tight, and independent of the global structure of the graph. For several examples of classical and real-world graphs, it is shown that 2-sparse eigenvectors, in fact, exist.

Index Terms—Graph signals, graph Fourier basis, sparsity, uncertainty principles, sparse eigenvectors.

Manuscript received August 2, 2016; revised January 18, 2017, May 22, 2017, and July 8, 2017; accepted July 13, 2017. Date of publication July 24, 2017; date of current version August 10, 2017. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Pierre Borgnat. This work was supported in part by the NSF under Grant CCF-1712633, in part by the ONR Grant N00014-15-1-2118, and in part by the Electrical Engineering Carver Mead Research Seed Fund of the California Institute of Technology. (Corresponding author: Oguzhan Teke.) The authors are with the Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125 USA (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSP.2017.2731299

I. INTRODUCTION

Signals defined over a graph are useful to express high-dimensional data where the graph models the underlying dependency structure between the data sources. In this framework a graph signal is considered as a set of data points indexed according to the vertices of the graph. This type of signal structure is not limited to electrical engineering and can be found in a variety of different contexts such as social, economic, and biological networks, among others [1].

In some of the recent developments, processing of graph signals has been based on the so-called "graph operator." However, the definition of this operator is not fixed. Motivated by spectral graph theory, the study in [2] selects the graph operator to be the (normalized) graph Laplacian, whereas the studies in [3], [4] focus on the adjacency matrix. There are other proposals as well [5], [6]. Once the graph operator has been selected, its eigenvectors are used to define the graph Fourier basis (GFB). Representation of a signal in the selected graph Fourier basis is then used to define the graph Fourier transform (GFT) of the signal. Inspired by the constructions in [2]–[4], sampling, reconstruction, and multirate processing of graph signals are studied in [7]–[14].

An essential concept in signal analysis is the uncertainty principle, which states that a signal cannot be arbitrarily localized in both time and frequency simultaneously [15]. In classical signal processing, the uncertainty principle is useful to design filters that are maximally localized in time for a given frequency spread, or vice versa. Due to its importance, some authors extended this principle to signals defined on graphs [16]–[19]. Details of these approaches are elaborated in Section I-C.

Similar to [16]–[19], this paper studies the concept of uncertainty for graph signals. However, unlike earlier methods, and motivated by [20]–[22], we introduce a non-local measure for uncertainty, based on the notion of sparsity of the signal in the graph domain and the frequency domain. We show that a nonzero graph signal and its corresponding GFT cannot be arbitrarily sparse simultaneously, and we provide a lower bound for the total number of nonzero elements. We further provide the optimal selection of the GFB (that minimizes the uncertainty bound) when the graph operator has repeated eigenvalues. In order to find signals that achieve the derived lower bound, we consider sparse eigenvectors of the graph operator. A detailed outline and the contributions of this paper are given below.

A. Outline and Contributions

Broadly speaking, the results presented here can be divided into three main parts. In the first part (Sections II and III), we propose discrete and non-local uncertainty principles that depend on the max-norm of the graph Fourier basis. These results follow from the more general theory of sparse representations studied in [20]–[22]. We consider the identity matrix and the GFB as a pair of bases to represent a graph signal and interpret the methodologies in [22] in the context of graph signal processing. Similarly and independently, the study in [19] has obtained very similar interpretations based on the same theory. Apart from the overlapping results, [19] generalizes these results to the case of frames. It then focuses on local uncertainty principles where bounds depend on a particular portion of the graph. A more detailed comparison with [19] is provided in Section I-C.
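Since the GFB and the GFT are referred to throughout, the following minimal Python sketch (not from the paper) makes the pipeline concrete for an undirected toy graph; the 4-node adjacency matrix and the test signal are hypothetical choices.

```python
import numpy as np

# A minimal sketch of the GFB/GFT pipeline described above, for a hypothetical
# undirected, unweighted 4-node graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A        # graph Laplacian L = D - A
eigvals, V = np.linalg.eigh(L)        # eigenvectors of L form one possible GFB
F = V.conj().T                        # GFT matrix; F = V^{-1} since V is unitary here
x = np.array([1.0, -1.0, 0.0, 2.0])   # a graph signal indexed by the vertices
x_hat = F @ x                         # its graph Fourier transform
```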


Our main contributions are presented in the remaining sections. In the second part of our results (Section IV) we discuss the fact that, for a given graph, the max-norm of the graph Fourier basis is not unique in the presence of repeated eigenvalues of the selected graph operator. Since this non-uniqueness greatly affects the interpretation of the relations between the graph structure and the uncertainty, we formulate a problem to select the eigenvectors (GFB) such that the max-norm of the graph Fourier basis is maximized (or, equivalently, the uncertainty bounds considered in Section II are minimized). We solve this problem analytically (Theorem 6) and provide an algorithmic routine to obtain a set of eigenvectors that gives a graph Fourier basis with the maximum possible max-norm.

In the third part of our results (Section V) we focus on the sparse eigenvectors of graphs in order to find signals that achieve the proposed uncertainty bounds. We study the relation between the sparsity of eigenvectors and the graph topology, and their effects on the max-norm of the GFB. We provide the necessary and sufficient conditions for the existence of 1-sparse and 2-sparse eigenvectors of the graph Laplacian (Theorems 7 and 8). We then show that the existence of a 2-sparse eigenvector implies very low and attainable uncertainty bounds (Theorem 9). Finally, in Section VI we apply our results to real world graph examples. Interestingly, uncertainty bounds for these graphs are very low. We precisely explain why this is the case, and find the signals that achieve these bounds. Portions and some extensions of these results have been presented in [23], [24].

B. Graph Signal Processing and Notation

Let x ∈ C^N be a graph signal on a graph of size N (i.e., N nodes or vertices) whose adjacency matrix is denoted as A ∈ R^{N×N}. We assume the graph does not have self loops, i.e., a_{i,i} = 0. The weight of the edge from the j-th node to the i-th node is denoted by the (i, j)-th element of A. This definition of the adjacency matrix follows from [3], [4] and is the reverse of the usual definition in graph theory. Unless specified otherwise, we consider the general case of directed graphs (i.e., a_{i,j} ≠ a_{j,i}). For the restricted case of undirected graphs with non-negative edge weights (a_{i,j} = a_{j,i} ≥ 0), the graph Laplacian is defined as L = D − A, where D is the diagonal degree matrix given as (D)_{i,i} = Σ_j a_{i,j}. The normalized graph Laplacian is defined as L̃ = D^{-1/2} L D^{-1/2}. We will use V to denote the graph Fourier basis (GFB) of interest. Here the definition of V is not fixed. Eigenvectors of either the graph Laplacian or the adjacency matrix itself can be set to be the graph Fourier basis. We will use V_A and V_L to explicitly denote the eigenvectors of the adjacency matrix and the graph Laplacian, respectively. Assuming A is diagonalizable, we precisely have:

    A = V_A Λ_A V_A^{-1},    L = V_L Λ_L V_L^{-1},    (1)

where Λ_A and Λ_L are diagonal eigenvalue matrices. We will always assume that the eigenvectors are normalized to have unit 2-norm. When the graph is undirected, V_A and V_L can be selected to be unitary. For the selected graph Fourier basis V, the graph Fourier transform (GFT) of a signal x is given by x̂ = F x, where F = V^{-1}.

In the following, the ℓ0 pseudo-norm ‖x‖_0 denotes the number of nonzero elements in the signal x. The ℓp norm of a vector will be denoted by ‖x‖_p. The spectral norm (largest singular value) of the matrix A will be denoted by ‖A‖_2. We will use ‖A‖_max to denote the maximum absolute value of the elements of the matrix A. Precisely, we have:

    ‖A‖_2 = max_{x≠0} ‖Ax‖_2 / ‖x‖_2,    ‖A‖_max = max_{i,j} |a_{i,j}|.    (2)

Notice that ‖A‖_max is equivalent to the mutual coherence between the matrix A and the identity matrix [22].

For a matrix A and two index sets S and K, A_S denotes the matrix with the columns of A indexed by S, and A_{K,S} denotes the matrix with columns indexed by S and rows indexed by K. For a vector x, the elements of x indexed by S will be denoted by x_S. The index set S̄ is defined as S̄ = {1, ..., N} \ S, where \ stands for the set difference operator. We will use ⊗ to denote the Kronecker product of matrices. The dimension of the right null space of a matrix A is denoted by nullity(A).
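As a quick aid for this notation, the sketch below (an illustration, not code from the paper) computes the quantities in (2), the ℓ0 pseudo-norm, and the submatrix A_{K,S}; the numerical tolerance used to decide which entries count as nonzero is an added assumption.

```python
import numpy as np

def spectral_norm(A):
    """||A||_2 in (2): the largest singular value of A."""
    return np.linalg.norm(A, 2)

def max_norm(A):
    """||A||_max in (2): largest absolute value among the entries of A."""
    return np.max(np.abs(A))

def l0(x, tol=1e-10):
    """||x||_0: number of (numerically) nonzero entries of x (tol is an assumption)."""
    return int(np.sum(np.abs(x) > tol))

def submatrix(A, K, S):
    """A_{K,S}: rows of A indexed by K and columns indexed by S."""
    return A[np.ix_(K, S)]
```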
C. Related Work

To the best of our knowledge, there are mainly four studies that consider uncertainty principles for signals defined over graphs [16]–[19]. The main theme in these studies (including this one) is to define a "measure" of the signal in the vertex domain and the graph Fourier domain. In particular, the study in [16] considers the following for unweighted graphs:

    Δ²_{g,u0}(x) = (x^H P²_{u0} x) / ‖x‖²_2,    Δ²_s(x) = (x^H L x) / ‖x‖²_2,    (3)

where Δ²_{g,u0}(x) is referred to as the vertex spread (with P_{u0} being the diagonal distance matrix with respect to the node u0), and Δ²_s(x) is referred to as the spectral spread of the signal x. This approach is extended to weighted graphs in [25]. The study in [17] works on an alternative definition and focuses on the following:

    α = ‖D_S x‖_2 / ‖x‖_2,    β = ‖B_F x‖_2 / ‖x‖_2,    (4)

where α² and β² represent the amount of energy confined in the vertex set S and the frequency set F (with D_S and B_F being the corresponding projection matrices), respectively. The study in [18], [26] considers the smoothness of the signal in both domains: using the difference operator on the graph, D_r, the interplay between ‖D_r x‖²_2 and ‖D_r x̂‖²_2 is studied.

The main observation of these studies is that the measures in both domains cannot be arbitrarily small simultaneously: a signal "limited" in one domain cannot be "limited" in the other domain. These trade-offs are then considered as uncertainty principles. Depending on their corresponding definitions, these studies characterize the uncertainty curves theoretically and study the signals that achieve them with equality.

More recently, the study in [19] takes a non-local perspective where vertex and spectral measures are defined with ℓp norms. Using the fact that the identity matrix and the GFB form a pair of bases to represent graph signals, [19] proposes some uncertainty principles where the results are based on the theory of sparse representations studied in [20]–[22], [27].

These results show that the "graph Fourier coherence" is a fundamental quantity for the uncertainty principles of interest. Notice that we specifically consider the case of ℓ0 and ℓ1 in this study and obtain very similar results, where we use the term "max-norm of GFB" instead of coherence. This overlap is a direct consequence of the influence of the sparse representation theory in [20]–[22]. In the rest, [19] focuses on the representation of graph signals using frames and proposes some uncertainty results based on these representations. It later considers local uncertainty principles where bounds depend on a region of the graph.

In order to find signals that achieve the proposed uncertainty bounds, we consider sparse eigenvectors (2-sparse in particular) of the graph Laplacian. In this context, the study in [28] reveals a specific graph structure (referred to as motif-doubling) that results in sparse eigenvectors of both the adjacency matrix and the graph Laplacian. The structure of motif-doubling gives rise to sparse eigenvectors with an even number of non-zero entries. However, this structure is only sufficient to have sparse eigenvectors, whereas our condition on 2-sparse eigenvectors is necessary and sufficient. A detailed comparison with [28] will be provided in Section V-B.

II. DISCRETE NON-LOCAL SPREADS

Inspired by [20]–[22], we will study the concept of uncertainty from a discrete and non-local perspective. We will consider the graph spread of a signal as the total number of nonzero elements in the signal. Similarly, the spectral spread of a signal will be defined as the number of nonzero elements in the GFT of the signal. Hence, we have the following definitions:

Definition 1 (ℓ0-based spread on vertex domain): Given a nonzero signal x on a graph with GFT F, the "spread" of the signal on the vertex domain is defined as ‖x‖_0. ♦

Definition 2 (ℓ0-based spread on Fourier domain): Given a nonzero signal x on a graph with GFT F, the "spread" of the signal in the Fourier domain is defined as ‖Fx‖_0. ♦

The definition of the spectral spread depends on the selected GFT F. Whether it is based on the adjacency matrix, the graph Laplacian, or something else, for a given graph, the GFB V, hence F = V^{-1}, may not be unique. This is due to the fact that the selected graph operator may have repeated eigenvalues. In such a case, one can select different bases to span the corresponding eigenspaces, resulting in different GFB matrices. In order to avoid this ambiguity we assume that not just the graph itself, but also the associated GFT is given in Definition 2. It should be noted that in the case of repeated eigenvalues the selection of the GFB is not a simple task and requires attention. We will address this problem in Sections IV and V, where we discuss the optimal selection of the GFB in order to minimize the spectral spread of signals.

For a nonzero graph signal, notice that its vertex domain spread and spectral spread have to be at least 1; however, they may not achieve this bound simultaneously. As a simple motivational example, consider the directed cyclic graph of N vertices, whose graph Fourier transform corresponds to the DFT of size N [4]. If the signal x is an impulse, then ‖x‖_0 = 1, but ‖Fx‖_0 = N. On the contrary, if the signal is a constant, then ‖x‖_0 = N, but ‖Fx‖_0 = 1. As a result we ask the following question: Given the GFT F, what is the minimum total number of nonzero elements in a graph signal and its corresponding Fourier transform? For this purpose, we consider the following two definitions of uncertainty:

    s0(x) = ( ‖x‖_0 + ‖Fx‖_0 ) / 2,    p0(x) = ( ‖x‖_0 ‖Fx‖_0 )^{1/2},    (5)

where s0(x) and p0(x) are referred to as the additive and multiplicative uncertainty of the signal x, respectively. The definitions in (5) have the following two important properties: 1) They are scale invariant, that is, s0(αx) = s0(x) and p0(αx) = p0(x) for all α ≠ 0. This is a useful property since the uncertainty of a signal is expected to be scale-invariant. 2) Both can take only a discrete, finite set of values; namely, s0(x) takes half-integer values (i.e., k/2 for integer k) and p0(x) takes values of the form p0(x) = √k for some integer k ≥ 1. As a final remark, notice that the AM-GM inequality dictates the following relation:

    s0(x) ≥ p0(x),    ∀ x ∈ C^N,    (6)

with equality if and only if ‖x‖_0 = ‖Fx‖_0.

The main purpose of this section is to find the lowest value that s0(x) can attain. More precisely, the following problem will be considered:

    s0 = min_{x≠0} s0(x),    (7)

where s0 is called the uncertainty bound, since a signal and its corresponding graph Fourier transform cannot be arbitrarily sparse simultaneously, that is, s0(x) ≥ s0 for all x ≠ 0.

Due to the combinatoric nature of the problem in (7), no closed form solution for s0 is available for an arbitrary F. Nevertheless, the theory of sparse representations provides useful bounds for the problem. Motivated by Theorem 2.1 in [22], the following theorem provides a lower bound for p0(x):

Theorem 1 (Multiplicative uncertainty principle): For a graph with GFT F, the multiplicative uncertainty of a nonzero signal x is lower bounded as follows:

    p0(x) ≥ ( ‖F^{-1}‖_2 ‖F‖_max )^{-1},    (8)

where ‖·‖_2 and ‖·‖_max are defined in (2). ♦

Proof: First notice that x = Σ_{i=1}^N x_i e_i, where e_i is the i-th vector of the canonical basis and x_i is the i-th element of x. Let f_j^H denote the j-th row of F. For the j-th element of the GFT of x we have

    |x̂_j| = |f_j^H x| = | Σ_{i∈S} x_i f_j^H e_i |,    (9)

where S denotes the support (set of nonzero indices) of the signal x. Notice that |S| = ‖x‖_0. We can upper bound |x̂_j| as

    |x̂_j| = | Σ_{i∈S} x_i f_j^H e_i | ≤ Σ_{i∈S} |x_i| |f_j^H e_i|,    (10)

using the triangle inequality. Using the Cauchy-Schwarz inequality, this can be further bounded as

the lower bound or not. In order to understand the existence of such signals, we provide the following result.
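To make the quantities in (5) and the bound in (8) concrete, here is a small numerical sketch (an illustration, not the authors' code); the tolerance used to decide which entries are nonzero is an added assumption. For the directed-cycle (DFT) example above, an impulse or a constant signal gives s0(x) = (N + 1)/2 and p0(x) = √N.

```python
import numpy as np

def s0_p0(x, F, tol=1e-10):
    """The l0-based uncertainties in (5): s0(x) and p0(x) for a signal x with GFT F."""
    x_hat = F @ x
    n_vertex = np.sum(np.abs(x) > tol)        # ||x||_0
    n_spectral = np.sum(np.abs(x_hat) > tol)  # ||Fx||_0
    return (n_vertex + n_spectral) / 2.0, np.sqrt(n_vertex * n_spectral)

def theorem1_bound(F):
    """Lower bound of Theorem 1: ( ||F^{-1}||_2 ||F||_max )^{-1}."""
    return 1.0 / (np.linalg.norm(np.linalg.inv(F), 2) * np.max(np.abs(F)))
```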

1/2 1/2 Theorem 2 (Existence of signals):LetV be a unitary 2 H 2 |xj| ≤ i∈S|xi| i∈S|fjei| , (11) Fourier basis of the graph. There exists a signalxon the graph that achieves the strong 0 uncertainty bound (satisfies (18) 1/2 2 1/2 with equality) if and only if there exist index setsK andS ≤ x 2 |S|F = x 2 x F max,(12) max 0 −1 H with|K|=|S|= V max such thatnullity(VS,K¯) >0. H th Furthermore, a signal that achieves the bound is given as where we use the fact thatfj eiis the(j, i) element ofF whose magnitude is upper bounded byF max. Now consider H xS ∈null(VS,K¯) , xS¯=0. (19) the2norm ofx=Fx: 2 2 2 2 ♦ x 2= |xj|≤ x 0 x 2 x 0 F max, (13) j∈K Proof:Assume thatxachieves the strong 0bound, that is,

x 0+ Fx 0 whereK denotes the support ofx,x 0=|K|. Hence we have, s(x)= = V −1 . (20) 0 2 max 2 1 x 2 ≤ F max . (14) Then we haves0(x)=p0(x), which implies x 0 x 0 x 2 x = Fx = V −1 , (21) -1 −1 0 0 max Notice thatmaxx x 2/x 2= maxy F y 2/y 2= F 2. Therefore, sinces0(x)=p0(x)if and only ifx 0= Fx 0due to AM- 2 2 GM inequality. LetSdenote the support ofxandK denote 1 x 2 ≤ F ≤ F F−1 (15) the support ofx. ThenFx=FSxS. Sincexis zero out- x x max x max 2 0 0 2 side of its support we haveFK¯,SxS =0, which means that −1 H −1 nullity(FK¯,S)=nullity(VS,K¯) >0for someSandK which implies x 0 Fx 0≥ F 2 F max . −1 with|S|=|K|= V max. H Corollary 1 (Additive uncertainty principle):For a graph Conversely, assume thatnullity(VS,K¯) >0for some −1 with GFTF, the additive uncertainty of a nonzero signalx S andK with|S|=|K|= V max. Then selectxS as H is lower bounded as follows: xS ∈null(VS,K¯) andxS¯ to be0. Hence,x 0=|S|. −1 −1 SinceSis the support ofx,wehaveFSxS =Fx. Further- s0(x)≥ F 2 F max , (16) more, where ·2and·max are defined in (2). ♦ FSxS 0= FK,S xS 0+ FK,S¯ xS 0=|K|, (22) Proof:From (6) we haves(x)≥p(x). Therefore, any 0 0 −1 lower bound forp(x)is also a lower bound fors(x). sincexS ∈null(FK,S¯ ). As a result,s0(x)= V max. 0 0 −1 When the GFT of interest is unitary, Theorem 1 and Corol- It should be noted thatV max may not be an integer in −1 lary 1 reduce to the following corollaries: general. In this case, we cannot find index sets of sizeV max since the size of an index set is an integer. In this case, Theorem 2 Corollary 2 (Weak 0uncertainty):For any graph withuni- tarygraph Fourier basisV, tells us that there is no signal that achieves the bound in (18) with equality. −1 p0(x)≥ V max. (17) As an immediate example, the normalized inverse DFT ma- ♦ trix is a unitary graph Fourier basis for circulant graphs [29], [30]. Notice√ that the normalized DFT matrix of sizeN has Proof:LetV be unitary in Theorem 1. Then we −1 H V max = N. Thus, circulant graphs have the following two haveF=V , henceF max = V max. Furthermore, √ −1 results. 1) From Corollary 2, we havep0(x)≥ N.Thisis V 2= F 2=1. This corollary is important since for undirected graphs, the a well-known uncertainty result given in [20], and the bound adjacency matrix and the graph Laplacian are symmetric which is known √to be tight for allN. 2) From Corollary 3, we have result in a unitary graph Fourier basis. The corollary also implies s0(x)≥ N. WhenNis a perfect square “picket fence” signal the following (from AM-GM inequality). is known to achieve this bound [22]. Details of these results will be elaborated in Section VI-A1. Corollary 3 (Strong 0uncertainty):For any graph withuni- tarygraph Fourier basisV, Even though Theorem 2 is useful to characterize signals that achieve the bound in (18), it has a major drawback in terms of −1 s0(x)≥ V max. (18) practical usability. Finding the index sets requires a combinato- rial search over all possible sets of sizeV −1 , which isnot ♦ max computationally efficient. In Section VI, we will provide graph examples on which the inequality in (18) is satisfied with equality. III. UNCERTAINTYBASED ON NORM Even though (18) is a lower bound for the uncertainty, it does 1 not say anything about the signal that achieves the bound. Fur- In the previous section, we defined the spread of a signal as the thermore, it does not say whetherthere isa signal that achieves total number of nonzero elements in the signal. (see Definitions 1 5410 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 65, NO. 
20, OCTOBER 15, 2017 and 2.) In this section we will consider a “smoother” measure Proof:In Theorem 3, whenV is unitary, we have −1 H by replacing the 0with 1norm. V 2= F 2=1. Furthermore,F=V , hence Imitating (5), we can define an1based additive uncertainty F max = V max. ass1(x)=(x 1+ Fx 1)/2. With this definition we have We can finally provide a lower bound for the 1-based additive s1(αx)=|α|s1(x), that is,s1(x)isnotscale invariant. The uncertainty as follows. problem is that a nonzero signal can have arbitrarily small un- −1/2 certainty, which is an undesired property. One way to impose s1(x)≥ V max , (31) the scale invariance is to use a normalization as follows: where the inequality follows froms1(x)≥p1(x)(AM-GM x 1+ Fx 1 inequality). Hence, any lower bound forp1(x)(Corollary 4) is s1(x)= . (23) 2 x p a lower bound fors1(x). It is quite interesting to observe that the termV max appears For any nonzerop,s(x)in (23) has the property of 1 in the lower bound for both0-based and1-based uncertainty s(αx)=s(x)forα=0, hence it can be used as an uncer- √ 1 1 definitions. It should be noted that1/ N ≤ V max ≤1for tainty measure. However, it should be noted that characteristics any matrixVsinceVis assumed to have columns with unit2 ofs(x)depend on the selected norm. In the following, we −1/2 1 p norm. Therefore,V −1 ≥ V always holds true. This will consider the case ofp=2, and define the based uncer- max max 1 shows that the lower bound given by (31) is always less than the tainty measures as follows: bound in (18). In fact, not just the bounds but 0and 1based

x 1+ Fx 1 x 1 Fx 1 uncertainties are also related with each other. The following s1(x)= , p1(x)= ,(24) theorem establishes this relation. 2 x 2 x 2 Theorem 4:Assume that graph of interest has a unitary GFB wheres1(x)andp1(x)are referred to as 1based additive and V. Then, we have the following inequality multiplicative uncertainty of the signalx, respectively. 2 The main reason for considering the case ofp=2is that p0(x)≥ p1(x) , (32) such a selection has strong connections with the 0based un- certainty measures discussed in Section II. These relations will for all signals defined on the graph. ♦ be elaborated at the end of this section (see Theorem 4). Proof:We start with the following change of variables,

As done in Section II, motivated by Theorem 2.1 in [22], a 2 x 1 Fx 1 lower bound forp1(x)can be obtained as follows: p1(x) = 2 = x¯1 Fx¯1, (33) x 2 Theorem 3:For a graph with GFTF,1-based multiplicative uncertainty of a nonzero signalxis lower bounded as follows: where we definex=x/x¯ 2. −1 Given any two vectorsxandywithx = y =1,we p(x)≥ F−1 F 1/2 , (25) 2 2 1 2 max have the following relation (page 20 of [22]) where ·2and·max are defined in (2). ♦ The reader should carefully notice the presence of the square x 1 y 1≤ x 0 y 0. (34) root in (25), which was not there in (8). Notice that we have x¯2=1. Since we have assumed that Proof:Notice thatF is an , therefore the GFB is unitary, we further haveFx¯2=1. Then (34) gives x 2/x 2is lower and upper bounded as follows the following 2 −1 −2 x 2 2 F 2 ≤ 2 ≤ F 2. (26) x¯1 Fx¯1≤ x¯0 Fx¯0. (35) x 2 2 Therefore we have: Notice that left-hand-side of (35) is equal to(p1(x)) due F−1 −2 x 2≤ x 2=xH Fx= x∗F x, (27) to (33). Remember thatp0(x)is scale-invariant. As a result 2 2 2 i i,j j we have x¯ Fx¯ = x Fx , which shows that i,j 0 0 0 0 right-hand-side of (35) is equalp0(x). Hence, we conclude that ∗ 2 ≤ xiFi,jxj ≤ |xi|F max|xj|,(28) (p1(x))≤p0(x). i,j i,j Combining the result of Theorem 4, Corollary 4 and (6), we have the following relations between the aforementioned = x x F , (29) 1 1 max uncertainty definitions where we use the fact that|Fi,j|≤F max for all(i, j)in (28). 2 Notice that taking square-root of both sides and re-arranging the −1 s0(x)≥p0(x)≥ p1(x) ≥ V max (36) terms in (29) give the result in (25). When the GFT of interest is unitary Theorem 3 reduces to the for a unitary graph Fourier basis. It should be noted that 0- following corollary. based additive uncertainty has the strongest result. Namely, Corollary 4 (Weak 1uncertainty):For any graph withuni- if a signal achieves the 0-based additive uncertainty bound −1 taryFourier basisV, (s0(x)= V max), then the signal also achieves the bounds −1/2 given in Corollaries 2 and 4. (This is due to (36).) However, the p1(x)≥ V . (30) max converse is not true. For this reason, in the rest of the paper, we ♦ will focus ons0(x)as the uncertainty measure. TEKE AND VAIDYANATHAN: UNCERTAINTY PRINCIPLES AND SPARSE EIGENVECTORS OF GRAPHS 5411
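The chain of inequalities in (36) can also be checked numerically. The sketch below is illustrative only: the random unitary basis used for the test is an arbitrary choice, and p1 is implemented in the geometric-mean form consistent with (33).

```python
import numpy as np

def uncertainty_chain(x, V, tol=1e-10):
    """Evaluate each term of the chain in (36) for a unitary GFB V (a sketch)."""
    F = V.conj().T                            # F = V^{-1} = V^H for unitary V
    x_hat = F @ x
    n0, n0_hat = np.sum(np.abs(x) > tol), np.sum(np.abs(x_hat) > tol)
    s0 = (n0 + n0_hat) / 2.0
    p0 = np.sqrt(n0 * n0_hat)
    p1 = np.sqrt(np.linalg.norm(x, 1) * np.linalg.norm(x_hat, 1)) / np.linalg.norm(x)
    bound = 1.0 / np.max(np.abs(V))
    return s0, p0, p1 ** 2, bound             # expect s0 >= p0 >= p1**2 >= bound

# Quick check on a random orthonormal basis (an assumption, purely for illustration).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
print(uncertainty_chain(rng.standard_normal(16), Q))
```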

IV. CASE OFREPEATEDEIGENVALUES eigenvectors of the given graph LaplacianLsuch that lower bound in Corollary 3 isminimized(or equivalentlyV is Definitions 1 and 2 are generic in the sense that one can choose max any suitable graph Fourier basis. Most common selections are maximized). We precisely define this problem as follows: H eigenvectors of the adjacency matrix [4], the graph Laplacian max V max s.t.L=VΛLV , (37) and normalized Laplacian [2]. Even after one decides on which V of these should be used, there still is a significant point that whereΛL is a of eigenvalues ofL. requires attention: the possibility ofrepeated eigenvalues.This As a result of the graph Laplacian being a symmetric ma- is mostly the case for unweighted graphs or graphs with integer trix, eigenspaces,Si,ofLare orthogonal to each other. Hence, edge weights (see Section VI). The relation between eigenvalue (37) is a decoupled problem in the sense that we can focus on multiplicity of a graph and the topology of the graph is an individual eigenspaces rather than finding all eigenvectors of interesting problem. Interested reader may refer to [31] for some L. To be more precise, assume that the graph Laplacian hasK results. More on this topic can be found in [32]–[35]. distinct eigenvalues with the corresponding eigenspacesSifor In the following we will use eigenvectors of the graph Lapla- 1≤i≤K. Then, we can writeVas follows cian as the graph Fourier basis. However, the discussion is also valid for the adjacency matrix and the normalized graph Lapla- V =[V1V2 ···VK], (38) cian.Further, we assume that the graph Laplacian, the adja- N×Ni where eachVihas the dimensionVi∈C (Niis the mul- cency matrix and the normalized graph Laplacian are diago- tiplicity of the corresponding eigenvalue), spans the eigenspace nalizable matrices. Hence, geometric and algebraic multiplicity H Si, and it is unitaryViVi=INi for1≤i≤K. Wealso of an eigenvalue are the same. This justifies the use of the term H haveViVj=0fori=j(orthogonality of eigenspaces), and “multiplicity” without specifying which one. V max = max Vi max. Hence, we can write (37) as 1≤i≤K Letλibe an eigenvalue of the graph Laplacian with multi- plicityNi. The corresponding eigenspaceSiis then defined as Si=null(L−λiI), Si=null(L−λiI), whereSiis anNidimensional sub-space N max max Vi max s.t. span(Vi)=Si, (39) ofC . Whenλiis a repeated eigenvalue (Ni>1), any vector 1≤i≤K Vi H ViVi=INi. inSiis an eigenvector with eigenvalueλi. Therefore, by se- th lecting different set of eigenvectors, one can come up with a whereλiis thei distinct eigenvalue ofL. different graph Fourier basis. Hence,the graph Fourier basis It is important to notice that inner maximization in (39) can for the graph Laplacian is not unique. be solved independently for each eigenspace. For this purpose, Selection of the graph Fourier basis may affect the proposed we have the following definition: uncertainty principles significantly. For example, consider the Definition 3 (Max-max norm of a subspace):LetS be an complete graph ofNnodes. It is possible to selectVsuch√ that M-dimensional subspace ofCN. The max-max norm ofS, −1 −1 eitherV max = N/(N-1),orV max = N. The former m(S), is defined as: decreases withNand approaches unity for largeN, whereas the H m(S) = max U max s.t.span(U)=S, U U=I. latter increases withNunboundedly. Therefore, interpretations U ∈CN×M of the proposed uncertainty bounds differ greatly depending on the selected basis. 
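The two competing GFB choices just described for the complete graph can be reproduced numerically. The following sketch (an illustration, not from the paper) builds both bases for K_N, namely the normalized DFT matrix and a basis whose eigenvalue-N eigenspace contains a projected canonical vector, and returns the resulting bound 1/‖V‖_max in each case.

```python
import numpy as np

def complete_graph_gfb_bounds(N, seed=0):
    """Two valid GFBs for the complete graph K_N and their bounds 1/||V||_max.
    W: normalized DFT basis (valid since K_N is circulant).
    U: unitary basis whose eigenvalue-N eigenspace contains a projected impulse."""
    W = np.fft.fft(np.eye(N)) / np.sqrt(N)
    ones = np.ones((N, 1)) / np.sqrt(N)                  # eigenvector for eigenvalue 0
    u = np.zeros((N, 1)); u[0, 0] = 1.0
    u -= ones * (ones.T @ u)                             # project e_1 away from the all-1 vector
    u /= np.linalg.norm(u)
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(np.hstack([ones, u, rng.standard_normal((N, N - 2))]))
    return 1 / np.max(np.abs(W)), 1 / np.max(np.abs(U))  # sqrt(N) vs. sqrt(N/(N-1))

print(complete_graph_gfb_bounds(8))   # approximately (2.828, 1.069)
```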
It should be noted that the complete graph That is,m(S)is the maximum of max-norm of matrices with of sizeNis an extreme example since it has anN−1dimen- orthonormal columns that span the given sub-spaceS. ♦ sional eigenspace corresponding to eigenvalueN. Nevertheless Notice thatanyelement ofanyunitary basis that spansSis it shows the importance of the selection of the graph Fourier always less than (or equal to)m(S)in absolute sense. In the basis. following theorem, we will provide a closed form solution for Since our definition of uncertainty depends on the selected the max-max norm of a sub-space. N graph Fourier basis, in the following, we will mainly discuss Theorem 5:LetSbe aM-dimensional subspace ofC .Let N×M H two different approaches for the selection of the eigenvectors. U ∈C beanymatrix withspan(U)=S, andU U =I. This section (Section IV) will study the first approach where we Then max-max norm of sub-spaceSis select the GFB in a way that the lower bound in Corollary 3 is H minimized. In the next section (Section V), we will consider the m(S) = max UU (40) 1≤j≤N j,j second approach where we look for the sparsest eigenvectors. th We will also relate these two approaches in Section V. where(·)j,jdenotes thej diagonal entry. ♦ Proof:LetU andQ be two matrices with orthonormal A. Minimizing the Lower Bound columns such thatspan(U)=span(Q)=S. Since both span the same sub-space, we can writeQ=UX for some unitary In Corollary 3, we showed that the average number of nonzero matrixX of sizeM. Then we can writem(S)as: elements in a graph signal and its graph Fourier transform is −1 H lower bounded byV max whereV is assumed to be unitary. m(S) = max UX max s.t.X X =I, (41) When the graph of interest has repeated eigenvalues, one can X ∈CM ×M select different set of eigenvectors which results in different whereU is an arbitrary matrix with orthonormal columns that −1 th H th values forV max. In this section, our purpose is to select spanS.Letxibe thei column ofX, anduj be thej row 5412 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 65, NO. 20, OCTOBER 15, 2017 ofU. Then we can write (41) as: solution to (37) since it has the largest max-norm among all possible selections of the eigenvector matrices. H H m(S) = max ujxi s.t.xixj=δi,j (42) 1≤j≤N 1≤i≤M B. Numerical Problems xi H Even though Theorem 6 provides an efficient way to select the = max ujx1 s.t. x1 2=1 (43) 1≤j≤N −1 x1 eigenvectors such that lower bound in Corollary 3,V max,is H H minimized, there is an important numerical issue. In order to uti- = max uj 2= max UU (44) 1≤j≤N 1≤j≤N j,j lize Theorem 6, we need to group the eigenvectors that belong to the same eigenspace, which requires an equality check between where we assume (w.l.o.g.) in (42) thatx is the vector that 1 the corresponding eigenvalues. However, it is not possible to dis- achieves the maximum. Furthermore, otherx’s will have the i tinguish two values if they are closer than theprecisionof the nu- additional constraint of being orthonormal tox. As a result, 1 merical system. As a particular example consider a (undirected, x’s for2≤i≤M cannotproduce a larger inner product. Fur- i unweighted) path graph of sizeN. Eigenvalues of the adjacency ther, once the optimalx is selected, the remainingx’s can 1 i matrix of this graph are given asλ =2cos(πk/(N+1))for be selected arbitrarily as long as they are orthonormal to each k 1≤k≤N [36]. One can show that|λ-λ|≤, when the size other. This can be done via Gram-Schmidt process. In (43) we √ 1 2 of the graph isN≥ 3π −1/2. 
Hence, for any numerical preci- use the fact that inner product is maximized when the vectors sion, there exists a graph of sizeNsuch thatλ andλ cannot are aligned with each other. Equality in (44) follows from the 1 2 be distinguished from each other. Study in [37] has observed fact that -norm-square of a row is the corresponding diagonal 2 similar numerical problems as well. entry of the outer product. Finally, we state the maximized objective function value in V. SPARSEEIGENVECTORS OFGRAPHS (37) in the following theorem. Theorem 6 (Maximum Max-Norm of GFB):Assume the After the definition of additive uncertainty given in (5), the graph Laplacian,L, hasKdistinct eigenvalues. LetNidenote ultimate purpose of this study is to find a solution to (7) in order the multiplicity, andSidenote the eigenspace of the eigenvalue to find signals that are sparsebothin the vertex domain and the N×Ni λi for1≤i≤K. LetUi∈C beanymatrix with Fourier domain of a given graph. Unfortunately, solution to (7) H UiUi=Iandspan(Ui)=Si. Then we have the following: is not straightforward due to its combinatorial nature. We have provided lower bounds for the solution to (7) in Corollaries 1 H H and 3. In Section IV we studied the optimal selection of the max UiUi = max V max s.t.L=VΛLV , 1≤j≤N j,j V graph Fourier basis in order to minimize the lower bound given 1≤i≤K by Corollary 3. However, these approaches have two downsides whereΛL is a diagonal matrix of eigenvalues ofL. ♦ 1) Even though there are examples where the bound given by Proof:Follows from equivalence between (37) and (39), Corollary 3 is tight (see Section VI), it may not be the case Definition 3 and Theorem 5. for an arbitrary graph. 2) Even if the solution to (7) is known, Theorem 6 only provides the value of the maximized objective aforementioned results are unable to find the signals that achieve function in (37), which is useful to find the minimum lower the minima (except for Theorem 2, which requires an exhaustive bound given by Corollary 3. In fact, we can explicitly construct search to find a bound achieving signal). In this section, in the set of eigenvectors that result in the maximum max-norm. order to overcome these downsides, we will consider additive For this purpose, consider again the proof of Theorem 5. Notice uncertainty of graph Fourier basis elements. that it is aconstructiveproof, which can be translated into an Without losing any generality we will use orthogonal eigen- algorithm as follows. Given the graph Laplacian, one can take vectors of the graph Laplacian as the graph Fourier basis. That H H the eigenvalue decomposition and obtainL=VΛLV with a is,Vis the GFB whereL=VΛLV . Then, assuming a pre- proper ordering of eigenvectors such thatV=[V1···VK]and defined ordering of eigenvectors, theithcolumn ofV, denoted N×Ni th columns of eachVi∈C belong to the same eigenspace. byvi, will be thei element of GFB. Since GFT is defined as −1 Then, we utilize the following three steps for eachVi: F=V ,wehaveFvi 0=1. Then, the additive uncertainty H th 1) Letvi,jdenote thej row ofVi, and letjbe such that of a graph signal that is an element of GFB is given as j = arg max vH . 1≤j≤N i,j 2 s(v)= v +1/2. (45) Ni×Ni 0 i i 0 2) LetXi=[xi,1 xi,2 ···xi,Ni]∈C , and select xi,1=vi,j /vi,j 2. Remaining columns ofXican be Notice that quantity in (45) is directly related to the sparsity H selectedarbitrarilysuch thatXiXi=INi holds true. of the GFB element. Ifviitself is a sparse vector, then we have 3) ComputeVi=ViXi. 
adirect evidenceof a signal that is sparse in the vertex domain Then, the set of eigenvectors that has the maximum max-norm and graph Fourier domain simultaneously. Furthermore, addi- can be constructed asV =[V1 ···VK]. tive uncertainty of sparse eigenvectors may achieve (or, come Notice thatVi is just a different unitary basis for close to) the bound given in Corollary 3. Therefore, our aim the eigenspace spanned byVi. However, unlikeVi max, in this section, is to find sparse eigenvectors of graphs. How- Vi max is guaranteed to achieve the max-max norm of the ever, it should be noted that a signal that achieves the minima corresponding eigenspace (Theorem 5). As a result,V is a of the additive uncertainty maynotbe an element of the GFB. TEKE AND VAIDYANATHAN: UNCERTAINTY PRINCIPLES AND SPARSE EIGENVECTORS OF GRAPHS 5413
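The three-step construction just described translates into a short routine. The sketch below is one possible rendering, assuming a real symmetric Laplacian (so no conjugation is needed) and a numerical tolerance for grouping repeated eigenvalues; it is illustrative rather than the authors' implementation.

```python
import numpy as np

def max_maxnorm_gfb(L, tol=1e-8, seed=0):
    """For each eigenspace of a symmetric Laplacian L, rotate an arbitrary orthonormal
    basis V_i so that its max-norm reaches the max-max norm m(S_i) (Theorems 5 and 6)."""
    lam, V = np.linalg.eigh(L)
    rng = np.random.default_rng(seed)
    V_out = np.zeros_like(V)
    i = 0
    while i < len(lam):
        j = i
        while j + 1 < len(lam) and abs(lam[j + 1] - lam[i]) < tol:
            j += 1                                   # indices i..j share one eigenvalue
        Vi = V[:, i:j + 1]                           # N x Ni orthonormal basis of S_i
        Ni = Vi.shape[1]
        k = np.argmax(np.linalg.norm(Vi, axis=1))    # step 1: row with the largest 2-norm
        x1 = Vi[k, :] / np.linalg.norm(Vi[k, :])     # step 2: first column of X_i
        Xi, _ = np.linalg.qr(np.column_stack([x1, rng.standard_normal((Ni, Ni - 1))]))
        V_out[:, i:j + 1] = Vi @ Xi                  # step 3: rotated basis V_i X_i
        i = j + 1
    return lam, V_out          # max(|V_out|) attains the largest max-norm over all GFBs
```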

Therefore, if the GFB elements,vi, are dense, we cannot reach Now assume thatx is a solution to right hand side of any conclusion using (45). In Section VI-A2 we will provide (48). Then consider the matrixU=[x u2···uM ]by se- H examples in this regard. lectingui’s such thatU U =Iandspan(U)=S. Hence U is in the feasible set of the problem in (49). Therefore, A. Sparse Vectors in an Eigenspace m(S)≥ U max ≥ x ∞ = maxx x ∞,x∈S,x 2=1. As a result, we conclude that maximization on the right-hand When the graph Laplacian has repeated eigenvalues, eigen- side of (48) is equivalent tom(S), and provide the following vectors are not unique and they form a sub-space. In lower bound for the solution of the problem in (46) Section IV-A, we discussed how eigenvectors can be selected −2 so thatthe lower boundfors0(x)is minimized. In this sec- m(S) ≤min x 0 s.t.x∈S, x 2=1. (50) tion we will try to select eigenvectors such thats0(vi)in x (45) is minimized. To be more precise, assume thatLhasK The two main points of this section can be summarized as (K≤N) distinct eigenvalues with corresponding eigenspaces follows: Sifor1≤i≤K. Then we consider the following problem: 1) The search for a sparse eigenvector in a specific eigenspace is an inherently difficult problem. Even though numerical tech- min v 0 s.t.v∈Si, v 2=1. (46) v niques that approximate the solution exist [42], closed form for each eigenspace ofL. solutions are not available in general. In this aspect, computa- The problem in (46) is precisely defined as “The Null Space tion of sparse eigenvectors differs from the max-norm approach Problem” in [38], and shown to be NP-Hard. Interested reader is discussed in Section IV, where we provided closed form solu- referred to [39]–[41] for computational approaches to solution tions by focusing on individual eigenspaces. of (46). Apart from these, the study in [42] proposes an iterative 2) Although we do not have a closed form solution for the algorithm in order to find an approximate solution to (46) via1 sparsest vector in a given eigenspace, we can provide a lower relaxation. bound for the total number of nonzero elements as in (50). Unlike the case of minimizing the lower bound in Corol- The inequality in (50) is especially useful when we want to lary 3 (see Theorem 6), selection of the sparsest eigenvector is show that an eigenspace doesnothave sparse vectors. We will not computationally tractable due to NP completeness of the use this inequality in Section VI-A2 to formally show that an problem in (46). However, it is quite interesting that the max- undirected cycle graph does not have sparse eigenvectors. max norm of a subspace (given in Definition 3) provides a lower bound for the problem in (46). In the following we will precisely B. Algebraic Characterization of Sparse Eigenvectors establish this relation. In the previous section we mentioned that finding the sparsest N Letx∈C be a vector with unit2norm,x 2=1. √Then the vector in an eigenspace is a difficult problem. Therefore, when infinity norm ofxcan be bounded as1≥ x ∞ ≥1/ N.In looking for sparse eigenvectors, we should consider the graph fact, if we further know thatxhasLnonzero elements√ (L≤N), (Laplacian) as a whole rather than focusing on each eigenspace we can improve the lower bound asx ∞ ≥1/ L. Therefore, individually. 
Furthermore, in Section IV-B we mentioned that we have the following inequality: characterization of eigenspaces of graphs suffers from numerical −2 when the graph is large (relative to the numerical x 0≥ x ∞ ∀x s.t x 2=1, (47) √ precision of the system). This is a significant problem especially where equality is achieved when|xi|=1/ Lfor alli’s in the when a large-scale real-world data is of interest. Therefore, we support ofx. need a way to characterize the sparse eigenvectors of graphs Using the inequality in (47) we can obtain a lower bound for without usingnumericaltechniques. The purpose of this section (46) as follows: is to find these sparse eigenvectorsalgebraically. −2 In the case of disconnected graphs we have a straightforward −2 min x 0≥ min x ∞ = max x ∞ ,(48) result. Assume that the graph isundirectedbutnon-negatively x∈S x∈S x∈S x 2=1 x 2=1 x 2=1 weightedand consists ofDdisconnected components with sizes Ci. Then the adjacency matrix and the graph Laplacian can be where we use the fact thatx ∞ is strictly positive, finite, and −1 written in the following form bounded away from zero so thatx ∞ is finite. ⎡ ⎤ ⎡ ⎤ In the following we will show that the maximization problem A1 L1 ⎢ . ⎥ ⎢ . ⎥ in the right hand side of (48) is equivalent to definition of max- A =⎣ .. ⎦, L=⎣ .. ⎦, max norm of the subspaceS. Remember from Definition 3 that A L max-max norm is defined as D D (51) H Ci Ci m(S) = max U max s.t.span(U)=S,U U=I.(49) whereAi∈ M andLi∈ M are the adjacency matrix and U ∈CN×M the graph Laplacian of theithcomponent, respectively. Due

Assume thatU =[u1···uM ]is a solution to the problem to block-diagonal form ofAandL, corresponding eigenvec- in (49). Then we havem(S)= U max = maxi ui ∞. tors can be selected to be block sparse. Therefore, there exists Further,ui∈S, and ui 2=1. Hence we have an eigenvector that hasat mostCinonzero elements for each m(S)≤maxx x ∞,x∈S,x 2=1. 1≤i≤D. It is important to note that the converse of this result 5414 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 65, NO. 20, OCTOBER 15, 2017 is not true: if a graph has a sparse eigenvector, it doesnotimply generality assume that the first two indices are nonzero, that is, that the graph is disconnected. As a counter example consider v1=0andv2=0,butvi=0fori≥3. Theorem 8, which proves that a connected graph can have a For a connected graph, notice that the all-1 vector is the sparse eigenvector. Examples of such graphs will be provided only eigenvector of the graph Laplacian with the zero eigen- at the end of this section. However, 1-sparse eigenvectors are value. Since the graph Laplacian is a the exemption in this regard. That is, a graph has a 1-sparse eigen- eigenspaces are orthogonal to each other. Therefore the 2-sparse vector if and only if it has an isolated node. This result is stated eigenvector (with nonzero eigenvalue)vis orthogonal to the all- as follows. 1 vector, which implies thatv1+v2=0. Then, we can select Theorem 7 (Isolated nodes of a graph):Assume that the v1=−v2=1without loss of any generality. graph of interest isundirectedbutnon-negatively weighted. LetAdenote the adjacency matrix of the graph. We have ⎡ ⎤ Then, the following statements are equivalent: T ⎡ T ⎤ 0 a1,2 ar,1 d1 −a1,2 −a 1) The graph has an isolated node. ⎢ ⎥ r,1 ⎢a 0 aT ⎥ ⎢ T ⎥ 2) The graph Laplacian has a 1-sparse eigenvector. A =⎣ 2,1 r,2⎦,L=⎣−a2,1 d2 −ar,2⎦, 3) The GFBcanbe selected such that there exists a nonzero ar,1 ar,2 Ar −ar,1 −ar,2 Lr signal that achievess(x)=1, i.e.,x = Fx =1. ♦ 0 0 0 (54) Proof:We prove(1)implies(2): Assume that the graph whereA ∈ MN−2 andL ∈ MN−2 are the partitions of has an isolated node. According to (51), there exists a 1-sparse r r the adjacency matrix and the graph Laplacian, respectively. eigenvector of the graph Laplacian. a ∈RN−2 is the vector that denotes the adjacency of We now prove that(2)implies(1):Letv be the 1- r,1 node 1 except node 2.a is the same for node 2. Notice that sparse eigenvector. Without loss of generality assume that the r,2 d =a + a andd =a + a . Then, consider first index is nonzerov =1 and the rest is zero. There- 1 2,1 r,11 2 1,2 r,21 1 the following foreLvis equivalent to the first column ofL. That is, ⎡ ⎤ ⎡ ⎤ T T T T N-1 Lv=[d1 -ar,1]=λ[10 ], wherear,1∈R is the vec- d1+a1,2 1 tor that denotes the adjacency of node 1, andd = a is the ⎢ ⎥ ⎢ ⎥ 1 r,11 Lv=⎣−(a2,1+d2)⎦=λv=λ⎣−1⎦. degree of node 1. Therefore we havea =0, henced =0. r,1 1 −a +a 0 Since edge weights are non-negative, node 1 is an isolated node. r,1 r,2 Next we will prove that(2)implies(3):Letvbe a 1-sparse Therefore we havear,1=ar,2, which in particular implies that eigenvector of the graph Laplacian. Then,vcanbe selected d1=d2 sincea1,2=a2,1 (graph is undirected). Furthermore to be an element of the GFT. In this case,v 0= Fv 0=1, the corresponding eigenvalue isλ=d1+a1,2. Since the graph hences0(v)=1. is connectedd1>0,λis nonzero. Notice that the condition Finally, we prove(3)implies(2):Lets0(x)=1for some ar,1=ar,2is the same as the condition in (52). x=0. 
Sincex 0≥1for any nonzero signal, we must have Conversely, assume that there exist two nodes with the prop- x 0= Fx 0=1.Fx 0=1implies thatxis an eigen- erty in (52). Without loss of generality, assumei=1andj=2, vector of the graph Laplacian and x 0=1 implies that and letvbe a 2-sparse vector withv1=−v2=1. Then partition x is 1-sparse. Hence the graph Laplacian has a 1-sparse the graph Laplacian as in (54). Due to (52), we havear,1=ar,2, eigenvector. andd1=d2. Then we haveLv=λvwithλ=d1+ar,1. Now we provide the characterization theorem for 2-sparse Therefore,vis a 2-sparse eigenvector ofL. Further, the graph eigenvectors of graphs. Recall that a graph is said to be con- is connectedd1>0, henceλ>0. nected if there is a path between any pair of nodes. Similar to Theorem 8, the study in [28] also reveals a spe- Theorem 8 (2-sparse eigenvectors of a connected graph): cific graph structure that results in sparse eigenvectors of both LetA denote the adjacency matrix of anundirectedandcon- the adjacency matrix and the graph Laplacian. In particular, it nectedgraph withai,j≥0being the weight of the edge considers the case when a graph has two copies of the same sub- between nodesiandj. Then, there exist nodesiandjsuch that graph (referred to as “motif doubling” in [28]). That is, there are two disjoint subsets (of sizeK) of nodes,S1andS2, such that ai,k=aj,k ∀k∈{1,···,N}\{i, j}, (52) the induced sub-graphs onS1andS2are the same, there is no if and only ifthe graph Laplacian,L, has a 2-sparse eigenvec- edge betweenS1andS2, andS2is connected to the rest of the tor with nonzero eigenvalueλ=di+ai,j. When the graph is graph in the same wayS1is. In this case the adjacency matrix unweighted, (52) can be stated as (and the graph Laplacian) can be shown to have(2K)-sparse eigenvectors (Theorem 2.2 of [28]). In the case ofK=1(each N(i)\{j}=N(j)\{i}, (53) subset has only one node), this motif doubling property reduces whereN(i)is the set of nodes that are adjacent to nodei.♦ to the condition in (53) withai,j=0. Note that a 2-sparse eigenvector can be assumed to have However, it is important to note that the condition in The- values 1 and−1on the nodes with the property (52). (See the orem 8 is more general than the one in [28] due to the fol- proof below.) This result is especially useful for Theorem 9. lowing two reasons: 1) the construction in [28] provides only Proof:Assume that the graph Laplacian of a connected graph asufficientcondition, whereas Theorem 8 gives the necessary has a two-sparse eigenvectorvwith nonzero eigenvalue. Due to and sufficient condition to have a 2-sparse eigenvector. 2) The permutation invariance of the node labels, without loss of any motif doubling idea in [28] specifically considers the case when TEKE AND VAIDYANATHAN: UNCERTAINTY PRINCIPLES AND SPARSE EIGENVECTORS OF GRAPHS 5415

ai,j=0, whereas Theorem 8 is applicable to the case ofai,j=0 as well. In the general case ofK, it is straightforward to find sufficient conditions for aK-sparse eigenvector to exist: the motif doubling in [28] and “the same neighborhood structure” in [24] are two such examples. On the other hand, it is difficult to reveal the necessary conditions forK-sparse eigenvectors to exist. As discussed in Section II, the additive uncertainty of a signal Fig. 1. (a)K8, complete graph of size 8, (b)S9, star graph of size 9, (c)C8, xin (5) can take only half integer values for nonzero signals, cycle graph of size of 8. that iss0(x)∈{1,3/2,2,···,N}. It should be noted that Theorem 7 precisely characterizes the case whens0(x)takes its possible minimum value. A nonzero signal hass0(x)=1if that the graph is connected.) As a result, nonzero elements of and only ifthe graph has an isolated node. This result is espe- the signal that achieves the bound in (55) are at most 2 hops cially useful to conclude that a nonzero signal on aconnected away from each other. However, localization property is unique graph, which does not have any isolated node,cannotachieve to 2-sparse eigenvectors. An eigenvector with an arbitrary level s0(x)=1. Therefore, we consider the next attainable case for of sparsity may not be localized on the graph. Details of these connected graphs, that is,s0(x)=3/2. This happens under two are discussed in [24]. circumstances −1 3) Due to Corollary 3, the inequalitys0(x)≥ V max is 1)x 0=1,Fx 0=2: the signalxis an impulse on the always true. When3/2is the smallest attainable value for vertex domain, and it has a 2-sparse GFT. −1 s0(x),wehave3/2≥ V max. Therefore, existence of a pair 2)x 0=2,Fx 0=1: the signalxis a 2-sparse eigen- of nodes with (52) proves that there exists a GFBVsuch that vector of the graph Laplacian. V max ≥2/3. In fact, using Corollary√ 2 this result can be We note that Theorem 8 precisely characterizes the second slightly improved toV max ≥1/ 2. case. Therefore, given a connected graph, existence of a pair of 4) In the case of repeated eigenvalues of the graph Laplacian, nodes that satisfy (52) implies thats0(x)≥3/2for all nonzero the GFB is not unique. However, uncertainty boundsdependon signals on the graph. Furthermore, the bound is tight, and the the selection of the GFB. When there are repeated eigenvalues signal that achieves the bound is known. This result is formally GFB should be selected properly (sparse eigenvector should be stated as follows. an element of GFB) in order to have (55). This point will be Theorem 9 (Uncertainty bound for connected graphs):For numerically demonstrated in Section VI-A3. anundirected,connected, andnon-negatively weightedgraph, In the following we will provide three classical graph exam- assume that there exist nodesiandjsatisfying the condition in ples that satisfy, or do not satisfy, the condition in (53). Notice (52). Then, the GFB with respect to the graph Laplaciancan be that these graphs are simple (undirected, unweighted, free from selected such that self-loops) and connected. 1) Complete Graph,KN:A complete graph ofNnodes has s0(x)≥3/2 ∀x=0. (55) an edge between any two nodes. Figure 1(a) provides a visual Furthermore, the signal achieving this bound,s0(x)=3/2,is representation ofK8.Letiandjbe two arbitrary nodes of a given asxi=−xj=1and zero everywhere else. ♦ complete graph. Then we haveN(i)={1,···,N}\{i}, and Proof:For a simple and connected graph, Theorem 7 says N(j)={1,···,N}\{j}. 
Therefore, we have that there is no signal such thats(x)=1. Sinces(x)can take 0 0 N(i)\{j}=N(j)\{i}=N(i)={1,···,N}\{i, j},(56) only half integers in[1,N],s0(x)=1implies thats0(x)≥3/2 for any nonzero signalx. which shows that a complete graph of an arbitrary size has a Furthermore, if there is a pair of nodes satisfying (52), The- 2-sparse eigenvector. orem 8 says that the graph Laplacian has a 2-sparse eigenvec- 2) Star Graph,SN:A star graph of sizeN has a center tor. Letvdenote this eigenvector. Then we havevi=−vj=1 node that is connected to any other node, and all the nodes are and zero everywhere else. Notice that GFB with respect to the connected only to the center node. Figure 1(b) provides a visual graph Laplaciancan beselected such thatvis an element of representation ofS9. Assume that the center node is labeled GFB. In this case we havev 0=2andFv 0=1, hence as 1. Letiandjbe two nodes other than the center node. Then s0(v)=3/2. This shows that the lower bound in (55) is tight we haveN(i)=N(j)={1}. Therefore, and attainable. N(i)\{j}=N(j)\{i}={1}, (57) There are four remarks regarding Theorem 9. 1) The tightness of the bound given in (55) doesnotdepend which shows that a star graph of an arbitrary size has a 2-sparse on the size and the global structure of the graph. Existence of a eigenvector. pair of nodes with (52) directly implies this result. 3) Cycle Graph,CN:A cycle graph of sizeN contains a 2) The signal that achieves the bound is localized on the graph. single cycle through all nodes. Figure 1(c) provides a visual rep- Notice that if two nodes have the property in (52), they must resentation ofC8. Notice thatC2=K2,C3=K3,C4=K2,2, have at least one common neighbor. (This follows from the fact hence they have 2-sparse eigenvectors as shown above. For 5416 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 65, NO. 20, OCTOBER 15, 2017

√ N ≥5,C doesnothave a pair of nodes that satisfy (53). −1 N adjacency matrix isF=W N, and we haveV max = N. Therefore, a cycle graph forN ≥5does not have a 2-sparse As a result, the strong √uncertainty principle for circulant graphs eigenvector. In fact, an eigenvector of a cycle graph of sizeN of sizeNiss0(x)≥ N. has at leastN/2nonzero values (see Section VI-A2). As shown in [22], this is a tight bound whenN is a perfect Above examples are carefully selected to point out an impor- square. Consider√ the “picket√ fence” signal√ which has support tant observation:sparsity of the graph and existence of sparse S={1,1+ N,1+2 N,···,1+N− N}with eigenvectors do not imply each other. This follows from the following three facts: 1) A complete graph is dense, yet it has xS =1, xS¯=0. (60) a sparse eigenvector. 2) A cycle graph is sparse, yet it does not √ have a sparse eigenvector. 3) A star graph is sparse, and it has a Then we havex=Fx=x.√ Notice that|S|= N.Asaresult −1 sparse eigenvector. we haves0(x)= N = V max, that is, strong0uncertainty One can also use Theorem 8 to find a sparse GFB of a given is achieved (Corollary 3). √ graph. Existence of a pair of nodes that satisfy (53) guarantees For the weak uncertainty we havep0(x)≥ N, that is, the existence of a 2-sparse eigenvector. When there is more than x 0 x 0≥N, wherexcorresponds to DFT ofx.Thisis one pair, it is possible to find various 2-sparse eigenvectors. Even a well-known uncertainty result given in [20]. Unlike the strong though those eigenvectors may not be orthonormal to each other uncertainty, weak uncertainty bound can be achieved for any they provide a sparse GFB. In fact,N−1eigenvectors of the N.Letxbe an impulse, thenxwill have no zero elements, graph Laplacian of a complete graph of sizeNcan be selected resulting inx 0 x 0=N. to be 2-sparse. These eigenvectors will be linearly independent, If the graph is unweighted, then circulant graphs are regular but not orthonormal. In this case, GFB has only3N−2nonzero (each node has the same degree). In this case the graph Laplacian entries. Details of these will not be elaborated here, and deserve can be written asL=dI−A, wheredis the degree of each an independent study. node. Therefore,Lis also a , and diagonalizable It is important to notice that the condition in (52) is purely al- byW N. As a result, strong uncertainty bound based on the gebraic, and does not require any numerical computation. There- graph Laplacian is also tight. fore, Theorems 8 and 9 arenotsubject to the problems discussed 2) Cycle Graph:In this part we will focus on theundirected in Section IV-B. In order to find a pair of nodes with the property cyclic graph as visually shown in Figure 1(c). Eigenvalues of the in (52), one can check every pair in a brute-force manner, which Laplacian of a cyclic graph are given asλk=2−2 cos(2πk/N) N results in 2 tests in total. Therefore, complexity of verifying for0≤k≤N−1[36]. Notice thatλ0=0is not a repeated that a graph has a pair of nodes with the property (52) is at eigenvalue. However, other eigenvalues have the property 2 mostO(N ). However, there may exist more efficient search λk=λN−kfork≥1. Therefore, the Laplacian of a cycle graph algorithms for this purpose. has 2-dimensional eigenspaces. LetSkdenote the 2-dimensional eigenspace of the Laplacian corresponding to eigenvalueλk.Let th H VI. EXAMPLES OFUNCERTAINTYBOUNDS wk denote thek column ofW N. ThenUk=[wk wN−k] spans the eigenspaceSksince the Laplacian is diagonalized by A. Standard Examples from Graph Theory √ W N. 
2) Cycle Graph: In this part we will focus on the undirected cyclic graph, as visually shown in Figure 1(c). Eigenvalues of the Laplacian of a cyclic graph are given as $\lambda_k = 2 - 2\cos(2\pi k/N)$ for $0 \le k \le N-1$ [36]. Notice that $\lambda_0 = 0$ is not a repeated eigenvalue. However, the other eigenvalues have the property $\lambda_k = \lambda_{N-k}$ for $k \ge 1$. Therefore, the Laplacian of a cycle graph has 2-dimensional eigenspaces. Let $\mathcal{S}_k$ denote the 2-dimensional eigenspace of the Laplacian corresponding to eigenvalue $\lambda_k$. Let $w_k$ denote the $k^{th}$ column of $W_N^H$. Then $U_k = [w_k \;\; w_{N-k}]$ spans the eigenspace $\mathcal{S}_k$ since the Laplacian is diagonalized by $W_N$.

Notice that $|(W_N)_{i,j}| = 1/\sqrt{N}$ for all pairs $(i, j)$. As a result, each row of $U_k$ has $\ell_2$-norm $\sqrt{2/N}$. Then, Theorem 5 gives that $m(\mathcal{S}_k) = \sqrt{2/N}$. Using (50) we conclude that the total number of nonzero elements of an eigenvector in $\mathcal{S}_k$ is at least $(m(\mathcal{S}_k))^{-2} = N/2$. Since this is true for any eigenspace, any eigenvector of the Laplacian of a cycle graph (of size $N$) has at least $N/2$ nonzero values.

As discussed in the previous sub-section, we have $s_0(x) \ge \sqrt{N}$ for all nonzero signals on a cycle graph when the GFB is selected as $W_N^H$. When we consider the additive uncertainty of elements of the GFB, as in (45), we get $s_0(v_i) \ge (N+2)/4$ since each eigenvector has at least $N/2$ nonzeros. As a result, elements of the GFB are not useful candidates to achieve the bound $s_0(x) \ge \sqrt{N}$ with equality. This is because eigenvectors of the cycle graph are not sparse.

3) Complete Graph: Being a circulant graph, the GFB of a complete graph can be selected as $W_N^H$. With this selection, the additive uncertainty bound is given as $s_0(x) \ge \sqrt{N}$. It should be noted that the Laplacian of a complete graph (of size $N$) has only two distinct eigenvalues: 0 with multiplicity 1, and $N$ with multiplicity $N-1$. One can select a different set of vectors to span the $(N-1)$-dimensional eigenspace. In fact, as discussed in Section V-B1, one of these vectors can be selected to be 2-sparse. With such a selection, by virtue of Theorem 9, the uncertainty bound is given as $s_0(x) \ge 3/2$, which is significantly different from the one when $W_N^H$ is used as the GFB. At this point we are not favoring one selection of GFB over another. The sole purpose of this example is to show that selection of the GFB is an important issue in the presence of repeated eigenvalues.
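A small numerical illustration of the complete-graph case is given below. This is our own sketch (with $N = 6$ chosen arbitrarily): the $N-1$ vectors of the form $e_i - e_j$ are 2-sparse Laplacian eigenvectors with eigenvalue $N$; they are linearly independent but not orthonormal, and together with the constant eigenvector they form a GFB with only $3N-2$ nonzero entries, as noted earlier.

# Sketch (our construction, N = 6 chosen arbitrarily): 2-sparse eigenvectors of the
# complete-graph Laplacian and the resulting sparse (non-orthonormal) GFB.
import numpy as np

N = 6
A = np.ones((N, N)) - np.eye(N)          # adjacency matrix of the complete graph K_N
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

V = np.zeros((N, N))
V[:, 0] = 1 / np.sqrt(N)                 # constant eigenvector, eigenvalue 0
for j in range(1, N):
    V[0, j], V[j, j] = 1, -1             # 2-sparse eigenvectors of the form e_i - e_j

print(np.allclose(L @ V[:, 1:], N * V[:, 1:]))    # True: eigenvalue N with multiplicity N-1
print(np.linalg.matrix_rank(V) == N)              # True: linearly independent, not orthonormal
print(np.count_nonzero(V) == 3 * N - 2)           # True: the GFB has only 3N-2 nonzero entries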

B. M-Block Cyclic Graphs

In [8]–[11], it is shown that $M$-Block cyclic graphs play an important role in the development of multirate processing of graph signals. When we assume that all the edges have unit weights, the adjacency matrix of an $M$-Block cyclic graph of size $N$ can be written as $A = C_M \otimes \mathbf{1}_{N/M}\mathbf{1}_{N/M}^T$, where $C_M$ is given in (58) and $\mathbf{1}_N$ is a vector of size $N$ with all entries equal to 1. Notice that both $C_M$ and $\mathbf{1}_{N/M}\mathbf{1}_{N/M}^T$ are circulant matrices, hence they are diagonalizable by normalized DFT matrices of the respective sizes. As a result, the GFT based on the adjacency matrix (and the graph Laplacian, since all the nodes have the same degree) can be selected as $F = W_M \otimes W_{N/M}^H$. Notice that $V = F^H$ is unitary and $\|V\|_{\max}^{-1} = \sqrt{N}$.

As an example, consider the case $N = 9$ and $M = 3$, and consider the following signal $x = [1\ 1\ 1]^T \otimes [1\ 0\ 0]^T$. The GFT of this signal, $\hat{x}$, is given as
$$\hat{x} = W_3 [1\ 1\ 1]^T \otimes W_3^H [1\ 0\ 0]^T = [1\ 0\ 0]^T \otimes [1\ 1\ 1]^T. \qquad (61)$$
Hence, we have $\|x\|_0 = \|\hat{x}\|_0 = 3$, and $s_0(x) = 3 = \|V\|_{\max}^{-1}$. Therefore, the strong uncertainty bound is achieved. In general, let $N$ be a perfect square and $M = \sqrt{N}$. Then for any $M$-Block cyclic graph of size $N$ with unit weights, we can find a signal that achieves the strong uncertainty bound.
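The identity in (61) follows from the mixed-product property of the Kronecker product, and it can also be checked directly. The short sketch below is our own numerical verification, with the normalized DFT matrices built via numpy (an assumed, not prescribed, implementation).

# Sketch verifying (61): N = 9, M = 3, x = [1 1 1]^T kron [1 0 0]^T, and F = W_3 kron W_3^H.
import numpy as np

M = 3
W = np.fft.fft(np.eye(M)) / np.sqrt(M)            # normalized DFT matrix W_3
F = np.kron(W, W.conj().T)                        # GFT of the M-block cyclic graph (N = 9)

x = np.kron(np.ones(M), np.array([1.0, 0.0, 0.0]))   # x = [1 1 1]^T kron [1 0 0]^T
x_hat = F @ x

print(np.round(x_hat.real, 10))                   # [1 1 1 0 0 0 0 0 0] = [1 0 0]^T kron [1 1 1]^T
print(np.count_nonzero(x), np.count_nonzero(np.round(x_hat, 10)))   # 3 and 3, so s_0(x) = 3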
C. Erdős–Rényi Graphs

An Erdős–Rényi graph $G(N, p)$ is a simple graph of $N$ nodes where an edge between a pair of nodes appears randomly and independently with probability $p$ [1], [43]. In Figure 2(a) we empirically compute the probability of a $G(N, p)$ having a pair of nodes satisfying the condition in (53). Notice that if a graph has such a pair of nodes, then the same pair satisfies (53) on the complement of the graph as well. This is due to the equality condition in (53), which remains satisfied when all the edges are complemented. Further notice that the complement of a $G(N, p)$ is a $G(N, 1-p)$ graph. As a result, $G(N, p)$ and $G(N, 1-p)$ have the same probability of having a pair of nodes satisfying (53). This explains the symmetry of Figure 2(a) around $p = 1/2$.

Fig. 2. Probability of $G(N, p)$ (a) having a pair of nodes satisfying (53), (b) being connected and having a pair of nodes satisfying (53). Probabilities are obtained via averaging over $10^4$ experiments.

It is important to note that Theorem 8 specifically considers the case of connected graphs, since it is trivial to find sparse eigenvectors in disconnected graphs (see (51)). However, a $G(N, p)$ tends to be disconnected when $p$ is small. In fact, $p \gg \log(N)/N$ guarantees (almost surely) that $G(N, p)$ is connected [43]. In order to get rid of the trivial cases, we need to consider the probability of $G(N, p)$ having a pair of nodes with (53) and being connected. Experimental computation of this probability is given in Figure 2(b). Notice that connectivity is not preserved under complementation, hence Figure 2(b) is not symmetric around $p = 1/2$.

Figure 2(b) shows the existence of connected Erdős–Rényi graphs with 2-sparse eigenvectors. Figure 2(b) also suggests that as $N$ gets larger it is less likely to find such graphs, which can be explained as follows. The study in [44] states that the $\ell_\infty$-norm of any unit eigenvector of $G(N, p)$ is almost surely $o(1)$ for $p = \omega(\log(N)/N)$, where $o(\cdot)$ and $\omega(\cdot)$ denote the little-o and little-omega notations, respectively. That is, as $N \to \infty$ we have $\|v\|_\infty \to 0$ for all $v$. Therefore, $\|V\|_{\max} \to 0$, hence $\|V\|_{\max}^{-1} \to \infty$. Since the uncertainty bound in (18) goes to infinity, we do not expect to find 2-sparse eigenvectors in connected Erdős–Rényi graphs for large values of $N$.
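An experiment in the spirit of Figure 2(b) can be sketched as follows. This is our own illustration: the use of networkx, the graph size, the trial count (the paper averages over $10^4$ trials), and the reading of (53) as "the two rows of the adjacency matrix agree outside the pair" are all assumptions, not the paper's setup.

# Monte Carlo sketch: probability that a connected G(N, p) has a node pair whose
# neighborhoods agree outside the pair (our reading of condition (53), unweighted case).
import networkx as nx
import numpy as np

def has_such_pair(A):
    """True if some pair (i, j) satisfies A[i, k] == A[j, k] for every k outside {i, j}."""
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            mask = np.ones(n, dtype=bool)
            mask[[i, j]] = False
            if np.array_equal(A[i, mask], A[j, mask]):
                return True
    return False

def estimate(N, p, trials=1000):
    hits = 0
    for _ in range(trials):
        G = nx.gnp_random_graph(N, p)
        if nx.is_connected(G) and has_such_pair(nx.to_numpy_array(G)):
            hits += 1
    return hits / trials

for p in (0.2, 0.5, 0.8):
    print(p, estimate(20, p))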

D. Real World Examples

In the following we will use the term $\lambda^{(k)}$ to denote that the eigenvalue $\lambda$ has multiplicity $k$.

1) Minnesota Road Graph: In this part, we will consider the Minnesota road graph [7], [13]. We use the data publicly available in [45]. This graph has 2642 nodes in total, where 2 nodes are disconnected from the rest of the graph. Since a road graph is expected to be connected, we disregard those two nodes. See [7], [13] for the visual representation of the graph. Here each node is an intersection, and $a_{i,j} = 1$ if there is a road connecting the intersections, otherwise $a_{i,j} = 0$. There are a total of 3302 undirected unweighted edges. The graph is simple and connected, and $A$ and $L$ are symmetric matrices (in particular, diagonalizable), hence $V_A$ and $V_L$ can be selected to be unitary.

For the Minnesota road graph, both $A$ and $L$ have repeated eigenvalues. As a result, $V_A$ and $V_L$ are not unique. In fact, the adjacency matrix has repeated eigenvalues of $-1^{(15)}$, $0^{(44)}$, and $1^{(13)}$. The graph Laplacian has repeated eigenvalues of $0.3820^{(2)}$, $1^{(10)}$, $2^{(7)}$, $2.6180^{(2)}$, and $3^{(6)}$. In order to minimize the uncertainty bound given by Corollary 3, we can select $V_A$ and $V_L$ such that $\|V_A\|_{\max}$ and $\|V_L\|_{\max}$ are maximized. This idea is discussed in Section IV, and the closed-form solution for such a selection is provided by Theorem 6. Via numerical evaluation of Theorem 6 on the Minnesota road graph, we obtain the following:
$$0.7071 = \max_{V} \|V\|_{\max} \quad \text{s.t.} \quad A = V \Lambda_A V^H, \qquad (62)$$
$$0.8343 = \max_{V} \|V\|_{\max} \quad \text{s.t.} \quad L = V \Lambda_L V^H. \qquad (63)$$
Due to Corollaries 2 and 3, when $V_A$ is selected to be the GFB, (62) gives the following uncertainty bounds: $s_0(x) \ge p_0(x) \ge \|V_A\|_{\max}^{-1} = 1.4142$. When $V_L$ is selected to be the GFB, (63) gives the following uncertainty bounds: $s_0(x) \ge p_0(x) \ge \|V_L\|_{\max}^{-1} = 1.1987$.

It is important to remember that $s_0(x)$ can take values only on a discrete set, namely, $s_0(x) = k/2$ for some integer $k \ge 2$. As a result, for both selections of the GFB, signals on the Minnesota road graph cannot attain the uncertainty bound in Corollary 3 in a strict sense. However, by rounding off the values of both $\|V_A\|_{\max}^{-1}$ and $\|V_L\|_{\max}^{-1}$ to the next attainable value of $s_0(x)$, we get
$$s_0(x) \ge 3/2, \qquad (64)$$
for the strong uncertainty bound for both selections of the GFB.

Even though (64) is a valid bound, Corollary 3 and Theorem 6 give no further information about the existence and characterization of a signal that achieves the bound. At this point it is quite interesting to observe that the bound in (64) is the same as the bound provided by Theorem 9 (for $V_L$ as the GFB), which requires the existence of a pair of nodes with the property in (53). In fact, the Minnesota road graph does have 6 different pairs of nodes with the property in (53). These pairs are visualized in Figure 3. As a result, the bound in (64) is tight, and the signals that achieve the bound are defined by the pairs of nodes in Figure 3 (see Theorem 9). It should be noted that the tightness of (64) is valid when $V_L$ is selected to include at least one 2-sparse eigenvector generated by the pairs in Figure 3.

Fig. 3. Pairs of nodes in the Minnesota road graph that result in 2-sparse eigenvectors. Axes represent the geographical coordinates. The pairs that satisfy the condition in (52) are colored in blue. Notice that the pairs in (a)–(d) generate eigenvectors with eigenvalue 1, and the pairs in (e)–(f) generate eigenvectors with eigenvalue 2. (See Theorem 8.)
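The basis dependence described above is easy to see on a toy example. The sketch below is our own illustration and does not implement the closed form of Theorem 6: for the complete graph on three nodes (our choice), two equally valid orthonormal eigenbases of the repeated eigenspace give different values of $\|V\|_{\max}$, and hence different bounds $\|V\|_{\max}^{-1}$.

# Sketch: with repeated Laplacian eigenvalues the GFB is not unique, and the bound
# 1/||V||_max depends on the chosen eigenbasis (toy example on K_3; not Theorem 6 itself).
import numpy as np

A = np.ones((3, 3)) - np.eye(3)
L = np.diag(A.sum(axis=1)) - A                   # eigenvalues: 0 (simple) and 3 (repeated twice)

u0 = np.ones(3) / np.sqrt(3)                     # eigenvector of eigenvalue 0 (fixed)

# Two orthonormal bases of the two-dimensional eigenspace of eigenvalue 3:
U1 = np.column_stack((np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
                      np.array([1.0, 1.0, -2.0]) / np.sqrt(6)))
theta = np.pi / 12                               # rotate the first basis by 15 degrees
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
U2 = U1 @ R

for U in (U1, U2):
    V = np.column_stack((u0, U))                 # a valid unitary GFB in either case
    assert np.allclose(L @ U, 3 * U)
    print(np.abs(V).max(), 1 / np.abs(V).max())  # ||V||_max and the resulting bound differ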

2) Co-appearance Network: In this example, we will consider the co-appearance network of characters in the famous novel Les Misérables by Victor Hugo [46]. The data is publicly available in [47]. This is an undirected but weighted graph, where two characters are connected if they appear in the same scene, and the weight of an edge is the total number of co-appearances throughout the novel. The graph has 77 nodes and 254 (weighted) edges in total. The Laplacian of the graph has repeated eigenvalues of $1^{(9)}$, $13^{(2)}$, and $28^{(2)}$. As a result, the GFB with respect to the graph Laplacian, $V_L$, is not unique. In order to minimize the bound given in Corollary 3, we use Theorem 6 and obtain the following result:
$$0.9398 = \max_{V} \|V\|_{\max} \quad \text{s.t.} \quad L = V \Lambda_L V^H. \qquad (65)$$
When $V_L$ is selected to be the GFB, (65) gives the following uncertainty bounds (Corollaries 2 and 3): $s_0(x) \ge p_0(x) \ge \|V_L\|_{\max}^{-1} = 1.0641$. Due to the discrete nature of $s_0(x)$, it is clear that this bound cannot be satisfied with equality. When $\|V_L\|_{\max}^{-1}$ is rounded off to the next attainable value of $s_0(x)$, we get the same bound as in (64). At this point we can use Theorem 9 to find signals (if there are any) that achieve the bound in (64).

In a co-appearance graph, a pair of nodes satisfying the condition in (52) has a meaningful interpretation. If two characters always appear simultaneously, they will have the same number of co-appearances with other characters, which implies the condition in (52) mathematically. As an example, consider the characters "Brevet", "Chenildieu", and "Cochepaille" of the novel Les Misérables. They are the three witnesses in Champmathieu's trial, and appear simultaneously throughout the court scenes. Nodes (of the graph) that correspond to any two of these three characters satisfy the condition in (52), which, in turn, implies that the graph Laplacian has a 2-sparse eigenvector, and $s_0(x) \ge 3/2$ is a tight uncertainty bound when $V_L$ is selected as the GFB.
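Such character pairs can be located programmatically. The sketch below is ours, not the paper's: it assumes networkx is available, that the GML file from [47] has been saved locally under the hypothetical name "lesmis.gml", that the edge-weight attribute in that file is called "value", and that condition (52) amounts to the two rows of the weighted adjacency matrix agreeing outside the pair.

# Sketch: search the Les Miserables co-appearance graph for node pairs whose weighted
# adjacency rows agree outside the pair (our reading of (52)); data file is assumed local.
import networkx as nx
import numpy as np

G = nx.read_gml("lesmis.gml")                    # hypothetical local copy of the data in [47]
A = nx.to_numpy_array(G, weight="value")         # "value" is the assumed weight attribute name
names = list(G.nodes())

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        mask = np.ones(len(names), dtype=bool)
        mask[[i, j]] = False
        if np.array_equal(A[i, mask], A[j, mask]):
            print(names[i], "<->", names[j])     # expect e.g. Brevet/Chenildieu/Cochepaille pairs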

VII. CONCLUSIONS

In this paper, we studied the concept of the uncertainty principle for signals defined over graphs. Unlike existing studies, we took a non-local and discrete approach, where the vertex and spectral domain spreads of a signal are defined as the number of nonzero elements of the signal and of its GFT, respectively. We derived a lower bound for the total number of nonzero elements in both domains (on the graph and in the GFB) and showed that a signal and its corresponding GFT cannot be arbitrarily sparse simultaneously. Based on this, we obtained a new form of uncertainty principle for graph signals. When the graph has repeated eigenvalues, we explained that the GFB is not unique, and the derived lower bound can have different values depending on the selected GFB. We provided a constructive method to find a GFB that yields the smallest uncertainty bound. In order to find the signals that achieve the derived lower bound, we considered sparse eigenvectors of the graph. We showed that the graph Laplacian has a 2-sparse eigenvector if and only if there exists a pair of nodes with the same neighbors. When this happens, the uncertainty bound is very low and the 2-sparse eigenvectors achieve this bound. We presented examples of both classical and real-world graphs with 2-sparse eigenvectors. We also discussed that, in some examples, the neighborhood structure has a meaningful interpretation.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for pointing out the study in [19], for thoughtful questions about random graphs, and for other very useful suggestions.

REFERENCES

[1] M. E. J. Newman, Networks: An Introduction. Oxford, U.K.: Oxford Univ. Press, 2010.
[2] D. Shuman, S. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains," IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83–98, May 2013.
[3] A. Sandryhaila and J. M. F. Moura, "Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure," IEEE Signal Process. Mag., vol. 31, no. 5, pp. 80–90, Sep. 2014.
[4] A. Sandryhaila and J. M. F. Moura, "Discrete signal processing on graphs," IEEE Trans. Signal Process., vol. 61, no. 7, pp. 1644–1656, Apr. 2013.
[5] A. Gavili and X.-P. Zhang, "On the shift operator and optimal filtering in graph signal processing," arXiv:1511.03512v3, Jun. 2016.
[6] S. Sardellitti, S. Barbarossa, and P. Di Lorenzo, "On the graph Fourier transform for directed graphs," IEEE J. Sel. Topics Signal Process., 2017, preprint, doi: 10.1109/JSTSP.2017.2726979.
[7] S. Narang and A. Ortega, "Perfect reconstruction two-channel wavelet filter banks for graph structured data," IEEE Trans. Signal Process., vol. 60, no. 6, pp. 2786–2799, Jun. 2012.
[8] O. Teke and P. P. Vaidyanathan, "Extending classical multirate signal processing theory to graphs—Part I: Fundamentals," IEEE Trans. Signal Process., vol. 65, no. 2, pp. 409–422, Jan. 2017.
[9] O. Teke and P. P. Vaidyanathan, "Extending classical multirate signal processing theory to graphs—Part II: M-channel filter banks," IEEE Trans. Signal Process., vol. 65, no. 2, pp. 423–437, Jan. 2017.
[10] O. Teke and P. P. Vaidyanathan, "Fundamentals of multirate graph signal processing," in Proc. Asilomar Conf. Signals, Syst. Comput., Nov. 2015, pp. 1791–1795.
[11] O. Teke and P. P. Vaidyanathan, "Graph filter banks with M-channels, maximal decimation, and perfect reconstruction," in Proc. Int. Conf. Acoust. Speech, Signal Process., Mar. 2016, pp. 4089–4093.
[12] X. Wang, P. Liu, and Y. Gu, "Local-set-based graph signal reconstruction," IEEE Trans. Signal Process., vol. 63, no. 9, pp. 2432–2444, May 2015.
[13] D. K. Hammond, P. Vandergheynst, and R. Gribonval, "Wavelets on graphs via spectral graph theory," Appl. Comput. Harmonic Anal., vol. 30, no. 2, pp. 129–150, 2011.
[14] A. Anis, A. Gadde, and A. Ortega, "Towards a sampling theorem for signals on arbitrary graphs," in Proc. Int. Conf. Acoust. Speech, Signal Process., May 2014, pp. 3864–3868.
[15] G. B. Folland and A. Sitaram, "The uncertainty principle: A mathematical survey," J. Fourier Anal. Appl., vol. 3, no. 3, pp. 207–238, 1997.
[16] A. Agaskar and Y. M. Lu, "A spectral graph uncertainty principle," IEEE Trans. Inf. Theory, vol. 59, no. 7, pp. 4338–4356, Jul. 2013.
[17] M. Tsitsvero, S. Barbarossa, and P. Di Lorenzo, "Signals on graphs: Uncertainty principle and sampling," IEEE Trans. Signal Process., vol. 64, no. 18, pp. 4845–4860, 2016.
[18] P. J. Koprowski, "Graph theoretic uncertainty and feasibility," arXiv:1603.02059, Feb. 2016.
[19] N. Perraudin, B. Ricaud, D. Shuman, and P. Vandergheynst, "Global and local uncertainty principles for signals on graphs," arXiv:1603.03030, Mar. 2016.
[20] D. L. Donoho and P. B. Stark, "Uncertainty principles and signal recovery," SIAM J. Appl. Math., vol. 49, no. 3, pp. 906–931, 1989.
[21] M. Elad and A. M. Bruckstein, "A generalized uncertainty principle and sparse representation in pairs of bases," IEEE Trans. Inf. Theory, vol. 48, no. 9, pp. 2558–2567, Sep. 2002.
[22] M. Elad, Sparse and Redundant Representations. New York, NY, USA: Springer, 2013.
[23] O. Teke and P. P. Vaidyanathan, "Discrete uncertainty principles on graphs," in Proc. Asilomar Conf. Signals, Syst. Comput., Nov. 2016, pp. 1475–1479.
[24] O. Teke and P. P. Vaidyanathan, "Sparse eigenvectors of graphs," in Proc. Int. Conf. Acoust. Speech, Signal Process., Mar. 2017, pp. 3904–3908.
[25] B. Pasdeloup, R. Alami, V. Gripon, and M. Rabbat, "Toward an uncertainty principle for weighted graphs," in Proc. Eur. Signal Process. Conf., Aug. 2015, pp. 1496–1500.
[26] J. J. Benedetto and P. J. Koprowski, "Graph theoretic uncertainty principles," in Proc. Int. Conf. Sampling Theory Appl., May 2015, pp. 357–361.
[27] B. Ricaud and B. Torrésani, "Refined support and entropic uncertainty inequalities," IEEE Trans. Inf. Theory, vol. 59, no. 7, pp. 4272–4279, Jul. 2013.
[28] A. Banerjee and J. Jost, "On the spectrum of the normalized graph Laplacian," Linear Algebra Appl., vol. 428, pp. 3015–3022, 2008.
[29] V. Ekambaram, G. Fanti, B. Ayazifar, and K. Ramchandran, "Circulant structures and graph signal processing," in Proc. Int. Conf. Image Process., Sep. 2013, pp. 834–838.
[30] V. Ekambaram, G. Fanti, B. Ayazifar, and K. Ramchandran, "Spline-like wavelet filterbanks for multiresolution analysis of graph-structured data," IEEE Trans. Signal Inf. Process. Over Netw., vol. 1, no. 4, pp. 268–278, Dec. 2015.
[31] R. Merris, "Laplacian matrices of graphs: A survey," Linear Algebra Appl., vol. 197, pp. 143–176, 1994.
[32] R. Grone, "On the geometry and Laplacian of a graph," Linear Algebra Appl., vol. 150, pp. 167–178, 1991.
[33] F. K. Bell and P. Rowlinson, "On the multiplicities of graph eigenvalues," Bull. London Math. Soc., vol. 35, no. 3, pp. 401–408, May 2003.
[34] D. Cvetkovic, P. Rowlinson, and S. Simic, Eigenspaces of Graphs. Cambridge, U.K.: Cambridge Univ. Press, 2008.
[35] M. Muzychuk and M. Klin, "On graphs with three eigenvalues," Discrete Math., vol. 189, no. 1–3, pp. 191–207, 1998.
[36] A. E. Brouwer and W. H. Haemers, Spectra of Graphs. New York, NY, USA: Springer, 2011.
[37] J. A. Deri and J. M. F. Moura, "Taxi data in New York City: A network perspective," in Proc. Asilomar Conf. Signals, Syst. Comput., Nov. 2015, pp. 1829–1833.
[38] T. F. Coleman and A. Pothen, "The null space problem I. Complexity," SIAM J. Algebr. Discrete Methods, vol. 7, no. 4, pp. 527–537, 1986.
[39] T. F. Coleman and A. Pothen, "The null space problem II. Algorithms," SIAM J. Algebr. Discrete Methods, vol. 8, no. 4, pp. 544–563, 1987.
[40] M. W. Berry, M. T. Heath, I. Kaneko, M. Lawo, R. J. Plemmons, and R. C. Ward, "An algorithm to compute a sparse basis of the null space," Numer. Math., vol. 47, no. 4, pp. 483–504, 1985.
[41] J. R. Gilbert and M. T. Heath, "Computing a sparse basis for the null space," SIAM J. Algebr. Discrete Methods, vol. 8, no. 3, pp. 446–459, 1987.
[42] Q. Qu, J. Sun, and J. Wright, "Finding a sparse vector in a subspace: Linear sparsity using alternating directions," Adv. Neural Inf. Process. Syst., vol. 27, pp. 3401–3409, 2014.
[43] R. van der Hofstad, Random Graphs and Complex Networks: Volume 1. Cambridge, U.K.: Cambridge Univ. Press, 2016.
[44] L. V. Tran, V. H. Vu, and K. Wang, "Sparse random graphs: Eigenvalues and eigenvectors," Random Structures Algorithms, vol. 42, no. 1, pp. 110–134, 2013.
[45] S. Narang and A. Ortega, "Graph Bior Wavelet Toolbox," 2013. [Online]. Available: http://biron.usc.edu/wiki/index.php/Graph_Filterbanks
[46] D. E. Knuth, The Stanford GraphBase: A Platform for Combinatorial Computing. Reading, MA, USA: Addison-Wesley, 1993.
[47] M. E. J. Newman, "Network data," 2013. [Online]. Available: http://www-personal.umich.edu/~mejn/netdata/

Oguzhan Teke (S'12) received the B.S. and M.S. degrees in electrical and electronics engineering, both from Bilkent University, Ankara, Turkey, in 2012 and 2014, respectively. He is currently working toward the Ph.D. degree in electrical engineering at the California Institute of Technology, Pasadena, CA, USA. His research interests include signal processing, graph signal processing, and convex optimization.

P. P. Vaidyanathan (S'80–M'83–SM'88–F'91) was born in Calcutta, India, on October 16, 1954. He received the B.Sc. (Hons.) degree in physics and the B.Tech. and M.Tech. degrees in radiophysics and electronics, all from the University of Calcutta, Kolkata, India, in 1974, 1977, and 1979, respectively, and the Ph.D. degree in electrical and computer engineering from the University of California at Santa Barbara, Santa Barbara, CA, USA, in 1982. He was a postdoctoral fellow at the University of California, Santa Barbara, from September 1982 to March 1983. In March 1983, he joined the Electrical Engineering Department, California Institute of Technology, as an Assistant Professor, and since 1993 has been a Professor of electrical engineering there. His main research interests include digital signal processing, multirate systems, wavelet transforms, signal processing for digital communications, genomic signal processing, radar signal processing, and sparse array signal processing.

Dr. Vaidyanathan served as the Vice-Chairman of the Technical Program Committee for the 1983 IEEE International Symposium on Circuits and Systems, and as the Technical Program Chairman for the 1992 IEEE International Symposium on Circuits and Systems. He was an Associate Editor for the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS for the period 1985–1987, and is currently an Associate Editor for the IEEE SIGNAL PROCESSING LETTERS and a Consulting Editor for the journal Applied and Computational Harmonic Analysis. He was a Guest Editor in 1998 for special issues of the IEEE TRANSACTIONS ON SIGNAL PROCESSING and the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II, on the topics of filter banks, wavelets, and subband coders. He has authored more than 500 papers in journals and conferences, and is the author/coauthor of the four books Multirate Systems and Filter Banks (Prentice Hall, 1993), Linear Prediction Theory (Morgan and Claypool, 2008), and, with Phoong and Lin, Signal Processing and Optimization for Transceiver Systems (Cambridge University Press, 2010) and Filter Bank Transceivers for OFDM and DMT Systems (Cambridge University Press, 2010). He has written several chapters for various signal processing handbooks. He received the Award for Excellence in Teaching at the California Institute of Technology for the years 1983–1984, 1992–1993, and 1993–1994. He also received the NSF's Presidential Young Investigator Award in 1986. In 1989, he received the IEEE ASSP Senior Award for his paper on multirate perfect-reconstruction filter banks. In 1990, he received the S. K. Mitra Memorial Award from the Institute of Electronics and Telecommunication Engineers, India, for his joint paper in the IETE journal. In 2009, he was chosen to receive the IETE Students' Journal Award for his tutorial paper in the IETE Journal of Education. He was also the coauthor of a paper on linear-phase perfect reconstruction filter banks in the IEEE SP Transactions, for which the first author (Truong Nguyen) received the Young Outstanding Author Award in 1993. He received the 1995 F. E. Terman Award of the American Society for Engineering Education, sponsored by Hewlett Packard Co., for his contributions to engineering education. He has given several plenary talks, including at the IEEE ISCAS-04, Sampta-01, Eusipco-98, SPCOM-95, and Asilomar-88 conferences on signal processing. He was chosen a Distinguished Lecturer for the IEEE Signal Processing Society for the year 1996–1997. In 1999, he was chosen to receive the IEEE CAS Society's Golden Jubilee Medal. He received the IEEE Gustav Kirchhoff Award (an IEEE Technical Field Award) in 2016, for "fundamental contributions to digital signal processing." In 2016, he received the Northrop Grumman Prize for Excellence in Teaching at Caltech. He received the IEEE Signal Processing Society's Technical Achievement Award (2002), Education Award (2012), and the Society Award (2016).