Seidel-Estrada Index
Jalal Askari1, Ali Iranmanesh1* and Kinkar Ch. Das2


Askari et al. Journal of Inequalities and Applications (2016) 2016:120
DOI 10.1186/s13660-016-1061-9
RESEARCH (Open Access)

*Correspondence: [email protected]
1 Department of Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares University, P.O. Box 14115-137, Tehran, Iran. Full list of author information is available at the end of the article.

Abstract

Let G be a simple graph with n vertices and (0, 1)-adjacency matrix A. As usual, S(G) = J - 2A - I denotes the Seidel matrix of the graph G. Suppose θ_1, θ_2, ..., θ_n and λ_1, λ_2, ..., λ_n are the eigenvalues of the adjacency matrix and the Seidel matrix of G, respectively. The Estrada index of the graph G is defined as ∑_{i=1}^{n} e^{θ_i}. We define and investigate the Seidel-Estrada index, SEE = SEE(G) = ∑_{i=1}^{n} e^{λ_i}. In this paper the basic properties of the Seidel-Estrada index are investigated. Moreover, some lower and upper bounds for the Seidel-Estrada index in terms of the number of vertices are obtained. In addition, some relations between SEE and the Seidel energy E_S(G) are presented.

MSC: 05C50; 05C90

Keywords: eigenvalue; Seidel matrix; Seidel-Estrada index

1 Introduction

Throughout this paper, let G be a simple graph with vertex set V = {v_1, v_2, ..., v_n}. The adjacency matrix A(G) = [a_ij] of G is a binary matrix of order n such that a_ij = 1 if the vertex v_i is adjacent to the vertex v_j, and a_ij = 0 otherwise. The Seidel matrix S(G) = [s_ij] is equal to J_n - 2A(G) - I_n, where the symbol J_n denotes the square matrix of order n all of whose entries are equal to 1. Since A(G) and S(G) are real symmetric matrices, their eigenvalues must be real. The eigenvalues of G are referred to as the eigenvalues of A(G), denoted by θ_1(A(G)), θ_2(A(G)), ..., θ_n(A(G)), and similarly λ_1(S(G)) ≥ λ_2(S(G)) ≥ ··· ≥ λ_n(S(G)) are the Seidel eigenvalues of G. For simplicity, we write λ_i instead of λ_i(S(G)). The sequence of n Seidel eigenvalues is called the Seidel spectrum of G (for short, S-spec(G)). We now present an example of pairs of graphs on n vertices with the same Seidel spectrum such that one of them is a connected graph and the other one is not.

Example Here we address two pairs of non-isomorphic graphs which are Seidel cospectral:
(i) S-spec(K_{p,q}) = S-spec(K̄_n), where K̄_n is the empty graph on n vertices, if p + q = n;
(ii) S-spec(K_{n/2} ∪ K_{n/2}) = S-spec(K_n) (n is even).
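The Example is easy to verify numerically. The following minimal sketch (assuming NumPy; the helper names and the small values of p, q, n are illustrative and not taken from the paper) builds the Seidel matrices of both graphs in each pair and compares their spectra.

```python
import numpy as np

def seidel_matrix(A):
    """Seidel matrix S(G) = J - 2A - I of a (0,1)-adjacency matrix A."""
    n = A.shape[0]
    return np.ones((n, n)) - 2.0 * A - np.eye(n)

def seidel_spectrum(A):
    """Seidel eigenvalues of the graph, sorted in non-increasing order."""
    return np.sort(np.linalg.eigvalsh(seidel_matrix(A)))[::-1]

# (i) K_{p,q} and the empty graph on n = p + q vertices are Seidel cospectral.
p, q = 2, 3
A_Kpq = np.block([[np.zeros((p, p)), np.ones((p, q))],
                  [np.ones((q, p)), np.zeros((q, q))]])
A_empty = np.zeros((p + q, p + q))
print(np.allclose(seidel_spectrum(A_Kpq), seidel_spectrum(A_empty)))  # True

# (ii) The disjoint union of two copies of K_{n/2} and K_n are Seidel cospectral (n even).
n = 6
A_Kn = np.ones((n, n)) - np.eye(n)
blk = np.ones((n // 2, n // 2)) - np.eye(n // 2)
A_2Kh = np.block([[blk, np.zeros((n // 2, n // 2))],
                  [np.zeros((n // 2, n // 2)), blk]])
print(np.allclose(seidel_spectrum(A_2Kh), seidel_spectrum(A_Kn)))     # True
```

Both checks print True; the underlying reason is that each pair of graphs is related by Seidel switching, which preserves the Seidel spectrum.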
In our recent studies on Seidel eigenvalues it has been shown that lower and upper bounds exist for the sums of powers of the absolute values of the Seidel eigenvalues, similar to the corresponding results for the adjacency and signless Laplacian matrices []. The reader can find more information related to the eigenvalues of the adjacency matrix and the spectrum of G in []. The Estrada index of a graph G is defined as

    EE(G) = ∑_{i=1}^{n} e^{θ_i(A(G))}.    (1)

This graph-spectrum-based structural descriptor was first proposed by Estrada in 2000; see [–]. Already, de la Peña et al. [] proposed to call it the Estrada index, a name that in the meantime has been commonly accepted. Several kinds of Estrada indices were discussed in [–] and the references therein. For recent work on the mathematical properties of the Estrada and signless Laplacian Estrada indices, see [, ]. In full analogy with equation (1), we define the Seidel-Estrada index of the graph G as

    SEE(G) = ∑_{i=1}^{n} e^{λ_i}.    (2)

For details on the theory of the Estrada index and several lower and upper bounds, see [, , , ]. Ayyaswamy et al. [] gave a lower bound for the signless Laplacian Estrada index of a graph in terms of the numbers of vertices and edges. A conference matrix is a square matrix C of order n with zero diagonal and ±1 off the diagonal, such that CC^T = (n - 1)I. If C is symmetric, then C is the Seidel matrix of a graph, and this graph is called a conference graph; see [, ].

The aim of this paper is to find upper and lower bounds for the Seidel-Estrada index of a graph G. The rest of the paper is organized as follows. In Section 2, we give some definitions and obtain some upper and lower bounds for the Seidel-Estrada index. In Section 3, we present a relation between the Seidel-Estrada index and the Seidel energy of a graph G, and we prove several results on the Seidel-Estrada index.

2 Estimates of the Seidel-Estrada index

Here we give some new lower and upper bounds on the Seidel-Estrada index. For convenience, we introduce some notation and properties which will be used in the proofs of our results. Let S_k = S_k(G) = ∑_{i=1}^{n} λ_i^k and S*_k(G) = ∑_{i=1}^{n} |λ_i|^k. From the Taylor expansion of e^x, it is easy to see that the Seidel-Estrada index and S_k(G) are related by

    SEE(G) = ∑_{k=0}^{∞} S_k(G)/k!.    (3)

It is easy to see that any graph G of order n ≥ 2 has SEE(G) > n. (Indeed, by the arithmetic-geometric mean inequality SEE(G) ≥ n e^{S_1(G)/n} = n, and equality would force λ_1 = λ_2 = ··· = λ_n = 0; by Lemma 2.1(ii) this is impossible.)

Lemma 2.1 [] For any graph G with n vertices, we have
(i) S_1(G) = ∑_{i=1}^{n} λ_i = trace S(G) = 0,
(ii) S*_2(G) = S_2(G) = ∑_{i=1}^{n} λ_i^2 = trace S^2(G) = (n - 1)^2 + (n - 1) = n(n - 1),
(iii) S_3(G) = ∑_{i=1}^{n} λ_i^3 ≤ S*_3(G) ≤ (n - 1)^3 + (n - 1)^2,
(iv) S*_3(G) = ∑_{i=1}^{n} |λ_i|^3 ≥ n(n - 1)^{3/2},
(v) S_k(G) ≤ S*_k(G) = ∑_{i=1}^{n} |λ_i|^k ≤ (n - 1)^k + (n - 1)^{k-1}, k = 2, 3, ...,
(vi) S*_k(G) = ∑_{i=1}^{n} |λ_i|^k ≥ n(n - 1)^{k/2}, k = 2, 3, ....

Lemma 2.2 [] Let B be an n × n symmetric matrix with eigenvalues λ_1 ≥ λ_2 ≥ ··· ≥ λ_n, and let B_k be its leading k × k principal submatrix. Then, for i = 1, 2, ..., k,

    λ_{n-i+1}(B) ≤ λ_{k-i+1}(B_k) ≤ λ_{k-i+1}(B),    (4)

where λ_i(B) is the ith greatest eigenvalue of B.

Lemma 2.3 Let G be a graph of order n ≥ 2. Then λ_1 ≥ 1.

Proof Since n ≥ 2, either K_2 or its complement K̄_2 must be an induced subgraph of G. Since λ_1(K_2) = λ_1(K̄_2) = 1, by Lemma 2.2 we get the required result.
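The quantities introduced above are straightforward to check numerically. The following sketch (assuming NumPy; the choice of the path P_4 as a test graph and the helper name are illustrative, not from the paper) computes SEE(G) from the Seidel spectrum and verifies Lemma 2.1(i)-(ii) and the power-sum expansion (3).

```python
import numpy as np
from math import factorial

def seidel_estrada_index(A):
    """SEE(G) = sum_i exp(lambda_i), summed over the Seidel eigenvalues of G."""
    n = A.shape[0]
    S = np.ones((n, n)) - 2.0 * A - np.eye(n)
    return np.exp(np.linalg.eigvalsh(S)).sum()

# Test graph: the path P_4.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
n = A.shape[0]
lam = np.linalg.eigvalsh(np.ones((n, n)) - 2.0 * A - np.eye(n))

# Lemma 2.1(i)-(ii): S_1 = trace S(G) = 0 and S_2 = trace S^2(G) = n(n - 1).
print(np.isclose(lam.sum(), 0.0))                    # True
print(np.isclose((lam ** 2).sum(), n * (n - 1)))     # True

# Equation (3): SEE(G) = sum_{k >= 0} S_k(G)/k!, truncated at k = 60 terms.
series = sum((lam ** k).sum() / factorial(k) for k in range(60))
print(np.isclose(series, seidel_estrada_index(A)))   # True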
Theorem 2.4 Let G be a simple graph with n ≥ 2 and det S(G) ≠ 0. Then the Seidel-Estrada index of G is bounded by

    √(n(3n - 2)) < SEE(G) < n - 1 + e^{√(n(n-1))}.    (5)

Proof (a) To prove this theorem, we apply a technique similar to the proof of a theorem in []. First we prove the left inequality in (5). From (2), we get

    SEE^2(G) = ∑_{i=1}^{n} e^{2λ_i} + 2 ∑_{i<j} e^{λ_i} e^{λ_j}.    (6)

In view of the inequality between the geometric and arithmetic means, we get

    2 ∑_{i<j} e^{λ_i} e^{λ_j} ≥ n(n-1) [ ∏_{i<j} e^{λ_i} e^{λ_j} ]^{2/(n(n-1))}
                             = n(n-1) [ ∏_{i=1}^{n} e^{λ_i} ]^{2/n}
                             = n(n-1) e^{2 S_1(G)/n} = n(n-1).    (7)

By using the power series expansion and Lemma 2.1, we get

    ∑_{i=1}^{n} e^{2λ_i} = ∑_{i=1}^{n} ∑_{k=0}^{∞} (2λ_i)^k / k!
                         = n + 2 S_1(G) + 2 S_2(G) + ∑_{i=1}^{n} ∑_{k≥3} (2λ_i)^k / k!
                         = n + 0 + 2n(n-1) + ∑_{i=1}^{n} ∑_{k≥3} (2λ_i)^k / k!.

Since ∑_{k≥3} (2λ_i)^k / k! ≥ ∑_{k≥3} (λ_i)^k / k!, we shall use a multiplier γ ∈ [0, 1], so as to arrive at

    ∑_{i=1}^{n} e^{2λ_i} ≥ n + 2n(n-1) + γ ∑_{i=1}^{n} ∑_{k≥3} (λ_i)^k / k!
                         = n + 2n(n-1) - γ n - γ n(n-1)/2 + γ SEE(G).    (8)

By substituting (7) and (8) back into (6) and solving for SEE(G), we obtain

    SEE(G) ≥ γ/2 + √( γ^2/4 + 3n^2 (1 - γ/6) - 2n (1 + γ/4) ).    (9)

Now, we consider the function

    f(x) = x/2 + √( x^2/4 + 3n^2 (1 - x/6) - 2n (1 + x/4) ).    (10)

We have f'(x) < 0 for x ≥ 0. Thus f(x) is a monotonically decreasing function for x > 0. Consequently, the best lower bound for SEE(G) is attained at γ = 0. Setting γ = 0 in (9), we arrive at the first half of Theorem 2.4:

    SEE(G) ≥ √(n(3n - 2)).

Now, we have to prove that the lower bound is strict. For this purpose, we assume that the left equality holds in (5). Then we have e^{λ_i + λ_j} = e^{λ_k + λ_l} for any i, j, k, l ∈ {1, 2, ..., n}, that is, e^{λ_1 + λ_2} = e^{λ_1 + λ_3} = ··· = e^{λ_1 + λ_n} = e^{λ_2 + λ_3}, and hence λ_1 = λ_2 = ··· = λ_n. By Lemma 2.3 and the trace of S(G), we get a contradiction. Thus the left inequality in (5) is strict.

(b) Let us now prove the right inequality. Since f(x) = e^x is monotonically increasing on (-∞, ∞), starting from equation (3) we get

    SEE(G) = n + ∑_{i=1}^{n} ∑_{k≥1} (λ_i)^k / k!
           ≤ n + ∑_{i=1}^{n} ∑_{k≥1} |λ_i|^k / k!
           = n + ∑_{k≥1} (1/k!) ∑_{i=1}^{n} [ (λ_i)^2 ]^{k/2}
           ≤ n + ∑_{k≥1} (1/k!) [ S_2(G) ]^{k/2}
           = n + ∑_{k≥1} ( √(n(n-1)) )^k / k!
           = n - 1 + ∑_{k=0}^{∞} ( √(n(n-1)) )^k / k!
           = n - 1 + e^{√(n(n-1))}.    (11)

Suppose that the right equality holds in (5).
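As a quick numerical sanity check of the bounds in (5), as reconstructed above, the following sketch (assuming NumPy; the random test graph, seed, and helper name are illustrative and not part of the paper) compares SEE(G) with both sides of the inequality.

```python
import numpy as np

def seidel_estrada_index(A):
    """SEE(G) computed from the Seidel spectrum of a (0,1)-adjacency matrix A."""
    n = A.shape[0]
    S = np.ones((n, n)) - 2.0 * A - np.eye(n)
    return np.exp(np.linalg.eigvalsh(S)).sum()

rng = np.random.default_rng(0)
n = 8
# Random simple graph on n vertices: symmetric 0/1 matrix with zero diagonal.
U = np.triu((rng.random((n, n)) < 0.4).astype(float), k=1)
A = U + U.T

see = seidel_estrada_index(A)
lower = np.sqrt(n * (3 * n - 2))
upper = n - 1 + np.exp(np.sqrt(n * (n - 1)))
print(lower, see, upper)
print(lower < see < upper)   # True for this example
```

For this example the inequality holds with plenty of room; note that the script does not enforce the hypothesis det S(G) ≠ 0 from the theorem.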