69 Adjacency Matrix


Index

A-optimality, 439
absolute error, 484, 494, 526
ACM Transactions on Mathematical Software, 552, 617
adj(·), 69
adjacency matrix, 331–334, 390
  Exercise 8.20:, 396
  augmented, 390
adjoint (see also conjugate transpose), 59
adjoint, classical (see also adjugate), 69
adjugate, 69, 598
adjugate and inverse, 118
affine group, 115, 179
affine space, 43
affine transformation, 228
Aitken's integral, 217
AL(·) (affine group), 115
algebraic multiplicity, 144
algorithm, 264, 499–513
  batch, 512
  definition, 509
  direct, 264
  divide and conquer, 506
  greedy, 506
  iterative, 264, 508–511, 564
  reverse communication, 564
  online, 512
  out-of-core, 512
  real-time, 512
algorithmically singular, 121
Anaconda, 553, 556, 569
angle(·, ·), 37
angle between matrices, 168
angle between vectors, 37, 229, 357
ANSI (standards), 475, 562, 563
Applied Statistics algorithms, 617
approximation and estimation, 431
approximation of a matrix, 174, 257, 339, 437, 533
approximation of a vector, 41–43
arithmetic mean, 35, 37
Arnoldi method, 318
artificial ill-conditioning, 269
ASCII code, 460
association matrix, 328–338, 357, 365, 366, 369–371
  adjacency matrix, 331–332, 390
  connectivity matrix, 331–332, 390
  dissimilarity matrix, 369
  distance matrix, 369
  incidence matrix, 331–332, 390
  similarity matrix, 369
ATLAS (Automatically Tuned Linear Algebra Software), 555
augmented adjacency matrix, 390
augmented connectivity matrix, 390
Automatically Tuned Linear Algebra Software (ATLAS), 555
autoregressive process, 447–450
axpy, 12, 50, 83, 554, 555
axpy elementary operator matrix, 83

back substitution, 274, 406
backward error analysis, 495, 500
Banach space, 33
Banachiewicz factorization, 255
banded matrix, 58
  inverse, 121
Bartlett decomposition, 346
base, 467
base point, 466
basis, 21–23
  Exercise 2.6:, 52
  orthonormal, 40–41
batch algorithm, 512
Bauer-Fike theorem, 307
Beowulf (cluster computing), 559
bias, in exponent of floating-point number, 468
big endian, 480
big integer, 466, 492
big O (order), 497, 503, 591
big omega (order), 497
bilinear form, 91, 134
bit, 460
bitmap, 461
BLACS (software), 557, 558
BLAS (software), 553–556
  CUDA, 560
  PBLAS, 558
  PLASMA, 559
  PSBLAS, 558
block diagonal matrix, 62
  determinant of, 70
  inverse of, 121
  multiplication, 78
BMvN distribution, 219, 548
Bolzano-Weierstrass theorem for orthogonal matrices, 133
Boolean matrix, 390
Box M statistic, 368
bra·ket notation, 24
byte, 460

C (programming language), 474, 489, 568–569
C++ (programming language), 475, 568–569
CALGO (Collected Algorithms of the ACM), 617
cancellation error, 487, 500
canonical form, equivalent, 110
canonical form, similar, 149
canonical singular value factorization, 162
Cartesian geometry, 35, 73
catastrophic cancellation, 486
Cauchy matrix, 389
Cauchy-Schwarz inequality, 24, 98
Cauchy-Schwarz inequality for matrices, 98, 178
Cayley multiplication, 75, 94
Cayley-Hamilton theorem, 138
CDF (Common Data Format), 463
centered matrix, 288, 363
centered vector, 49
chaining of operations, 485
character data, 461
character string, 461
characteristic equation, 138
characteristic polynomial, 138
characteristic value (see also eigenvalue), 135
characteristic vector (see also eigenvector), 135
chasing, 317
Chebyshev norm, 28
chi-squared distribution, 400
  noncentral, 400
  PDF, equation (9.3), 400
Cholesky decomposition, 253–256, 345, 436
  computing, 558, 574
  root-free, 255
circulant matrix, 383
classification, 389
cluster analysis, 389
cluster computing, 559
coarray (Fortran construct), 563
Cochran's theorem, 353–356, 401
cofactor, 68, 598
Collected Algorithms of the ACM (CALGO), 617
collinearity, 265, 406, 430
column rank, 99
column space, 55, 89, 104, 105
column-major, 522, 539, 545
column-sum norm, 165
Common Data Format (CDF), 463
companion matrix, 139, 305
compatible linear systems, 106
compensated summation, 486
complementary projection matrix, 356
complementary vector spaces, 19
complete graph, 329
complete pivoting, 275
complete space, 33
completing the Gramian, 177
complex data type, 491
complex vectors/matrices, 33, 132, 387
Conda, 553
condition (problem or data), 499
condition number, 265, 270, 290, 426, 427, 499, 502, 523, 533
  computing the number, 533, 574
  inverse of matrix, 266, 270
  nonfull rank matrices, 290
  nonsquare matrices, 290
  sample standard deviation, 502
conditional inverse, 128
cone, 43–46, 327
  Exercise 2.18:, 53
  convex cone, 44, 346
  of nonnegative definite matrices, 346
  of nonnegative matrices, 371
  of positive definite matrices, 349
  of positive matrices, 371
conference matrix, 381
configuration matrix, 369
conjugate gradient method, 279–283
  preconditioning, 282
conjugate norm, 93
conjugate transpose, 59, 132
conjugate vectors, 93, 134
connected vertices, 329, 334
connectivity matrix, 331–334, 390
  Exercise 8.20:, 396
  augmented, 390
consistency property of matrix norms, 164
consistency test, 527, 551
consistent system of equations, 105, 272, 277
constrained least squares, equality constraints, 413
  Exercise 9.4d:, 451
continuous function, 186
contrast, 410
convergence criterion, 508
convergence of a sequence of matrices, 152, 171
convergence of a sequence of vectors, 32
convergence of powers of a matrix, 171, 375
convergence rate, 509
convex combination, 12
convex cone, 44–46, 346, 349, 371
convex function, 26, 197
convex optimization, 346
convex set, 12
convexity, 26
coordinate, 4
Cor(·, ·), 51
correlation, 51, 90
correlation matrix, 365, 422
  Exercise 8.8:, 394
  positive definite approximation, 436
  pseudo-correlation matrix, 437
  sample, 422
cost matrix, 369
Cov(·, ·), 50
covariance, 50
covariance matrix, see variance-covariance matrix
CRAN (Comprehensive R Archive Network), 552, 576
cross product of vectors, 47
  Exercise 2.19:, 54
cross products matrix, 256, 358
cross products, computing sum of
  Exercise 10.18c:, 519
Crout method, 245
cuBLAS, 560
CUDA, 559
curl, 192
curse of dimensionality, 511
cuSPARSE, 560

D-optimality, 439–440, 533
daxpy, 12
decomposable matrix, 373
decomposition, see also factorization of a matrix
  additive, 353, 354
  Bartlett decomposition, 346
  multiplicative, 108
  nonnegative matrix factorization, 257, 337
  singular value decomposition, 134, 161–164, 320, 336, 425, 532
  spectral decomposition, 155
defective (deficient) matrix, 149, 150
deficient (defective) matrix, 149, 150
deflation, 308–310
degrees of freedom, 360, 362, 407, 430
del, 192
derivative with respect to a matrix, 195
derivative with respect to a vector, 189–195
det(·), 66
determinant, 66–74
  as criterion for optimal design, 439
  computing, 533
  derivative of, 195
  Jacobian, 217
  of block diagonal matrix, 70
  of Cayley product, 88
  of diagonal matrix, 70
  of elementary operator matrix, 86
  of inverse, 117
  of Kronecker product, 96
  of nonnegative definite matrix, 345
  of partitioned matrix, 70, 122
  of permutation matrix, 87
  of positive definite matrix, 347
  of transpose, 70
  of triangular matrix, 70
  relation to eigenvalues, 141
  relation to geometric volume, 73, 217
diag(·), 56, 60, 596
  with matrix arguments, 62
diagonal element, 56
diagonal expansion, 73
diagonal factorization, 148, 152
diagonal matrix, 57
  determinant of, 70
  inverse of, 120
  multiplication, 78
diagonalizable matrix, 148–152, 306, 343
  orthogonally, 154
  unitarily, 147, 343, 387
diagonally dominant matrix, 57, 62, 100, 348
differential, 188
differentiation of vectors and matrices, 183–220
digraph, 332
  of a matrix, 333
dim(·), 14
dimension of vector space, 14
dimension reduction, 20, 356, 425
direct method for solving linear systems, 272–277
direct product (of matrices), 95
direct product (of sets), 5
direct product (of vector spaces), 20
  basis for, 23
direct sum decomposition of a vector space, 19
direct sum of matrices, 62
direct sum of vector spaces, 18–20, 64
  basis for, 22
  direct sum decomposition, 19
directed dissimilarity matrix, 369
direction cosines, 38, 231
discrete Fourier transform, 384
discrete Legendre polynomials, 380
discretization error, 498, 509
dissimilarity matrix, 369
distance, 32
  between matrices, 175
  between vectors, 32
distance matrix, 369
distributed computing, 463, 508, 544
distributed linear algebra machine, 558
distribution vector, 377
div, 192
divergence, 192
divide and conquer, 506
document-term matrix, 336
dominant eigenvalue, 142
Doolittle method, 245
dot product of matrices, 97
dot product of vectors, 23, 91
double cone, 43
double precision, 472, 480
doubly stochastic matrix, 377
Drazin inverse, 129–130
dual cone, 44

E_pq, E(π), E_p(a), E_pq(a) (elementary operator matrices), 84
E(·) (expectation operator), 216
E-optimality, 439
echelon form, 111
edge of a graph, 329
EDP (exact dot product), 493
effective degrees of freedom, 362, 430
efficiency, computational, 503–508
eigenpair, 134
eigenspace, 144
eigenvalue, 134–164, 166, 305–322
  computing, 306–318, 558, 574
    Jacobi method, 313–316
    Krylov methods, 318
    power method, 311–313
    QR method, 316–318
  of a graph, 391
  of a polynomial, Exercise 3.26:, 181
  relation to singular value, 163
  upper bound on, 142, 145, 306
eigenvector, 134–164, 305–322
  left eigenvector, 135, 158
eigenvectors, linear independence of, 143
EISPACK, 556
elementary operation, 79
elementary operator matrix, 79–87, 101,
Euclidean distance matrix, 369
Euclidean matrix norm (see also Frobenius norm), 167
Euclidean vector norm, 27
Euler's constant, Exercise 10.2:, 516
Euler's integral, 593
Euler's rotation theorem, 231
exact computations, 493
exact dot product (EDP), 493
exception, in computer operations, 483, 487
expectation, 212–220
exponent, 467
exponential order, 503
exponential, matrix, 153, 184
extended precision, 472
extrapolation, 509

factorization of a matrix, 108, 112, 147, 148,
Recommended publications
  • Realizing Euclidean Distance Matrices by Sphere Intersection
Realizing Euclidean distance matrices by sphere intersection. Jorge Alencar (Instituto Federal de Educação, Ciência e Tecnologia do Sul de Minas Gerais, Inconfidentes, MG, Brazil), Carlile Lavor (University of Campinas, IMECC-UNICAMP, 13081-970, Campinas-SP, Brazil), Leo Liberti (CNRS LIX, École Polytechnique, 91128 Palaiseau, France).

Abstract: This paper presents the theoretical properties of an algorithm to find a realization of a (full) n × n Euclidean distance matrix in the smallest possible embedding dimension. Our algorithm performs linearly in n, and quadratically in the minimum embedding dimension, which is an improvement w.r.t. other algorithms.

Keywords: distance geometry, sphere intersection, Euclidean distance matrix, embedding dimension. 2010 MSC: 00-01, 99-00.

1. Introduction. Euclidean distance matrices and sphere intersection have a strong mathematical importance [1, 2, 3, 4, 5, 6], in addition to many applications, such as navigation problems, molecular and nanostructure conformation, network localization, robotics, as well as other problems of distance geometry [7, 8, 9]. Before defining precisely the problem of interest, we need the formal definition of Euclidean distance matrices. Let D be an n × n symmetric hollow (i.e., with zero diagonal) matrix with non-negative elements. We say that D is a (squared) Euclidean Distance Matrix (EDM) if there are points x_1, x_2, …, x_n ∈ R^K (for a positive integer K) such that

    D(i, j) = D_ij = ‖x_i − x_j‖²,  i, j ∈ {1, …, n},

where ‖·‖ denotes the Euclidean norm. The smallest K for which such a set of points exists is called the embedding dimension of D, denoted by dim(D).
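The definition lends itself to a direct computational check. The sketch below is a minimal baseline using the classical Gram-matrix (multidimensional scaling) construction, not the sphere-intersection algorithm the paper analyzes; the function name and the example matrix are mine.

```python
import numpy as np

def realize_edm(D, tol=1e-9):
    """Recover points realizing a (squared) EDM D in the smallest
    embedding dimension, via the classical Gram-matrix construction."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ D @ J                     # Gram matrix of centered points
    w, V = np.linalg.eigh(G)
    if w.min() < -tol:
        raise ValueError("D is not a Euclidean distance matrix")
    pos = w > tol
    K = int(pos.sum())                       # embedding dimension dim(D)
    X = V[:, pos] * np.sqrt(w[pos])          # rows are points in R^K
    return X, K

# Squared distances between the corners of a unit square (dim(D) = 2).
D = np.array([[0, 1, 2, 1],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [1, 2, 1, 0]], dtype=float)
X, K = realize_edm(D)
assert K == 2
diff = X[:, None, :] - X[None, :, :]
assert np.allclose((diff ** 2).sum(-1), D)   # reconstructed distances match D
```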
• Linearizing the Word Problem in (Some) Free Fields
Linearizing the word problem in (some) free fields. Konrad Schrempf, Department of Discrete Mathematics (Noncommutative Structures), Graz University of Technology. January 13, 2017. arXiv:1701.03378v1 [math.RA].

Abstract: We describe a solution of the word problem in free fields (coming from non-commutative polynomials over a commutative field) using elementary linear algebra, provided that the elements are given by minimal linear representations. It relies on the normal form of Cohn and Reutenauer and can be used more generally to (positively) test rational identities. Moreover we provide a construction of minimal linear representations for the inverse of non-zero elements.

Keywords: word problem, minimal linear representation, linearization, realization, admissible linear system, rational series. AMS Classification: 16K40, 16S10, 03B25, 15A22.

Introduction. Free (skew) fields arise as universal objects when it comes to embedding the ring of non-commutative polynomials, that is, polynomials in (a finite number of) non-commuting variables, into a skew field [Coh85, Chapter 7]. The notion of "free fields" goes back to Amitsur [Ami66]. A brief introduction can be found in [Coh03, Section 9.3]; for details we refer to [Coh95, Section 6.4]. In the present paper we restrict the setting to commutative ground fields, as a special case. See also [Rob84]. In [CR94], Cohn and Reutenauer introduced a normal form for elements in free fields in order to extend results from the theory of formal languages. In particular they characterize minimality of linear representations in terms of linear independence of …
  • Left Eigenvector of a Stochastic Matrix
Left eigenvector of a stochastic matrix. Sylvain Lavallée, Département de mathématiques, Université du Québec à Montréal, Montréal, Canada. Advances in Pure Mathematics, 2011, 1, 105–117. doi:10.4236/apm.2011.14023. Received January 7, 2011; revised June 7, 2011; accepted June 15, 2011.

Abstract: We determine the left eigenvector of a stochastic matrix M associated to the eigenvalue 1 in the commutative and the noncommutative cases. In the commutative case, we see that the eigenvector associated to the eigenvalue 0 is (N_1, …, N_n), where N_i is the i-th principal minor of N = M − I_n, where I_n is the identity matrix of dimension n. In the noncommutative case, this eigenvector is (P_1^{−1}, …, P_n^{−1}), where P_i is the sum in a_{ij} of the corresponding labels of nonempty paths starting from i and not passing through i in the complete directed graph associated to M.

Keywords: generic stochastic noncommutative matrix, commutative matrix, left eigenvector associated to the eigenvalue 1, skew field, automata.

1. Introduction. It is well known that 1 is one of the eigenvalues of a stochastic matrix (i.e., the sum of the elements of each row is equal to 1) and its associated right eigenvector is the vector (1, 1, …, 1)^T.

… stochastic free field and that the vector (P_1^{−1}, …, P_n^{−1}) is fixed by our matrix; moreover, the sum of the P_i^{−1} is equal to 1, hence they form a kind of noncommutative limiting probability. These results have been proved in [1] but the proof …
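The commutative-case statement is easy to verify numerically: the vector of principal minors of N = M − I_n, normalized to sum to 1, is fixed by M. A minimal sketch, assuming a generic row-stochastic M (so the minors do not all vanish); the function name is illustrative.

```python
import numpy as np

def stationary_from_minors(M):
    """Left eigenvector of a stochastic matrix M for eigenvalue 1,
    built from the principal minors of N = M - I (commutative case)."""
    n = M.shape[0]
    N = M - np.eye(n)
    minors = np.empty(n)
    for i in range(n):
        keep = [j for j in range(n) if j != i]
        # N_i: determinant of N with row i and column i deleted.
        minors[i] = np.linalg.det(N[np.ix_(keep, keep)])
    return minors / minors.sum()     # normalize so the entries sum to 1

# Random 4x4 row-stochastic matrix.
rng = np.random.default_rng(0)
M = rng.random((4, 4))
M /= M.sum(axis=1, keepdims=True)

pi = stationary_from_minors(M)
assert np.allclose(pi @ M, pi)       # pi is indeed fixed: pi M = pi
```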
  • Two-Graphs and Skew Two-Graphs in Finite Geometries
Two-graphs and skew two-graphs in finite geometries. G. Eric Moorhouse, Department of Mathematics, University of Wyoming, Laramie, Wyoming. Dedicated to Professor J. J. Seidel. Submitted by Aart Blokhuis. Linear Algebra and Its Applications 226–228:529–551 (1995). © Elsevier Science Inc., 1995.

Abstract: We describe the use of two-graphs and skew (oriented) two-graphs as isomorphism invariants for translation planes and m-systems (including ovoids and spreads) of polar spaces in odd characteristic.

1. Introduction. Two-graphs were first introduced by G. Higman, as natural objects for the action of certain sporadic simple groups. They have since been studied extensively by Seidel, Taylor, and others, in relation to equiangular lines, strongly regular graphs, and other notions; see [26]–[28]. The analogous oriented two-graphs (which we call skew two-graphs) were introduced by Cameron [7], and there is some literature on the equivalent notion of switching classes of tournaments. Our exposition focuses on the use of two-graphs and skew two-graphs as isomorphism invariants of translation planes, and caps, ovoids, and spreads of polar spaces in odd characteristic. The earliest precedent for using two-graphs to study ovoids and caps is apparently owing to Shult [29]. The degree sequences of these two-graphs and skew two-graphs yield the invariants known as fingerprints, introduced by J. H. Conway (see Charnes [9, 10]).
  • Hadamard and Conference Matrices
Hadamard and conference matrices. Peter J. Cameron, December 2011, with input from Dennis Lin, Will Orrick and Gordon Royle.

Hadamard's theorem. Let H be an n × n matrix, all of whose entries are at most 1 in modulus. How large can det(H) be? Now det(H) is equal to the volume of the n-dimensional parallelepiped spanned by the rows of H. By assumption, each row has Euclidean length at most n^{1/2}, so that det(H) ≤ n^{n/2}; equality holds if and only if

- every entry of H is ±1;
- the rows of H are orthogonal, that is, HHᵀ = nI.

A matrix attaining the bound is a Hadamard matrix.
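To see the bound attained, one can build Hadamard matrices of order 2^k by the classical Sylvester doubling construction and check both conditions numerically; the sketch below is that standard construction, not anything specific to the slides.

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order n = 2**k via Sylvester doubling:
    H -> [[H, H], [H, -H]]."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)                    # order n = 8
n = H.shape[0]
assert np.allclose(H @ H.T, n * np.eye(n))   # rows orthogonal: HH^T = nI
assert np.isclose(abs(np.linalg.det(H)), n ** (n / 2))  # attains n^{n/2}
```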
• On Sign-Symmetric Signed Graphs
On sign-symmetric signed graphs. Ebrahim Ghorbani (Department of Mathematics, K. N. Toosi University of Technology, P.O. Box 16765-3381, Tehran, Iran), Willem H. Haemers (Department of Econometrics and Operations Research, Tilburg University, Tilburg, The Netherlands), Hamid Reza Maimani and Leila Parsaei Majd (Mathematics Section, Department of Basic Sciences, Shahid Rajaee Teacher Training University, P.O. Box 16785-163, Tehran, Iran). Ars Mathematica Contemporanea 19 (2020) 83–93, https://doi.org/10.26493/1855-3974.2161.f55. Received 24 October 2019, accepted 27 March 2020, published online 10 November 2020.

Abstract: A signed graph is said to be sign-symmetric if it is switching isomorphic to its negation. Bipartite signed graphs are trivially sign-symmetric. We give new constructions of non-bipartite sign-symmetric signed graphs. Sign-symmetric signed graphs have a symmetric spectrum but not the other way around. We present constructions of signed graphs with symmetric spectra which are not sign-symmetric. This, in particular, answers a problem posed by Belardo, Cioabă, Koolen, and Wang in 2018.

Keywords: signed graph, spectrum. Math. Subj. Class. (2020): 05C22, 05C50.

1. Introduction. Let G be a graph with vertex set V and edge set E. All graphs considered in this paper are undirected, finite, and simple (without loops or multiple edges). A signed graph is a graph in which every edge has been declared positive or negative. In fact, a signed graph Γ is a pair (G, σ), where G = (V, E) is a graph, called the underlying graph.
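The spectral condition in the abstract can be checked numerically on a small case. The sketch below builds the signed adjacency matrix of a 4-cycle with one negative edge (bipartite, hence trivially sign-symmetric) and confirms that its spectrum is symmetric about zero; the example graph is mine, not one of the paper's constructions.

```python
import numpy as np

# Signed adjacency matrix of a 4-cycle with a single negative edge.
# Entry +1 / -1 records the sign of the edge, 0 means no edge.
A = np.array([[ 0,  1,  0, -1],
              [ 1,  0,  1,  0],
              [ 0,  1,  0,  1],
              [-1,  0,  1,  0]], dtype=float)

spec = np.sort(np.linalg.eigvalsh(A))
# Symmetric spectrum: the eigenvalues come in +/- pairs.
assert np.allclose(spec, -spec[::-1])
```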
  • Pretty Good State Transfer in Discrete-Time Quantum Walks
Pretty good state transfer in discrete-time quantum walks. Ada Chan and Hanmeng Zhan, Department of Mathematics and Statistics, York University, Toronto, ON, Canada. arXiv:2105.03762v1 [math.CO] 8 May 2021.

Abstract: We establish the theory for pretty good state transfer in discrete-time quantum walks. For a class of walks, we show that pretty good state transfer is characterized by the spectrum of a certain Hermitian adjacency matrix of the graph; more specifically, the vertices involved in pretty good state transfer must be m-strongly cospectral relative to this matrix, and the arccosines of its eigenvalues must satisfy some number theoretic conditions. Using normalized adjacency matrices, cyclic covers, and the theory on linear relations between geodetic angles, we construct several infinite families of walks that exhibit this phenomenon.

1. Introduction. A quantum walk with nice transport properties is desirable in quantum computation. For example, Grover's search algorithm [12] is a quantum walk on the looped complete graph, which sends the all-ones vector to a vector that almost "concentrates on" a vertex. In this paper, we study a slightly stronger notion of state transfer, where the target state can be approximated with arbitrary precision. This is called pretty good state transfer. Pretty good state transfer has been extensively studied in continuous-time quantum walks, especially on the paths [9, 22, 1, 5, 21]. However, not much is known for the discrete-time analogues. The biggest difference between these two models is that in a continuous-time quantum walk, the evolution is completely determined by the adjacency or Laplacian matrix of the graph, while in a discrete-time quantum walk, the transition matrix depends on more than just the graph; usually, it is a product of two unitary matrices:

    U = SC,

where S permutes the arcs of the graph, and C, called the coin matrix, sends each arc to a linear combination of the outgoing arcs of the same vertex.
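The factorization U = SC is concrete enough to code directly. A minimal sketch of an arc-based walk using the Grover coin (one standard choice of C; the paper treats more general coins), verifying that the resulting U is unitary. The helper name and the example graph are mine.

```python
import numpy as np

def grover_coin_walk(adj):
    """Transition matrix U = S C of a discrete-time quantum walk on the
    arcs of a simple graph, with the Grover coin at each vertex."""
    n = adj.shape[0]
    arcs = [(u, v) for u in range(n) for v in range(n) if adj[u, v]]
    idx = {a: i for i, a in enumerate(arcs)}
    m = len(arcs)
    # S permutes each arc (u, v) to its reversal (v, u).
    S = np.zeros((m, m))
    for (u, v), i in idx.items():
        S[idx[(v, u)], i] = 1.0
    # C acts on the outgoing arcs of each vertex as the Grover
    # reflection (2/d) J - I, where d is the vertex degree.
    C = np.zeros((m, m))
    for u in range(n):
        out = [idx[(u, v)] for v in range(n) if adj[u, v]]
        d = len(out)
        for i in out:
            for j in out:
                C[i, j] = 2.0 / d - (1.0 if i == j else 0.0)
    return S @ C

# Example: the 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
U = grover_coin_walk(A)
assert np.allclose(U @ U.T, np.eye(U.shape[0]))  # U is unitary (real orthogonal)
```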
• Hadamard and Conference Matrices (Mathematics Study Group)
Hadamard and conference matrices. Peter J. Cameron, University of St Andrews & Queen Mary University of London. Mathematics Study Group, with input from Rosemary Bailey, Katarzyna Filipiak, Joachim Kunert, Dennis Lin, Augustyn Markiewicz, Will Orrick, Gordon Royle. Happy Birthday, MSG, and many happy returns!

Hadamard's theorem. Let H be an n × n matrix, all of whose entries are at most 1 in modulus. How large can det(H) be? Now det(H) is equal to the volume of the n-dimensional parallelepiped spanned by the rows of H. By assumption, each row has Euclidean length at most n^{1/2}, so that det(H) ≤ n^{n/2}; equality holds if and only if

- every entry of H is ±1;
- the rows of H are orthogonal, that is, HHᵀ = nI.

A matrix attaining the bound is a Hadamard matrix. This is a nice example of a continuous problem whose solution brings us into discrete mathematics.
  • On D. G. Higman's Note on Regular 3-Graphs
On D. G. Higman's note on regular 3-graphs. Daniel Kalmanovich, Department of Mathematics, Ben-Gurion University of the Negev, 84105 Beer Sheva, Israel. Ars Mathematica Contemporanea 6 (2013) 99–115. Received 17 October 2011, accepted 25 April 2012, published online 4 June 2012.

Abstract: We introduce the notion of a t-graph and prove that regular 3-graphs are equivalent to cyclic antipodal 3-fold covers of a complete graph. This generalizes the equivalence of regular two-graphs and Taylor graphs. As a consequence, an equivalence between cyclic antipodal distance regular graphs of diameter 3 and certain rank 6 commutative association schemes is proved. New examples of regular 3-graphs are presented.

Keywords: antipodal graph, association scheme, distance regular graph of diameter 3, Godsil-Hensel matrix, group ring, Taylor graph, two-graph. Math. Subj. Class.: 05E30, 05B20, 05E18.

1. Introduction. This paper is mainly a clarification of [6], a short draft written by Donald Higman in 1994, entitled "A note on regular 3-graphs". The considered generalization of two-graphs was introduced by D. G. Higman in [5]. As in the famous correspondence between two-graphs and switching classes of simple graphs, t-graphs are interpreted as equivalence classes of an appropriate switching relation defined on weights, which play the role of simple graphs. In his note Higman uses certain association schemes to characterize regular 3-graphs and to obtain feasibility conditions for their parameters. Specifically, he provides a graph theoretic interpretation of a weight, and from the resulting graph he constructs a rank 4 symmetric association scheme and a rank 6 fission of it.
  • 2020 Ural Workshop on Group Theory and Combinatorics
2020 Ural Workshop on Group Theory and Combinatorics. Institute of Natural Sciences and Mathematics of the Ural Federal University named after the first President of Russia B. N. Yeltsin; N. N. Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences; the Ural Mathematical Center. Yekaterinburg and online, Russia, August 24–30, 2020. Abstracts. Yekaterinburg, 2020.

Editor in Chief: Natalia Maslova. Editors: Vladislav Kabanov, Anatoly Kondrat'ev. Managing Editors: Nikolai Minigulov, Kristina Ilenko. Booklet cover designer: Natalia Maslova. © Institute of Natural Sciences and Mathematics of Ural Federal University named after the first President of Russia B. N. Yeltsin; N. N. Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences; the Ural Mathematical Center, 2020.

Contents (excerpt): Conference program, 5. Plenary talks, 8: Bailey R. A., Latin cubes, 9; Cameron P. J., From de Bruijn graphs to automorphisms of the shift, 10; Gorshkov I. B., On Thompson's conjecture for finite simple groups, 11; Ito T., The Weisfeiler-Leman stabilization revisited from the viewpoint of Terwilliger algebras, 12; Ivanov A. A., Densely embedded subgraphs in locally projective graphs, 13; Kabanov V. V., On strongly Deza graphs, 14; Khachay M. Yu., Efficient approximation of vehicle routing problems in metrics of a fixed doubling dimension, …
• On Euclidean Distance Matrices of Graphs
On Euclidean distance matrices of graphs. Gašper Jaklič and Jolanda Modic. Electronic Journal of Linear Algebra, Volume 26, pp. 574–589, August 2013. ISSN 1081-3810; a publication of the International Linear Algebra Society.

Abstract: In this paper, a relation between graph distance matrices and Euclidean distance matrices (EDM) is considered. It is proven that distance matrices of paths and cycles are EDMs. The proofs are constructive and the generating points of studied EDMs are given in a closed form. A generalization to weighted graphs (networks) is tackled.

Key words: graph, Euclidean distance matrix, distance, eigenvalue. AMS subject classifications: 15A18, 05C50, 05C12.

1. Introduction. A matrix D ∈ R^{n×n} is a Euclidean distance matrix (EDM) if there exist points x_i ∈ R^r, i = 1, 2, …, n, such that d_ij = ‖x_i − x_j‖². The minimal possible r is called an embedding dimension (see, e.g., [3]). Euclidean distance matrices were introduced by Menger in 1928; later they were studied by Schoenberg [13] and other authors. They have many interesting properties, and are used in various applications in linear algebra, graph theory, and bioinformatics. A natural problem is to study configurations of points x_i, where only distances between them are known.

Definition 1.1. A matrix D = (d_ij) ∈ R^{n×n} is hollow if d_ii = 0 for all i = 1, 2, …, n.

There are various characterizations of EDMs.

Lemma 1.2 ([8, Lemma 5.3]). Let a symmetric hollow matrix D ∈ R^{n×n} have only one positive eigenvalue λ and the corresponding eigenvector e := [1, 1, …, 1]^T ∈ R^n.
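Schoenberg's classical criterion gives a quick numerical check of the paper's headline claim: D is an EDM exactly when −½ J D J is positive semidefinite, with J = I − (1/n)·11ᵀ the centering matrix. A small sketch applying it to the graph distance matrices of a path and a cycle; the helper name is mine, and this is a numeric check, not the paper's constructive proof.

```python
import numpy as np

def is_edm(D, tol=1e-9):
    """Schoenberg test: D (squared distances in the paper's convention)
    is an EDM iff -1/2 J D J is positive semidefinite."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return np.linalg.eigvalsh(-0.5 * J @ D @ J).min() >= -tol

n = 6
path = np.abs(np.subtract.outer(range(n), range(n)))   # graph distances on P6
cycle = np.minimum(path, n - path)                     # graph distances on C6
assert is_edm(path.astype(float))
assert is_edm(cycle.astype(float))
```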
• Sparsest Network Support Estimation: A Submodular Approach
Sparsest network support estimation: a submodular approach. Mario Coutino, Sundeep Prabhakar Chepuri, Geert Leus. Delft University of Technology.

Abstract: In this work, we address the problem of identifying the underlying network structure of data. Different from other approaches, which are mainly based on convex relaxations of an integer problem, here we take a distinct route relying on algebraic properties of a matrix representation of the network. By describing what we call possible ambiguities on the network topology, we proceed to employ submodular analysis techniques for retrieving the network support, i.e., network edges. To achieve this we only make use of the network modes derived from the data. Numerical examples showcase the effectiveness of the proposed algorithm in recovering the support of sparse networks.

Index terms: graph signal processing, graph learning, sparse graphs, network deconvolution, network topology inference.

… recovery of the adjacency and normalized Laplacian matrix, in the general case, due to the convex formulation of the recovery problem, solutions that only retrieve the support approximately are obtained, i.e., thresholding operations are required. Taking this issue into account, and following arguments closely related to the ones used for network topology estimation through spectral templates [10], the main aim of this work is twofold: (i) to provide a proper characterization of a particular set of matrices that are of high importance within graph signal processing, namely fixed-support matrices with the same eigenbasis; and (ii) to provide a pure combinatorial algorithm, based on the submodular machinery that has been proven useful in subset selection problems within signal processing [18–22], as an alternative to traditional convex relaxations.
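The paper's algorithm is not reproduced in this excerpt, but the submodular machinery it appeals to typically enters through greedy subset selection. A generic sketch of greedy maximization of a monotone submodular set function (here a toy coverage function), purely to illustrate that machinery; it is not the support-estimation algorithm of the paper, and all names are mine.

```python
def greedy_max(f, ground, k):
    """Greedy maximization of a (monotone) submodular set function f
    over subsets of `ground` with at most k elements."""
    S = set()
    for _ in range(k):
        # Marginal gain of each remaining element.
        gains = {e: f(S | {e}) - f(S) for e in ground if e not in S}
        best, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:
            break
        S.add(best)
    return S

# Toy coverage function: f(S) = number of universe elements covered.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}, 3: {1, 7}}
coverage = lambda S: len(set().union(*(sets[i] for i in S)) if S else set())
print(greedy_max(coverage, sets.keys(), 2))   # {0, 2}, covering 7 elements
```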