A Note on the Joint Spectral Radius
16 pages · PDF · 1,020 KB
Recommended publications
Linear Systems and Control: a First Course (Course Notes for AAE 564)
Martin Corless, School of Aeronautics & Astronautics, Purdue University, West Lafayette, Indiana. August 25, 2008. Contents: introduction (ingredients, notation, MATLAB); state space representation of dynamical systems, with linear examples (a first example, the unattached mass, spring-mass-damper, a simple structure), nonlinear examples (a first nonlinear system, planar pendulum, attitude dynamics of a rigid body, body in central force motion, ballistics in drag, double pendulum on cart, two-link robotic manipulator), and discrete-time examples (the discrete unattached mass); the general continuous-time and discrete-time representations, with exercises; and vectors (vector spaces and IR^n, IR^2 and pictures, derivatives).
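Where the notes set up state space models such as the spring-mass-damper, the construction is to rewrite m x'' + c x' + k x = u as a first-order system s' = As + Bu with state s = (position, velocity). A minimal sketch of that rewriting, with illustrative parameter values that are assumptions here rather than values from the notes:

```python
import numpy as np

# Spring-mass-damper: m*x'' + c*x' + k*x = u, rewritten with
# state vector s = [position, velocity] as s' = A s + B u.
m, c, k = 1.0, 0.4, 2.0   # illustrative values, not from the notes

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])

# Simple forward-Euler simulation of the unforced response (u = 0).
dt, steps = 0.01, 1000
s = np.array([1.0, 0.0])          # start displaced, at rest
for _ in range(steps):
    s = s + dt * (A @ s)
print("state after 10 s:", s)
```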
Scientific Computing: an Introductory Survey
Chapter 4 – Eigenvalue Problems. Prof. Michael T. Heath, Department of Computer Science, University of Illinois at Urbana-Champaign. Copyright 2002; reproduction permitted for noncommercial, educational use only. Outline: eigenvalue problems; existence, uniqueness, and conditioning; computing eigenvalues and eigenvectors. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis, and eigenvalues are also important in analyzing numerical methods. Theory and algorithms apply to complex matrices as well as real matrices; with complex matrices, we use the conjugate transpose A^H instead of the usual transpose A^T. Standard eigenvalue problem: given an n × n matrix A, find a scalar λ and a nonzero vector x such that Ax = λx; λ is an eigenvalue and x is a corresponding eigenvector.
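The defining relation Ax = λx is easy to verify numerically; a small sketch using NumPy's dense eigensolver (the 2 × 2 matrix is an illustrative choice, not one of Heath's examples):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Solve A x = lambda x for all eigenpairs.
eigvals, eigvecs = np.linalg.eig(A)
print("eigenvalues:", eigvals)      # expect 3 and 1 for this matrix

# Verify the defining relation for the first pair.
lam, x = eigvals[0], eigvecs[:, 0]
print("residual:", np.linalg.norm(A @ x - lam * x))   # ~ 0
```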
Diagonalizable Matrix - Wikipedia, the Free Encyclopedia
http://en.wikipedia.org/wiki/Matrix_diagonalization
From Wikipedia, the free encyclopedia (redirected from Matrix diagonalization). In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that P^{-1}AP is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map. A square matrix which is not diagonalizable is called defective. Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known, and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power. Geometrically, a diagonalizable matrix is an inhomogeneous dilation (or anisotropic scaling): it scales the space, as a homogeneous dilation does, but by a different factor in each direction, determined by the scale factors on each axis (the diagonal entries). The fundamental fact about diagonalizable maps and matrices: an n-by-n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces equals n, which is the case if and only if there exists a basis of F^n consisting of eigenvectors of A.
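Since A = PDP^{-1} gives A^k = PD^kP^{-1}, powers of a diagonalizable matrix reduce to powers of the diagonal entries, as the article notes. A short sketch (the matrix is a made-up example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # distinct eigenvalues (5 and 2)

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors
D5 = np.diag(eigvals ** 5)          # power the diagonal entries only
A5 = P @ D5 @ np.linalg.inv(P)      # A^5 = P D^5 P^{-1}

print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```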
Mata Glossary of Common Terms
Title: stata.com Glossary. Description: commonly used terms are defined here. Mata glossary. arguments: The values a function receives are called the function's arguments. For instance, in lud(A, L, U), A, L, and U are the arguments. array: An array is any indexed object that holds other objects as elements. Vectors are examples of 1-dimensional arrays: vector v is an array, and v[1] is its first element. Matrices are 2-dimensional arrays: matrix X is an array, and X[1,1] is its first element. In theory, one can have 3-dimensional, 4-dimensional, and higher arrays, although Mata does not directly provide them. See [M-2] Subscripts for more information on arrays in Mata. Arrays are usually indexed by sequential integers, but in associative arrays the indices are strings that have no natural ordering. Associative arrays can be 1-dimensional, 2-dimensional, or higher. If A were an associative array, then A["first"] might be one of its elements. See [M-5] asarray() for associative arrays in Mata. binary operator: A binary operator is an operator applied to two arguments. In 2-3, the minus sign is a binary operator, as opposed to the minus sign in -9, which is a unary operator. broad type: Two matrices are said to be of the same broad type if the elements in each are numeric, are string, or are pointers. Mata provides two numeric types, real and complex. The term broad type is used to mask the distinction within numeric and is often used when discussing operators or functions.
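The glossary's distinction between integer-indexed arrays and string-indexed associative arrays carries over to most languages; a rough Python analogue — not Mata syntax — of a vector, a matrix, and an associative array:

```python
import numpy as np

v = np.array([10.0, 20.0, 30.0])   # 1-dimensional array (vector)
X = np.eye(3)                      # 2-dimensional array (matrix)
print(v[0], X[0, 0])               # integer indices with a natural order

# Associative array: indexed by strings with no natural ordering,
# analogous to Mata's asarray().
A = {"first": 1.0, "second": 2.0}
print(A["first"])
```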
Chapter 2 a Short Review of Matrix Algebra
2.1 Vectors and vector spaces. Definition 2.1.1. A vector a of dimension n is a collection of n elements, typically written as a column a = (a_1, a_2, ..., a_n)^T = (a_i)_n. Vectors of length 2 (two-dimensional vectors) can be thought of as points in the plane (see figures). [Figure 2.1: Vectors in two and three dimensional spaces.] A vector with all elements equal to zero is known as a zero vector and is denoted by 0. A vector whose elements are stacked vertically is known as a column vector, whereas a vector whose elements are stacked horizontally is referred to as a row vector. (Unless otherwise mentioned, all vectors will be referred to as column vectors.) A row vector representation of a column vector is known as its transpose; we will use the notation ' or T to indicate a transpose. For instance, if a = (a_1, a_2, ..., a_n)^T and b = (a_1 a_2 ... a_n), then we write b = a^T or a = b^T. Vectors of the same dimension are conformable to algebraic operations such as addition and subtraction: the sum of two or more vectors of dimension n is another n-dimensional vector whose elements are the sums of the corresponding elements of the summand vectors.
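The column-vector conventions above map directly onto array code; a brief sketch of the transpose and of conformable addition, with made-up numeric values:

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])   # 3-dimensional column vector
b = a.T                               # its transpose: a row vector
print(b.shape)                        # (1, 3)

c = np.array([[4.0], [5.0], [6.0]])   # same dimension: conformable
print(a + c)                          # elementwise sum, still a column
print(np.zeros((3, 1)))               # the zero vector of dimension 3
```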
The Jacobian of the Exponential Function
Magnus, Jan R.; Pijls, Henk G. J.; Sentana, Enrique (2020): The Jacobian of the exponential function, Tinbergen Institute Discussion Paper No. TI 2020-035/III, Tinbergen Institute, Amsterdam and Rotterdam. Provided in cooperation with the Tinbergen Institute. This version is available at http://hdl.handle.net/10419/220072.
Facts from Linear Algebra
Abstract: We introduce the notation of vectors and matrices (Section A.1) and recall the solvability of linear systems (Section A.2). Section A.3 introduces the spectrum σ(A), matrix polynomials P(A) and their spectra, the spectral radius ρ(A), and its properties. Block structures are introduced in Section A.4. Subjects of Section A.5 are orthogonal and orthonormal vectors, orthogonalisation, the QR method, and orthogonal projections. Section A.6 is devoted to the Schur normal form (§A.6.1) and the Jordan normal form (§A.6.2). Diagonalisability is discussed in §A.6.3. Finally, in §A.6.4, the singular value decomposition is explained.

A.1 Notation for Vectors and Matrices. We recall that the field K denotes either R or C. Given a finite index set I, the linear space of all vectors x = (x_i)_{i∈I} with x_i ∈ K is denoted by K^I. The corresponding square matrices form the space K^{I×I}; K^{I×J}, with another index set J, describes rectangular matrices mapping K^J into K^I. The linear subspace of a vector space V spanned by the vectors {x^α ∈ V : α ∈ I} is denoted and defined by span{x^α : α ∈ I} := { Σ_{α∈I} a_α x^α : a_α ∈ K }. Let A = (a_{αβ})_{α,β∈I} ∈ K^{I×I}. Then A^T = (a_{βα})_{α,β∈I} denotes the transposed matrix, while A^H = (ā_{βα})_{α,β∈I} is the adjoint (or Hermitian transposed) matrix. Note that A^H = A^T holds if K = R. Since (x_1, x_2, ...) indicates a row vector, (x_1, x_2, ...)^T is used for a column vector.
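The spectral radius ρ(A) = max{|λ| : λ ∈ σ(A)} introduced in Section A.3 is the single-matrix quantity that the joint spectral radius of the main note generalizes to sets of matrices. A short sketch of computing it (the 2 × 2 example is an assumption for illustration):

```python
import numpy as np

def spectral_radius(A):
    # rho(A) = max |lambda| over the spectrum sigma(A).
    return max(abs(np.linalg.eigvals(A)))

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # eigenvalues -1 and -2
print(spectral_radius(A))           # 2.0

# Gelfand's formula rho(A) = lim_k ||A^k||^(1/k); a crude check.
k = 50
print(np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k))
```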
On the Distance Spectra of Graphs
Ghodratollah Aalipour, Aida Abiad, Zhanar Berikkyzy, Jay Cummings, Jessica De Silva, Wei Gao, Kristin Heysse, Leslie Hogben, Franklin H. J. Kenter, Jephian C.-H. Lin, and Michael Tait. October 5, 2015. Abstract: The distance matrix of a graph G is the matrix containing the pairwise distances between vertices. The distance eigenvalues of G are the eigenvalues of its distance matrix, and they form the distance spectrum of G. We determine the distance spectra of halved cubes, double odd graphs, and Doob graphs, completing the determination of distance spectra of distance regular graphs having exactly one positive distance eigenvalue. We characterize strongly regular graphs having more positive than negative distance eigenvalues. We give examples of graphs with few distinct distance eigenvalues but lacking regularity properties. We also determine the determinant and inertia of the distance matrices of lollipop and barbell graphs. Keywords: distance matrix, eigenvalue, distance regular graph, Kneser graph, double odd graph, halved cube, Doob graph, lollipop graph, barbell graph, distance spectrum, strongly regular graph, optimistic graph, determinant, inertia, graph. AMS subject classifications: 05C12, 05C31, 05C50, 15A15, 15A18, 15B48, 15B57.

1 Introduction. The distance matrix D(G) = [d_ij] of a graph G is the matrix indexed by the vertices of G where d_ij = d(v_i, v_j) is the distance between the vertices v_i and v_j, i.e., the length of a shortest path between v_i and v_j. Distance matrices were introduced in the study of a data communication problem in [16]. This problem involves finding appropriate addresses so that a message can move efficiently through a series of loops from its origin to its destination, choosing the best route at each switching point.
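For a small graph, the distance spectrum can be computed directly; a sketch that builds D(G) by breadth-first search and takes its eigenvalues, using the 5-cycle as an illustrative graph rather than one of the paper's families:

```python
import numpy as np
from collections import deque

def distance_matrix(adj):
    # BFS from every vertex of an unweighted graph given as an
    # adjacency list; entry [i, j] = d(v_i, v_j).
    n = len(adj)
    D = np.zeros((n, n))
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, d in dist.items():
            D[s, v] = d
    return D

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
D = distance_matrix(cycle5)
print(np.round(np.linalg.eigvalsh(D), 4))   # the distance spectrum of C_5
```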
Nonsymmetric Eigenvalue Problems
4.1 Introduction. We discuss canonical forms (in Section 4.2), perturbation theory (in Section 4.3), and algorithms for the eigenvalue problem for a single nonsymmetric matrix A (in Section 4.4). Chapter 5 is devoted to the special case of real symmetric matrices A = A^T (and the SVD). Section 4.5 discusses generalizations to eigenvalue problems involving more than one matrix, including motivating applications from the analysis of vibrating systems, the solution of linear differential equations, and computational geometry. Finally, Section 4.6 summarizes all the canonical forms, algorithms, costs, applications, and available software in a list. One can roughly divide the algorithms for the eigenproblem into two groups: direct methods and iterative methods. This chapter considers only direct methods, which are intended to compute all of the eigenvalues and (optionally) eigenvectors. Direct methods are typically used on dense matrices and cost O(n^3) operations to compute all eigenvalues and eigenvectors; this cost is relatively insensitive to the actual matrix entries. The main direct method used in practice is QR iteration with implicit shifts (see Section 4.4.8). It is interesting that after more than 30 years of dependable service, convergence failures of this algorithm have quite recently been observed, analyzed, and patched [25, 65]. But there is still no global convergence proof, even though the current algorithm is considered quite reliable. So the problem of devising an algorithm that is numerically stable and globally (and quickly!) convergent remains open. (Note that "direct" methods must still iterate, since finding eigenvalues is mathematically equivalent to finding zeros of polynomials, for which no noniterative methods can exist.)
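To make the division concrete, here is a minimal sketch of unshifted QR iteration — deliberately simpler than the implicitly shifted algorithm of Section 4.4.8, and reliable only when the eigenvalues are real with distinct absolute values:

```python
import numpy as np

def qr_iteration(A, iters=200):
    # Repeatedly factor A_k = Q_k R_k and form A_{k+1} = R_k Q_k.
    # Each A_k is similar to A; under the stated assumptions the
    # iterates approach upper triangular form, exposing the
    # eigenvalues on the diagonal.
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)

A = np.array([[3.0, 1.0],
              [2.0, 2.0]])                    # eigenvalues 4 and 1
print(np.sort(qr_iteration(A)))               # ~ [1., 4.]
print(np.sort(np.linalg.eigvals(A).real))     # reference values
```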
On the Construction of Nearest Defective Matrices to a Normal Matrix
Rafikul Alam, Department of Mathematics, Indian Institute of Technology Guwahati, Guwahati 781 039, India. Abstract: Let A be an n-by-n normal matrix with n distinct eigenvalues. We describe a simple procedure to construct a defective matrix whose distance from A, among all the defective matrices, is the shortest. Our construction requires only an appropriate pair of eigenvalues of A and their corresponding eigenvectors. We also show that the nearest defective matrices influence the evolution of spectral projections of A. Keywords: defective matrix, multiple eigenvalue, pseudospectra, eigenvector, spectral projection.

1 Introduction. Let A ∈ C^{n×n} be a normal matrix with n distinct eigenvalues. Let d(A) be the distance of A from the set of defective matrices, that is,

d(A) := inf{ ||A − A_0|| : A_0 is defective },    (1.1)

where ||·|| is either the 2-norm or the Frobenius norm. For a non-normal B, the determination of d(B) and a defective matrix B_0 such that ||B − B_0|| = d(B) is a nontrivial task and is widely known as Wilkinson's problem (see, for example, [8], [9], [10], [6], [2], [5], [4], [1]). By contrast, since A is normal, it is easy to see that for the 2-norm

d(A) = (1/2) min_{i≠j} |λ_i − λ_j|,    (1.2)

where λ_1, ..., λ_n are the eigenvalues of A. However, it appears that the construction of a defective matrix A_0 such that ||A − A_0|| = d(A) is nontrivial. Note that the existence of such an A_0 would imply that the infimum in (1.1) is actually the minimum.
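For a normal matrix, equation (1.2) makes d(A) directly computable from the eigenvalues; a small sketch under that assumption (the diagonal test matrix is illustrative):

```python
import numpy as np
from itertools import combinations

def distance_to_defective(A):
    # For a normal matrix, equation (1.2) gives, in the 2-norm,
    # d(A) = (1/2) * min over i != j of |lambda_i - lambda_j|.
    lam = np.linalg.eigvals(A)
    return 0.5 * min(abs(a - b) for a, b in combinations(lam, 2))

# A normal (here diagonal) matrix with distinct eigenvalues.
A = np.diag([1.0, 2.5, 4.0])
print(distance_to_defective(A))   # 0.75 = (2.5 - 1.0) / 2
```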
Consensus Guidelines for the Use and Interpretation of Angiogenesis Assays
Angiogenesis (2018) 21:425–532, https://doi.org/10.1007/s10456-018-9613-x. Review paper by Patrycja Nowak-Sliwinska, Kari Alitalo, Elizabeth Allen, Andrey Anisimov, Alfred C. Aplin, Robert Auerbach, Hellmut G. Augustin, David O. Bates, Judy R. van Beijnum, and over a hundred further coauthors.
Spectra of Graphs
Andries E. Brouwer and Willem H. Haemers. Contents of Chapter 1, Graph spectrum: matrices associated to a graph; the spectrum of a graph (characteristic polynomial); the spectrum of an undirected graph (regular graphs, complements, walks, diameter, spanning trees, bipartite graphs, connectedness); spectra of some graphs (the complete graph, the complete bipartite graph, the cycle, the path, line graphs, Cartesian products, Kronecker products and bipartite double, strong products, Cayley graphs); decompositions (decomposing K_10 into Petersen graphs, decomposing K_n into complete bipartite graphs); automorphisms; algebraic connectivity; cospectral graphs (the 4-cube, Seidel switching, Godsil-McKay switching, reconstruction); very small graphs; exercises. Chapter 2, Linear algebra, follows.
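Two of the listed spectra are classical: the complete graph K_n has eigenvalues n − 1 (once) and −1 (with multiplicity n − 1), and the cycle C_n has eigenvalues 2 cos(2πj/n). A short sketch verifying both numerically:

```python
import numpy as np

n = 5

# Adjacency matrix of the complete graph K_n: all ones off the diagonal.
K = np.ones((n, n)) - np.eye(n)
print(np.round(np.linalg.eigvalsh(K), 4))   # [-1, -1, -1, -1, 4]

# Adjacency matrix of the cycle C_n.
C = np.zeros((n, n))
for i in range(n):
    C[i, (i + 1) % n] = C[(i + 1) % n, i] = 1
print(np.round(np.linalg.eigvalsh(C), 4))   # 2*cos(2*pi*j/n), j = 0..n-1
```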