Eigenvalue, Eigenvector and Eigenspace

Total pages: 16

File type: PDF, size: 1020 KB

Eigenvalue, eigenvector and eigenspace
From Wikipedia, the free encyclopedia (Redirected from Eigenvector)
Source: http://en.wikipedia.org/wiki/Eigenvector

In mathematics, an eigenvector of a transformation[1] is a non-null vector whose direction is unchanged by that transformation. The factor by which the magnitude is scaled is called the eigenvalue of that vector. (See Fig. 1.) Often, a transformation is completely described by its eigenvalues and eigenvectors. An eigenspace is a set of eigenvectors with a common eigenvalue.

These concepts play a major role in several branches of both pure and applied mathematics, appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear situations.

It is common to prefix any natural name for the solution with eigen- instead of saying eigenvector. For example: eigenfunction if the eigenvector is a function, eigenmode if the eigenvector is a harmonic mode, eigenstate if the eigenvector is a quantum state, and so on (e.g. the eigenface example below). Similarly for the eigenvalue, e.g. eigenfrequency if the eigenvalue is (or determines) a frequency.

Fig. 1. In this shear transformation of the Mona Lisa, the picture was deformed in such a way that its central vertical axis (red vector) was not modified, but the diagonal vector (blue) has changed direction. Hence the red vector is an eigenvector of the transformation and the blue vector is not. Since the red vector was neither stretched nor compressed, its eigenvalue is 1. All vectors along the same vertical line are also eigenvectors, with the same eigenvalue. They form the eigenspace for this eigenvalue.
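The situation in Fig. 1 can be checked with a short numerical sketch. This is a minimal illustration; the 0.5 shear factor and the specific test vectors are assumptions chosen for the example, not values from the article. A vertical shear leaves vectors along the vertical axis unchanged, so they are eigenvectors with eigenvalue 1, while a diagonal vector changes direction and is not an eigenvector.

```python
import numpy as np

# Vertical shear: x' = x, y' = 0.5*x + y.
# Vectors along the vertical axis (x = 0) are left unchanged.
S = np.array([[1.0, 0.0],
              [0.5, 1.0]])

v_red = np.array([0.0, 1.0])    # along the central vertical axis
v_blue = np.array([1.0, 1.0])   # a diagonal vector

print(S @ v_red)    # unchanged: an eigenvector with eigenvalue 1
print(S @ v_blue)   # direction changed: not an eigenvector
```

Every vector of the form (0, y) is mapped to itself, which is exactly the one-dimensional eigenspace for the eigenvalue 1 described in the caption.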
1 of 14 16/Oct/06 5:01 PM

Contents

1 History
2 Definitions
3 Examples
4 Eigenvalue equation
5 Spectral theorem
6 Eigenvalues and eigenvectors of matrices
  6.1 Computing eigenvalues and eigenvectors of matrices
    6.1.1 Symbolic computations
    6.1.2 Numerical computations
  6.2 Properties
    6.2.1 Algebraic multiplicity
    6.2.2 Decomposition theorems for general matrices
    6.2.3 Some other properties of eigenvalues
  6.3 Conjugate eigenvector
  6.4 Generalized eigenvalue problem
  6.5 Entries from a ring
7 Infinite dimensional spaces
8 Applications
9 Notes
10 References
11 External links

History

Nowadays, eigenvalues are usually introduced in the context of matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

In the first half of the 18th century, Johann and Daniel Bernoulli, d'Alembert, and Euler encountered eigenvalue problems when studying the motion of a rope, which they considered to be a weightless string loaded with a number of masses. Laplace and Lagrange continued their work in the second half of the century. They realized that the eigenvalues are related to the stability of the motion, and they also used eigenvalue methods in their study of the solar system.[2]

Euler had also studied the rotational motion of a rigid body and discovered the importance of the principal axes. As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix.[3] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[4] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called an eigenvalue; his term survives in "characteristic equation".[5]

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[6] Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues.[4] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[5] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[4] and Clebsch found the corresponding result for skew-symmetric matrices.[5] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[4]

In the meantime, Liouville had studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[7] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[8]

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by considering them to be infinite matrices.[9] He was the first to use the German word eigen to denote eigenvalues and eigenvectors, in 1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own", "peculiar to", "characteristic" or "individual", emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[10]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by Francis and Kublanovskaya in 1961.[11]

Definitions

See also: Eigenplane

Transformations of space, such as translation (or shifting the origin), rotation, reflection, stretching, compression, or any combination of these, may be visualized by the effect they produce on vectors. Vectors can be visualized as arrows pointing from one point to another.

- Eigenvectors of transformations are vectors[12] which are either left unaffected or simply multiplied by a scale factor after the transformation.
- An eigenvector's eigenvalue is the scale factor by which it has been multiplied.
- An eigenspace is a space consisting of all eigenvectors which have the same eigenvalue, together with the zero (null) vector, which itself is not an eigenvector.
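The power method mentioned in the History section is simple enough to sketch in a few lines. This is a minimal, illustrative version; the test matrix and the iteration count are arbitrary assumptions. Repeatedly applying A to a vector and renormalizing converges to the eigenvector of the largest-magnitude eigenvalue, provided such a dominant eigenvalue exists.

```python
import numpy as np

def power_method(A, num_iters=500):
    """Power iteration (Von Mises, 1929): converges to the dominant
    eigenpair when one eigenvalue strictly dominates in magnitude."""
    v = np.ones(A.shape[0])
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)   # renormalize to avoid overflow
    lam = v @ A @ v                 # Rayleigh quotient estimate
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric; eigenvalues are 3 and 1
lam, v = power_method(A)
print(round(lam, 6))                # dominant eigenvalue: 3.0
```

The returned pair satisfies the defining relation A v = λ v up to floating-point error; more robust variants add shifts, deflation, or convergence tests, as in the QR algorithm that superseded it.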
- The principal eigenvector of a transformation is the eigenvector with the largest corresponding eigenvalue.
- The geometric multiplicity of an eigenvalue is the dimension of the associated eigenspace.
- The spectrum of a transformation on finite dimensional vector spaces is the set of all its eigenvalues.

For instance, an eigenvector of a rotation in three dimensions is a vector located within the axis about which the rotation is performed. The corresponding eigenvalue is 1 and the corresponding eigenspace contains all the vectors along the axis. As this is a one-dimensional space, its geometric multiplicity is one. This is the only eigenvalue of the spectrum (of this rotation) that is a real number.

Examples

As the Earth rotates, every arrow pointing outward from the center of the Earth also rotates, except those arrows that lie on the axis of rotation. Consider the transformation of the Earth after one hour of rotation: an arrow from the center of the Earth to the Geographic South Pole would be an eigenvector of this transformation, but an arrow from the center of the Earth to anywhere on the equator would not be an eigenvector. Since the arrow pointing at the pole is not stretched by the rotation of the Earth, its eigenvalue is 1.

Another example is provided by a thin metal sheet expanding uniformly about a fixed point in such a way that the distances from any point of the sheet to the fixed point are doubled. This expansion is a transformation with eigenvalue 2. Every vector from the fixed point to a point on the sheet is an eigenvector, and the eigenspace is the set of all these vectors.

However, three-dimensional geometric space is not the only vector space. For example, consider a stressed rope fixed at both ends, like the vibrating strings of a string instrument (Fig. 2). The distances of the atoms of the vibrating rope from their positions when the rope is at rest can be seen as the components of a vector in a space with as many dimensions as there are atoms in the rope.

Fig. 2. A standing wave in a rope fixed at its boundaries is an example of an eigenvector, or more precisely, an eigenfunction of the transformation giving the acceleration. As time passes, the standing wave is scaled by a sinusoidal oscillation whose frequency is determined by the eigenvalue, but its overall shape is not modified.

Assume the rope is a continuous medium. If one considers the equation for the acceleration at every point of the rope, its eigenvectors, or eigenfunctions, are the standing waves. The standing waves correspond to particular oscillations of the rope such that the acceleration of the rope is simply its shape scaled by a factor; this factor, the eigenvalue, turns out to be −ω², where ω is the angular frequency of the oscillation. Each component of the vector associated with the rope is multiplied by a time-dependent factor sin(ωt). If damping is considered, the amplitude of this oscillation decreases until the rope stops oscillating, corresponding to a complex ω. One can then associate a lifetime with the imaginary part of ω, and relate the concept of an eigenvector to the concept of resonance. Without damping, the fact that the acceleration operator (assuming a uniform density) is Hermitian leads to several important properties, such as that the standing wave patterns are orthogonal functions.

Eigenvalue equation

Mathematically, v_λ is an eigenvector and λ the corresponding eigenvalue of a transformation T if the equation

    T(v_λ) = λ v_λ

is true, where T(v_λ) is the vector obtained when applying the transformation T to v_λ.

Suppose T is a linear transformation, which means that

    T(a v + b w) = a T(v) + b T(w)

for all scalars a, b and vectors v, w. Consider a basis in that vector space. Then T and v_λ can be represented relative to that basis by a matrix A_T (a two-dimensional array) and a column vector v_λ (a one-dimensional vertical array), respectively. The eigenvalue equation in its matrix representation is written

    A_T v_λ = λ v_λ

where the juxtaposition is matrix multiplication. Since in this circumstance the transformation T and its matrix representation A_T are equivalent, we can often use just T for both the matrix representation and the transformation. This is equivalent to a set of n linear equations, where n is the number of basis vectors in the basis set. In this equation both the eigenvalue λ and the n components of v_λ are unknowns.
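In practice, the matrix eigenvalue equation is solved with a library routine rather than by hand. The sketch below (the 2×2 matrix is an arbitrary example, not one from the article) uses NumPy's `numpy.linalg.eig` and verifies the defining equation A v = λ v for each computed pair.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # arbitrary example matrix

# eig returns the eigenvalues and a matrix whose columns are the
# corresponding eigenvectors, in matching order.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    # The defining equation A v = lambda v holds for each pair.
    assert np.allclose(A @ v, lam * v)

print(np.sort(eigenvalues))  # the characteristic roots of A
```

Note the column convention: `eigenvectors[:, i]`, not row `i`, pairs with `eigenvalues[i]`; mixing these up is a common source of bugs.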
However, it is sometimes unnatural or even impossible to write down the eigenvalue equation in matrix form. This occurs, for instance, when the vector space is infinite dimensional, for example in the case of the rope above. Depending on the nature of the transformation T and the space to which it applies, it can be advantageous to represent the eigenvalue equation as a set of differential equations. If T is a differential operator, the eigenvectors are commonly called eigenfunctions of the differential operator representing T. For example, differentiation itself is