Journal of Research of the National Bureau of Standards  Vol. 45, No. 4, October 1950  Research Paper 2133

An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators¹

By Cornelius Lanczos

The present investigation designs a systematic method for finding the latent roots and the principal axes of a matrix, without reducing the order of the matrix. It is characterized by a wide field of applicability and great accuracy, since the accumulation of rounding errors is avoided, through the process of "minimized iterations". Moreover, the method leads to a well convergent successive approximation procedure by which the solution of integral equations of the Fredholm type and the solution of the eigenvalue problem of linear differential and integral operators may be accomplished.

I. Introduction

The eigenvalue problem of linear operators is of central importance for all vibration problems of physics and engineering. The vibrations of elastic structures, the flutter problems of aerodynamics, the stability problem of electric networks, the atomic and molecular vibrations of particle physics, are all diverse aspects of the same fundamental problem, viz., the principal axis problem of quadratic forms.

In view of the central importance of the eigenvalue problem for so many fields of pure and applied mathematics, much thought has been devoted to the designing of efficient methods by which the eigenvalues of a given linear operator may be found. That linear operator may be of the algebraic or of the continuous type; that is, a matrix, a differential operator, or a Fredholm kernel function. Iteration methods play a prominent part in these designs, and the literature on the iteration of matrices is very extensive.² In the English literature of recent years the works of H. Hotelling [1]³ and A. C. Aitken [2] deserve attention. H. Wayland [3] surveys the field in its historical development, up to recent years. W. U. Kincaid [4] obtained additional results by improving the convergence of some of the classical procedures.

The present investigation, although starting out along classical lines, proceeds nevertheless in a different direction. The advantages of the method here developed⁴ can be summarized as follows:

1. The iterations are used in the most economical fashion, obtaining an arbitrary number of eigenvalues and eigensolutions by one single set of iterations, without reducing the order of the matrix.

2. The rapid accumulation of fatal rounding errors, common to all iteration processes if applied to matrices of high dispersion (large "spread" of the eigenvalues), is effectively counteracted by the method of "minimized iterations".

3. The method is directly translatable into analytical terms, by replacing summation by integration. We then get a rapidly convergent analytical iteration process by which the eigenvalues and eigensolutions of linear differential and integral equations may be obtained.

¹ The preparation of this paper was sponsored in part by the Office of Naval Research.
² The basic principles of the various iteration methods are exhaustively treated in the well-known book Elementary matrices by R. A. Frazer, W. J. Duncan, and A. R. Collar (Cambridge University Press, 1938; MacMillan, New York, N. Y., 1947).
³ Figures in brackets indicate the literature references at the end of this paper.
⁴ The literature available to the author showed no evidence that the methods and results of the present investigation have been found before. However, A. M. Ostrowski of the University of Basle and the Institute for Numerical Analysis informed the author that his method parallels the earlier work of some Russian scientists; the references given by Ostrowski are: A. Krylov, Izv. Akad. Nauk SSSR 7, 491 to 539 (1931); N. Luzin, Izv. Akad. Nauk SSSR 7, 903 to 958 (1931). On the basis of the reviews of these papers in the Zentralblatt, the author believes that the two methods coincide only in the point of departure. The author has not, however, read these Russian papers.

II. The Two Classical Solutions of Fredholm's Problem

Since Fredholm's fundamental essay on integral equations [5], we can replace the solution of linear differential and integral equations by the solution of a set of simultaneous ordinary linear equations of infinite order. The problem of Fredholm, if formulated in the language of matrices, can be stated as follows: Find a solution of the equation

    y − λAy = b,                                          (1)

where b is a given vector, λ a given scalar parameter, and A a given matrix (whose order eventually approaches infinity); whereas y is the unknown vector. The problem includes the inversion of a matrix (λ = ∞) and the problem of the characteristic solutions, also called "eigensolutions", (b = 0) as special cases.

Two fundamentally different classical solutions of this problem are known. The first solution is known as the "Liouville-Neumann expansion" [6]. We consider A as an algebraic operator and obtain formally the following infinite geometric series:

    y = b + λAb + λ²A²b + λ³A³b + ⋯                       (2)

This series converges for sufficiently small values of |λ| but diverges beyond a certain |λ| = |λ₁|. The solution is obtained by a series of successive "iterations";⁵ we construct in succession the following set of vectors:

    b₀ = b
    b₁ = Ab₀
    b₂ = Ab₁                                              (3)
    ⋯

and then form the sum:

    y = b₀ + λb₁ + λ²b₂ + ⋯                               (4)

The merit of this solution is that it requires nothing but a sequence of iterations. The drawback of the solution is that its convergence is limited to sufficiently small values of λ.

The second classical solution is known as the Schmidt series [7]. We assume that the matrix A is "nondefective" (i. e. that all its elementary divisors are linear). We furthermore assume that we possess all the eigenvalues⁶ μᵢ and eigenvectors uᵢ of the matrix A, defined by the equations

    Auᵢ = μᵢuᵢ.                                           (5)

If A is nonsymmetric, we need also the "adjoint" eigenvectors uᵢ*, defined with the help of the transposed matrix A*:

    A*uᵢ* = μᵢuᵢ*.                                        (6)

We now form the scalars

    cᵢ = (b·uᵢ*)/(uᵢ·uᵢ*)                                 (7)

and obtain y in form of the following expansion:

    y = Σᵢ cᵢuᵢ/(1 − λμᵢ).                                (8)

This series offers no convergence difficulties, since it is a finite expansion in the case of matrices of finite order and yields a convergent expansion in the case of the infinite matrices associated with the kernels of linear differential and integral operators. The drawback of this solution is (apart from the exclusion of defective matrices⁷) that it presupposes the complete solution of the eigenvalue problem associated with the matrix A.

⁵ Throughout this paper the term "iteration" refers to the application of the given matrix A to a given vector b, by forming the product Ab.
⁶ We shall use the term "eigenvalue" for the numbers μᵢ defined by (5), whereas the reciprocals of the eigenvalues, λᵢ = 1/μᵢ, shall be called "characteristic numbers".
⁷ The characteristic solutions of defective matrices (i. e. matrices whose elementary divisors are not throughout linear) do not include the entire n-dimensional space, since such matrices possess less than n independent principal axes.

III. Solution of the Fredholm Problem by the S-Expansion

We now develop a new expansion that solves the Fredholm problem in similar terms as the Liouville-Neumann series but avoids the convergence difficulty of that solution.

We first notice that the iterated vectors b₀, b₁, b₂, ⋯ cannot be linearly independent of each other beyond a certain definite bₘ. All these vectors find their place within the n-dimensional space of the matrix A, hence not more than n of them can be linearly independent. We thus know in advance that a linear identity of the following form must exist between the successive iterations:

    bₘ + g₁bₘ₋₁ + g₂bₘ₋₂ + ⋯ + gₘb₀ = 0.                  (9)

We cannot tell in advance what m will be, except for the lower and upper bounds:

    1 ≤ m ≤ n.                                            (10)

How to establish the relation (9) by a systematic algorithm will be shown in section VI. For the time being we assume that the relation (9) is already established. We now define the polynomial

    G(x) = xᵐ + g₁xᵐ⁻¹ + ⋯ + gₘ,                          (11)

together with the "inverted polynomial" (the coefficients of which follow the opposite sequence):

    Ḡ(λ) = 1 + g₁λ + g₂λ² + ⋯ + gₘλᵐ.                     (12)

Furthermore, we introduce the partial sums of the latter polynomial:

    S₀ = 1
    S₁ = 1 + g₁λ
    S₂ = 1 + g₁λ + g₂λ²                                   (13)
    ⋯
    Sₘ₋₁ = 1 + g₁λ + ⋯ + gₘ₋₁λᵐ⁻¹.

We now refer to a formula which can be proved by straightforward algebra:⁸

    (1 − λx)[Sₘ₋₁(λ) + Sₘ₋₂(λ)λx + ⋯ + S₀(λ)λᵐ⁻¹xᵐ⁻¹] = Ḡ(λ) − λᵐG(x).    (14)

Let us apply this formula operationally, replacing x by the matrix A, and operating on the vector b₀. Since G(A)b₀ = bₘ + g₁bₘ₋₁ + ⋯ + gₘb₀ = 0 by (9), the last term drops out, and the solution of (1) with b = b₀ becomes

    y = [Sₘ₋₁(λ)b₀ + Sₘ₋₂(λ)λb₁ + ⋯ + S₀(λ)λᵐ⁻¹bₘ₋₁]/Ḡ(λ).    (17)

If we compare this solution with the earlier solution (4), we notice that the expansion (17) may be conceived as a modified form of the Liouville-Neumann series, because it is composed of the same kind of terms, the difference being only that we weight the terms λᵏbₖ by the weight factors

    wₖ = Sₘ₋ₖ₋₁(λ)/Ḡ(λ),                                  (18)

instead of taking them all with the uniform weight factor 1. This weighting has the beneficial effect that the series terminates after m terms, instead of going on endlessly. The weight factors wₖ are very near to 1 for small λ but become more and more important as λ increases. The weighting makes the series convergent for all values of λ.

The remarkable feature of the expansion (17) is its complete generality. No matter how defective the matrix A may be, and no matter how the vector b₀ was chosen, the expansion (17) is always valid, provided only that we interpret it properly. In particular we have to bear in mind that there will always be m polynomials Sₖ(λ), even though every Sₖ(λ) may not be of degree k, due to the vanishing of the higher coefficients. For example, it could happen that

    G(x) = xᵐ,                                            (19)

so that

    Ḡ(λ) = 1 + 0λ + ⋯ + 0λᵐ,                              (20)

    Sₖ(λ) = 1 + 0λ + ⋯ + 0λᵏ,                             (21)

and the formula (17) gives:

    y = b₀ + λb₁ + ⋯ + λᵐ⁻¹bₘ₋₁.                          (22)
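The Liouville-Neumann iterations (2) to (4) can be sketched numerically. This is a minimal sketch: the 2×2 matrix, right-hand side, and λ below are illustrative choices, not taken from the paper; the matrix has eigenvalues μ = 3 and 1, so the series converges only while |λ| < 1/3.

```python
# Minimal sketch of the Liouville-Neumann iteration, eqs (2)-(4).
# A, b, and lam are illustrative toy choices (not from the paper);
# A has eigenvalues mu = 3 and 1, so the series needs |lam| < 1/3.

def matvec(A, v):
    """Apply the matrix A to the vector v (one 'iteration' Ab)."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def liouville_neumann(A, b, lam, terms=200):
    """Partial sum y = b0 + lam*b1 + lam^2*b2 + ... with b_k = A b_{k-1}."""
    y = list(b)
    bk = list(b)
    for k in range(1, terms):
        bk = matvec(A, bk)                                  # eq (3)
        y = [yi + lam ** k * bi for yi, bi in zip(y, bk)]   # eq (4)
    return y

A = [[2.0, 1.0], [1.0, 2.0]]
b = [1.0, 0.0]
lam = 0.2                     # inside the convergence radius 1/3

y = liouville_neumann(A, b, lam)
Ay = matvec(A, y)
residual = [yi - lam * ai - bi for yi, ai, bi in zip(y, Ay, b)]  # eq (1) check
```

For λ = 0.5 the same loop diverges, which is exactly the drawback the text notes for this classical solution.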
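The Schmidt series (5) to (8) can likewise be sketched, assuming a toy symmetric matrix whose eigenpairs are known in closed form (an assumed example, not from the paper). Since this A is symmetric, the adjoint eigenvectors uᵢ* of (6) coincide with the uᵢ; λ is deliberately chosen beyond the Liouville-Neumann radius.

```python
# Minimal sketch of the Schmidt series, eqs (5)-(8), for a toy symmetric
# matrix whose eigenpairs are known exactly (assumed, not from the paper).
# Symmetric A means the adjoint eigenvectors u_i* equal the u_i.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[2.0, 1.0], [1.0, 2.0]]
mus = [3.0, 1.0]                      # eigenvalues mu_i of eq (5)
us = [[1.0, 1.0], [1.0, -1.0]]        # corresponding (unnormalized) u_i

b = [2.0, 1.0]
lam = 0.9                             # well beyond the Liouville-Neumann radius 1/3

cs = [dot(b, u) / dot(u, u) for u in us]                      # eq (7)
y = [sum(c * u[i] / (1.0 - lam * mu)
         for c, u, mu in zip(cs, us, mus)) for i in range(2)] # eq (8)

Ay = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
residual = [yi - lam * ai - bi for yi, ai, bi in zip(y, Ay, b)]  # eq (1) check
```

The price of this generality in λ is visible in the code: the full eigensolution (mus, us) had to be supplied in advance, which is the drawback the text identifies.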
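The S-expansion itself can be illustrated for m = 2 with an arbitrary toy matrix and starting vector (again assumptions for illustration, not the paper's data). The coefficients g₁, g₂ of the identity (9) are found from a small linear system, and the weighted sum of the iterates then reproduces the exact solution of (1) even for a λ far outside the Liouville-Neumann radius.

```python
# Minimal sketch of the S-expansion for m = 2 (toy data, not from the
# paper).  The iterates satisfy b2 + g1*b1 + g2*b0 = 0, eq (9); solving
# for g1, g2 and forming the weighted sum of eq (17) solves eq (1)
# exactly, even far outside the Liouville-Neumann convergence radius.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2.0, 1.0], [1.0, 2.0]]
b0 = [1.0, 2.0]
b1 = matvec(A, b0)                    # iterations, eq (3)
b2 = matvec(A, b1)

# Solve g1*b1 + g2*b0 = -b2 by Cramer's rule (b0, b1 independent here).
det = b1[0] * b0[1] - b1[1] * b0[0]
g1 = (-b2[0] * b0[1] + b2[1] * b0[0]) / det
g2 = (-b2[1] * b1[0] + b2[0] * b1[1]) / det

lam = 5.0                             # far outside the radius 1/3
S0 = 1.0                              # partial sums, eq (13)
S1 = 1.0 + g1 * lam
Gbar = 1.0 + g1 * lam + g2 * lam ** 2 # inverted polynomial, eq (12)

y = [(S1 * b0[i] + S0 * lam * b1[i]) / Gbar for i in range(2)]  # eq (17)

Ay = matvec(A, y)
residual = [yi - lam * ai - bi for yi, ai, bi in zip(y, Ay, b0)]
```

For this choice G(x) turns out to be the characteristic polynomial x² − 4x + 3 of A, so g₁ = −4 and g₂ = 3, and the residual vanishes to rounding error for every λ with Ḡ(λ) ≠ 0; the expansion terminates after m = 2 terms instead of going on endlessly.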
