
Old Dominion University
ODU Digital Commons
Mathematics & Statistics Theses & Dissertations, Winter 1989

Recommended Citation
Hodge, Steven L. "On Vector Sequence Transforms and Acceleration Techniques" (1989). Doctor of Philosophy (PhD), Dissertation, Mathematics & Statistics, Old Dominion University. DOI: 10.25777/fnyf-vb61. https://digitalcommons.odu.edu/mathstat_etds/81

ON VECTOR SEQUENCE TRANSFORMS AND ACCELERATION TECHNIQUES

by

Steven L. Hodge
B.S., August 1982, University of South Alabama, Mobile, AL

A Dissertation Submitted to the Faculty of Old Dominion University in Partial Fulfillment of the Requirements for the Degree of

DOCTOR OF PHILOSOPHY
in
COMPUTATIONAL AND APPLIED MATHEMATICS

December, 1989

Approved by: William D. Lakin (Director)

ABSTRACT

On Vector Sequence Transforms and Acceleration Techniques
Steven Lee Hodge
Old Dominion University, 1989
Director: Dr. William D. Lakin

This dissertation is devoted to the acceleration of convergence of vector sequences, that is, to producing a replacement sequence with a higher rate of convergence than the original. It is assumed that the sequence is generated by a linear matrix iteration x_{i+1} = G x_i + k, where G is an n x n matrix and x_{i+1}, x_i, and k are n x 1 vectors. Acceleration of convergence is obtained when we are able to resolve approximations to low-dimensional invariant subspaces of G which contain large components of the error. When this occurs, simple weighted averages of the iterates x_{i+1}, i = 1, 2, ..., k, where k < n, are used to produce iterates which contain approximately no error in those same low-dimensional invariant subspaces. We begin with simple techniques based upon the resolution of a simple dominant eigenvalue/eigenvector pair and extend the notion to higher-dimensional invariant subspaces. Discussion is given to the use of various subspace iteration methods and their convergence. These ideas are generalized further by solving the eigenelement problem for a projection of G onto an appropriate subspace. The use of Lanczos-type methods for establishing these projections is discussed.

We produce acceleration techniques based on the process of generalized inversion. The relationship between the minimal polynomial extrapolation technique (MPE) for acceleration of convergence and conjugate gradient type methods is explored. Further acceleration techniques are formed from conjugate gradient type techniques and a generalized inverse Newton's method. An exposition is given of accelerations based upon generalizations of rational interpolation and Padé approximation. Acceleration techniques using Sherman-Woodbury-Morrison type formulas are also formulated and suggested as a replacement for the E-transform.
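To make the preceding description concrete, the following minimal Python sketch shows one standard way to realize the minimal polynomial extrapolation (MPE) just mentioned: the extrapolation weights are obtained from a least-squares problem in the first differences of the iterates and are normalized to sum to one. The test matrix G, the choice of iterate window, and the function name mpe are illustrative assumptions only; the sketch is not claimed to reproduce the formulation developed in Chapter 3.

```python
import numpy as np

def mpe(xs):
    """Minimal polynomial extrapolation from the iterates xs[0], ..., xs[k+1].

    A sketch of the standard least-squares formulation: solve for coefficients
    of the first differences, normalize them to sum to one, and return the
    corresponding weighted average of the iterates.
    """
    X = np.column_stack(xs)                   # n x (k+2) array of iterates
    U = np.diff(X, axis=1)                    # first differences u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                       # extrapolation weights (sum to one)
    return X[:, :gamma.size] @ gamma          # weighted average of x_0, ..., x_k

# Hypothetical test: a small linear iteration x_{i+1} = G x_i + k_vec whose matrix
# has three dominant eigenvalues (0.9, 0.8, 0.7); the remaining modes decay quickly.
rng = np.random.default_rng(0)
n = 20
evals = np.concatenate(([0.9, 0.8, 0.7], np.full(n - 3, 0.1)))
V = rng.standard_normal((n, n))
G = V @ np.diag(evals) @ np.linalg.inv(V)       # nonsymmetric test matrix
k_vec = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - G, k_vec)  # fixed point of the iteration

history = [np.zeros(n)]
for _ in range(12):
    history.append(G @ history[-1] + k_vec)

window = history[5:]                            # a handful of late iterates
print("last iterate error:", np.linalg.norm(history[-1] - x_star))
print("MPE error         :", np.linalg.norm(mpe(window) - x_star))
```

Because the error here is dominated by a three-dimensional invariant subspace, a few late iterates suffice for the least-squares weights to remove most of the dominant error, which is exactly the weighted-average behaviour described in the abstract.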
We contrast the effect of several extrapolation techniques drawn from the dissertation on a nonsymmetric linear iteration. We pick the Minimal Polynomial Extrapolation (MPE) as a representative of techniques based on orthogonal residuals, the Vector ε-Algorithm (VEA) as a representative vector interpolation technique, and a technique formulated in this dissertation based on solving a projected eigenproblem (illustrative sketches of MPE and the VEA appear after the abstract and after the table of contents, respectively). The results show the projected eigenproblem technique to be superior for certain iterations.

To Dawne

Acknowledgements

I would like to thank Dr. William D. Lakin and the other members of the committee for their encouragement and friendly advice during the writing of this dissertation. I especially appreciate the mathematical insights provided to me by Dr. John Swetits and Dr. John Tweed. I would like to thank Dr. Stan Weinstein for his extensive proofreading of this dissertation at a time when I had grown weary of it. I must also recognize Barbara Jeffrey for her help with this dissertation and, most of all, for her friendship. I owe a debt of gratitude to the Department of Mathematics at O.D.U., the ICAM program at O.D.U., and the Theoretical Aerodynamics Branch of the Transonics Division at the N.A.S.A. Langley Research Center for the use of their equipment and for monetary support.

My parents and brother never ceased to encourage me, and I thank them. But most of all, I would like to thank my wife Dawne and son Daniel (5 years old) for their strength, patience, love, and understanding during this seemingly infinite project which threatened to rend us apart. In closing, I would also like to thank my daughter Lauren (at present 10 months old and squalling in her crib) who, although she came a little late to be of much help, may in the future like to see her name in print.

Contents

1 Introduction
  1.1 Notation
    1.1.1 Spaces
    1.1.2 Real and Complex Numbers
    1.1.3 Sequences

2 Matrix Eigenvalue and Projection Techniques for Extrapolation
  2.1 Extending Lusternik's Method to a Subspace
    2.1.1 Remarks on Methods Based upon Resolving Eigenspaces
  2.2 Producing an Extrapolation Method from the Simultaneous Iteration Technique
  2.3 Extrapolation with Projection Methods
    2.3.1 Introduction
    2.3.2 Avoiding Amplification by the Incorporation of Eigenvectors
    2.3.3 On Approximate Eigenelements from Projections
  2.4 Extensions to Lanczos Type Methods
3 Extrapolations Based on Generalized Inverses
  3.1 The MPE Method
  3.2 A Generalization of MPE
    3.2.1 On Generalized Inversion
    3.2.2 Weighted Generalized Inverses
  3.3 Pseudoinversion and Descent Methods
    3.3.1 Descent Methods for Extrapolation
    3.3.2 The Steepest Descent Procedure
    3.3.3 Conjugate Direction and Conjugate Gradient Methods
  3.4 A More Sophisticated Gradient Extrapolation
    3.4.1 Planar Conjugate Gradient Acceleration
  3.5 A Generalized Inverse Extrapolation Based on Newton's Method

4 The Annihilation and Suppression of Spectral Terms
  4.1 The Classical Shanks-Schmidt Approach and Generalizations
    4.1.1 Sylvester's Identity
  4.2 Eigenvalue Problems, Analytic Continuation, and Rational Approximation
    4.2.1 Rational and Polynomial Approximation
    4.2.2 Padé Approximation
    4.2.3 On the Evaluation of Padé Approximants
    4.2.4 On Acceleration Methods for Vector Sequences
  4.3 On the Formation of Extrapolations

5 Numerical Experiments with Selected Extrapolations
  5.1 Preconditioning and the Formulation of Iterative Methods
    5.1.1 On the Formation of Classical Relaxation Methods
  5.2 More on the S.O.R. Method
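As a companion to the MPE sketch given after the abstract, and as a concrete reference point for the comparison of extrapolation techniques described there, the following minimal Python sketch implements Wynn's vector ε-algorithm (VEA) with the Samelson inverse and applies it to a small hypothetical nonsymmetric iteration. The test matrix, the window of seven iterates, and the function name vea are illustrative assumptions; the sketch does not reproduce the experiments of Chapter 5 and does not include the projected-eigenproblem technique formulated in the dissertation.

```python
import numpy as np

def vea(xs, m):
    """Wynn's vector epsilon-algorithm with the Samelson inverse.

    Builds the epsilon table column by column from the 2m + 1 iterates in xs
    and returns the entry eps_{2m}^{(0)}, the accelerated approximation.
    """
    prev = [np.zeros_like(xs[0]) for _ in xs]        # column eps_{-1} (all zeros)
    curr = [np.asarray(x, dtype=float) for x in xs]  # column eps_0 (the iterates)
    for _ in range(2 * m):
        nxt = []
        for i in range(len(curr) - 1):
            d = curr[i + 1] - curr[i]
            nxt.append(prev[i + 1] + d / np.dot(d, d))   # Samelson inverse of d
        prev, curr = curr, nxt
    return curr[0]

# Hypothetical nonsymmetric iteration x_{i+1} = G x_i + k_vec whose error is
# dominated by three eigenvalues (0.9, 0.8, 0.7); the remaining modes decay quickly.
rng = np.random.default_rng(1)
n = 20
evals = np.concatenate(([0.9, 0.8, 0.7], np.full(n - 3, 0.1)))
V = rng.standard_normal((n, n))
G = V @ np.diag(evals) @ np.linalg.inv(V)
k_vec = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - G, k_vec)

history = [np.zeros(n)]
for _ in range(11):
    history.append(G @ history[-1] + k_vec)

window = history[5:12]                               # seven consecutive iterates
print("last iterate error:", np.linalg.norm(history[-1] - x_star))
print("VEA error (m = 3) :", np.linalg.norm(vea(window, 3) - x_star))
```

A classical result due to McLeod says that, in exact arithmetic, when the error of the sequence lies in an m-dimensional invariant subspace the entry ε_{2m}^{(0)} reproduces the limit exactly; in this sketch the three dominant modes are removed and only the rapidly decaying remainder is left.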