Acceleration Methods for Slowly Convergent Sequences and Their Applications


Acceleration Methods for Slowly Convergent Sequences and their Applications

Naoki Osada

January 1993

CONTENTS

Introduction
I. Slowly convergent sequences
  1. Asymptotic preliminaries
    1.1 Order symbols and asymptotic expansions
    1.2 The Euler-Maclaurin summation formula
  2. Slowly convergent sequences
    2.1 Order of convergence
    2.2 Linearly convergent sequences
    2.3 Logarithmically convergent sequences
    2.4 Criterion of linearly or logarithmically convergent sequences
  3. Infinite series
    3.1 Alternating series
    3.2 Logarithmically convergent series
  4. Numerical integration
    4.1 Semi-infinite integrals with positive monotonically decreasing integrands
    4.2 Semi-infinite integrals with oscillatory integrands
    4.3 Improper integrals with endpoint singularities
II. Acceleration methods for scalar sequences
  5. Basic concepts
    5.1 A sequence transformation, convergence acceleration, and extrapolation
    5.2 A classification of convergence acceleration methods
  6. The E-algorithm
    6.1 The derivation of the E-algorithm
    6.2 The acceleration theorems of the E-algorithm
  7. The Richardson extrapolation
    7.1 The birth of the Richardson extrapolation
    7.2 The derivation of the Richardson extrapolation
    7.3 Generalizations of the Richardson extrapolation
  8. The ε-algorithm
    8.1 The Shanks transformation
    8.2 The ε-algorithm
    8.3 The asymptotic properties of the ε-algorithm
    8.4 Numerical examples of the ε-algorithm
  9. Levin's transformations
    9.1 The derivation of the Levin T transformation
    9.2 The convergence theorem of the Levin transformations
    9.3 The d-transformation
  10. The Aitken δ² process and its modifications
    10.1 The acceleration theorem of the Aitken δ² process
    10.2 The derivation of the modified Aitken δ² formula
    10.3 The automatic modified Aitken δ² formula
  11. Lubkin's W transformation
    11.1 The derivation of Lubkin's W transformation
    11.2 The exact and acceleration theorems of the W transformation
    11.3 The iteration of the W transformation
    11.4 Numerical examples of the iterated W transformation
  12. The ρ-algorithm
    12.1 The reciprocal differences and the ρ-algorithm
    12.2 The asymptotic behavior of the ρ-algorithm
  13. Generalizations of the ρ-algorithm
    13.1 The generalized ρ-algorithm
    13.2 The automatic generalized ρ-algorithm
  14. Comparisons of acceleration methods
    14.1 Sets of sequences
    14.2 Test series
    14.3 Numerical results
    14.4 Extraction
    14.5 Conclusions
  15. Application to numerical integration
    15.1 Introduction
    15.2 Application to semi-infinite integrals
    15.3 Application to improper integrals
Conclusions
References
Appendix
  A. Asymptotic formulae of the Aitken δ² process
  B. An asymptotic formula of Lubkin's W transformation
FORTRAN program
  The automatic generalized ρ-algorithm
  The automatic modified Aitken δ² formula

INTRODUCTION

Sequence transformations

Convergent numerical sequences occur quite often in natural science and engineering. Some of these sequences converge very slowly, and their limits cannot be obtained without a suitable convergence acceleration method. This is the raison d'être of the study of convergence acceleration methods.

A convergence acceleration method is usually represented as a sequence transformation. Let S and T be sets of real sequences. A mapping T : S → T is called a sequence transformation, and we write (t_n) = T((s_n)) for (s_n) ∈ S.

Let T : S → T be a sequence transformation and (s_n) ∈ S. T accelerates (s_n) if

    lim_{n→∞} (t_n − s) / (s_{σ(n)} − s) = 0,

where σ(n) is the greatest index used in the computation of t_n.

An illustration: the Aitken δ² process

The most famous sequence transformation is the Aitken δ² process, defined by

    t_n = s_n − (s_{n+1} − s_n)² / (s_{n+2} − 2s_{n+1} + s_n) = s_n − (Δs_n)² / (Δ²s_n),   (1)

where (s_n) is a scalar sequence. As C. Brezinski pointed out¹, the first proposer of the δ² process was the greatest Japanese mathematician Takakazu Seki 関孝和 (or Kōwa Seki, 1642?-1708). Seki used the δ² process in computing π in Katsuyō Sampō vol. IV 括要算法 巻四, which was edited by his disciple Murahide Araki 荒木村英 in 1712. Let s_n be the perimeter of the polygon with 2^n sides inscribed in a circle of diameter one. From

    s_15 = 3.14159 26487 76985 6708,
    s_16 = 3.14159 26523 86591 3571,
    s_17 = 3.14159 26532 88992 7759,

Seki computed

    t_15 = s_16 + (s_16 − s_15)(s_17 − s_16) / ((s_16 − s_15) − (s_17 − s_16))   (2)
         = 3.14159 26535 89793 2476,

and he concluded π = 3.14159 26535 89.² The formula (2) is nothing but the δ² process. Seki obtained seventeen-figure accuracy from s_15, s_16 and s_17, whose accuracy is less than ten figures.

¹ C. Brezinski, History of continued fractions and Padé approximants, Springer-Verlag, Berlin, 1991, p. 90.

Seki did not explain the reason for (2), but Yoshisuke Matsunaga 松永良弼 (1692?-1744), a disciple of Murahide Araki, explained it in Kigenkai 起源解, an annotated edition of Katsuyō Sampō, as follows. Suppose that b = a + ar and c = a + ar + ar². Then

    b + (b − a)(c − b) / ((b − a) − (c − b)) = a / (1 − r),

the sum of the geometric series a + ar + ar² + ⋯.³

It still remains a mystery how Seki derived the δ² process, but Seki's application can be explained as follows. Generally, if a sequence satisfies

    s_n ~ s + c_1 λ_1^n + c_2 λ_2^n + ⋯,

where 1 > λ_1 > λ_2 > ⋯ > 0, then t_n in (1) satisfies

    t_n ~ s + c_2 ((λ_1 − λ_2)/(λ_1 − 1))² λ_2^n.   (3)

This result was proved independently by J. W. Schmidt [48] and P. Wynn [61] in 1966.
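Seki's computation is easy to reproduce in double-precision arithmetic. The sketch below (Python; the function and variable names are my own, not from the thesis) generates s_15, s_16, s_17 from the exact perimeter formula s_n = 2^n sin(π/2^n) and applies formula (2) to them:

```python
import math

def aitken_delta2(s0, s1, s2):
    # One step of the Aitken delta^2 process, written in the difference
    # form (2) used by Seki: t = s1 + d1*d2/(d1 - d2).
    d1, d2 = s1 - s0, s2 - s1
    return s1 + d1 * d2 / (d1 - d2)

# Perimeter of the regular 2^n-gon inscribed in a circle of diameter one.
s = {n: 2**n * math.sin(math.pi / 2**n) for n in (15, 16, 17)}

t15 = aitken_delta2(s[15], s[16], s[17])
print(abs(s[17] - math.pi))  # error of the raw sequence, roughly 3e-10
print(abs(t15 - math.pi))    # error after one delta^2 step: close to machine precision
```

One δ² step gains several orders of magnitude of accuracy here because the dominant error term c_1 λ_1^n (with λ_1 = 1/4) is eliminated, exactly as the asymptotic result (3) predicts.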
Since Seki's sequence (s_n) satisfies

    s_n = 2^n sin(π/2^n) = π + Σ_{j=1}^{∞} ((−1)^j π^{2j+1} / (2j+1)!) (2^{−2j})^n,

(3) implies that

    t_n ~ π + (π⁵/5!) (1/16)^{n+1}.

² A. Hirayama, K. Shimodaira, and H. Hirose (eds.), Takakazu Seki's collected works, English translation by J. Sudo, Osaka Kyoiku Tosho, 1974, pp. 57-58.
³ M. Fujiwara, History of mathematics in Japan before the Meiji era, vol. II (in Japanese), under the auspices of the Japan Academy, Iwanami, 1956 (= 日本学士院編, 藤原松三郎著, 明治前日本数学史, 岩波書店), p. 180.

In 1926, A. C. Aitken [1] applied the δ² process iteratively to find the dominant root of an algebraic equation, and so the process is now named after him. He used (1) repeatedly as follows:

    T_0^{(n)} = s_n,  n ∈ N,
    T_{k+1}^{(n)} = T_k^{(n)} − (T_k^{(n+1)} − T_k^{(n)})² / (T_k^{(n+2)} − 2T_k^{(n+1)} + T_k^{(n)}),  k = 0, 1, …; n ∈ N.

This algorithm is called the iterated Aitken δ² process.

Derivation of sequence transformations

Many sequence transformations are designed to be exact for sequences of the form

    s_n = s + c_1 g_1(n) + ⋯ + c_k g_k(n)  for all n,   (4)

where s, c_1, …, c_k are unknown constants and g_j(n) (j = 1, …, k) are known functions of n. Since s is the solution of the system of linear equations

    s_{n+i} = s + c_1 g_1(n+i) + ⋯ + c_k g_k(n+i),  i = 0, …, k,

the sequence transformation (s_n) ↦ (t_n) defined by the ratio of determinants

                   | s_n      s_{n+1}    ⋯  s_{n+k}  |
                   | g_1(n)   g_1(n+1)   ⋯  g_1(n+k) |
                   |   ⋮         ⋮              ⋮    |
                   | g_k(n)   g_k(n+1)   ⋯  g_k(n+k) |
    t_n = E_k^{(n)} = ─────────────────────────────────
                   | 1        1          ⋯  1        |
                   | g_1(n)   g_1(n+1)   ⋯  g_1(n+k) |
                   |   ⋮         ⋮              ⋮    |
                   | g_k(n)   g_k(n+1)   ⋯  g_k(n+k) |

is exact for the model sequence (4). This sequence transformation includes many famous sequence transformations as special cases:

(i) The Aitken δ² process: k = 1 and g_1(n) = Δs_n.
(ii) The Richardson extrapolation: g_j(n) = x_n^j, where (x_n) is an auxiliary sequence.
(iii) Shanks' transformation: g_j(n) = Δs_{n+j−1}.
(iv) The Levin u-transformation: g_j(n) = n^{1−j} Δs_{n−1}.

In 1979 and 1980, T. Håvie [20] and C. Brezinski [10] independently gave a recursive algorithm for the computation of E_k^{(n)}, which is called the E-algorithm. By construction, the E-algorithm accelerates sequences having an asymptotic expansion of the form

    s_n ~ s + Σ_{j=1}^{∞} c_j g_j(n),   (5)

where s, c_1, c_2, … are unknown constants and (g_j(n)) is a known asymptotic scale. More precisely, in 1990, A. Sidi [53] proved that if the E-algorithm is applied to the sequence (5), then for fixed k,

    (E_k^{(n)} − s) / (E_{k−1}^{(n)} − s) = O(g_{k+1}(n) / g_k(n))  as n → ∞.

Some sequence transformations are designed to accelerate sequences having a certain asymptotic property. For example, suppose (s_n) satisfies

    lim_{n→∞} (s_{n+1} − s) / (s_n − s) = λ.   (6)

The Aitken δ² process is also obtained by solving

    (s_{n+2} − s) / (s_{n+1} − s) = (s_{n+1} − s) / (s_n − s)

for the unknown s. Such a method for obtaining a sequence transformation from a formula with a limit was proposed by C. Kowalewski [25] in 1981 and is designated the technique du sous-ensemble, or TSE for short. Recently, the TSE has been formulated by N. Osada [40].

When −1 ≤ λ < 1 and λ ≠ 0 in (6), the sequence (s_n) is said to be linearly convergent. In 1964, P. Henrici [21] proved that the δ² process accelerates any linearly convergent sequence. When |λ| > 1 in (6), the sequence (s_n) diverges but (t_n) converges to s. In this case s is called the antilimit of (s_n).
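The recursive E-algorithm is short enough to transcribe directly. The following sketch (Python; the function name, data layout, and test sequence are my own choices, not Osada's) computes E_K^{(0)} from the terms s_0, …, s_N and the auxiliary functions g_1, …, g_K, and is exact, up to rounding, for the model sequence (4):

```python
def e_algorithm(s, g):
    """Recursive E-algorithm of Havie and Brezinski (a minimal sketch).

    s -- sequence terms s_0, ..., s_N
    g -- auxiliary functions g_1, ..., g_K, each callable on an integer n
    Returns E_K^{(0)}, exact (up to rounding) when
    s_n = s + c_1 g_1(n) + ... + c_K g_K(n) and N >= K.
    """
    K, N = len(g), len(s) - 1
    E = list(s)                                          # column E_0^{(n)} = s_n
    # gt[j][n] holds the auxiliary quantity g_{k,j+1}^{(n)} at the current stage k.
    gt = [[g[j](n) for n in range(N + 1)] for j in range(K)]
    for k in range(1, K + 1):
        # Ratios r_n = g_{k-1,k}^{(n)} / (g_{k-1,k}^{(n+1)} - g_{k-1,k}^{(n)}).
        r = [gt[k - 1][n] / (gt[k - 1][n + 1] - gt[k - 1][n])
             for n in range(N + 1 - k)]
        # Main rule:      E_k^{(n)} = E_{k-1}^{(n)} - (E_{k-1}^{(n+1)} - E_{k-1}^{(n)}) r_n
        E = [E[n] - (E[n + 1] - E[n]) * r[n] for n in range(N + 1 - k)]
        # Auxiliary rule: same recurrence applied to each row g_{k-1,j}^{(n)}.
        gt = [[gt[j][n] - (gt[j][n + 1] - gt[j][n]) * r[n]
               for n in range(N + 1 - k)] for j in range(K)]
    return E[0]

# Model sequence of form (4): s_n = 5 + 3*(1/2)^n + 2*(1/4)^n.
# With g_1(n) = (1/2)^n and g_2(n) = (1/4)^n, E_2^{(0)} recovers s = 5.
s_terms = [5 + 3 * 0.5**n + 2 * 0.25**n for n in range(3)]
limit = e_algorithm(s_terms, [lambda n: 0.5**n, lambda n: 0.25**n])
print(limit)  # 5.0 up to rounding
```

With g_1(n) = Δs_n and K = 1 the same recursion reduces to one Aitken δ² step, which is special case (i) of the determinant formula above.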
Recommended publications
  • Convergence Acceleration Methods: the Past Decade *
    Journal of Computational and Applied Mathematics 12&13 (1985) 19-36, North-Holland

    Convergence acceleration methods: The past decade *

    Claude Brezinski, Laboratoire d'Analyse Numérique et d'Optimisation, UER IEEA, Université de Lille I, 59655 Villeneuve d'Ascq Cedex, France

    Abstract: The aim of this paper is to present a review of the most significant results obtained during the past ten years on convergence acceleration methods. It is divided into four sections dealing respectively with the theory of sequence transformations, the algorithms for such transformations, the practical problems related to their implementation, and their applications to various subjects of numerical analysis.

    Keywords: Convergence acceleration, extrapolation.

    Convergence acceleration has changed much these last ten years, not only because important new results, algorithms and applications have been added to those yet existing, but mainly because our point of view on the subject completely changed thanks to the deep understanding that arose from the illuminating theoretical studies which were carried out. It is the purpose of this paper to present a 'state of the art' on convergence acceleration methods. It has been divided into four main sections. The first one deals with the theory of sequence transformations; the second is devoted to the algorithms for sequence transformations. The third treats the practical problems involved in the use of convergence acceleration methods, and the fourth presents some applications to numerical analysis. In such a review it is, of course, impossible to analyze all the results which have been obtained, and thus I had to make a choice among the contributions I know.
  • Resummation of QED Perturbation Series by Sequence Transformations and the Prediction of Perturbative Coefficients
    Resummation of QED Perturbation Series by Sequence Transformations and the Prediction of Perturbative Coefficients

    U. D. Jentschura,1,* J. Becher,1 E. J. Weniger,2 and G. Soff1
    1Institut für Theoretische Physik, TU Dresden, D-01062 Dresden, Germany
    2Institut für Physikalische und Theoretische Chemie, Universität Regensburg, D-93040 Regensburg, Germany
    (November 9, 1999)

    We propose a method for the resummation of divergent perturbative expansions in quantum electrodynamics and related field theories. The method is based on a nonlinear sequence transformation and uses as input data only the numerical values of a finite number of perturbative coefficients. The results obtained in this way are, for alternating series, superior to those obtained using Padé approximants. The nonlinear sequence transformation fulfills an accuracy-through-order relation and can be used to predict perturbative coefficients. In many cases, these predictions are closer to available analytic results than predictions obtained using the Padé method. Padé approximants have also been used for the prediction of unknown perturbative coefficients in quantum field theory [6-8]. The [l/m] Padé approximant to the quantity P(g) represented by the power series (1) is the ratio of two polynomials P_l(g) and Q_m(g) of degree l and m, respectively:

        [l/m]_P(g) = P_l(g) / Q_m(g) = (p_0 + p_1 g + ⋯ + p_l g^l) / (1 + q_1 g + ⋯ + q_m g^m).

    The polynomials P_l(g) and Q_m(g) are constructed so that
  • Practical Extrapolation Methods: Theory and Applications
    Practical Extrapolation Methods

    An important problem that arises in many scientific and engineering applications is that of approximating limits of infinite sequences {A_m}. In most cases, these sequences converge very slowly. Thus, to approximate their limits with reasonable accuracy, one must compute a large number of the terms of {A_m}, and this is generally costly. These limits can be approximated economically and with high accuracy by applying suitable extrapolation (or convergence acceleration) methods to a small number of terms of {A_m}. This book is concerned with the coherent treatment, including derivation, analysis, and applications, of the most useful scalar extrapolation methods. The methods it discusses are geared toward problems that commonly arise in scientific and engineering disciplines. It differs from existing books on the subject in that it concentrates on the most powerful nonlinear methods, presents in-depth treatments of them, and shows which methods are most effective for different classes of practical nontrivial problems; it also shows how to fine-tune these methods to obtain the best numerical results. This book is intended to serve as a state-of-the-art reference on the theory and practice of extrapolation methods. It should be of interest to mathematicians interested in the theory of the relevant methods and serve applied scientists and engineers as a practical guide for applying speed-up methods in the solution of difficult computational problems. Avram Sidi is Professor of Numerical Analysis in the Computer Science Department at the Technion-Israel Institute of Technology and holds the Technion Administration Chair in Computer Science. He has published extensively in various areas of numerical analysis and approximation theory and in journals such as Mathematics of Computation, SIAM Review, SIAM Journal on Numerical Analysis, Journal of Approximation Theory, Journal of Computational and Applied Mathematics, Numerische Mathematik, and Journal of Scientific Computing.
  • Conservative Sequence-To-Series Transformation Matrices
    MATHEMATICS

    CONSERVATIVE SEQUENCE-TO-SERIES TRANSFORMATION MATRICES

    by F. Gerrish

    (Communicated by Prof. J. F. Koksma at the meeting of September 29, 1956)

    The three methods (sequence-to-sequence, series-to-sequence, series-to-series) of defining generalized limits or generalized sums by infinite matrices are here complemented by a fourth (sequence-to-series); and a new class of matrices (φ-matrices) giving conservative transformations is introduced (§2), together with its regular sub-class (η-matrices). Products of these with each other and with the known types K, T, β, γ, δ, α (§1) of summability matrix are investigated and, in all cases when the product always exists, its type is ascertained. The results are tabulated at the end of §4. Associativity of triple products of summability matrices is discussed (§5) in the cases when the product always exists for both bracketings. The work reveals useful new subclasses of the β- and δ-matrices, which are generalizations of the β₀- and δ₀-matrices (§1) defined in a recent paper [4]. Some simple inclusion relations between the various classes of matrices are obtained in §3 and used in subsequent sections.

    Notation. Throughout this paper all summations are from 1 to ∞, unless otherwise stated; thus Σ_k means Σ_{k=1}^∞. Likewise lim_n will mean lim_{n→∞}. In both cases the variable will be omitted when it is unambiguous. Given an infinite matrix X, M(X) will denote some fixed positive number associated with X.

    1. Introduction

    Two well-known methods of defining the generalized limit of a sequence and the generalized sum of a series respectively by an infinite matrix ([1], 59, 65) give rise to the following matrix classes: 1.1.
  • Resummation of QED Perturbation Series by Sequence Transformations and the Prediction of Perturbative Coefficients
    Missouri University of Science and Technology, Scholars' Mine, Physics Faculty Research & Creative Works, 01 Sep 2000

    Resummation of QED Perturbation Series by Sequence Transformations and the Prediction of Perturbative Coefficients

    Ulrich D. Jentschura (Missouri University of Science and Technology), Jens Becher, Ernst Joachim Weniger, Gerhard Soff

    Recommended Citation: U. D. Jentschura et al., "Resummation of QED Perturbation Series by Sequence Transformations and the Prediction of Perturbative Coefficients," Physical Review Letters, vol. 85, no. 12, pp. 2446-2449, American Physical Society (APS), Sep 2000. The definitive version is available at https://doi.org/10.1103/PhysRevLett.85.2446

    VOLUME 85, NUMBER 12, PHYSICAL REVIEW LETTERS, 18 September 2000

    U. D. Jentschura,1,* J. Becher,1 E. J. Weniger,2,3 and G. Soff1
    1Institut für Theoretische Physik, TU Dresden, D-01062 Dresden, Germany
    2Max-Planck-Institut für Physik Komplexer Systeme, Nöthnitzer Straße 38-40, 01187 Dresden, Germany
    3Institut für Physikalische und Theoretische Chemie, Universität Regensburg, D-93040 Regensburg, Germany
    (Received 10 November 1999)

    We propose a method for the resummation of divergent perturbative expansions in quantum electrodynamics and related field theories.
  • Scalar Levin-Type Sequence Transformations Herbert H.H
    Journal of Computational and Applied Mathematics 122 (2000) 81-147, www.elsevier.nl/locate/cam

    Scalar Levin-type sequence transformations

    Herbert H.H. Homeier, Institut für Physikalische und Theoretische Chemie, Universität Regensburg, D-93040 Regensburg, Germany

    Received 7 June 1999; received in revised form 15 January 2000

    Abstract: Sequence transformations are important tools for the convergence acceleration of slowly convergent scalar sequences or series and for the summation of divergent series. The basic idea is to construct from a given sequence {{s_n}} a new sequence {{s'_n}} = T({{s_n}}) where each s'_n depends on a finite number of elements s_{n_1}, …, s_{n_m}. Often, the s_n are the partial sums of an infinite series. The aim is to find transformations such that {{s'_n}} converges faster than (or sums) {{s_n}}. Transformations T({{s_n}}, {{ω_n}}) that depend not only on the sequence elements or partial sums s_n but also on an auxiliary sequence of so-called remainder estimates ω_n are of Levin-type if they are linear in the s_n and nonlinear in the ω_n. Such remainder estimates provide an easy-to-use possibility to use asymptotic information on the problem sequence for the construction of highly efficient sequence transformations. As shown first by Levin, it is possible to obtain such asymptotic information easily for large classes of sequences in such a way that the ω_n are simple functions of a few sequence elements s_n. Then, nonlinear sequence transformations are obtained. Special cases of such Levin-type transformations belong to the most powerful currently known extrapolation methods for scalar sequences and series.
  • Generalizations of Aitken's Process for Accelerating the Convergence Of
    Volume 26, N. 2, pp. 171-189, 2007. Copyright © 2007 SBMAC, ISSN 0101-8205, www.scielo.br/cam

    Generalizations of Aitken's process for accelerating the convergence of sequences

    CLAUDE BREZINSKI1 and MICHELA REDIVO ZAGLIA2
    1Laboratoire Paul Painlevé, UMR CNRS 8524, UFR de Mathématiques Pures et Appliquées, Université des Sciences et Technologies de Lille, 59655 Villeneuve d'Ascq cedex, France
    2Università degli Studi di Padova, Dipartimento di Matematica Pura ed Applicata, Via Trieste 63, 35121 Padova, Italy
    E-mails: [email protected] / [email protected]

    Abstract. When a sequence or an iterative process is slowly converging, a convergence acceleration process has to be used. It consists in transforming the slowly converging sequence into a new one which, under some assumptions, converges faster to the same limit. In this paper, new scalar sequence transformations having a kernel (the set of sequences transformed into a constant sequence) generalizing the kernel of Aitken's Δ² process are constructed. Then, these transformations are extended to vector sequences. They also lead to new fixed point methods which are studied.

    Mathematical subject classification: Primary: 65B05, 65B99; Secondary: 65H10.
    Key words: convergence acceleration, Aitken process, extrapolation, fixed point methods.

    1 Introduction

    When a sequence (S_n) of real or complex numbers is slowly converging, it can be transformed into a new sequence (T_n) by a sequence transformation. Aitken's Δ² process and Richardson extrapolation (which gives rise to Romberg's method for accelerating the convergence of the trapezoidal rule) are the most well known sequence transformations.
  • The Simplified Topological Ε–Algorithms for Accelerating
    The simplified topological ε-algorithms for accelerating sequences in a vector space

    Claude Brezinski, Michela Redivo-Zaglia

    August 21, 2018

    Abstract: When a sequence of numbers is slowly converging, it can be transformed into a new sequence which, under some assumptions, could converge faster to the same limit. One of the most well-known sequence transformations is the Shanks transformation, which can be recursively implemented by the ε-algorithm of Wynn. This transformation and this algorithm have been extended (in two different ways) to sequences of elements of a topological vector space E. In this paper, we present new algorithms for implementing these topological Shanks transformations. They no longer require the manipulation of elements of the algebraic dual space E* of E, nor the use of the duality product in the rules of the algorithms; they need the storage of fewer elements of E, and the stability is improved. They also allow us to prove convergence and acceleration results for some types of sequences. Various applications involving sequences of vectors or matrices show the interest of the new algorithms.

    Keywords: Convergence acceleration, extrapolation, matrix equations, matrix functions, iterative methods.
    AMS: 65B05, 65F10, 65F30, 65F60, 65J10, 65J15, 65Q20.

    arXiv:1402.2473v2 [math.NA] 12 Feb 2014

    1 Presentation

    Let (S_n) be a sequence of elements of a vector space E on a field K (R or C). If the sequence (S_n) is slowly converging to its limit S when n tends to infinity, it can be transformed, by a sequence transformation, into a new sequence converging, under some assumptions, faster to the same limit.
  • Convergence Acceleration During the 20Th Century C
    Journal of Computational and Applied Mathematics 122 (2000) 1-21, www.elsevier.nl/locate/cam

    Convergence acceleration during the 20th century

    C. Brezinski, Laboratoire d'Analyse Numérique et d'Optimisation, UFR IEEA, Université des Sciences et Technologies de Lille, 59655 Villeneuve d'Ascq cedex, France

    Received 8 March 1999; received in revised form 12 October 1999

    1. Introduction

    In numerical analysis many methods produce sequences, for instance iterative methods for solving systems of equations, methods involving series expansions, discretization methods (that is, methods depending on a parameter such that the approximate solution tends to the exact one when the parameter tends to zero), perturbation methods, etc. Sometimes, the convergence of these sequences is slow and their effective use is quite limited. Convergence acceleration methods consist of transforming a slowly converging sequence (S_n) into a new sequence (T_n) converging to the same limit faster than the initial one. Among such sequence transformations, the most well known are certainly Richardson's extrapolation algorithm and Aitken's Δ² process. All known methods are constructed by extrapolation, and they are often called extrapolation methods. The idea consists of interpolating the terms S_n, S_{n+1}, …, S_{n+k} of the sequence to be transformed by a sequence satisfying a certain relationship depending on parameters. This set of sequences is called the kernel of the transformation, and every sequence of this set is transformed into a constant sequence by the transformation under consideration. For example, as we will see below, the kernel of Aitken's Δ² process is the set of sequences satisfying, for all n, a_0(S_n − S) + a_1(S_{n+1} − S) = 0, where a_0 and a_1 are parameters such that a_0 + a_1 ≠ 0.
  • Matrix Shanks Transformations
    Electronic Journal of Linear Algebra, ISSN 1081-3810. A publication of the International Linear Algebra Society. Volume 35, pp. 248-265, June 2019.

    MATRIX SHANKS TRANSFORMATIONS

    Claude Brezinski and Michela Redivo-Zaglia

    Abstract. Shanks' transformation is a well-known sequence transformation for accelerating the convergence of scalar sequences. It has been extended to the case of sequences of vectors and sequences of square matrices satisfying a linear difference equation with scalar coefficients. In this paper, a more general extension to the matrix case, where the matrices can be rectangular and satisfy a difference equation with matrix coefficients, is proposed and studied. In the particular case of square matrices, the new transformation can be recursively implemented by the matrix ε-algorithm of Wynn. Then, the transformation is related to matrix Padé-type and Padé approximants. Numerical experiments showing the interest of this transformation end the paper.

    Key words. Acceleration methods, Sequence transformations, Shanks transformation, Padé approximation.
    AMS subject classifications. 65B05, 65B99, 65F60, 65H10, 41A21.

    1. Introduction to Shanks transformations. Let (s_n) be a sequence of elements of a vector space E which converges to s. If the convergence is slow, it is possible to transform it into a new sequence of elements of E (or a set of new sequences) which, under some assumptions, converges faster to the same limit. Such sequence transformations, also known as extrapolation methods, have been widely studied and have a bunch of applications: systems of linear and nonlinear equations [8], matrix equations [7,20], matrix functions [8], block linear systems [23], nonlinear Fredholm integral equations [9], …
  • Algorithms for Asymptotic Extrapolation∗
    Algorithms for asymptotic extrapolation

    Joris van der Hoeven, CNRS, Département de Mathématiques, Bâtiment 425, Université Paris-Sud, 91405 Orsay Cedex, France. Email: [email protected]. Web: http://www.math.u-psud.fr/~vdhoeven

    December 9, 2008

    Consider a power series f ∈ R[[z]] which is obtained by a precise mathematical construction. For instance, f might be the solution to some differential or functional initial value problem, or the diagonal of the solution to a partial differential equation. In cases when no suitable method is available beforehand for determining the asymptotics of the coefficients f_n, but when many such coefficients can be computed with high accuracy, it would be useful if a plausible asymptotic expansion for f_n could be guessed automatically. In this paper, we will present a general scheme for the design of such "asymptotic extrapolation algorithms". Roughly speaking, using discrete differentiation and techniques from automatic asymptotics, we strip off the terms of the asymptotic expansion one by one. The knowledge of more terms of the asymptotic expansion will then allow us to approximate the coefficients in the expansion with high accuracy.

    Keywords: extrapolation, asymptotic expansion, algorithm, guessing
    A.M.S. subject classification: 41A05, 41A60, 65B05, 68W30

    1. Introduction

    Consider an infinite sequence f_0, f_1, … of real numbers. If f_0, f_1, … are the coefficients of a formal power series f ∈ R[[z]], then it is well-known [Pól37, Wil04, FS96] that a lot of information about the behaviour of f near its dominant singularity can be obtained from the asymptotic behaviour of the sequence f_0, f_1, …. However, if f is the solution to some complicated equation, then it can be hard to compute the asymptotic behaviour using formal methods.
  • Shanks Sequence Transformations and Anderson Acceleration
    SHANKS SEQUENCE TRANSFORMATIONS AND ANDERSON ACCELERATION

    Claude Brezinski, Michela Redivo-Zaglia, and Yousef Saad

    Abstract. This paper presents a general framework for Shanks transformations of sequences of elements in a vector space. It is shown that the Minimal Polynomial Extrapolation (MPE), the Modified Minimal Polynomial Extrapolation (MMPE), the Reduced Rank Extrapolation (RRE), the Vector Epsilon Algorithm (VEA), the Topological Epsilon Algorithm (TEA), and Anderson Acceleration (AA), which are standard general techniques designed for accelerating arbitrary sequences and/or solving nonlinear equations, all fall into this framework. Their properties and their connections with quasi-Newton and Broyden methods are studied. The paper then exploits this framework to compare these methods. In the linear case, it is known that AA and GMRES are 'essentially' equivalent in a certain sense, while GMRES and RRE are mathematically equivalent. This paper discusses the connection between AA, the RRE, the MPE, and other methods in the nonlinear case.

    Key words. Acceleration techniques; sequence transformations; Anderson Acceleration; Reduced Rank Extrapolation; quasi-Newton methods; Broyden methods.
    AMS subject classifications: 65B05, 65B99, 65F10, 65H10.

    1. Introduction. In computational sciences it is often necessary to obtain the limit of a sequence of objects of a vector space (scalars, vectors, matrices, ...) that converges slowly to its limit or even diverges. In some situations, we may be able to obtain a new sequence that converges faster to the same limit by modifying the method that produced the original sequence. However, in many instances, the process by which the sequence is produced is hidden (black box) or too cumbersome for this approach to be practical.