Class Notes Numerical Computation
J. M. Melenk
Wien, WS 2019/20

Contents

1 polynomial interpolation
  1.1 Existence and uniqueness of the polynomial interpolation problem
  1.2 Neville scheme
  1.3 Newton representation of the interpolating polynomial (CSE)
  1.4 Extrapolation as a prime application of the Neville scheme
  1.5 a simple error estimate
  1.6 Extrapolation of functions with additional structure
  1.7 Chebyshev polynomials
    1.7.1 Chebyshev points
    1.7.2 Error bounds for Chebyshev interpolation
    1.7.3 Interpolation with uniform point distribution
  1.8 Splines (CSE)
    1.8.1 Piecewise linear approximation
    1.8.2 the classical cubic spline
    1.8.3 remarks on splines
  1.9 Remarks on Hermite interpolation
  1.10 trigonometric interpolation and FFT (CSE)
    1.10.1 trigonometric interpolation
    1.10.2 FFT
    1.10.3 Properties of the DFT
    1.10.4 application: fast convolution of sequences
2 Numerical integration
  2.1 Newton-Cotes formulas
  2.2 Romberg extrapolation
  2.3 non-smooth integrands and adaptivity
  2.4 Gaussian quadrature
    2.4.1 Legendre polynomials $L_n$ as orthogonal polynomials
    2.4.2 Gaussian quadrature
  2.5 Comments on the trapezoidal rule
  2.6 Quadrature in 2D
    2.6.1 Quadrature on squares
    2.6.2 Quadrature on triangles
    2.6.3 Further comments
3 conditioning and error analysis
  3.1 error measures
  3.2 conditioning and stability of algorithms
  3.3 stability of algorithms
4 Gaussian elimination
  4.1 lower and upper triangular matrices
  4.2 classical Gaussian elimination
    4.2.1 Interpretation of Gaussian elimination as an LU-factorization
  4.3 LU-factorization
    4.3.1 Crout's algorithm for computing the LU-factorization
    4.3.2 banded matrices
    4.3.3 Cholesky-factorization
    4.3.4 skyline matrices
  4.4 Gaussian elimination with pivoting
    4.4.1 Motivation
    4.4.2 Algorithms
    4.4.3 numerical difficulties: choice of the pivoting strategy
  4.5 condition number of a matrix $A$
  4.6 QR-factorization (CSE)
    4.6.1 orthogonal matrices
    4.6.2 QR-factorization by Householder reflections
    4.6.3 QR-factorization with pivoting
    4.6.4 Givens rotations
5 Least Squares
  5.1 Method of the normal equations
  5.2 least squares using QR-factorizations
    5.2.1 QR-factorization
    5.2.2 Solving least squares problems with QR-factorization
  5.3 underdetermined systems
    5.3.1 SVD
    5.3.2 Finding the minimum norm solution using the SVD
    5.3.3 Solution of the least squares problem with the SVD
    5.3.4 Further properties of the SVD
    5.3.5 The Moore-Penrose Pseudoinverse (CSE)
    5.3.6 Further remarks
6 Nonlinear equations and Newton's method
  6.1 Newton's method in 1D
  6.2 convergence of fixed point iterations
  6.3 Newton's method in higher dimensions
  6.4 implementation aspects of Newton methods
  6.5 damped and globalized Newton methods
    6.5.1 damped Newton method
    6.5.2 a digression: descent methods
    6.5.3 globalized Newton method as a descent method
  6.6 Quasi-Newton methods (CSE)
    6.6.1 Broyden method
  6.7 unconstrained minimization problems (CSE)
    6.7.1 gradient methods
    6.7.2 gradient method with quadratic cost function
    6.7.3 trust region methods
7 Eigenvalue problems
  7.1 the power method
  7.2 Inverse Iteration
  7.3 error estimates – stopping criteria
    7.3.1 Bauer-Fike
    7.3.2 remarks on stopping criteria
  7.4 orthogonal Iteration
  7.5 Basic QR-algorithm
  7.6 Jacobi method (CSE)
    7.6.1 Schur representation
    7.6.2 Jacobi method
  7.7 QR-algorithm with shift (CSE)
    7.7.1 further comments on QR
    7.7.2 real matrices
8 Conjugate Gradient method (CG)
  8.1 convergence behavior of CG
  8.2 GMRES (CSE)
    8.2.1 realization of the GMRES method
9 numerical methods for ODEs
  9.1 Euler's method
  9.2 Runge-Kutta methods
    9.2.1 explicit Runge-Kutta methods
    9.2.2 implicit Runge-Kutta methods
    9.2.3 why implicit methods?
    9.2.4 the concept of A-stability (CSE)
  9.3 Multistep methods (CSE)
    9.3.1 Adams-Bashforth methods
    9.3.2 Adams-Moulton methods
    9.3.3 BDF methods
    9.3.4 Remarks on multistep methods
Bibliography

1 polynomial interpolation

goal: given $(x_i, f_i)$, $i = 0, \dots, n$, find $p \in \mathcal{P}_n$ s.t.
$$p(x_i) = f_i, \qquad i = 0, \dots, n. \tag{1.1}$$

applications (examples):
• "Extrapolation": typically $f_i = f(x_i)$ for an (unknown) function $f$. For $x \notin \{x_0, \dots, x_n\}$, the value $p(x)$ yields an approximation to $f(x)$.
• "Dense output"/plotting of $f$, if only the values $f_i = f(x_i)$ are given (or if, e.g., function evaluations are too expensive).
• Approximation of $f$: to integrate or differentiate $f$, integrate or differentiate the interpolating polynomial $p$ instead.

1.1 Existence and uniqueness of the polynomial interpolation problem

Theorem 1.1 (Lagrange interpolation) Let the points ("knots") $x_i$, $i = 0, \dots, n$, be distinct. Then there exists, for all values $(f_i)_{i=0}^n \subset \mathbb{R}$, a unique interpolating polynomial $p \in \mathcal{P}_n$. It is given by

$$p(x) = \sum_{i=0}^{n} f_i \, \ell_i(x), \qquad \ell_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}. \tag{1.2}$$

The polynomials $(\ell_i)_{i=0}^n$ are called the Lagrange basis of the space $\mathcal{P}_n$ w.r.t. the points $(x_i)_{i=0}^n$.

Proof:
1. step: one observes that $\ell_i \in \mathcal{P}_n$, $i = 0, \dots, n$.
2. step: one asserts that $\ell_i(x_j) = \delta_{ij}$, i.e., $\ell_i(x_i) = 1$ and $\ell_i(x_j) = 0$ for $j \neq i$.
3. step: Steps 1+2 imply that $p$ given by (1.2) is a solution to the polynomial interpolation problem.
4. step: Uniqueness: let $p_1, p_2 \in \mathcal{P}_n$ be two interpolating polynomials. Then the difference $p := p_1 - p_2$ is a polynomial of degree $n$ with (at least) $n+1$ zeros. Hence $p \equiv 0$, i.e., $p_1 = p_2$. ✷

Example 1.2 (slide 2) The polynomial $p \in \mathcal{P}_2$ interpolating the data

$$(0, 0), \qquad \left(\frac{\pi}{4}, \frac{\sqrt{2}}{2}\right), \qquad \left(\frac{\pi}{2}, 1\right)$$

is given by

$$p(x) = 0 \cdot \ell_0(x) + \frac{\sqrt{2}}{2} \cdot \ell_1(x) + 1 \cdot \ell_2(x),$$

with

$$\ell_0(x) = \frac{(x - \pi/4)(x - \pi/2)}{(0 - \pi/4)(0 - \pi/2)} = 1 - (1.909\ldots)\,x + (0.8105\ldots)\,x^2,$$
$$\ell_1(x) = \frac{(x - 0)(x - \pi/2)}{(\pi/4 - 0)(\pi/4 - \pi/2)} = (2.546\ldots)\,x - (1.62\ldots)\,x^2,$$
$$\ell_2(x) = \frac{(x - 0)(x - \pi/4)}{(\pi/2 - 0)(\pi/2 - \pi/4)} = -(0.636\ldots)\,x + (0.81\ldots)\,x^2.$$

That is, $p(x) = (1.164\ldots)\,x - (0.3357\ldots)\,x^2$.

Example 1.3 The data of Example 1.2 were chosen to be the values $f_i = \sin(x_i)$, i.e., $f(x) = \sin x$. An approximation to $f'(0) = 1$ can be obtained as $f'(0) \approx p'(0) = 1.164\ldots$
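The Lagrange form (1.2) and the numbers of Examples 1.2 and 1.3 can be checked numerically. The following is a minimal Python sketch for illustration (the helper name `lagrange_interpolant` and the use of a difference quotient for $p'(0)$ are choices made here, not part of the notes):

```python
import math

def lagrange_interpolant(xs, fs):
    """Return the function x -> p(x) = sum_i f_i * l_i(x), with l_i as in (1.2)."""
    def p(x):
        total = 0.0
        for i, (xi, fi) in enumerate(zip(xs, fs)):
            li = 1.0  # l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += fi * li
        return total
    return p

# data of Example 1.2: (0, 0), (pi/4, sqrt(2)/2), (pi/2, 1), i.e. f_i = sin(x_i)
xs = [0.0, math.pi / 4, math.pi / 2]
fs = [math.sin(xi) for xi in xs]
p = lagrange_interpolant(xs, fs)

print(p(math.pi / 4))           # reproduces the data point sqrt(2)/2 = 0.7071...
h = 1e-7
print((p(h) - p(0.0)) / h)      # difference quotient for p'(0): ~1.164, cf. Example 1.3
```

Note that this direct evaluation of (1.2) redoes the products for every new $x$; the Neville scheme below avoids that.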
An approximation to $\int_0^{\pi/2} f(x)\,dx = 1$ is given by $\int_0^{\pi/2} p(x)\,dx = 1.00232\ldots$

1.2 Neville scheme

It is not efficient to evaluate the interpolating polynomial $p$ at a point $x$ based on (1.2), since this involves many (redundant) multiplications when evaluating the $\ell_i$. Traditionally, an interpolating polynomial is evaluated at a point $x$ with the aid of the Neville scheme:

Theorem 1.4 Let $x_0, \dots, x_n$ be distinct knots and let $f_i$, $i = 0, \dots, n$, be the corresponding values. Denote by $p_{j,m} \in \mathcal{P}_m$ the solution of

$$\text{find } p \in \mathcal{P}_m \text{ s.t. } p(x_k) = f_k \text{ for } k = j, j+1, \dots, j+m. \tag{1.3}$$

Then the following recursions hold:

$$p_{j,0} = f_j, \qquad j = 0, \dots, n, \tag{1.4}$$
$$p_{j,m}(x) = \frac{(x - x_j)\, p_{j+1,m-1}(x) - (x - x_{j+m})\, p_{j,m-1}(x)}{x_{j+m} - x_j}, \qquad m \geq 1. \tag{1.5}$$

The solution $p$ of (1.1) is $p(x) = p_{0,n}(x)$.

Proof: (1.4) ✓. For (1.5), let $\pi$ be the right-hand side of (1.5). Then:
• $\pi \in \mathcal{P}_m$
• $\pi(x_j) = p_{j,m-1}(x_j) = f_j$
• $\pi(x_{j+m}) = p_{j+1,m-1}(x_{j+m}) = f_{j+m}$
• for $j+1 \leq i \leq j+m-1$ there holds, since $p_{j+1,m-1}(x_i) = p_{j,m-1}(x_i) = f_i$,
$$\pi(x_i) = \frac{(x_i - x_j)\, f_i - (x_i - x_{j+m})\, f_i}{x_{j+m} - x_j} = \frac{(x_i - x_j - x_i + x_{j+m})\, f_i}{x_{j+m} - x_j} = f_i.$$
Theorem 1.1 implies $\pi = p_{j,m}$. ✷

Theorem 1.4 shows that evaluating $p$ at $x$ can be realized with the following scheme:

x_0      f_0 =: p_{0,0}(x) → p_{0,1}(x) → p_{0,2}(x) → ... → p_{0,n}(x) = p(x)
x_1      f_1 =: p_{1,0}(x) → p_{1,1}(x) → p_{1,2}(x) → ...
  ...
x_{n-2}  f_{n-2} =: p_{n-2,0}(x) → p_{n-2,1}(x) → p_{n-2,2}(x)
x_{n-1}  f_{n-1} =: p_{n-1,0}(x) → p_{n-1,1}(x)
x_n      f_n =: p_{n,0}(x)

⟦here, the operation "→" is realized by formula (1.5): $p_{j,m}$ is computed from $p_{j,m-1}$ (same row) and $p_{j+1,m-1}$ (row below)⟧

slide 3

Exercise 1.5 Formulate explicitly the algorithm that computes (in a 2-dimensional array) the values $p_{i,j}$. How many multiplications (in dependence on $n$) are needed? (It suffices to state $\alpha$ in the complexity bound $O(n^\alpha)$.) The scheme computes the values "column by column".
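The column-by-column computation of the tableau can be sketched as follows; a minimal Python illustration (the function name `neville` and the test data are chosen here, and storing the full array is deliberate, for clarity):

```python
import math

def neville(xs, fs, x):
    """Evaluate the interpolating polynomial at x via the Neville scheme.

    table[j][m] holds p_{j,m}(x): column 0 is the data (1.4), and column m
    is filled from column m-1 using the recursion (1.5).
    """
    n = len(xs) - 1
    table = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        table[j][0] = fs[j]                # (1.4): p_{j,0} = f_j
    for m in range(1, n + 1):              # column by column
        for j in range(n - m + 1):         # triangular: column m has n-m+1 entries
            table[j][m] = ((x - xs[j]) * table[j + 1][m - 1]
                           - (x - xs[j + m]) * table[j][m - 1]) / (xs[j + m] - xs[j])
    return table[0][n]                     # p(x) = p_{0,n}(x)

# data of Example 1.2, evaluated between the knots
xs = [0.0, math.pi / 4, math.pi / 2]
fs = [0.0, math.sqrt(2) / 2, 1.0]
print(neville(xs, fs, math.pi / 3))        # ~0.8508, close to sin(pi/3) = 0.8660...
```

Keeping the whole $(n+1) \times (n+1)$ array is wasteful if only $p(x)$ is needed; the in-place variant below (Algorithm 1.6) overwrites a single vector instead.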
If merely the last value $p(x)$ is required, then one can be more memory-efficient by overwriting the given vector of values:

Algorithm 1.6 (Aitken-Neville scheme)
Input: knots $x_0, \dots, x_n$, vector $y \in \mathbb{R}^{n+1}$ of values, evaluation point $x \in \mathbb{R}$
Output: $p(x)$, where $p$ solves (1.1)

    for m = 1 : n do
        for j = 0 : n − m do                  ⊲ array has triangular form
            y_j := ((x − x_j) y_{j+1} − (x − x_{j+m}) y_j) / (x_{j+m} − x_j)
        end for
    end for
    return y_0

Remark 1.7 Cost of Alg.