Class Notes Numerical Computation

Total Pages: 16

File Type: PDF, Size: 1020 KB

class notes numerical computation
J. M. Melenk
Wien, WS 2019/20

Contents

1 polynomial interpolation ..... 1
  1.1 Existence and uniqueness of the polynomial interpolation problem ..... 1
  1.2 Neville-Scheme ..... 2
  1.3 Newton representation of the interpolating polynomial (CSE) ..... 4
  1.4 Extrapolation as a prime application of the Neville scheme ..... 8
  1.5 a simple error estimate ..... 8
  1.6 Extrapolation of functions with additional structure ..... 11
  1.7 Chebyshev polynomials ..... 12
    1.7.1 Chebyshev points ..... 12
    1.7.2 Error bounds for Chebyshev interpolation ..... 13
    1.7.3 Interpolation with uniform point distribution ..... 15
  1.8 Splines (CSE) ..... 15
    1.8.1 Piecewise linear approximation ..... 16
    1.8.2 the classical cubic spline ..... 16
    1.8.3 remarks on splines ..... 19
  1.9 Remarks on Hermite interpolation ..... 19
  1.10 trigonometric interpolation and FFT (CSE) ..... 21
    1.10.1 trigonometric interpolation ..... 21
    1.10.2 FFT ..... 23
    1.10.3 Properties of the DFT ..... 26
    1.10.4 application: fast convolution of sequences ..... 27
2 Numerical integration ..... 30
  2.1 Newton-Cotes formulas ..... 31
  2.2 Romberg extrapolation ..... 34
  2.3 non-smooth integrands and adaptivity ..... 35
  2.4 Gaussian quadrature ..... 36
    2.4.1 Legendre polynomials L_n as orthogonal polynomials ..... 37
    2.4.2 Gaussian quadrature ..... 39
  2.5 Comments on the trapezoidal rule ..... 41
  2.6 Quadrature in 2D ..... 43
    2.6.1 Quadrature on squares ..... 43
    2.6.2 Quadrature on triangles ..... 43
    2.6.3 Further comments ..... 44
3 conditioning and error analysis ..... 45
  3.1 error measures ..... 45
  3.2 conditioning and stability of algorithms ..... 45
  3.3 stability of algorithms ..... 47
4 Gaussian elimination ..... 49
  4.1 lower and upper triangular matrices ..... 49
  4.2 classical Gaussian elimination ..... 51
    4.2.1 Interpretation of Gaussian elimination as an LU-factorization ..... 52
  4.3 LU-factorization ..... 54
    4.3.1 Crout's algorithm for computing the LU-factorization ..... 54
    4.3.2 banded matrices ..... 56
    4.3.3 Cholesky factorization ..... 58
    4.3.4 skyline matrices ..... 58
  4.4 Gaussian elimination with pivoting ..... 60
    4.4.1 Motivation ..... 60
    4.4.2 Algorithms ..... 61
    4.4.3 numerical difficulties: choice of the pivoting strategy ..... 62
  4.5 condition number of a matrix A ..... 63
  4.6 QR-factorization (CSE) ..... 65
    4.6.1 orthogonal matrices ..... 65
    4.6.2 QR-factorization by Householder reflections ..... 65
    4.6.3 QR-factorization with pivoting ..... 69
    4.6.4 Givens rotations ..... 70
5 Least Squares ..... 73
  5.1 Method of the normal equations ..... 73
  5.2 least squares using QR-factorizations ..... 74
    5.2.1 QR-factorization ..... 74
    5.2.2 Solving least squares problems with QR-factorization ..... 75
  5.3 underdetermined systems ..... 77
    5.3.1 SVD ..... 77
    5.3.2 Finding the minimum norm solution using the SVD ..... 78
    5.3.3 Solution of the least squares problem with the SVD ..... 78
    5.3.4 Further properties of the SVD ..... 79
    5.3.5 The Moore-Penrose Pseudoinverse (CSE) ..... 79
    5.3.6 Further remarks ..... 81
6 Nonlinear equations and Newton's method ..... 82
  6.1 Newton's method in 1D ..... 82
  6.2 convergence of fixed point iterations ..... 83
  6.3 Newton's method in higher dimensions ..... 85
  6.4 implementation aspects of Newton methods ..... 86
  6.5 damped and globalized Newton methods ..... 87
    6.5.1 damped Newton method ..... 87
    6.5.2 a digression: descent methods ..... 87
    6.5.3 globalized Newton method as a descent method ..... 88
  6.6 Quasi-Newton methods (CSE) ..... 90
    6.6.1 Broyden method ..... 90
  6.7 unconstrained minimization problems (CSE) ..... 92
    6.7.1 gradient methods ..... 92
    6.7.2 gradient method with quadratic cost function ..... 93
    6.7.3 trust region methods ..... 94
7 Eigenvalue problems ..... 96
  7.1 the power method ..... 96
  7.2 Inverse Iteration ..... 98
  7.3 error estimates - stopping criteria ..... 100
    7.3.1 Bauer-Fike ..... 100
    7.3.2 remarks on stopping criteria ..... 100
  7.4 orthogonal Iteration ..... 102
  7.5 Basic QR-algorithm ..... 103
  7.6 Jacobi method (CSE) ..... 105
    7.6.1 Schur representation ..... 105
    7.6.2 Jacobi method ..... 105
  7.7 QR-algorithm with shift (CSE) ..... 107
    7.7.1 further comments on QR ..... 110
    7.7.2 real matrices ..... 110
8 Conjugate Gradient method (CG) ..... 111
  8.1 convergence behavior of CG ..... 114
  8.2 GMRES (CSE) ..... 116
    8.2.1 realization of the GMRES method ..... 117
9 numerical methods for ODEs ..... 121
  9.1 Euler's method ..... 121
  9.2 Runge-Kutta methods ..... 123
    9.2.1 explicit Runge-Kutta methods ..... 123
    9.2.2 implicit Runge-Kutta methods ..... 126
    9.2.3 why implicit methods? ..... 126
    9.2.4 the concept of A-stability (CSE) ..... 127
  9.3 Multistep methods (CSE) ..... 133
    9.3.1 Adams-Bashforth methods ..... 133
    9.3.2 Adams-Moulton methods ..... 134
    9.3.3 BDF methods ..... 135
    9.3.4 Remarks on multistep methods ..... 136
Bibliography ..... 137

1 polynomial interpolation

goal: given (x_i, f_i), i = 0, ..., n, find p ∈ P_n s.t.

    p(x_i) = f_i,   i = 0, ..., n.                                                        (1.1)

applications (examples):

• "Extrapolation": typically f_i = f(x_i) for an (unknown) function f. For x ∉ {x_0, ..., x_n} the value p(x) yields an approximation to f(x).
• "Dense output/plotting of f", if only the values f_i = f(x_i) are given (or, e.g., function evaluations are too expensive).
• Approximation of f: integration or differentiation of f → integrate or differentiate the interpolating polynomial p.

1.1 Existence and uniqueness of the polynomial interpolation problem

Theorem 1.1 (Lagrange interpolation) Let the points ("knots") x_i, i = 0, ..., n, be distinct. Then there exists, for all values (f_i)_{i=0}^n ⊂ R, a unique interpolating polynomial p ∈ P_n. It is given by

    p(x) = \sum_{i=0}^{n} f_i \, \ell_i(x), \qquad \ell_i(x) = \prod_{j=0,\, j \ne i}^{n} \frac{x - x_j}{x_i - x_j}          (1.2)

The polynomials (ℓ_i)_{i=0}^n are called the Lagrange basis of the space P_n w.r.t. the points (x_i)_{i=0}^n.

Proof:
1. step: one observes that ℓ_i ∈ P_n, i = 0, ..., n.
2. step: one asserts that ℓ_i(x_j) = δ_{ij}, i.e., ℓ_i(x_i) = 1 and ℓ_i(x_j) = 0 for j ≠ i.
3. step: Steps 1+2 imply that p given by (1.2) is a solution to the polynomial interpolation problem.
4. step: Uniqueness: Let p_1, p_2 ∈ P_n be two interpolating polynomials. Then the difference p := p_1 − p_2 is a polynomial of degree n with (at least) n + 1 zeros. Hence p ≡ 0, i.e., p_1 = p_2. ✷
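Formula (1.2) can be implemented directly. The following Python sketch is not part of the notes; the function name lagrange_eval, the use of NumPy, and the evaluation point 0.3 are illustrative choices. It evaluates p(x) by summing f_i ℓ_i(x), here for the sine data that reappears in Example 1.2 below.

    import numpy as np

    def lagrange_eval(xs, fs, x):
        """Evaluate the interpolating polynomial p at x in the Lagrange form (1.2).
        xs: distinct knots x_0, ..., x_n; fs: values f_0, ..., f_n."""
        xs = np.asarray(xs, dtype=float)
        fs = np.asarray(fs, dtype=float)
        p = 0.0
        for i in range(len(xs)):
            # ell_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
            ell_i = np.prod([(x - xs[j]) / (xs[i] - xs[j])
                             for j in range(len(xs)) if j != i])
            p += fs[i] * ell_i
        return p

    # data (0, 0), (pi/4, sqrt(2)/2), (pi/2, 1), cf. Example 1.2 below
    xs = [0.0, np.pi / 4, np.pi / 2]
    fs = [0.0, np.sqrt(2) / 2, 1.0]
    print(lagrange_eval(xs, fs, 0.3))   # approximates sin(0.3)

Each call recomputes all ℓ_i from scratch; the Neville scheme of Section 1.2 organizes the evaluation via the recursion (1.5) instead of forming the ℓ_i explicitly.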
Example 1.2 (slide 2) The polynomial p ∈ P_2 interpolating the data

    (0, 0),   (π/4, √2/2),   (π/2, 1)

is given by

    p(x) = 0 · ℓ_0(x) + (√2/2) · ℓ_1(x) + 1 · ℓ_2(x),

    \ell_0(x) = \frac{(x - \pi/4)(x - \pi/2)}{(0 - \pi/4)(0 - \pi/2)} = 1 - (1.909...)x + (0.8105...)x^2,

    \ell_1(x) = \frac{(x - 0)(x - \pi/2)}{(\pi/4 - 0)(\pi/4 - \pi/2)} = (2.546...)x - (1.62...)x^2,

    \ell_2(x) = \frac{(x - 0)(x - \pi/4)}{(\pi/2 - 0)(\pi/2 - \pi/4)} = -(0.636...)x + (0.81...)x^2.

That is, p(x) = (1.164...)x − (0.3357...)x^2.

Example 1.3 The data of Example 1.2 were chosen to be the values f_i = sin(x_i), i.e., f(x) = sin x. An approximation to f'(0) = 1 could be obtained as f'(0) ≈ p'(0) = 1.164.... An approximation to ∫_0^{π/2} f(x) dx = 1 is given by ∫_0^{π/2} p(x) dx = 1.00232....
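The numbers in Example 1.3 are easy to check. The sketch below is not from the notes; it uses NumPy's legacy polynomial helpers (np.polyfit, np.polyder, np.polyint). Fitting a degree-2 polynomial through the three data points reproduces the interpolant p (up to rounding), and its derivative at 0 and its integral over [0, π/2] should come out close to the values 1.164... and 1.00232... quoted above.

    import numpy as np

    xs = np.array([0.0, np.pi / 4, np.pi / 2])
    fs = np.sin(xs)                     # f_i = sin(x_i), as in Example 1.3

    c = np.polyfit(xs, fs, 2)           # degree-2 fit through 3 points = interpolant p
    dp = np.polyder(c)                  # coefficients of p'
    P = np.polyint(c)                   # an antiderivative of p

    print(np.polyval(dp, 0.0))                             # p'(0), approximates f'(0) = 1
    print(np.polyval(P, np.pi / 2) - np.polyval(P, 0.0))   # integral of p over [0, pi/2]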
1.2 Neville-Scheme

It is not efficient to evaluate the interpolating polynomial p(x) at a point x based on (1.2), since it involves many (redundant) multiplications when evaluating the ℓ_i. Traditionally, an interpolating polynomial is evaluated at a point x with the aid of the Neville scheme:

Theorem 1.4 Let x_0, ..., x_n be distinct knots and let f_i, i = 0, ..., n, be the corresponding values. Denote by p_{j,m} ∈ P_m the solution of

    find p ∈ P_m s.t. p(x_k) = f_k for k = j, j+1, ..., j+m.                              (1.3)

Then the following recursions hold:

    p_{j,0} = f_j,   j = 0, ..., n                                                        (1.4)

    p_{j,m}(x) = \frac{(x - x_j)\, p_{j+1,m-1}(x) - (x - x_{j+m})\, p_{j,m-1}(x)}{x_{j+m} - x_j},   m ≥ 1      (1.5)

The solution p of (1.1) is p(x) = p_{0,n}(x).

Proof: (1.4) ✓. For (1.5), let π be the right-hand side of (1.5). Then:

• π ∈ P_m
• π(x_j) = p_{j,m-1}(x_j) = f_j
• π(x_{j+m}) = p_{j+1,m-1}(x_{j+m}) = f_{j+m}
• for j+1 ≤ i ≤ j+m−1 there holds (using p_{j+1,m-1}(x_i) = f_i and p_{j,m-1}(x_i) = f_i)

    \pi(x_i) = \frac{(x_i - x_j)\, p_{j+1,m-1}(x_i) - (x_i - x_{j+m})\, p_{j,m-1}(x_i)}{x_{j+m} - x_j} = \frac{(x_i - x_j - x_i + x_{j+m})\, f_i}{x_{j+m} - x_j} = f_i

Theorem 1.1 implies π = p_{j,m}. ✷

Theorem 1.4 shows that evaluating p at x can be realized with the following scheme:

    x_0   f_0 =: p_{0,0}(x)   p_{0,1}(x)   p_{0,2}(x)   ...   p_{0,n}(x) = p(x)
    x_1   f_1 =: p_{1,0}(x)   p_{1,1}(x)   ...
    x_2   f_2 =: p_{2,0}(x)   ...          p_{n-2,2}(x)
     .      .                              p_{n-1,1}(x)
    x_n   f_n =: p_{n,0}(x)

[here, each entry p_{j,m}(x) is obtained from its left neighbor p_{j,m-1}(x) and the entry p_{j+1,m-1}(x) one row below that neighbor via formula (1.5)]

slide 3

Exercise 1.5 Formulate explicitly the algorithm that computes (in a 2-dimensional array) the values p_{i,j}. How many multiplications (in dependence on n) are needed? (It suffices to state α in the complexity bound O(n^α).)

The scheme computes the values "column by column". If merely the last value p(x) is required, then one can be more memory efficient by overwriting the given vector of data:

Algorithm 1.6 (Aitken-Neville Scheme)
Input: knot vector x ∈ R^{n+1}, vector y ∈ R^{n+1} of values, evaluation point x ∈ R
Output: p(x), where p solves (1.1)
  for m = 1 : n do
    for j = 0 : n − m do                  ⊲ array has triangular form
      y_j := ((x − x_j) y_{j+1} − (x − x_{j+m}) y_j) / (x_{j+m} − x_j)
    end for
  end for
  return y_0

Remark 1.7 Cost of Alg.
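Algorithm 1.6 above translates almost line for line into Python. The sketch below is not part of the notes (the function name neville is an illustrative choice); as in the algorithm, it overwrites a working copy of the value vector, so after the m-th sweep y[j] holds p_{j,m}(x) for j = 0, ..., n−m.

    import numpy as np

    def neville(knots, values, x):
        """Aitken-Neville scheme (Algorithm 1.6): evaluate at x the polynomial
        interpolating (knots[j], values[j]), j = 0..n."""
        t = np.asarray(knots, dtype=float)
        y = np.array(values, dtype=float)      # working copy; gets overwritten
        n = len(t) - 1
        for m in range(1, n + 1):
            for j in range(0, n - m + 1):      # triangular array, as in Alg. 1.6
                y[j] = ((x - t[j]) * y[j + 1] - (x - t[j + m]) * y[j]) / (t[j + m] - t[j])
        return y[0]                            # p_{0,n}(x) = p(x)

    # data of Example 1.2; the result agrees with the Lagrange form (1.2)
    print(neville([0.0, np.pi / 4, np.pi / 2], [0.0, np.sqrt(2) / 2, 1.0], 0.3))

Note that the inner loop runs over increasing j, so y[j+1] still holds the previous column's value p_{j+1,m-1}(x) when y[j] is updated; this is what makes the in-place overwrite of Algorithm 1.6 correct.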