Lecture No. 7: Finite Element Method


Define the approximating functions locally over “finite elements”.

Advantages

• It is much easier to satisfy the b.c.’s with local functions over local parts of the boundary than it is with global functions over the entire boundary.

• Splitting the domain into intervals and using lower-order approximations within each element forces the integral error measure to deliver better accuracy on a pointwise basis (Courant, 1920). An integral norm attempts to minimize the total error over the entire domain; a low integral norm does not always mean that we have a good pointwise error norm.

General Steps to FEM

• Divide the domain into N sub-intervals or finite elements.

• Develop interpolating functions valid over each element.
─ We will make use of localized coordinate systems to assure functional universality (i.e. they will be applicable to an element of any length at any location).
─ We will tailor these functions such that the required degree of functional continuity can be readily enforced.

• Enforce functional continuity
─ through the definition of a Cardinal basis,
─ through “global” matrix assembly.

Note that we use these interpolating functions in conjunction with the implementation of the desired weighted residual form (i.e. integrations etc.) as before.

Lagrange Interpolation

Lagrange interpolation: pass an approximating function, g(x), exactly through the functional values at a set of interpolation points or nodes.

Method 1 for deriving g(x) – power series:

─ We are given $f_0, f_1, f_2$ and the corresponding points $x_0, x_1, x_2$.

─ Constraints we can apply:
$$g(x_0 = 0) = f_0, \qquad g(x_1 = h) = f_1, \qquad g(x_2 = 2h) = f_2$$

─ 3 constraints ⇒ 3 d.o.f. / 3 nodes = 1 d.o.f. per node ⇒ the polynomial form of g(x) can have 3 d.o.f. ⇒ quadratic.

─ General form of g(x):
$$g(x) = a + bx + cx^2$$

─ Apply the constraints:
$$g(x_0 = 0) = f_0 \;\Rightarrow\; a = f_0$$
$$g(x_1 = h) = f_1 \;\Rightarrow\; a + bh + ch^2 = f_1$$
$$g(x_2 = 2h) = f_2 \;\Rightarrow\; a + 2bh + 4ch^2 = f_2$$

─ Solve the system of simultaneous equations:
$$\begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{bmatrix}\begin{bmatrix} a \\ bh \\ ch^2 \end{bmatrix} = \begin{bmatrix} f_0 \\ f_1 \\ f_2 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} a \\ bh \\ ch^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -1.5 & 2 & -0.5 \\ 0.5 & -1 & 0.5 \end{bmatrix}\begin{bmatrix} f_0 \\ f_1 \\ f_2 \end{bmatrix}$$

so that
$$a = f_0, \qquad b = \frac{1}{h}\Big(-\tfrac{3}{2}f_0 + 2f_1 - \tfrac{1}{2}f_2\Big), \qquad c = \frac{1}{h^2}\Big(\tfrac{1}{2}f_0 - f_1 + \tfrac{1}{2}f_2\Big)$$

$$\therefore\; g(x) = f_0 + \frac{1}{2h}\big(-3f_0 + 4f_1 - f_2\big)x + \frac{1}{2h^2}\big(f_0 - 2f_1 + f_2\big)x^2$$

This is the interpolating function throughout the interval of interest, $[0, 2h]$.

Let’s rewrite g(x) by factoring out $f_0$, $f_1$ and $f_2$:
$$g(x) = f_0\underbrace{\Big(1 - \frac{3x}{2h} + \frac{x^2}{2h^2}\Big)}_{\equiv\,\phi_0(x)} + f_1\underbrace{\Big(\frac{2x}{h} - \frac{x^2}{h^2}\Big)}_{\equiv\,\phi_1(x)} + f_2\underbrace{\Big(-\frac{x}{2h} + \frac{x^2}{2h^2}\Big)}_{\equiv\,\phi_2(x)}$$

$$\Rightarrow\; g(x) = \sum_{i=0}^{2} f_i\,\phi_i(x)$$

These $\phi_i(x)$ are interpolating basis functions!! Each is associated with one node. This looks very much like the expansions we used for weighted residual methods!!!

Let’s evaluate these functions at the nodes:
$$\phi_0(x_0 = 0) = 1 \qquad \phi_1(x_0 = 0) = 0 \qquad \phi_2(x_0 = 0) = 0$$
$$\phi_0(x_1 = h) = 0 \qquad \phi_1(x_1 = h) = 1 \qquad \phi_2(x_1 = h) = 0$$
$$\phi_0(x_2 = 2h) = 0 \qquad \phi_1(x_2 = 2h) = 0 \qquad \phi_2(x_2 = 2h) = 1$$

Thus
$$\phi_i(x_j) = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$

⇒ We can in fact use these constraints to derive $\phi_0$, $\phi_1$ and $\phi_2$ directly. The interpolating functions are 1 at the node they are associated with and zero at all other nodes. At x values other than the nodal values these functions vary and do not equal zero.
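To check the algebra, here is a minimal Python sketch (the spacing h and the nodal values in f are illustrative assumptions, not from the notes) that builds the three quadratic basis functions derived above and verifies both $\phi_i(x_j) = \delta_{ij}$ and $g(x_i) = f_i$:

```python
import numpy as np

def quadratic_lagrange_basis(h):
    """The three quadratic Lagrange basis functions on [0, 2h]
    with nodes x0 = 0, x1 = h, x2 = 2h, as derived above."""
    phi0 = lambda x: 1.0 - 3.0*x/(2.0*h) + x**2/(2.0*h**2)
    phi1 = lambda x: 2.0*x/h - x**2/h**2
    phi2 = lambda x: -x/(2.0*h) + x**2/(2.0*h**2)
    return phi0, phi1, phi2

h = 0.5                                   # assumed node spacing
nodes = np.array([0.0, h, 2.0*h])
basis = quadratic_lagrange_basis(h)

# Kronecker delta property: phi_i(x_j) = delta_ij
for i, phi in enumerate(basis):
    print([round(phi(xj), 12) for xj in nodes])   # row i of the identity

# Interpolation property: g(x_i) = f_i for arbitrary nodal values
f = np.array([2.0, -1.0, 4.0])                    # assumed sample data
g = lambda x: sum(fi * phi(x) for fi, phi in zip(f, basis))
print([round(g(xj), 12) for xj in nodes])         # reproduces f exactly
```

Between the nodes, g(x) is simply the unique quadratic through the three data points.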
A very important consequence of using Lagrange interpolation is that $g(x_i) = f_i$. This property, and the fact that we define nodes on inter-element boundaries, will enable us to easily enforce functional continuity on inter-element boundaries.

Applying Lagrange Interpolation to develop $u_{app}$

Option 1 – Develop a higher-order approximation which is global and based on Lagrange basis functions defined over the entire domain:
$$u_{app} = \sum_{i=1}^{N} u_i \phi_i$$
where
─ $\phi_i$ = globally defined Lagrange basis functions valid over the entire domain,
─ $u_i$ = the expansion coefficients, which by definition equal the function values at the nodes!!
─ $i = 1, N$ are the N “nodes” defined throughout the global domain.
These nodes can be equispaced or non-equispaced.

Boundary conditions can be readily incorporated into the expansion if they are function specified (essential type boundary conditions).

─ For example,
$$u(x_L) = u_L, \qquad u(x_R) = u_R$$

─ The expansion can now be written as
$$u_{app} = u_L \phi_1 + u_R \phi_N + \sum_{i=2}^{N-1} u_i \phi_i$$
(since $u_1 = u_L$ and $u_N = u_R$).

─ Thus $u_B = u_L \phi_1 + u_R \phi_N$.

─ We note that $u_B$ satisfies the admissibility conditions:
$$u_B(x_L) = u_L \underbrace{\phi_1(x_L)}_{=1} + u_R \underbrace{\phi_N(x_L)}_{=0} = u_L$$
$$u_B(x_R) = u_L \underbrace{\phi_1(x_R)}_{=0} + u_R \underbrace{\phi_N(x_R)}_{=1} = u_R$$

─ Also we note that the remaining $\phi_i$ satisfy
$$\phi_i(x_L) = 0, \qquad \phi_i(x_R) = 0, \qquad i = 2, N-1$$

Thus satisfying function-specified (essential) b.c.’s in 1-D is very easy!! For natural b.c.’s it is much more difficult.

The sequence of functions $\phi_i$ is linearly independent, and the coefficients in the expansion are now actually equal to the values of the function at the nodes!!

The drawback of Option 1 is that you obtain poor pointwise convergence as N becomes large for most problems. Another drawback is that the matrix will be almost fully populated. Typically the technique does work well for slowly varying, smooth solutions.

Example

Solve
$$L(u) = p(x), \qquad x \in [x_L, x_R]$$
with b.c.’s $u(x_L) = u_L$ and $u(x_R) = u_R$. Assume that we will apply a six-term expansion:
$$u_{app} = u_L \phi_1 + u_R \phi_6 + \sum_{i=2}^{5} u_i \phi_i$$
Note that $u_B = u_L \phi_1 + u_R \phi_6$ satisfies the b.c.’s as specified, and that $\phi_2, \phi_3, \phi_4$ and $\phi_5$ all satisfy the homogeneous form of the essential (function-specified) b.c.’s.

Option 2 – Develop an approximation $u_{app}$ which is the sum of localized approximations:
$$u_{app} = \sum_{j=1}^{M} \sum_{i=1}^{N_j} u_i^j \phi_i^j$$
where
─ $u_i^j$ = expansion coefficient for element j and node i within element j,
─ $\phi_i^j$ = Lagrange basis function for element j and node i; note that $\phi_i^j = 0$ outside of element j,
─ $j = 1, M$, with M = total number of localized domains or “finite elements”,
─ $i = 1, N_j$, with $N_j$ = total number of nodes within element j.

The local unknowns at the nodes are written $u_i^j$, where the superscript is the element index and the subscript is the local node index (e.g. $u_1^1, u_2^1, u_3^1$ for element 1; $u_1^2, u_2^2, u_3^2$ for element 2; $u_1^3, \ldots, u_6^3$ for element 3; and so on).

The boundary conditions can again be readily built into 1-D type problems for function-specified (essential) conditions. Even in multiple dimensions, function-specified (essential) b.c.’s can be incorporated in a straightforward manner.

The sequence of all functions will be linearly independent. A sketch of how such a localized expansion is evaluated is given below.
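Here is a minimal Python sketch of Option 2 under simplifying assumptions that are not in the notes’ example (M equal-length, 3-node quadratic elements on [0, L]; all names and sample values are illustrative). It evaluates $u_{app}(x) = \sum_j \sum_i u_i^j \phi_i^j(x)$ by locating the single element containing x, since every $\phi_i^j$ vanishes outside element j:

```python
import numpy as np

def local_basis(xi):
    """Quadratic Lagrange basis on the local coordinate xi in [0, 1],
    with local nodes at xi = 0, 1/2, 1 (the localized-coordinate idea
    that makes the functions independent of element size/location)."""
    return np.array([2.0*(xi - 0.5)*(xi - 1.0),
                     -4.0*xi*(xi - 1.0),
                     2.0*xi*(xi - 0.5)])

def u_app(x, u_local, L):
    """Evaluate the localized expansion at a point x in [0, L].
    u_local[j] holds the nodal coefficients u_1^j, u_2^j, u_3^j."""
    M = len(u_local)
    he = L / M                            # element length (assumed uniform)
    j = min(int(x // he), M - 1)          # element containing x
    xi = (x - j*he) / he                  # local coordinate in [0, 1]
    return float(local_basis(xi) @ np.asarray(u_local[j]))

# Nodal values of x^2 on [0, 2] with two elements: reproduced exactly,
# since the underlying data is quadratic.
u_loc = [[0.0, 0.25, 1.0], [1.0, 2.25, 4.0]]
print(u_app(0.5, u_loc, 2.0))             # 0.25
print(u_app(1.25, u_loc, 2.0))            # 1.5625
```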
Again note that these coefficients of the expansion are equal to the actual function values at the nodes.

To ensure $C^0$ inter-element functional continuity we must have, at all inter-element boundaries:
$$u_{app}(\text{left of an inter-element boundary}) = u_{app}(\text{right of an inter-element boundary})$$

Thus, in the example given, anywhere in element 1:
$$u^1(\xi) = u_1^1 \phi_1^1(\xi) + u_2^1 \phi_2^1(\xi) + u_3^1 \phi_3^1(\xi)$$
Note that all Lagrange basis functions from other elements are defined as zero there. Anywhere in element 2:
$$u^2(\xi) = u_1^2 \phi_1^2(\xi) + u_2^2 \phi_2^2(\xi) + u_3^2 \phi_3^2(\xi)$$

Element 1 evaluated at its right-hand side:
$$u^1(\xi_3^1) = u_3^1$$
Element 2 evaluated at its left-hand side:
$$u^2(\xi_1^2) = u_1^2$$

In order to enforce $C^0$ functional continuity we must have
$$u^1(\xi_3^1) = u^2(\xi_1^2) \;\therefore\; u_3^1 = u_1^2$$

In general we must have the expansion coefficients equal at inter-element boundaries in order to satisfy the $C^0$ functional continuity requirements. In our example, the expansion coefficients are related as
$$u_3^1 = u_1^2, \qquad u_3^2 = u_1^3, \qquad u_6^3 = u_1^4$$

∴ We must have the expansion coefficients at inter-element boundaries be equal. There are several approaches to implementing this inter-element functional continuity (i.e. through the definition of a Cardinal basis or through “global” matrix assembly, as listed at the start of this lecture).
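The assembly route can be made concrete with a small sketch (assumptions not in the notes: equal-order 3-node elements laid end to end, illustrative names). Each shared boundary node is stored once in a global coefficient vector, so $u_3^j = u_1^{j+1}$ holds by construction rather than as a separate constraint:

```python
import numpy as np

def connectivity(M):
    """conn[j, i] = global node number of local node i in element j,
    for M quadratic (3-node) elements laid end to end."""
    return np.array([[2*j, 2*j + 1, 2*j + 2] for j in range(M)])

conn = connectivity(3)
print(conn)                 # [[0 1 2]
                            #  [2 3 4]
                            #  [4 5 6]]

u_global = np.linspace(0.0, 1.0, 7)   # one unknown per global node
u_local = u_global[conn]              # gather each element's coefficients
# Global nodes 2 and 4 each appear in two rows of conn, so
# u_local[j, 2] == u_local[j+1, 0]: C0 continuity is automatic.
```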