Linear Algebra and Functional Analysis


CIRA DA Training: Mathematical Introduction to Data Assimilation

Milija Zupanski
Cooperative Institute for Research in the Atmosphere
Colorado State University, Fort Collins, Colorado

Outline

§ Inner product, orthogonality
§ Norm and distance
§ Basis vectors
§ Matrix
§ Singular and eigenvalue decomposition

Inner product of vectors

Why is the inner product relevant to data assimilation?
- cost function calculation
- minimization
- verification

For vectors $a = [a_1 \; a_2 \; \cdots \; a_N]$ and $b = [b_1 \; b_2 \; \cdots \; b_N]$, the inner product is a scalar:

$$\langle a, b \rangle = \langle b, a \rangle = a^T b = b^T a = \sum_i a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_N b_N$$

Fortran program:

  ab_inner = 0.
  do i = 1, N
    ab_inner = ab_inner + a(i)*b(i)
  enddo

or, using the intrinsic function:

  ab_inner = DOT_PRODUCT(a, b)

Inner product examples (N = 2)

Example 1: $a = [3 \; 1]$, $b = [2 \; 4]$:
$\langle a, b \rangle = a_1 b_1 + a_2 b_2 = 3 \cdot 2 + 1 \cdot 4 = 10$

Example 2: $a = [3 \; 1]$, $b = [-1 \; 3]$:
$\langle a, b \rangle = a_1 b_1 + a_2 b_2 = 3 \cdot (-1) + 1 \cdot 3 = 0$

Orthogonality: if the inner product is zero, the vectors are orthogonal, and vice versa.

Vector norm

Why is the vector norm relevant to data assimilation?
- it defines the "magnitude" of a vector
- it enters all aspects: cost function, minimization, verification

In data assimilation (e.g., in Hilbert and Euclidean spaces), the norm is connected with the inner product:

$$\|a\| = \langle a, a \rangle^{1/2}$$

Hilbert and Euclidean spaces are vector spaces with a defined inner product and metric $d(x, y) = \|x - y\|$. (A Euclidean space is a finite-dimensional Hilbert space.) These characteristics of the vector spaces used in data assimilation allow us to talk about "close", "distant", "small", "large", "orthogonal", etc.

Properties and types of vector norms

Elementary properties of norms:
§ Non-negativity: $\|a\| \ge 0$
§ Scalar multiplication: $\|\gamma a\| = |\gamma| \, \|a\|$
§ Triangle inequality: $\|a + b\| \le \|a\| + \|b\|$

The $L_p$ norm ($p \ge 1$):

$$\|a\|_p = \left( \sum_{i=1}^{N} |a_i|^p \right)^{1/p}$$

Important $L_p$ norms:
(1) Absolute value norm ($p = 1$): $\|a\|_1 = |a_1| + |a_2| + \cdots + |a_N|$
(2) Quadratic norm ($p = 2$): $\|a\|_2 = \sqrt{a_1^2 + a_2^2 + \cdots + a_N^2}$
(3) Maximum (infinity) norm ($p = \infty$): $\|a\|_\infty = \max(|a_1|, |a_2|, \ldots, |a_N|)$

What is the distance between a and b?

$a = [0 \; 5]$, $b = [3 \; 1]$, $d(a, b) = \|a - b\| = \;?$

A1: Absolute norm: $\|a - b\|_1 = |0 - 3| + |5 - 1| = 3 + 4 = 7$
A2: Quadratic norm: $\|a - b\|_2 = \sqrt{(0 - 3)^2 + (5 - 1)^2} = 5$
A3: Maximum norm: $\|a - b\|_\infty = \max(|0 - 3|, |5 - 1|) = 4$

Note that $\|a - b\|_1 \ge \|a - b\|_2 \ge \|a - b\|_\infty$: distance may not be intuitive, since it depends on the norm used.
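To make these norm and distance calculations concrete, here is a minimal, self-contained Fortran sketch that reproduces the example above (the vectors a = [0 5] and b = [3 1] are from the slides; the program name and output formatting are illustrative):

  program check_distances
    implicit none
    real :: a(2), b(2), d(2)
    a = [0.0, 5.0]
    b = [3.0, 1.0]
    d = a - b
    ! inner product, using the DOT_PRODUCT intrinsic from the earlier slide
    print *, 'inner product <a,b>      =', dot_product(a, b)  ! 0*3 + 5*1 = 5
    ! the three distances computed in A1-A3
    print *, 'L1 distance  ||a-b||_1   =', sum(abs(d))        ! 3 + 4 = 7
    print *, 'L2 distance  ||a-b||_2   =', sqrt(sum(d**2))    ! sqrt(9+16) = 5
    print *, 'max distance ||a-b||_inf =', maxval(abs(d))     ! max(3,4) = 4
  end program check_distances

The printed values reproduce A1-A3, including the ordering ||a-b||_1 ≥ ||a-b||_2 ≥ ||a-b||_inf.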
Function norm and distance

Define a function $y = f(x)$ and its derivative $y' = f'(x) = df(x)/dx$. One can define an $L_2$ norm in terms of function values, or in terms of derivatives, sampled at points $x_1, \ldots, x_N$:

(1) $\|f\|_{\mathrm{VALUE}} = \sqrt{\sum_i [f(x_i)]^2} = \sqrt{[f(x_1)]^2 + [f(x_2)]^2 + \cdots + [f(x_N)]^2}$

(2) $\|f\|_{\mathrm{DERIV}} = \sqrt{\sum_i [f'(x_i)]^2} = \sqrt{[f'(x_1)]^2 + [f'(x_2)]^2 + \cdots + [f'(x_N)]^2}$

Example: Are two functions $y_1$ and $y_2$ "close" or "distant"? If $\|y_1 - y_2\|_{\mathrm{VALUE}}$ is small while $\|y_1 - y_2\|_{\mathrm{DERIV}}$ is large, the two functions are "close" in norm (1) but "distant" in norm (2).

Basis of a vector space

Why is a basis relevant to data assimilation?
- representation of forecast and analysis errors
- it defines the analysis adjustment
- uncertainty dimension

A basis is a minimal set of linearly independent vectors that spans a space.

Linearly independent: given scalars $\gamma_i$ and vectors $e_i$, $i = 1, \ldots, N$, the vectors are linearly independent if

$$\gamma_1 e_1 + \gamma_2 e_2 + \cdots + \gamma_N e_N \ne 0$$

whenever at least one $\gamma_i \ne 0$.

Span: any vector of the space $F^N = (e_1, e_2, \ldots, e_N)$ can be expressed as a linear combination of the basis vectors:

$$b = \gamma_1 e_1 + \gamma_2 e_2 + \cdots + \gamma_N e_N \;\Rightarrow\; b \in F^N$$

Orthogonal basis

The most relevant basis is the orthogonal basis:

$$\langle e_i, e_j \rangle = e_i^T e_j = \delta_{ij} = \begin{cases} 1 & \text{for } i = j \\ 0 & \text{for } i \ne j \end{cases}$$

An orthogonal basis with unit norm, $\|e_i\| = 1$, is called an orthonormal basis.

Example: the standard basis $e_1 = (1, 0)$, $e_2 = (0, 1)$:

$$e_1^T e_2 = \langle e_1, e_2 \rangle = 1 \cdot 0 + 0 \cdot 1 = 0$$

Basis representation of a vector

A vector is generally represented by its components, $a = (a_1, a_2, \ldots, a_N)$. Given an orthonormal basis $(e_1, e_2, \ldots, e_N)$, this means

$$a = a_1 e_1 + a_2 e_2 + \cdots + a_N e_N$$

Q: What is the meaning of the vector components?
A: Each component is the orthogonal projection of the vector onto the corresponding basis vector: $a_1 = e_1^T a$, $a_2 = e_2^T a$, and so on.

Example: $a = (2, 3)$ with $e_1 = (1, 0)$ and $e_2 = (0, 1)$:
$a_1 = e_1^T a = 1 \cdot 2 + 0 \cdot 3 = 2$, $a_2 = e_2^T a = 0 \cdot 2 + 1 \cdot 3 = 3$, so $a = 2 e_1 + 3 e_2$.

Basis examples (calculate, plot)

Let $e_1 = (\sin\alpha, \cos\alpha)$ and $e_2 = (-\cos\alpha, \sin\alpha)$.
Q1: $\alpha = 90°$: $e_1 = (?, ?)$, $e_2 = (?, ?)$
Q2: $\alpha = 45°$: $e_1 = (?, ?)$, $e_2 = (?, ?)$
Q3 (with $\alpha = 45°$): $a = (\sqrt{2}, 3\sqrt{2})$: $a_1 = ?$, $a_2 = ?$

Linear transformations and matrices

§ A linear transformation from vector space V to vector space W is a function L such that

$$L : V \to W, \qquad L(\mu a + b) = \mu (L a) + L b$$

§ A matrix is a linear transformation defined relative to particular bases. Given a basis $V^N = (v_1, v_2, \ldots, v_N)$ of V and a basis $W^M = (w_1, w_2, \ldots, w_M)$ of W,

$$L v_j = \sum_{i=1}^{M} A_{ij} w_i$$

The matrix A is given by its elements $A_{ij}$ and represents the linear transformation L relative to the pair of bases {v} and {w}.

Why are matrices and linear transformations relevant to data assimilation?
- error covariance (analysis and forecast uncertainty)
- forecast model, observation operator
- minimization, Hessian matrix, Kalman gain

Matrix

§ A matrix is a rectangular array of numbers (real numbers in our applications)
§ A matrix represents a linear transformation (scaling and rotation)
§ A matrix has rows (horizontal) and columns (vertical)

$$D = \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1N} \\ d_{21} & d_{22} & & \\ \vdots & & \ddots & \\ d_{N1} & & \cdots & d_{NN} \end{pmatrix}$$

Transpose matrix: $D^T$, with rows and columns interchanged, $(D^T)_{ij} = d_{ji}$.
Symmetric matrix: $D^T = D$, i.e., $d_{ij} = d_{ji}$.
Diagonal matrix: $D = \mathrm{diag}(d_{11}, d_{22}, \ldots, d_{NN})$, with zeros off the diagonal.
Identity matrix: $I = \mathrm{diag}(1, 1, \ldots, 1)$.

All these types of matrices are relevant in data assimilation.

Matrix properties

§ Inverse of a product: $(ABC)^{-1} = C^{-1} B^{-1} A^{-1}$
§ Transpose of a product: $(ABC)^T = C^T B^T A^T$
§ Symmetric positive-definite matrix: a symmetric matrix D such that $\langle x, Dx \rangle = x^T D x > 0$ for any $x \ne 0$ (with $\ge 0$ instead, the matrix is positive semi-definite)
§ The inner-product matrix $A^T A$ and the outer-product matrix $A A^T$ are symmetric and positive semi-definite:

$$x^T (A^T A) x = (x^T A^T)(A x) = (A x)^T (A x) = \|A x\|^2 \ge 0$$

Error covariance matrices are symmetric and positive-definite.
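The semi-definiteness of the inner-product matrix is easy to check numerically. The following Fortran sketch forms A^T A, verifies its symmetry, and evaluates the quadratic form two ways (the 3×2 test matrix A and the vector x are arbitrary assumed values, not from the slides):

  program check_psd
    implicit none
    real :: a(3,2), ata(2,2), x(2), q
    a = reshape([1.0, 0.0, 2.0, -1.0, 3.0, 1.0], [3,2])  ! arbitrary test matrix
    x = [2.0, -1.0]                                      ! arbitrary test vector
    ata = matmul(transpose(a), a)         ! inner-product matrix A^T A
    q = dot_product(x, matmul(ata, x))    ! quadratic form x^T (A^T A) x
    print *, 'A^T A symmetric? ', all(abs(ata - transpose(ata)) < 1.0e-6)
    print *, 'x^T (A^T A) x  =', q                       ! 27.0 for these values
    print *, '||Ax||^2       =', sum(matmul(a, x)**2)    ! also 27.0
  end program check_psd

The two evaluations agree with ||Ax||^2, which is non-negative for any x, exactly as in the derivation above.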
Examples

Q1: What is the value of $J(x) = \frac{1}{2} x^T W x$? Is it a scalar, a vector, or a matrix?

$$x = (1, 2)^T, \qquad W = \begin{pmatrix} 9 & 1 \\ 1 & 4 \end{pmatrix}$$

$$x^T W x = \begin{pmatrix} 1 & 2 \end{pmatrix} \begin{pmatrix} 9 \cdot 1 + 1 \cdot 2 \\ 1 \cdot 1 + 4 \cdot 2 \end{pmatrix} = \begin{pmatrix} 1 & 2 \end{pmatrix} \begin{pmatrix} 11 \\ 9 \end{pmatrix} = 1 \cdot 11 + 2 \cdot 9 = 29$$

so $J(x) = 29/2 = 14.5$, a scalar.

Q2: What is the inverse $D^{-1}$ of

$$D = \begin{pmatrix} 7 & 0 & 0 & 0 \\ 0 & -2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}?$$

The inverse of a diagonal matrix is diagonal, with the reciprocals of the diagonal entries:

$$D^{-1} = \begin{pmatrix} 1/7 & 0 & 0 & 0 \\ 0 & -1/2 & 0 & 0 \\ 0 & 0 & 1/3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Linear algebra, functional analysis

§ A vector space (linear space) is an object consisting of (1) a field, (2) a set of vectors, and (3) an addition rule
§ A linear algebra is a vector space with a multiplication rule
§ Functional analysis is the branch of mathematics that deals with normed linear spaces, their associated geometry, and linear operators

In data assimilation, we utilize all of these components:
1 - vector space: any physical field (e.g., temperature, pressure, aerosol, chemical constituents, satellite radiance) is mathematically represented as a vector, and a set of these vectors forms a vector space;
2 - linear algebra: we use matrices to represent uncertainties and linear functions;
3 - functional analysis: all operations with matrices and vectors are based on the notions of norms, distances, and inner products.

Literature:

Conway, J. B., 1985: A Course in Functional Analysis. Springer-Verlag, New York, 404 pp.
Golub, G. H., and C. F. van Loan, 1989: Matrix Computations. The Johns Hopkins Univ. Press, 642 pp.
Hoffman, K., and R. Kunze, 1971: Linear Algebra. Prentice Hall, Englewood Cliffs, New Jersey, 407 pp.
Horn, R. A., and C. R. Johnson, 2005: Matrix Analysis. Cambridge Univ. Press, 561 pp.
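Appendix: a minimal Fortran check of the two worked examples (the values of x, W, and D are those used in Q1 and Q2 above; the program layout is illustrative):

  program check_examples
    implicit none
    real :: w(2,2), x(2), jcost, ddiag(4)
    x = [1.0, 2.0]
    w = reshape([9.0, 1.0, 1.0, 4.0], [2,2])    ! columns (9,1) and (1,4)
    jcost = 0.5 * dot_product(x, matmul(w, x))  ! J(x) = (1/2) x^T W x
    print *, 'J(x)       =', jcost              ! 0.5 * 29 = 14.5, a scalar
    ddiag = [7.0, -2.0, 3.0, 1.0]               ! diagonal entries of D
    print *, 'diag(D^-1) =', 1.0 / ddiag        ! 1/7, -1/2, 1/3, 1
  end program check_examples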