Hilbert Spaces


Chapter 3: Hilbert spaces
© J. Fessler, November 5, 2004 (student version)

Contents
• Introduction
• Inner products
• Cauchy-Schwarz inequality
• Induced norm
• Hilbert space
• Minimum norm optimization problems
• Chebyshev sets
• Projectors
• Orthogonality
• Orthogonal complements
• Orthogonal projection
• Direct sum
• Orthogonal sets
• Gram-Schmidt procedure
• Approximation
• Normal equations
• Gram matrices
• Orthogonal bases
• Approximation and Fourier series
• Linear varieties
• Dual approximation problem
• Applications
• Fourier series
• Complete orthonormal sequences / countable orthonormal bases
• Wavelets
• Vector spaces
• Normed spaces
• Inner product spaces
• Summary
• Minimum distance to a convex set
• Projection onto convex sets
• Summary

Key missing geometrical concepts: angle and orthogonality ("right angles").

3.1 Introduction

We now turn to the subset of normed spaces called Hilbert spaces, which must have an inner product. These are particularly useful spaces in applications/analysis. Why not introduce Hilbert spaces first, then? For generality: it is helpful to see which properties are general to vector spaces or to normed spaces, vs. which require additional assumptions like an inner product.

Overview
• inner product
• orthogonality
• orthogonal projections
• applications
• least-squares minimization
• orthonormalization of a basis
• Fourier series
• general forms of things you have seen before: Cauchy-Schwarz, Gram-Schmidt, Parseval's theorem

3.2 Inner products

Definition. A pre-Hilbert space, aka an inner product space, is a vector space $\mathcal{X}$ over the field $\mathcal{F} = \mathbb{R}$ or $\mathcal{F} = \mathbb{C}$, along with an inner product operation $\langle \cdot, \cdot \rangle : \mathcal{X} \times \mathcal{X} \to \mathcal{F}$, which must satisfy the following axioms $\forall x, y \in \mathcal{X}$, $\alpha \in \mathcal{F}$.
1. $\langle x, y \rangle = \langle y, x \rangle^*$ (Hermitian symmetry), where $^*$ denotes complex conjugate.
2. $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ (additivity)
3. $\langle \alpha x, y \rangle = \alpha \langle x, y \rangle$ (scaling)
4. $\langle x, x \rangle \geq 0$ and $\langle x, x \rangle = 0$ iff $x = 0$ (positive definiteness)
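To make the axioms concrete, here is a minimal numerical sketch (an added illustration, not part of the original notes) that implements the usual inner product on $\mathbb{C}^n$ and spot-checks the four axioms on random vectors; the function name `inner` is just a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(x, y):
    # Usual inner product on C^n: <x, y> = sum_i x_i * conj(y_i).
    # np.vdot conjugates its *first* argument, so we swap arguments
    # to put the conjugate on the second slot, matching Axiom 3 above.
    return np.vdot(y, x)

n = 4
x, y, z = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(3))
alpha = 2.0 - 1.5j

# Axiom 1 (Hermitian symmetry): <x, y> = <y, x>*
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
# Axiom 2 (additivity): <x + y, z> = <x, z> + <y, z>
assert np.isclose(inner(x + y, z), inner(x, z) + inner(y, z))
# Axiom 3 (scaling): <alpha x, y> = alpha <x, y>
assert np.isclose(inner(alpha * x, y), alpha * inner(x, y))
# Axiom 4 (positive definiteness): <x, x> is real and > 0 for x != 0
assert np.isclose(inner(x, x).imag, 0.0) and inner(x, x).real > 0
```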
Properties of inner products

Bilinearity property:
$$\left\langle \sum_i \alpha_i x_i, \; \sum_j \beta_j y_j \right\rangle = \sum_i \sum_j \alpha_i \beta_j^* \langle x_i, y_j \rangle.$$

Lemma. In an inner product space, if $\langle x, y \rangle = 0$ for all $y$, then $x = 0$.
Proof. Let $y = x$. □

Cauchy-Schwarz inequality

Lemma. For all $x, y$ in an inner product space,
$$|\langle x, y \rangle| \leq \sqrt{\langle x, x \rangle} \sqrt{\langle y, y \rangle} = \|x\| \, \|y\|$$
(see induced norm below), with equality iff $x$ and $y$ are linearly dependent.
Proof. For any $\lambda \in \mathcal{F}$, the positive definiteness of $\langle \cdot, \cdot \rangle$ ensures that
$$0 \leq \langle x - \lambda y, x - \lambda y \rangle = \langle x, x \rangle - \lambda \langle y, x \rangle - \lambda^* \langle x, y \rangle + |\lambda|^2 \langle y, y \rangle.$$
If $y = 0$, the inequality holds trivially. Otherwise, consider $\lambda = \langle x, y \rangle / \langle y, y \rangle$, and we have
$$0 \leq \langle x, x \rangle - |\langle y, x \rangle|^2 / \langle y, y \rangle.$$
Rearranging yields $|\langle y, x \rangle| \leq \sqrt{\langle x, x \rangle \langle y, y \rangle} = \|x\| \, \|y\|$. The proof of the equality conditions is Problem 3.1. □

This result generalizes all the "Cauchy-Schwarz inequalities" you have seen in previous classes, e.g., for vectors in $\mathbb{R}^n$, for random variables, and for discrete-time and continuous-time signals, each of which corresponds to a particular inner product space.

Angle

Thanks to this inequality, we can generalize the notion of the angle between vectors to any general inner product space as follows:
$$\theta = \cos^{-1} \left( \frac{|\langle x, y \rangle|}{\|x\| \, \|y\|} \right), \quad \forall x, y \neq 0.$$
This definition is legitimate since the argument of $\cos^{-1}$ will always be between 0 and 1 due to the Cauchy-Schwarz inequality.

Induced norm

Proposition. In an inner product space $(\mathcal{X}, \langle \cdot, \cdot \rangle)$, the induced norm $\|x\| = \sqrt{\langle x, x \rangle}$ is indeed a norm.
Proof. What must we show?
• The first axiom ensures that $\langle x, x \rangle$ is real.
• $\|x\| \geq 0$ with equality iff $x = 0$ follows from Axiom 4.
• $\|\alpha x\| = \sqrt{\langle \alpha x, \alpha x \rangle} = \sqrt{\alpha \langle x, \alpha x \rangle} = \sqrt{\alpha \langle \alpha x, x \rangle^*} = \sqrt{\alpha \alpha^* \langle x, x \rangle^*} = |\alpha| \sqrt{\langle x, x \rangle} = |\alpha| \, \|x\|$, using Axioms 1 and 3.
• The only condition remaining to be verified is the triangle inequality:
$$\|x + y\|^2 = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle = \|x\|^2 + 2 \, \mathrm{real}(\langle x, y \rangle) + \|y\|^2 \leq \|x\|^2 + 2 |\langle x, y \rangle| + \|y\|^2 \leq \|x\|^2 + 2 \|x\| \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2.$$
(Recall that if $z = a + \imath b$, then $a = \mathrm{real}(z) \leq \sqrt{a^2 + b^2} = |z|$.) □

Any inner product space is necessarily a normed space. Is the reverse true? Not in general. The following property distinguishes inner product spaces from mere normed spaces.

Lemma. (The parallelogram law.) In an inner product space:
$$\|x + y\|^2 + \|x - y\|^2 = 2 \|x\|^2 + 2 \|y\|^2, \quad \forall x, y \in \mathcal{X}. \tag{3-1}$$
Proof. Expand the norms into inner products and simplify. □

[Figure: a parallelogram with sides $x$ and $y$ and diagonals $x + y$ and $x - y$.]

Remarkably, the converse of this lemma also holds (see, e.g., the problem in [2, p. 175]).

Proposition. If $(\mathcal{X}, \|\cdot\|)$ is a normed space over $\mathbb{C}$ or $\mathbb{R}$, and its norm satisfies the parallelogram law (3-1), then $\mathcal{X}$ is also an inner product space, with inner product
$$\langle x, y \rangle = \frac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 + i \, \|x + iy\|^2 - i \, \|x - iy\|^2 \right).$$
Proof. Homework challenge problem.

Continuity of inner products

Lemma. In an inner product space $(\mathcal{X}, \langle \cdot, \cdot \rangle)$, if $x_n \to x$ and $y_n \to y$, then $\langle x_n, y_n \rangle \to \langle x, y \rangle$.
Proof.
$$|\langle x_n, y_n \rangle - \langle x, y \rangle| = |\langle x_n, y_n \rangle - \langle x, y_n \rangle + \langle x, y_n \rangle - \langle x, y \rangle| \leq |\langle x_n, y_n \rangle - \langle x, y_n \rangle| + |\langle x, y_n \rangle - \langle x, y \rangle|$$
$$= |\langle x_n - x, y_n \rangle| + |\langle x, y_n - y \rangle| \leq \|x_n - x\| \, \|y_n\| + \|x\| \, \|y_n - y\| \quad \text{(by Cauchy-Schwarz)}$$
$$\leq \|x_n - x\| \, M + \|x\| \, \|y_n - y\| \quad \text{(since $y_n$ is convergent and hence bounded, say $\|y_n\| \leq M$)}$$
$$\to 0 \text{ as } n \to \infty.$$
Thus $\langle x_n, y_n \rangle \to \langle x, y \rangle$. □
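Both the parallelogram law (3-1) and the inner product formula in the converse proposition (the polarization identity) are easy to verify numerically for the standard inner product on $\mathbb{C}^n$. The sketch below (an added illustration, assuming NumPy) recovers the inner product from the induced norm alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

norm = np.linalg.norm                 # induced norm: ||v|| = sqrt(<v, v>)
inner = lambda u, v: np.vdot(v, u)    # <u, v>, conjugate on the second argument

# Parallelogram law (3-1): ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
assert np.isclose(norm(x + y)**2 + norm(x - y)**2,
                  2 * norm(x)**2 + 2 * norm(y)**2)

# Converse proposition: recover <x, y> from the norm via the stated formula
recovered = 0.25 * (norm(x + y)**2 - norm(x - y)**2
                    + 1j * norm(x + 1j * y)**2 - 1j * norm(x - 1j * y)**2)
assert np.isclose(recovered, inner(x, y))
```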
Examples

Many of the normed spaces we considered previously have norms that are actually induced by suitable inner products.

Example. In Euclidean space, the usual inner product (aka "dot product") is
$$\langle x, y \rangle = \sum_{i=1}^n a_i b_i, \quad \text{where } x = (a_1, \ldots, a_n) \text{ and } y = (b_1, \ldots, b_n).$$
Verifying the axioms is trivial. The induced norm is the usual $\ell_2$ norm.

Example. For the space $\ell_2$ over the complex field, the usual inner product is
$$\langle x, y \rangle = \sum_i a_i b_i^*.$$
(Note that the conjugate goes with the second argument because of Axiom 3. I have heard that some treatments scale the second argument in Axiom 3, which affects where the conjugates go in the inner products.)
The Hölder inequality, which is equivalent to the Cauchy-Schwarz inequality for this space, ensures that $|\langle x, y \rangle| \leq \|x\|_2 \|y\|_2$, so the inner product is indeed finite for $x, y \in \ell_2$. Thus $\ell_2$ is not only a Banach space; it is also an inner product space.

Example. What about $\ell_p$ for $p \neq 2$? Do suitable inner products exist?
Consider $\mathcal{X} = \mathbb{R}^2$ with the norm $\|\cdot\|_p$, and take $x = (1, 0)$ and $y = (0, 1)$. The parallelogram law holds (for this $x$ and $y$) iff $2 (1 + 1)^{2/p} = 2 \cdot 1^2 + 2 \cdot 1^2$, i.e., iff $2^{2/p} = 2$, which requires $p = 2$. Thus $\ell_2$ is the only inner product space in the $\ell_p$ family of normed spaces. (A numerical check of this example appears at the end of this section.)

Example. The space of measurable functions on $[a, b]$ with inner product
$$\langle f, g \rangle = \int_a^b w(t) f(t) g^*(t) \, dt,$$
where $w(t) > 0$, $\forall t$, is some (real) weighting function. Choosing $w = 1$ yields $\mathcal{L}_2[a, b]$.

Hilbert space

Definition. A complete inner product space is called a Hilbert space. In other words, a Hilbert space is a Banach space along with an inner product that induces its norm.

The addition of the inner product opens many analytical doors, as we shall see. The concept "complete" is appropriate here since any inner product space is a normed space.

All of the preceding examples of inner product spaces were complete vector spaces (under the induced norm), hence Hilbert spaces.

Example. The following is an inner product space, but not a Hilbert space, since it is incomplete:
$$R_2[a, b] = \left\{ f : [a, b] \to \mathbb{R} \; : \; \text{the Riemann integral } \int_a^b f^2(t) \, dt \text{ is finite} \right\},$$
with inner product (easily verified to satisfy the axioms) $\langle f, g \rangle = \int_a^b f(t) g(t) \, dt$.

Minimum norm optimization problems

Section 3.3 is called "the projection theorem," and it is about a certain type of minimum norm problem. Before focusing on that specific minimum norm problem, we consider the broad family of such problems.
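As promised, here is a numerical check of the $\ell_p$ example above (an added illustration, assuming NumPy): with $x = (1, 0)$ and $y = (0, 1)$, the two sides of the parallelogram law agree only at $p = 2$.

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

for p in [1.0, 1.5, 2.0, 3.0, np.inf]:
    lp = lambda v: np.linalg.norm(v, ord=p)   # ||.||_p
    lhs = lp(x + y)**2 + lp(x - y)**2         # = 2 * 2^(2/p) for finite p
    rhs = 2 * lp(x)**2 + 2 * lp(y)**2         # = 2 + 2 = 4
    verdict = "holds" if np.isclose(lhs, rhs) else "fails"
    print(f"p = {p}: lhs = {lhs:.4f}, rhs = {rhs:.4f} -> parallelogram law {verdict}")
```

Running this prints a failing check for every $p \neq 2$, matching the conclusion that $\ell_2$ is the only inner product space in the $\ell_p$ family.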