The Gram-Schmidt Procedure, Orthogonal Complements, and Orthogonal Projections


1 Orthogonal Vectors and Gram-Schmidt

In this section we develop the standard algorithm for producing orthonormal sets of vectors and explore some related matters. We present the results in a general real inner product space $V$ rather than just in $\mathbb{R}^n$. We will make use of this level of generality later on when we discuss conjugate direction methods and the related conjugate gradient methods for optimization. There, once again, we will meet the Gram-Schmidt process.

We begin by recalling that a set of non-zero vectors $\{v_1, \ldots, v_k\}$ is called an orthogonal set provided that, for any indices $i, j$ with $i \neq j$, the inner products $\langle v_i, v_j \rangle = 0$. It is called an orthonormal set provided that, in addition, $\langle v_i, v_i \rangle = \|v_i\|^2 = 1$.

It should be clear that any orthogonal set of vectors must be a linearly independent set since, if $\alpha_1 v_1 + \cdots + \alpha_k v_k = 0$ then, for any $i = 1, \ldots, k$, taking the inner product of the sum with $v_i$, and using linearity of the inner product and the orthogonality of the vectors,
\[
\langle v_i, \alpha_1 v_1 + \cdots + \alpha_k v_k \rangle = \alpha_i \langle v_i, v_i \rangle = 0 .
\]
But since $\langle v_i, v_i \rangle \neq 0$ we must have $\alpha_i = 0$. This means, in particular, that in any $n$-dimensional space any set of $n$ orthogonal vectors forms a basis.

The Gram-Schmidt Orthogonalization Process is a constructive method, valid in any finite-dimensional inner product space, which replaces any basis $U = \{u_1, u_2, \ldots, u_n\}$ with an orthonormal basis $V = \{v_1, v_2, \ldots, v_n\}$. Moreover, the replacement is made in such a way that for all $k = 1, 2, \ldots, n$, the subspace spanned by the first $k$ vectors $\{u_1, \ldots, u_k\}$ and that spanned by the new vectors $\{v_1, \ldots, v_k\}$ are the same.

To do this we proceed inductively. First observe that $u_1 \neq 0$ since $U$ is a linearly independent set. We take $v_1 = u_1 / \|u_1\|$. Suppose now that $v_1, \ldots, v_k$ have been chosen so that they form an orthonormal set and so that each $v_j$, $j = 1, \ldots, k$, is a linear combination of the vectors $u_1, \ldots, u_k$. We write
\[
w = u_{k+1} - (\alpha_1 v_1 + \cdots + \alpha_k v_k) ,
\]
where the values of the scalars $\alpha_1, \ldots, \alpha_k$ are still to be determined. Since
\[
\langle w, v_j \rangle = \langle u_{k+1} - (\alpha_1 v_1 + \cdots + \alpha_k v_k), v_j \rangle = \langle u_{k+1}, v_j \rangle - \alpha_j , \quad \text{for } j = 1, \ldots, k ,
\]
it follows that if we choose $\alpha_j = \langle u_{k+1}, v_j \rangle$ then $\langle w, v_j \rangle = 0$ for $j = 1, \ldots, k$. Since, moreover, $w$ is a linear combination of $u_{k+1}$ and $v_1, \ldots, v_k$, it is also a linear combination of $u_{k+1}$ and $u_1, \ldots, u_k$. Finally, the vector $w \neq 0$ since $u_1, \ldots, u_k, u_{k+1}$ are linearly independent and the coefficient of $u_{k+1}$ in the expression for $w$ is not zero. We may now define $v_{k+1} = w / \|w\|$. The set $\{v_1, \ldots, v_k, v_{k+1}\}$ is certainly an orthonormal set with the required properties and the proof by induction is complete.

We can summarize the procedure by listing a series of steps. It is really irrelevant whether we normalize at each step; we do not do it here, preferring to do so, if necessary, at the end of the procedure.

The Gram-Schmidt Procedure

Step 1: $v_1 = u_1$. Compute $\|v_1\|$.

Step 2: $v_2 = u_2 - \dfrac{\langle u_2, v_1 \rangle}{\|v_1\|^2}\, v_1$. Compute $\|v_2\|$.

Step 3: $v_3 = u_3 - \dfrac{\langle u_3, v_1 \rangle}{\|v_1\|^2}\, v_1 - \dfrac{\langle u_3, v_2 \rangle}{\|v_2\|^2}\, v_2$. Compute $\|v_3\|$.

$\vdots$

Step k: $v_k = u_k - \displaystyle\sum_{i=1}^{k-1} \frac{\langle u_k, v_i \rangle}{\|v_i\|^2}\, v_i$. Compute $\|v_k\|$.

$\vdots$
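Since the steps above amount to a simple loop, a short computational sketch may help fix the procedure. The following is an illustration only, not part of the original notes: a hypothetical helper `gram_schmidt` that carries out the un-normalized steps on column vectors in $\mathbb{R}^n$, using the standard dot product in place of a general inner product.

```python
import numpy as np

def gram_schmidt(U, normalize=False):
    """Apply the un-normalized Gram-Schmidt steps to the columns of U.

    Implements v_k = u_k - sum_{i<k} <u_k, v_i>/||v_i||^2 * v_i.
    If normalize=True, each v_k is divided by its norm at the end.
    """
    U = np.asarray(U, dtype=float)
    V = []
    for k in range(U.shape[1]):
        v = U[:, k].copy()
        for w in V:
            # Subtract the component of u_k along each already-computed v_i.
            v -= (np.dot(U[:, k], w) / np.dot(w, w)) * w
        V.append(v)
    Q = np.column_stack(V)
    if normalize:
        Q = Q / np.linalg.norm(Q, axis=0)
    return Q
```

In exact arithmetic this reproduces the steps above; in floating point, the "modified" variant, which projects against the running vector $v$ rather than the original $u_k$, is usually preferred for numerical stability.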
2 Examples

Let us give some examples.

Example 2.1 Let
\[
U = \left\{ \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix},\ \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix},\ \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} \right\} = \{u_1, u_2, u_3\} .
\]
Then $v_1 = (1, -1, 1)^\top$ and $\|v_1\|^2 = 3$. Next, we compute $v_2$:
\[
v_2 = u_2 - \frac{\langle u_2, v_1 \rangle}{\|v_1\|^2}\, v_1
    = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} - \frac{1}{3} \left\langle \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} \right\rangle \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}
    = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} - \frac{2}{3} \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}
    = \begin{pmatrix} 1/3 \\ 2/3 \\ 1/3 \end{pmatrix}
\quad \text{and} \quad \|v_2\|^2 = \frac{2}{3} .
\]
Finally,
\[
v_3 = u_3 - \frac{\langle u_3, v_1 \rangle}{\|v_1\|^2}\, v_1 - \frac{\langle u_3, v_2 \rangle}{\|v_2\|^2}\, v_2
    = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} - \frac{1}{3} \left\langle \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} \right\rangle \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} - \frac{3}{2} \left\langle \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 1/3 \\ 2/3 \\ 1/3 \end{pmatrix} \right\rangle \begin{pmatrix} 1/3 \\ 2/3 \\ 1/3 \end{pmatrix}
\]
\[
    = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} - \frac{2}{3} \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} - \frac{5}{6} \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}
    = \begin{pmatrix} -1/2 \\ 0 \\ 1/2 \end{pmatrix}
\quad \text{and} \quad \|v_3\|^2 = \frac{1}{2} .
\]
The normalized set is
\[
\hat{v}_1 = \begin{pmatrix} 1/\sqrt{3} \\ -1/\sqrt{3} \\ 1/\sqrt{3} \end{pmatrix}, \quad
\hat{v}_2 = \begin{pmatrix} 1/\sqrt{6} \\ 2/\sqrt{6} \\ 1/\sqrt{6} \end{pmatrix}, \quad
\hat{v}_3 = \begin{pmatrix} -1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{pmatrix} .
\]

In a more geometric vein, we consider the next example.

Example 2.2 Let $H$ be the plane in $\mathbb{R}^3$ spanned by the vectors $u_1 = (1, 2, 2)^\top$ and $u_2 = (-1, 0, 2)^\top$. These vectors are clearly linearly independent and so form a basis for the plane. We wish to find an orthonormal basis for the plane and extend it to an orthonormal basis for all of $\mathbb{R}^3$. We add one linearly independent vector to the original set of two to form a basis for all of $\mathbb{R}^3$, namely the vector $u_3 = (0, 0, 1)^\top$. Then the set of vectors $\{u_1, u_2, u_3\}$ is a linearly independent set in $\mathbb{R}^3$ and so forms a basis for the entire space. If one has any doubt about the linear independence of this set, just compute $\det(\mathrm{col}\,[u_1, u_2, u_3]) = 2 \neq 0$.

Now, we could have orthogonalized the set consisting of the two given vectors and then added a third, but since the Gram-Schmidt procedure preserves the span at each stage, it is simpler to add the additional linearly independent vector now. The process then proceeds as usual:
\[
v_1 = u_1 \quad \text{and} \quad \|v_1\|^2 = 1^2 + 2^2 + 2^2 = 9 ,
\]
\[
v_2 = u_2 - \frac{\langle u_2, v_1 \rangle}{9}\, v_1
    = \begin{pmatrix} -1 \\ 0 \\ 2 \end{pmatrix} - \frac{1}{9} \left\langle \begin{pmatrix} -1 \\ 0 \\ 2 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} \right\rangle \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}
    = \begin{pmatrix} -1 \\ 0 \\ 2 \end{pmatrix} - \frac{3}{9} \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}
    = \begin{pmatrix} -4/3 \\ -2/3 \\ 4/3 \end{pmatrix} .
\]
Note that $\|v_2\|^2 = 36/9 = 4$. Finally,
\[
v_3 = u_3 - \frac{\langle u_3, v_1 \rangle}{9}\, v_1 - \frac{\langle u_3, v_2 \rangle}{4}\, v_2
    = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} - \frac{1}{9} \left\langle \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} \right\rangle \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} - \frac{1}{4} \left\langle \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} -4/3 \\ -2/3 \\ 4/3 \end{pmatrix} \right\rangle \begin{pmatrix} -4/3 \\ -2/3 \\ 4/3 \end{pmatrix}
\]
\[
    = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} - \frac{2}{9} \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} - \frac{1}{3} \begin{pmatrix} -4/3 \\ -2/3 \\ 4/3 \end{pmatrix}
    = \begin{pmatrix} 2/9 \\ -2/9 \\ 1/9 \end{pmatrix} .
\]
Now $v_1$ and $v_2$ are an orthogonal basis for the plane $H$, and, together with $v_3$, form an orthogonal basis for all of $\mathbb{R}^3$. In order to get the orthonormal basis, we merely divide each by its norm. Since, as we have seen, $\|v_1\| = 3$ and $\|v_2\| = 2$, we need only compute $\|v_3\| = \sqrt{4/81 + 4/81 + 1/81} = \sqrt{1/9} = 1/3$. Hence the vectors of the required orthonormal basis are
\[
\hat{v}_1 = \begin{pmatrix} 1/3 \\ 2/3 \\ 2/3 \end{pmatrix}, \quad
\hat{v}_2 = \begin{pmatrix} -2/3 \\ -1/3 \\ 2/3 \end{pmatrix}, \quad \text{and} \quad
\hat{v}_3 = \begin{pmatrix} 2/3 \\ -2/3 \\ 1/3 \end{pmatrix} .
\]
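For a quick numerical sanity check of Example 2.2 (an illustration, not part of the original notes), NumPy can confirm that the vectors $v_1, v_2, v_3$ found above are mutually orthogonal and have the stated norms:

```python
import numpy as np

# The orthogonal (not yet normalized) vectors produced in Example 2.2.
v1 = np.array([1.0, 2.0, 2.0])
v2 = np.array([-4/3, -2/3, 4/3])
v3 = np.array([2/9, -2/9, 1/9])

# Pairwise inner products should all be zero (up to rounding).
for a, b in [(v1, v2), (v1, v3), (v2, v3)]:
    print(np.dot(a, b))

# Norms should be 3, 2, and 1/3.
print([np.linalg.norm(v) for v in (v1, v2, v3)])
```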
As another example, we leave the vector space $\mathbb{R}^n$.

Example 2.3 Here we look at the space of polynomials of degree at most 3, defined on the interval $[-1, 1]$ and having real coefficients. This is the vector space we denote by $P_3([-1, 1])$. We take, as a basis, the monomials $\{1, t, t^2, t^3\}$. These polynomials clearly span the vector space and are linearly independent since, if $\alpha_0 \cdot 1 + \alpha_1 t + \alpha_2 t^2 + \alpha_3 t^3 = 0$ for all $t \in [-1, 1]$, then all the $\alpha_i = 0$, because such a polynomial, if not the zero polynomial, can have at most three real roots according to the Fundamental Theorem of Algebra.

In this vector space we introduce the form
\[
\langle p_1, p_2 \rangle = \int_{-1}^{1} p_1(t)\, p_2(t)\, dt .
\]
We claim that this form is an inner product on $P_3([-1, 1])$. To verify that the claim is true, we must show that the form is a positive definite, symmetric, bilinear form.

First, since $p_1(t)\, p_2(t) = p_2(t)\, p_1(t)$, the form is clearly symmetric. Moreover, since $p^2(t) \geq 0$ for any $p \in P_3([-1, 1])$, we certainly know that
\[
\int_{-1}^{1} p^2(t)\, dt \geq 0 ,
\]
with equality if and only if $p(t) \equiv 0$ on $[-1, 1]$. So the form is positive definite. Since we already know that the form is symmetric, it suffices to show that the form is linear in the first argument.
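To see the Gram-Schmidt steps at work with this inner product, here is a small symbolic sketch (an illustration, not part of the original notes) that applies the un-normalized procedure to the monomial basis $\{1, t, t^2, t^3\}$, using the integral form above in place of the dot product:

```python
import sympy as sp

t = sp.symbols('t')

def ip(p, q):
    # The form <p, q> = integral from -1 to 1 of p(t) q(t) dt, from Example 2.3.
    return sp.integrate(p * q, (t, -1, 1))

monomials = [sp.Integer(1), t, t**2, t**3]

orthogonal = []
for u in monomials:
    v = u
    for w in orthogonal:
        # Subtract the component of u along each previously computed polynomial.
        v -= ip(u, w) / ip(w, w) * w
    orthogonal.append(sp.expand(v))

print(orthogonal)   # [1, t, t**2 - 1/3, t**3 - 3*t/5]
```

Up to scaling, the resulting polynomials are the first four Legendre polynomials, which is what one expects from orthogonalizing the monomials with respect to this inner product.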