Elements of Linear and Real Analysis. Stephen Semmes, Rice University, Houston, Texas. arXiv:math/0108030v5 [math.CA], 18 Sep 2001.

Total Pages: 16

File Type: PDF, Size: 1020 KB

Elements of Linear and Real Analysis

Stephen Semmes
Rice University
Houston, Texas

arXiv:math/0108030v5 [math.CA] 18 Sep 2001

Preface

This book deals with some basic themes in mathematical analysis along the lines of classical norms on functions and sequences, general normed vector spaces, inner product spaces, linear operators, some maximal and square-function operators, interpolation of operators, and quasisymmetric mappings between metric spaces. Aspects of the broad area of harmonic analysis are entailed in particular, involving famous work of M. Riesz, Hardy, Littlewood, Paley, Calderón, and Zygmund.

However, instead of working with arbitrary continuous or integrable functions, we shall often be ready to use only step functions on an interval, i.e., functions which are piecewise-constant. Similarly, instead of infinite-dimensional Hilbert or Banach spaces, we shall frequently restrict our attention to finite-dimensional inner product or normed vector spaces. We shall, however, be interested in quantitative matters.

We do not attempt to be exhaustive in any way, and there are many related and very interesting subjects that are not addressed. The bibliography lists a number of books and articles with further information.

The formal prerequisites for this book are quite limited. Much of what we do is connected to the notion of integration, but for step functions ordinary integrals reduce to finite sums. A sufficient background should be provided by standard linear algebra of real and complex finite-dimensional vector spaces and some knowledge of beginning analysis, as in the first few chapters of Rudin's celebrated Principles of Mathematical Analysis [Rud1]. This is not to say that the present monograph would necessarily be easy to read with this background, as the types of issues considered may be unfamiliar. On the other hand, it is hoped that this monograph can be helpful to readers with a variety of perspectives.

Contents

Preface
1 Notation and conventions
2 Dyadic intervals
  2.1 The unit interval and dyadic subintervals
  2.2 Functions on the unit interval
  2.3 Haar functions
  2.4 Binary sequences
3 Convexity and some basic inequalities
  3.1 Convex functions
  3.2 Jensen's inequality
  3.3 Hölder's inequality
  3.4 Minkowski's inequality
  3.5 p < 1
4 Normed vector spaces
  4.1 Definitions and basic properties
  4.2 Dual spaces and norms
  4.3 Second duals
  4.4 Linear transformations and norms
  4.5 Linear transformations and duals
  4.6 Inner product spaces
  4.7 Inner product spaces, continued
  4.8 Separation of convex sets
  4.9 Some variations
5 Strict convexity
  5.1 Functions of one real variable
  5.2 The unit ball in a normed vector space
  5.3 Linear functionals
  5.4 Uniqueness of points of minimal distance
  5.5 Clarkson's inequalities
6 Spectral theory
  6.1 The spectrum and spectral radius
  6.2 Spectral radius and norms
  6.3 Spectral radius and norms, 2
  6.4 Inner product spaces
  6.5 The C*-identity
  6.6 Projections
  6.7 Remarks about diagonalizable operators
  6.8 Commuting families of operators
7 Linear operators between inner product spaces
  7.1 Preliminary remarks
  7.2 Schmidt decompositions
  7.3 The Hilbert-Schmidt norm
  7.4 A numerical feature
  7.5 Numerical range
8 Subspaces and quotient spaces
  8.1 Linear algebra
  8.2 Quotient spaces and norms
  8.3 Mappings between vector spaces
9 Variation seminorms
  9.1 Basic definitions
  9.2 The p = 2 and n = 1, p = 1 cases
  9.3 Minimization
  9.4 Truncations
10 Groups
  10.1 General notions
  10.2 Some operators on F(G)
  10.3 Commutative groups
  10.4 Special cases
  10.5 Groups of matrices
11 Some special families of functions
  11.1 Rademacher functions
  11.2 Linear functions on spheres
  11.3 Linear functions, continued
  11.4 Lacunary sums, p = 4
12 Maximal functions
  12.1 Definitions and basic properties
  12.2 The size of the maximal function
  12.3 Some variations
  12.4 More on the size of the maximal function
13 Square functions
  13.1 S-functions
  13.2 Estimates, 1
  13.3 Estimates, 2
  13.4 Duality
  13.5 Duality, continued
  13.6 Some inequalities
  13.7 Another inequality for p = 1
  13.8 Variants
  13.9 Some remarks concerning p = 4
14 Interpolation of operators
  14.1 The basic result
  14.2 A digression about convex functions
  14.3 A place where the maximum is attained
  14.4 The rest of the argument
  14.5 A reformulation
  14.6 A generalization
15 Quasisymmetric mappings
  15.1 Basic notions
  15.2 Examples
  15.3 Cantor sets
  15.4 Bounds in terms of C t^a
Bibliography

Chapter 1. Notation and conventions

If a and b are real numbers with a ≤ b, then the following are the intervals in the real line R with endpoints a and b:

  [a, b] = {x ∈ R : a ≤ x ≤ b};
  (a, b) = {x ∈ R : a < x < b};
  [a, b) = {x ∈ R : a ≤ x < b};
  (a, b] = {x ∈ R : a < x ≤ b}.

All but the first are the empty set when a = b, while [a, b] then consists of the one point a = b. In general, the first of these intervals is called the closed interval with endpoints a and b, and the second is the open interval with endpoints a and b. The third and fourth are half-open, half-closed intervals, with the third being left-closed and right-open, and the fourth left-open and right-closed. The length of each of these intervals is defined to be b - a. If an interval is denoted I, we may write |I| for the length of I.

For the record, see Chapter 1 in [Rud1] concerning detailed properties of the real numbers (as well as the complex numbers C). In particular, let us recall the "least upper bound" or "completeness" property, to the effect that a nonempty set E of real numbers which has an upper bound has a least upper bound. The least upper bound is also called the supremum of E, and is denoted sup E.
Similarly, if F is a nonempty set of real numbers which has a lower bound, then F has a greatest lower bound, or infimum, which is denoted inf F. We shall sometimes use the extended real numbers (as in [Rud1]), with ∞ and -∞ added to the real numbers, and write sup E = ∞ and inf F = -∞ if E and F are nonempty sets of real numbers such that E does not have an upper bound and F does not have a lower bound.

If A is a subset of some specified set X (like the real line), we let 1_A(x) denote the indicator function of A on X (sometimes called the characteristic function associated to A, although in other contexts this name can be used for something quite different). This is the function which is equal to 1 when x ∈ A, and is equal to 0 when x ∈ X \ A.

Definition 1.1 (Step functions). A function on the real line, or on an interval in the real line, is called a step function if it is a finite linear combination of indicator functions of intervals. This is equivalent to saying that there is a partition of the domain into intervals on which the function is constant.

In these notes, one is normally welcome to assume that a given function on the real line, or on an interval in the real line, is a step function. In fact, one is normally welcome to assume that a given function is a dyadic step function, as defined in the next chapter. For step functions, it is very easy to define the integral over an interval in the domain of definition, by reducing to linear combinations of lengths of intervals.

An exception to this convention occurs when we consider convex or monotone functions, which we do not necessarily wish to ask to be step functions. When dealing with integrals, typically the function being integrated can be taken to be a step function. (This function might be the composition of a non-step function with a step function, which is still a step function.)

Chapter 2. Dyadic intervals

2.1 The unit interval and dyadic subintervals

Normally, a reference to "the unit interval" might suggest the interval [0, 1] in the real line. It will be convenient to use [0, 1) instead, for minor technical reasons (and one could easily work around this anyway).

Definition 2.1 (Dyadic intervals in [0, 1)). The dyadic subintervals of the unit interval [0, 1) are the intervals of the form [j 2^{-k}, (j + 1) 2^{-k}), where j and k are nonnegative integers, and j + 1 ≤ 2^k. (Thus the length of such an interval is of the form 2^{-k}, where k is a nonnegative integer.)

In general one can define the dyadic intervals in R to be the intervals of the same form, except that j and k are allowed to be arbitrary integers. The half-open, half-closed condition leads to nice properties in terms of disjointness, as in the following lemmas. (With closed intervals one could get disjointness of interiors in similar circumstances. This would be fine in terms of integrals, measures, etc.)

Lemma 2.2 (Partitions of [0, 1)). For each nonnegative integer k, [0, 1) is the union of the dyadic subintervals of itself of length 2^{-k}, and these intervals are pairwise disjoint.

Lemma 2.3 (Comparing pairs of intervals). If J_1 and J_2 are two dyadic subintervals of [0, 1), then either J_1 ⊆ J_2, or J_2 ⊆ J_1, or J_1 ∩ J_2 = ∅. (The first two possibilities are not mutually exclusive, as one could have J_1 = J_2.)
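Since the text invites the reader to take every function to be a dyadic step function, integration reduces to a finite sum of values times interval lengths. The following Python sketch (the helper names are hypothetical, not from the text) checks the partition property of Lemma 2.2 and computes such an integral exactly:

```python
from fractions import Fraction

def dyadic_intervals(k):
    """Dyadic subintervals of [0, 1) of length 2^-k, as (left, right) pairs."""
    return [(Fraction(j, 2**k), Fraction(j + 1, 2**k)) for j in range(2**k)]

def integrate_step(values, k):
    """Integral over [0, 1) of the step function equal to values[j] on
    [j 2^-k, (j+1) 2^-k): a finite sum of value times interval length."""
    return sum(v * Fraction(1, 2**k) for v in values)

# Lemma 2.2: the generation-k intervals tile [0, 1) without overlap.
ivs = dyadic_intervals(3)
assert all(ivs[j][1] == ivs[j + 1][0] for j in range(len(ivs) - 1))
assert ivs[0][0] == 0 and ivs[-1][1] == 1

# The step function equal to j/8 on the j-th interval has integral 7/16.
print(integrate_step([Fraction(j, 8) for j in range(8)], 3))  # 7/16
```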
Recommended publications
  • Orthogonal Bases and the Gram-Schmidt Algorithm
    In Section 4.8 we discussed the problem of finding the orthogonal projection p of the vector b into the subspace V of R^m. If v_1, v_2, ..., v_n form a basis for V, and the m x n matrix A has these basis vectors as its column vectors, then the orthogonal projection p is given by p = Ax, where x is the (unique) solution of the normal system A^T A x = A^T b. The formula for p takes an especially simple and attractive form when the basis vectors v_1, ..., v_n are mutually orthogonal. DEFINITION (Orthogonal Basis): An orthogonal basis for the subspace V of R^n is a basis consisting of vectors v_1, ..., v_n that are mutually orthogonal, so that v_i . v_j = 0 if i ≠ j. If in addition these basis vectors are unit vectors, so that v_i . v_i = 1 for i = 1, 2, ..., n, then the orthogonal basis is called an orthonormal basis. Example 1: The vectors v_1 = (1, 1, 0), v_2 = (1, -1, 2), v_3 = (-1, 1, 1) form an orthogonal basis for R^3. We can "normalize" this orthogonal basis by dividing each basis vector by its length: if w_i = v_i/|v_i| (i = 1, 2, 3), then the vectors w_1 = (1/√2)(1, 1, 0), w_2 = (1/√6)(1, -1, 2), w_3 = (1/√3)(-1, 1, 1) form an orthonormal basis for R^3. Now suppose that the column vectors v_1, v_2, ..., v_n of the m x n matrix A form an orthogonal basis for the subspace V of R^m. Then A^T A is a diagonal matrix whose diagonal entries are v_1 . v_1, ..., v_n . v_n, since v_i . v_j = 0 for i ≠ j.
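    A quick numerical illustration of the excerpt's normal-equations formula (a sketch assuming NumPy; the columns echo two of the vectors from Example 1, while b is an arbitrary test vector):

```python
import numpy as np

# Two mutually orthogonal vectors from Example 1, as the columns of A.
A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Normal system A^T A x = A^T b; A^T A is diagonal because the columns
# are orthogonal, so each component of x is (b . v_i)/(v_i . v_i).
x = np.linalg.solve(A.T @ A, A.T @ b)
p = A @ x  # the orthogonal projection of b into V = span of the columns

print(np.round(A.T @ A, 12))        # diag(2, 6): the entries v_i . v_i
print(p)                            # the projection p
print(np.round(A.T @ (b - p), 12))  # residual b - p is orthogonal to V
```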
  • Math 2331 – Linear Algebra 6.2 Orthogonal Sets
    6.2 Orthogonal Sets. Math 2331, Linear Algebra. Jiwen He, Department of Mathematics, University of Houston. [email protected], math.uh.edu/∼jiwenhe/math2331. Topics: orthogonal sets (examples and theorem), orthogonal basis (examples and theorem), orthogonal projections, orthonormal sets, orthonormal matrices (examples and theorems). Orthogonal Sets: A set of vectors {u_1, u_2, ..., u_p} in R^n is called an orthogonal set if u_i . u_j = 0 whenever i ≠ j. Example: Is {(1, -1, 0), (1, 1, 0), (0, 0, 1)} an orthogonal set? Solution: Label the vectors u_1, u_2, and u_3 respectively. Then u_1 . u_2 = u_1 . u_3 = u_2 . u_3 = 0. Therefore, {u_1, u_2, u_3} is an orthogonal set. Theorem (4): Suppose S = {u_1, u_2, ..., u_p} is an orthogonal set of nonzero vectors in R^n and W = span{u_1, u_2, ..., u_p}. Then S is a linearly independent set and is therefore a basis for W. Partial proof: Suppose c_1 u_1 + c_2 u_2 + ... + c_p u_p = 0. Taking the dot product of both sides with u_1 gives (c_1 u_1) . u_1 + (c_2 u_2) . u_1 + ... + (c_p u_p) . u_1 = 0, that is, c_1 (u_1 . u_1) + c_2 (u_2 . u_1) + ... + c_p (u_p . u_1) = 0, so c_1 (u_1 . u_1) = 0. Since u_1 ≠ 0, u_1 . u_1 > 0, which means c_1 = 0. In a similar manner, c_2, ..., c_p can be shown to all be 0.
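    The orthogonality check in the example is a one-liner to verify numerically (a sketch assuming NumPy, not from the slides):

```python
import numpy as np
from itertools import combinations

# The vectors from the example, labeled u1, u2, u3.
us = [np.array([1, -1, 0]), np.array([1, 1, 0]), np.array([0, 0, 1])]

# Orthogonal set: u_i . u_j = 0 whenever i != j.
for (i, a), (j, b) in combinations(enumerate(us, start=1), 2):
    print(f"u{i} . u{j} = {a @ b}")  # each dot product is 0
```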
  • Clifford Algebras
    CHAPTER 2: Clifford algebras. 1. Exterior algebras. 1.1. Definition. For any vector space V over a field K, let T(V) = ⊕_{k ∈ Z} T^k(V) be the tensor algebra, with T^k(V) = V ⊗ ... ⊗ V the k-fold tensor product. The quotient of T(V) by the two-sided ideal I(V) generated by all v ⊗ w + w ⊗ v is the exterior algebra, denoted ∧(V). The product in ∧(V) is usually denoted α_1 ∧ α_2, although we will frequently omit the wedge sign and just write α_1 α_2. Since I(V) is a graded ideal, the exterior algebra inherits a grading ∧(V) = ⊕_{k ∈ Z} ∧^k(V), where ∧^k(V) is the image of T^k(V) under the quotient map. Clearly, ∧^0(V) = K and ∧^1(V) = V, so that we can think of V as a subspace of ∧(V). We may thus think of ∧(V) as the associative algebra linearly generated by V, subject to the relations vw + wv = 0. We will write |φ| = k if φ ∈ ∧^k(V). The exterior algebra is commutative (in the graded sense): that is, for φ_1 ∈ ∧^{k_1}(V) and φ_2 ∈ ∧^{k_2}(V), [φ_1, φ_2] := φ_1 φ_2 - (-1)^{k_1 k_2} φ_2 φ_1 = 0. If V has finite dimension, with basis e_1, ..., e_n, the space ∧^k(V) has basis e_I = e_{i_1} ... e_{i_k} for all ordered subsets I = {i_1, ..., i_k} of {1, ..., n}. (If k = 0, we put e_∅ = 1.) In particular, we see that dim ∧^k(V) = (n choose k), and dim ∧(V) = Σ_k (n choose k) = 2^n.
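    The dimension count at the end of the excerpt can be checked by brute-force enumeration of the basis elements e_I (a small sketch; the choice n = 4 is arbitrary):

```python
from itertools import combinations
from math import comb

n = 4  # a hypothetical dim V; any small n works

# Basis of the k-th graded piece: e_I over ordered subsets I of {1, ..., n}.
dims = [len(list(combinations(range(1, n + 1), k))) for k in range(n + 1)]

assert dims == [comb(n, k) for k in range(n + 1)]  # dim of k-th piece: n choose k
assert sum(dims) == 2**n                           # total dimension 2^n
print(dims, sum(dims))  # [1, 4, 6, 4, 1] 16
```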
  • Geometric (Clifford) Algebra and Its Applications
    Geometric (Clifford) algebra and its applications. Douglas Lundholm, F01, KTH. January 23, 2006 (corrected May 2, 2006). arXiv:math/0605280v1 [math.RA] 10 May 2006. Abstract: In this Master of Science Thesis I introduce geometric algebra both from the traditional geometric setting of vector spaces, and also from a more combinatorial view which simplifies common relations and operations. This view enables us to define Clifford algebras with scalars in arbitrary rings and provides new suggestions for an infinite-dimensional approach. Furthermore, I give a quick review of classic results regarding geometric algebras, such as their classification in terms of matrix algebras, the connection to orthogonal and Spin groups, and their representation theory. A number of lower-dimensional examples are worked out in a systematic way using so called norm functions, while general applications of representation theory include normed division algebras and vector fields on spheres. I also consider examples in relativistic physics, where reformulations in terms of geometric algebra give rise to both computational and conceptual simplifications. Contents: 1 Introduction; 2 Foundations (2.1 Geometric algebra G(V, q); 2.2 Combinatorial Clifford algebra Cl(X, R, r); 2.3 Standard operations; 2.4 Vector space geometry; 2.5 Linear functions; 2.6 Infinite-dimensional Clifford algebra); 3 Isomorphisms; 4 Groups (4.1 Group actions on G; 4.2 The Lipschitz group; 4.3 Properties of Pin and Spin groups; 4.4 Spinors); 5 A study of lower-dimensional algebras (5.1 G(R^1); 5.2 G(R^{0,1}) ≅ C, the complex numbers; 5.3 G(R^{0,0,1}); ...)
  • Orthogonal Bases
    Orthogonal bases. Recall: suppose that v_1, ..., v_n are nonzero and (pairwise) orthogonal. Then v_1, ..., v_n are independent. Definition 1: A basis v_1, ..., v_n of a vector space V is an orthogonal basis if the vectors are (pairwise) orthogonal. Example 2: Are the vectors (1, -1, 0), (1, 1, 0), (0, 0, 1) an orthogonal basis for R^3? Solution: Note that we do not need to check that the three vectors are independent; that follows from their orthogonality. Example 3: Suppose v_1, ..., v_n is an orthogonal basis of V, and that w is in V. Find c_1, ..., c_n such that w = c_1 v_1 + ... + c_n v_n. Solution: Take the dot product of v_1 with both sides. If v_1, ..., v_n is an orthogonal basis of V, and w is in V, then w = c_1 v_1 + ... + c_n v_n with c_j = (w . v_j)/(v_j . v_j). Example 4: Express (3, 7, 4) in terms of the basis (1, -1, 0), (1, 1, 0), (0, 0, 1). Solution: c_1 = (w . v_1)/(v_1 . v_1) = -4/2 = -2, c_2 = 10/2 = 5, c_3 = 4, so w = -2 v_1 + 5 v_2 + 4 v_3. Definition 5: A basis v_1, ..., v_n of a vector space V is an orthonormal basis if the vectors are orthogonal and have length 1. If v_1, ..., v_n is an orthonormal basis of V, and w is in V, then w = c_1 v_1 + ... + c_n v_n with c_j = v_j . w. Example 6: Is the basis (1, -1, 0), (1, 1, 0), (0, 0, 1) orthonormal? If not, normalize the vectors to produce an orthonormal basis. Solution: It is not, since the first two vectors have length √2; dividing each vector by its length gives an orthonormal basis. Orthogonal projections. Definition 7: The orthogonal projection of vector x onto vector y is x̂ = ((x . y)/(y . y)) y.
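    A sketch carrying out Examples 3, 4, and 6 numerically (assuming NumPy; the values match the worked solutions above):

```python
import numpy as np

# The orthogonal basis of Examples 2 and 4.
v = [np.array([1.0, -1.0, 0.0]),
     np.array([1.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]
w = np.array([3.0, 7.0, 4.0])

# Example 3's formula: c_j = (w . v_j)/(v_j . v_j).
c = [(w @ vj) / (vj @ vj) for vj in v]
print(c)  # [-2.0, 5.0, 4.0]
assert np.allclose(sum(cj * vj for cj, vj in zip(c, v)), w)

# Example 6: normalizing each v_j yields an orthonormal basis, after
# which the coefficients become plain dot products c_j = u_j . w.
u = [vj / np.linalg.norm(vj) for vj in v]
print([uj @ w for uj in u])
```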
  • Basics of Euclidean Geometry
    6. Basics of Euclidean Geometry. "Rien n'est beau que le vrai." ("Nothing is beautiful but the true.") Hermann Minkowski. 6.1 Inner Products, Euclidean Spaces. In affine geometry it is possible to deal with ratios of vectors and barycenters of points, but there is no way to express the notion of length of a line segment or to talk about orthogonality of vectors. A Euclidean structure allows us to deal with metric notions such as orthogonality and length (or distance). This chapter and the next two cover the bare bones of Euclidean geometry. One of our main goals is to give the basic properties of the transformations that preserve the Euclidean structure, rotations and reflections, since they play an important role in practice. As affine geometry is the study of properties invariant under bijective affine maps and projective geometry is the study of properties invariant under bijective projective maps, Euclidean geometry is the study of properties invariant under certain affine maps called rigid motions. Rigid motions are the maps that preserve the distance between points. Such maps are, in fact, affine and bijective (at least in the finite-dimensional case; see Lemma 7.4.3). They form a group Is(n) of affine maps whose corresponding linear maps form the group O(n) of orthogonal transformations. The subgroup SE(n) of Is(n) corresponds to the orientation-preserving rigid motions, and there is a corresponding subgroup SO(n) of O(n), the group of rotations.
  • 17. Inner Product Spaces
    17. Inner product spaces. Definition 17.1: Let V be a real vector space. An inner product on V is a function ⟨ , ⟩ : V × V → R which is: symmetric, that is, ⟨u, v⟩ = ⟨v, u⟩; bilinear, that is, linear in both factors: ⟨λu, v⟩ = λ⟨u, v⟩ for all scalars λ, and ⟨u_1 + u_2, v⟩ = ⟨u_1, v⟩ + ⟨u_2, v⟩ for all vectors u_1, u_2 and v; positive, that is, ⟨v, v⟩ ≥ 0; non-degenerate, that is, if ⟨u, v⟩ = 0 for every v ∈ V then u = 0. We say that V is a real inner product space. The associated quadratic form is the function Q : V → R defined by Q(v) = ⟨v, v⟩. Example 17.2: Let A ∈ M_{n,n}(R) be a real matrix. We can define a function ⟨ , ⟩ : R^n × R^n → R by the rule ⟨u, v⟩ = u^T A v. The basic rules of matrix multiplication imply that this function is bilinear. Note that the entries of A are given by a_{ij} = e_i^T A e_j = ⟨e_i, e_j⟩. In particular, the function is symmetric if and only if A is symmetric, that is, A^T = A. It is non-degenerate if and only if A is invertible, that is, A has rank n. Positivity is a little harder to characterise. Perhaps the canonical example is to take A = I_n. In this case, if u = (r_1, r_2, ..., r_n)^T and v = (s_1, s_2, ..., s_n)^T, then u^T I_n v = Σ r_i s_i. Note that if we take u = v then we get Σ r_i^2. The square root of this is the Euclidean distance.
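    For symmetric A, one standard test (not spelled out in the excerpt) is that ⟨u, v⟩ = u^T A v is an inner product exactly when all eigenvalues of A are strictly positive. A sketch assuming NumPy, with a hypothetical helper name:

```python
import numpy as np

def defines_inner_product(A, tol=1e-12):
    """Test whether <u, v> = u^T A v is an inner product on R^n:
    A must be symmetric, and (for symmetric A, positivity together with
    non-degeneracy) all eigenvalues of A must be strictly positive."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):
        return False
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

print(defines_inner_product(np.eye(3)))         # True: the dot product
print(defines_inner_product([[0, 1], [1, 0]]))  # False: u^T A u can be negative
print(defines_inner_product([[1, 2], [3, 4]]))  # False: not symmetric
```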
  • 8. Orthogonality
    8. Orthogonality. In Section 5.3 we introduced the dot product in R^n and extended the basic geometric notions of length and distance. A set {f_1, f_2, ..., f_m} of nonzero vectors in R^n was called an orthogonal set if f_i . f_j = 0 for all i ≠ j, and it was proved that every orthogonal set is independent. In particular, it was observed that the expansion of a vector as a linear combination of orthogonal basis vectors is easy to obtain because formulas exist for the coefficients. Hence the orthogonal bases are the "nice" bases, and much of this chapter is devoted to extending results about bases to orthogonal bases. This leads to some very powerful methods and theorems. Our first task is to show that every subspace of R^n has an orthogonal basis. 8.1 Orthogonal Complements and Projections. If {v_1, ..., v_m} is linearly independent in a general vector space, and if v_{m+1} is not in span{v_1, ..., v_m}, then {v_1, ..., v_m, v_{m+1}} is independent (Lemma 6.4.1). Here is the analog for orthogonal sets in R^n. Lemma 8.1.1 (Orthogonal Lemma): Let {f_1, f_2, ..., f_m} be an orthogonal set in R^n. Given x in R^n, write f_{m+1} = x - ((x . f_1)/‖f_1‖^2) f_1 - ((x . f_2)/‖f_2‖^2) f_2 - ... - ((x . f_m)/‖f_m‖^2) f_m. Then: 1. f_{m+1} . f_k = 0 for k = 1, 2, ..., m. 2. If x is not in span{f_1, ..., f_m}, then f_{m+1} ≠ 0 and {f_1, ..., f_m, f_{m+1}} is an orthogonal set.
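    The Orthogonal Lemma's formula is easy to exercise directly (a sketch assuming NumPy; the vectors are illustrative, not from the book):

```python
import numpy as np

def next_orthogonal(fs, x):
    """f_{m+1} = x - sum over k of ((x . f_k)/||f_k||^2) f_k, per Lemma 8.1.1."""
    f_next = np.array(x, dtype=float)
    for f in fs:
        f_next -= ((x @ f) / (f @ f)) * f
    return f_next

f1 = np.array([1.0, -1.0, 0.0])
f2 = np.array([1.0, 1.0, 0.0])
x = np.array([3.0, 7.0, 4.0])

f3 = next_orthogonal([f1, f2], x)
print(f3)                # [0. 0. 4.]: nonzero, since x is outside span{f1, f2}
print(f3 @ f1, f3 @ f2)  # both 0, as part 1 of the lemma asserts
```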
  • Bilinear Forms Lecture Notes for MA1212
    Chapter 3. Bilinear forms. Lecture notes for MA1212. P. Karageorgis, [email protected]. Definition 3.1 (Bilinear form): A bilinear form on a real vector space V is a function f : V × V → R which assigns a number to each pair of elements of V in such a way that f is linear in each variable. A typical example of a bilinear form is the dot product on R^n. We shall usually write ⟨x, y⟩ instead of f(x, y) for simplicity, and we shall also identify each 1 × 1 matrix with its unique entry. Theorem 3.2 (Bilinear forms on R^n): Every bilinear form on R^n has the form ⟨x, y⟩ = x^T A y = Σ_{i,j} a_{ij} x_i y_j for some n × n matrix A, and we also have a_{ij} = ⟨e_i, e_j⟩ for all i, j. Definition 3.3 (Matrix of a bilinear form): Suppose that ⟨ , ⟩ is a bilinear form on V and let v_1, v_2, ..., v_n be a basis of V. The matrix of the form with respect to this basis is the matrix A whose entries are given by a_{ij} = ⟨v_i, v_j⟩ for all i, j. Theorem 3.4 (Change of basis): Suppose that ⟨ , ⟩ is a bilinear form on R^n and let A be its matrix with respect to the standard basis. Then the matrix of the form with respect to some other basis v_1, v_2, ..., v_n is given by B^T A B, where B is the matrix whose columns are the vectors v_1, v_2, ..., v_n.
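    A sketch checking Theorem 3.4 against Definition 3.3 numerically (assuming NumPy; the matrices are hypothetical examples, not from the notes):

```python
import numpy as np

# A bilinear form on R^2 via its matrix A in the standard basis.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# A new basis v1, v2 as the columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

A_new = B.T @ A @ B  # Theorem 3.4: the matrix of the form in the new basis

# Definition 3.3: the entries of the new matrix are <v_i, v_j> = v_i^T A v_j.
for i in range(2):
    for j in range(2):
        assert np.isclose(B[:, i] @ A @ B[:, j], A_new[i, j])
print(A_new)
```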
  • The Gram-Schmidt Procedure, Orthogonal Complements, and Orthogonal Projections
    The Gram-Schmidt Procedure, Orthogonal Complements, and Orthogonal Projections. 1. Orthogonal Vectors and Gram-Schmidt. In this section, we will develop the standard algorithm for producing orthonormal sets of vectors and explore some related matters. We present the results in a general, real, inner product space V rather than just in R^n. We will make use of this level of generality later on when we discuss the topic of conjugate direction methods and the related conjugate gradient methods for optimization. There, once again, we will meet the Gram-Schmidt process. We begin by recalling that a set of non-zero vectors {v_1, ..., v_k} is called an orthogonal set provided for any indices i, j with i ≠ j, the inner products ⟨v_i, v_j⟩ = 0. It is called an orthonormal set provided that, in addition, ⟨v_i, v_i⟩ = ‖v_i‖^2 = 1. It should be clear that any orthogonal set of vectors must be a linearly independent set since, if α_1 v_1 + ... + α_k v_k = 0, then, for any i = 1, ..., k, taking the inner product of the sum with v_i, and using linearity of the inner product and the orthogonality of the vectors, ⟨v_i, α_1 v_1 + ... + α_k v_k⟩ = α_i ⟨v_i, v_i⟩ = 0. But since ⟨v_i, v_i⟩ ≠ 0 we must have α_i = 0. This means, in particular, that in any n-dimensional space any set of n orthogonal vectors forms a basis. The Gram-Schmidt Orthogonalization Process is a constructive method, valid in any finite-dimensional inner product space, which will replace any basis U = {u_1, u_2, ..., u_n} with an orthonormal basis V = {v_1, v_2, ..., v_n}. Moreover, the replacement is made in such a way that for all k = 1, 2, ..., n, the subspace spanned by the first k vectors {u_1, ..., u_k} and that spanned by the new vectors {v_1, ..., v_k} are the same.
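    A minimal sketch of the process for the Euclidean inner product on R^n (assuming NumPy; the input basis is an arbitrary illustrative choice):

```python
import numpy as np

def gram_schmidt(us):
    """Replace a basis u_1, ..., u_n with an orthonormal basis v_1, ..., v_n,
    keeping span{u_1, ..., u_k} = span{v_1, ..., v_k} for every k."""
    vs = []
    for u in us:
        w = np.array(u, dtype=float)
        for v in vs:
            w -= (w @ v) * v  # remove the component along each earlier v
        vs.append(w / np.linalg.norm(w))
    return vs

us = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
vs = gram_schmidt(us)

gram = np.array([[vi @ vj for vj in vs] for vi in vs])
print(np.round(gram, 12))  # the identity matrix: the v_i are orthonormal
```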
  • Inner Product Spaces and Orthogonality
    Inner Product Spaces and Orthogonality. Weeks 13-14, Fall 2006. 1. Dot product of R^n. The inner product or dot product of R^n is a function ⟨ , ⟩ defined by ⟨u, v⟩ = a_1 b_1 + a_2 b_2 + ... + a_n b_n for u = [a_1, a_2, ..., a_n]^T, v = [b_1, b_2, ..., b_n]^T ∈ R^n. The inner product ⟨ , ⟩ satisfies the following properties: (1) Linearity: ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩. (2) Symmetric property: ⟨u, v⟩ = ⟨v, u⟩. (3) Positive definite property: for any u ∈ V, ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0. With the dot product we have geometric concepts such as the length of a vector, the angle between two vectors, orthogonality, etc. We shall push these concepts to abstract vector spaces so that geometric concepts can be applied to describe abstract vectors. 2. Inner product spaces. Definition 2.1: An inner product on a real vector space V is an assignment that to any two vectors u, v ∈ V associates a real number ⟨u, v⟩, satisfying the following properties: (1) Linearity: ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩. (2) Symmetric property: ⟨u, v⟩ = ⟨v, u⟩. (3) Positive definite property: for any u ∈ V, ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0. The vector space V with an inner product is called a (real) inner product space. Example 2.1: For x = [x_1, x_2]^T, y = [y_1, y_2]^T ∈ R^2, define ⟨x, y⟩ = 2 x_1 y_1 - x_1 y_2 - x_2 y_1 + 5 x_2 y_2. Then ⟨ , ⟩ is an inner product on R^2.
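    Example 2.1 can be written as ⟨x, y⟩ = x^T A y with A = [[2, -1], [-1, 5]]; positivity follows by completing the square, since ⟨x, x⟩ = (x_1 - x_2)^2 + x_1^2 + 4 x_2^2 > 0 for x ≠ 0 (this derivation is not in the excerpt). A sketch assuming NumPy:

```python
import numpy as np

# The form <x, y> = 2 x1 y1 - x1 y2 - x2 y1 + 5 x2 y2 is x^T A y with:
A = np.array([[2.0, -1.0],
              [-1.0, 5.0]])

# Symmetric, and positive definite: both eigenvalues are > 0, matching
# <x, x> = (x1 - x2)^2 + x1^2 + 4 x2^2 > 0 for x != 0.
print(np.allclose(A, A.T), np.linalg.eigvalsh(A))

# Spot-check the formula against the matrix expression on random inputs.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
lhs = 2*x[0]*y[0] - x[0]*y[1] - x[1]*y[0] + 5*x[1]*y[1]
assert np.isclose(lhs, x @ A @ y)
```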
  • Lecture 25: 6.3 Orthonormal Bases
    Lecture 25: 6.3 Orthonormal Bases. Wei-Ta Chu, 2008/12/24. Theorem 6.3.2: If S is an orthonormal basis for an n-dimensional inner product space, and if (u)_S = (u_1, u_2, ..., u_n) and (v)_S = (v_1, v_2, ..., v_n), then: ‖u‖ = (u_1^2 + u_2^2 + ... + u_n^2)^{1/2}; d(u, v) = ((u_1 - v_1)^2 + (u_2 - v_2)^2 + ... + (u_n - v_n)^2)^{1/2}; ⟨u, v⟩ = u_1 v_1 + u_2 v_2 + ... + u_n v_n. Remark: By working with orthonormal bases, the computation of general norms and inner products can be reduced to the computation of Euclidean norms and inner products of the coordinate vectors. Example: If R^3 has the Euclidean inner product, then the norm of the vector u = (1, 1, 1) is ‖u‖ = √3. However, if we let R^3 have the orthonormal basis S of the last example, then we know that the coordinate vector of u relative to S is (u)_S = (1, -1/5, 7/5), and the norm of u yields ‖u‖ = (1 + 1/25 + 49/25)^{1/2} = √3 as well. Coordinates relative to orthogonal bases: If S = {v_1, v_2, ..., v_n} is an orthogonal basis for a vector space V, then normalizing each of these vectors yields the orthonormal basis S' = {v_1/‖v_1‖, v_2/‖v_2‖, ..., v_n/‖v_n‖}. Thus, if u is any vector in V, it follows from Theorem 6.3.1 that u = ⟨u, v_1/‖v_1‖⟩ (v_1/‖v_1‖) + ⟨u, v_2/‖v_2‖⟩ (v_2/‖v_2‖) + ... + ⟨u, v_n/‖v_n‖⟩ (v_n/‖v_n‖), or u = (⟨u, v_1⟩/‖v_1‖^2) v_1 + (⟨u, v_2⟩/‖v_2‖^2) v_2 + ... + (⟨u, v_n⟩/‖v_n‖^2) v_n. The above equation expresses u as a linear combination of the vectors in the orthogonal basis S.
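    A numerical check of Theorem 6.3.2 (a sketch assuming NumPy; the orthonormal basis below is chosen for illustration and is not the basis S of the excerpt's earlier example, which is not reproduced here):

```python
import numpy as np

# An orthonormal basis of R^3 with the Euclidean inner product.
S = [np.array([0.6, 0.8, 0.0]),
     np.array([-0.8, 0.6, 0.0]),
     np.array([0.0, 0.0, 1.0])]
u = np.array([1.0, 1.0, 1.0])

# Coordinates relative to an orthonormal basis: c_i = <u, v_i>.
coords = np.array([u @ v for v in S])
print(coords)  # [ 1.4 -0.2  1. ]

# Theorem 6.3.2: the norm of u equals the Euclidean norm of (u)_S.
print(np.linalg.norm(u), np.linalg.norm(coords))  # both sqrt(3)
```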