Vectors in Function Spaces
Jim Lambers
MAT 606
Spring Semester 2015-16
Lecture 18 Notes

These notes correspond to Section 6.3 in the text.

Vectors in Function Spaces

We begin with some necessary terminology. A vector space V, also known as a linear vector space, is a set of objects, called vectors, together with two operations:

• Addition of two vectors in V, which must be commutative, associative, and have an identity element, which is the zero vector 0. Each vector v must have an additive inverse −v which, when added to v, yields the zero vector.

• Multiplication of a vector in V by a scalar, which is typically a real or complex number. The term "scalar" is used in this context, rather than "number", because the multiplication process "scales" a given vector by a factor indicated by a given number. Scalar multiplication must satisfy distributive laws, and have an identity element, 1, such that 1v = v for any vector v ∈ V.

Both operations must be closed, which means that the result of either operation must be a vector in V. That is, if u and v are two vectors in V, then u + v must also be in V, and αv must be in V for any scalar α.

Example 1. The set of all points in n-dimensional space, R^n, is a vector space. Addition is defined as follows:

    u + v = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} + \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{pmatrix}.

Scalar multiplication is defined by

    \alpha v = \begin{pmatrix} \alpha v_1 \\ \alpha v_2 \\ \vdots \\ \alpha v_n \end{pmatrix}.

Similarly, the set of all n-dimensional points whose coordinates are complex numbers, denoted by C^n, is also a vector space. □

In the next few examples, we introduce some vector spaces whose vectors are functions; such spaces are also known as function spaces.

Example 2. The set of all polynomials of degree at most n, denoted by P_n, is a vector space, in which addition and scalar multiplication are defined as follows. Given f(x), g(x) ∈ P_n,

    (f + g)(x) = f(x) + g(x), \qquad (\alpha f)(x) = \alpha f(x).

These operations are closed, because adding two polynomials of degree at most n cannot yield a sum whose degree is greater than n, and multiplying a polynomial by a nonzero scalar does not change its degree. □

Example 3. The set of all functions with power series of the form

    f(x) = \sum_{n=0}^{\infty} a_n x^n

that are convergent on the interval (−1, 1) is a vector space, in which addition and scalar multiplication are defined as in the previous example. These operations are closed because the sum of two convergent series is also convergent, as is a scalar multiple of a convergent series. □

Example 4. The set of all continuous functions on the interval [a, b], denoted by C[a, b], is a vector space in which addition and scalar multiplication are defined as in the previous two examples. These operations are closed because the sum of two continuous functions, and a scalar multiple of a continuous function, is also continuous. □

A vector space V is most effectively described in terms of a set of specific vectors {v_1, v_2, ...} that, in conjunction with the operations of addition and scalar multiplication, can be used to obtain every vector in the space. That is, for every vector v ∈ V, there must exist scalars c_1, c_2, ..., such that

    v = c_1 v_1 + c_2 v_2 + \cdots.

We say that v is a linear combination of v_1, v_2, ..., and the scalars c_1, c_2, ... are the coefficients of the linear combination. Ideally, it should be possible to express any vector v ∈ V as a unique linear combination of the vectors v_1, v_2, ... that are used to describe all vectors in V.
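To make the parallel between Example 2 and Example 1 concrete, the following is a minimal computational sketch, assuming NumPy is available; the sample polynomials and the evaluation point are arbitrary choices, not part of the notes. It represents a polynomial in P_3 by its coefficient vector in R^4, so that polynomial addition and scalar multiplication reduce to the componentwise operations of Example 1.

```python
import numpy as np
from numpy.polynomial import polynomial as Poly

# A polynomial c0 + c1 x + c2 x^2 + c3 x^3 in P_3 is stored as the
# coefficient vector (c0, c1, c2, c3), a point in R^4.
f = np.array([1.0, 0.0, -2.0, 0.0])   # f(x) = 1 - 2x^2
g = np.array([0.0, 3.0, 0.0, 5.0])    # g(x) = 3x + 5x^3

h = f + g        # coefficients of (f + g)(x); the degree stays <= 3
s = 2.5 * g      # coefficients of (2.5 g)(x); the degree stays <= 3

# Check (f + g)(x) = f(x) + g(x) at an arbitrary sample point
x = 0.7
print(Poly.polyval(x, h), Poly.polyval(x, f) + Poly.polyval(x, g))
```

The coefficient vector of f is exactly the list of coefficients in the linear combination f(x) = c_0(1) + c_1 x + c_2 x^2 + c_3 x^3, which anticipates the notion of a basis introduced next.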
With this criterion in mind, we introduce the following two essential concepts from linear algebra:

• A set of vectors {v_1, v_2, ..., v_n} is linearly independent if the vector equation

    c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0

is satisfied if and only if c_1 = c_2 = ··· = c_n = 0. In other words, this set of vectors is linearly independent if it is not possible to express any vector in the set as a linear combination of the other vectors in the set. This definition can be generalized in a natural way to an infinite set of vectors. If a set of vectors is not linearly independent, then we say that it is linearly dependent.

• A set of vectors {v_1, v_2, ..., v_n} spans a vector space V if, for any vector v ∈ V, there exist scalars c_1, c_2, ..., c_n such that

    v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.

That is, any vector in V can be expressed as a linear combination of vectors in the set. We define span{v_1, v_2, ..., v_n} to be the set of all linear combinations of v_1, v_2, ..., v_n. As with linear independence, the notion of span generalizes naturally to an infinite set of vectors.

We then say that a set of vectors {v_1, v_2, ...} (which may be finite or infinite) is a basis for a vector space V if it is linearly independent and spans V. This definition ensures that any vector in V is a unique linear combination of the vectors in the basis. If a basis for V is finite, then we say that V is finite-dimensional, and define the dimension of V to be the number of elements in a basis; all bases of a finite-dimensional vector space must have the same number of elements. If V does not have a finite basis, then we say that V is infinite-dimensional.

Example 5. The function space P_3, consisting of polynomials of degree at most 3, has a basis {1, x, x^2, x^3}. It is clear that any polynomial in P_3 can be expressed as a linear combination of these basis functions, as the coefficients of any such polynomial are also the coefficients of the linear combination of these basis functions. To confirm linear independence, suppose that there exist constants c_0, c_1, c_2, and c_3 such that

    c_0(1) + c_1 x + c_2 x^2 + c_3 x^3 = 0

for all x ∈ R. Then certainly this must be the case at x = 0, which requires that c_0 = 0. Substituting 3 other values of x into the above equation yields a system of 3 linear equations in the remaining 3 unknowns c_1, c_2, and c_3. It can be shown that the only solution of such a system of equations is the trivial solution c_1 = c_2 = c_3 = 0. Therefore the set {1, x, x^2, x^3} is linearly independent. An alternative basis consists of the first 4 Chebyshev polynomials {1, x, 2x^2 − 1, 4x^3 − 3x}. It can be confirmed using a similar approach that these polynomials are linearly independent. □

Example 6. The function space consisting of all power series that are convergent on the interval (−1, 1) has as a basis the infinite set {1, x, x^2, x^3, ...}. Using an inductive argument, it can be shown that this set is linearly independent. □

Inner Product

Recall that the dot product of two vectors u and v in R^n is

    u \cdot v = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = \|u\| \|v\| \cos \theta,

where

    \|u\| = \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2}

is the magnitude, or length, of u, and θ is the angle between u and v, with 0 ≤ θ ≤ π radians. The dot product has the following properties:

1. u · u = ||u||^2
2. u · (v + w) = u · v + u · w
3. u · v = v · u
4. u · (cv) = c(u · v)

When u and v are perpendicular, then cos θ = 0. It follows that u · v = 0, and we say that u and v are orthogonal. A numerical check of these formulas appears in the sketch below.
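The following is a minimal sketch of the dot product and its geometry, assuming NumPy is available; the particular vectors u and v are arbitrary choices that happen to be orthogonal.

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, -2.0, 1.0])

dot = np.dot(u, v)              # u . v = u1 v1 + u2 v2 + u3 v3
norm_u = np.linalg.norm(u)      # ||u|| = sqrt(u1^2 + u2^2 + u3^2) = 3
norm_v = np.linalg.norm(v)      # = 3

# Recover the angle from u . v = ||u|| ||v|| cos(theta)
theta = np.arccos(dot / (norm_u * norm_v))

print(dot, theta)               # 0.0 and pi/2: u and v are orthogonal

# Property 4: u . (c v) = c (u . v)
c = 3.0
print(np.dot(u, c * v), c * np.dot(u, v))   # the two values agree
```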
We would like to generalize the concept of a dot product to vectors in function spaces, and we also need to ensure that complex numbers are properly taken into account. To that end, we define the inner product of two functions f(x) and g(x) to be

    \langle f, g \rangle = \int_a^b \overline{f(x)}\, g(x)\, w(x)\, dx,

where w(x) is a weight function and, for any complex number z = x + iy, \bar{z} = x − iy is the complex conjugate of z. The interval of integration [a, b] depends on the function space under consideration. Using this definition, it can be verified that the inner product has the following properties:

1. ⟨f, g + h⟩ = ⟨f, g⟩ + ⟨f, h⟩
2. ⟨f, g⟩ = \overline{⟨g, f⟩}
3. ⟨f, cg⟩ = c⟨f, g⟩ for any complex number c

Note that the second property is slightly different from the corresponding property for vectors in R^n, as it requires the complex conjugate. Combining the second and third properties yields the result ⟨cf, g⟩ = \bar{c}⟨f, g⟩.

Inner Product Spaces and Hilbert Spaces

Just as we use ||v|| to measure the magnitude of a vector v ∈ R^n, we need a notion of magnitude for a function f(x) in a function space. To that end, we say that a function ||·|| : V → R is a norm on a vector space V if it satisfies the following conditions:

1. ||v|| ≥ 0 for any vector v ∈ V, and ||v|| = 0 if and only if v = 0.
2. ||αv|| = |α| ||v|| for any complex scalar α.
3. ||u + v|| ≤ ||u|| + ||v|| for any two vectors u, v ∈ V. This is known as the triangle inequality.
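To connect these definitions with computation, here is a minimal numerical sketch, assuming NumPy and SciPy are available; the interval [−π, π], the trivial weight w(x) = 1, and the test functions sin, cos, and e^{ix} are illustrative choices only, not prescribed by the notes. It approximates ⟨f, g⟩ by quadrature, computes the standard norm induced by the inner product, ||f|| = sqrt(⟨f, f⟩), and checks the conjugate-symmetry property numerically.

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, a=-np.pi, b=np.pi, w=lambda x: 1.0):
    # Approximates <f, g> = integral over [a, b] of conj(f(x)) g(x) w(x) dx.
    # quad integrates real-valued functions, so the real and imaginary
    # parts of the integrand are handled separately.
    integrand = lambda x: np.conj(f(x)) * g(x) * w(x)
    re, _ = quad(lambda x: np.real(integrand(x)), a, b)
    im, _ = quad(lambda x: np.imag(integrand(x)), a, b)
    return re + 1j * im

def norm(f):
    # The norm induced by the inner product: ||f|| = sqrt(<f, f>),
    # where <f, f> is always real and nonnegative.
    return np.sqrt(inner(f, f).real)

print(inner(np.sin, np.cos))   # ~0: sin and cos are orthogonal on [-pi, pi]
print(norm(np.sin))            # sqrt(pi), since the integral of sin^2 is pi

# Conjugate symmetry (property 2): <f, g> = conj(<g, f>)
h = lambda x: np.exp(1j * x)   # a complex-valued test function
print(inner(h, np.sin))            # ~ -i pi
print(np.conj(inner(np.sin, h)))   # matches the line above
```

One can also verify the triangle inequality with this norm, e.g. norm of sin plus cos never exceeds norm(np.sin) + norm(np.cos).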