Introduction: Vectors in Function Spaces
Jim Lambers
MAT 415/515
Fall Semester 2013-14
Lectures 1 and 2 Notes

These notes correspond to Section 5.1 in the text.

Introduction

This course is about series solutions of ordinary differential equations (ODEs). Unlike the ODEs covered in MAT 285, whose solutions can be expressed as linear combinations of n functions, where n is the order of the ODE, the ODEs discussed in this course have solutions that are expressed as infinite series of functions, similar to the power series seen in MAT 169. Representing solutions by infinite series provides the following benefits:

• It allows solutions to be represented using simpler functions, particularly polynomials, than the exponential or trigonometric functions normally used to represent solutions in "closed form". This leads to more efficient evaluation of solutions on a computer or calculator.

• It enables solution of ODEs with variable coefficients, as opposed to the ODEs seen in MAT 285, which either had constant coefficients or were of a special form.

• It facilitates approximation of solutions by polynomials, obtained by truncating the infinite series after a certain number of terms, which in turn aids in understanding the qualitative behavior of solutions.

The solution of ODEs via infinite series will lead to the definition of several families of special functions, such as Bessel functions, and various kinds of orthogonal polynomials, such as Legendre polynomials. Each family of special functions that we will see in this course has an orthogonality relation, which simplifies the computation of coefficients in infinite series involving such functions. As such, orthogonality is going to be an essential concept in this course. Vectors in $\mathbb{R}^n$ are known to be orthogonal if and only if they are perpendicular or, equivalently, if their dot product is equal to zero. We will begin this course by applying familiar concepts from vector spaces such as $\mathbb{R}^n$ to sets of functions, thus leading to the concept of a function space.
Vectors in Function Spaces

We begin with some necessary terminology. A vector space $V$, also known as a linear vector space, is a set of objects, called vectors, together with two operations:

• Addition of two vectors in $V$, which must be commutative and associative, and have an identity element, the zero vector $\mathbf{0}$. Each vector $\mathbf{v}$ must have an additive inverse $-\mathbf{v}$ which, when added to $\mathbf{v}$, yields the zero vector.

• Multiplication of a vector in $V$ by a scalar, which is typically a real or complex number. The term "scalar" is used in this context, rather than "number", because the multiplication process "scales" a given vector by a factor indicated by a given number. Scalar multiplication must satisfy distributive laws, and have an identity element, $1$, such that $1\mathbf{v} = \mathbf{v}$ for any vector $\mathbf{v} \in V$.

Both operations must be closed, which means that the result of either operation must be a vector in $V$. That is, if $\mathbf{u}$ and $\mathbf{v}$ are two vectors in $V$, then $\mathbf{u} + \mathbf{v}$ must also be in $V$, and $\alpha\mathbf{v}$ must be in $V$ for any scalar $\alpha$.

Example The set of all points in $n$-dimensional space, $\mathbb{R}^n$, is a vector space. Addition is defined componentwise:
$$\mathbf{u} + \mathbf{v} = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} + \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{pmatrix}.$$
Scalar multiplication is defined by
$$\alpha\mathbf{v} = \begin{pmatrix} \alpha v_1 \\ \alpha v_2 \\ \vdots \\ \alpha v_n \end{pmatrix}.$$
Similarly, the set of all $n$-dimensional points whose coordinates are complex numbers, denoted by $\mathbb{C}^n$, is also a vector space. □

In the next few examples, we introduce some vector spaces whose vectors are functions; such spaces are known as function spaces.

Example The set of all polynomials of degree at most $n$, denoted by $P_n$, is a vector space, in which addition and scalar multiplication are defined as follows.
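The two closed operations on $\mathbb{R}^n$ can be illustrated with a minimal Python sketch (the function names here are my own, chosen for illustration; vectors are represented as plain lists of numbers):

```python
# Componentwise operations on vectors in R^n, represented as lists of floats.

def vec_add(u, v):
    """Return u + v, the componentwise sum of two vectors of equal dimension."""
    assert len(u) == len(v), "vectors must have the same dimension"
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mul(alpha, v):
    """Return alpha * v, scaling every component of v by the scalar alpha."""
    return [alpha * vi for vi in v]

# Closure: both results are again lists of n numbers, i.e., vectors in R^n.
u = [1.0, 2.0, 3.0]
v = [4.0, 5.0, 6.0]
print(vec_add(u, v))       # [5.0, 7.0, 9.0]
print(scalar_mul(2.0, v))  # [8.0, 10.0, 12.0]
```

Note that closure is visible in the types: each operation consumes and produces a list of the same length $n$, so the result is again a vector in $\mathbb{R}^n$.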
Given $f(x), g(x) \in P_n$,
$$(f + g)(x) = f(x) + g(x), \qquad (\alpha f)(x) = \alpha f(x).$$
These operations are closed, because adding two polynomials of degree at most $n$ will not yield a sum whose degree is greater than $n$, and multiplying any polynomial by a nonzero scalar will not change its degree. □

Example The set of all functions with power series of the form
$$f(x) = \sum_{n=0}^{\infty} a_n x^n$$
that are convergent on the interval $(-1, 1)$ is a vector space, in which addition and scalar multiplication are defined as in the previous example. These operations are closed because the sum of two convergent series is also convergent, as is a scalar multiple of a convergent series. □

Example The set of all continuous functions on the interval $[a, b]$, denoted by $C[a, b]$, is a vector space in which addition and scalar multiplication are defined as in the previous two examples. These operations are closed because the sum of two continuous functions, and a scalar multiple of a continuous function, is also continuous. □

A vector space $V$ is most effectively described in terms of a set of specific vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots\}$ that, in conjunction with the operations of addition and scalar multiplication, can be used to obtain every vector in the space. That is, for every vector $\mathbf{v} \in V$, there must exist scalars $c_1, c_2, \ldots$ such that
$$\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots.$$
We say that $\mathbf{v}$ is a linear combination of $\mathbf{v}_1, \mathbf{v}_2, \ldots$, and the scalars $c_1, c_2, \ldots$ are the coefficients of the linear combination. Ideally, it should be possible to express any vector $\mathbf{v} \in V$ as a unique linear combination of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots$ that are to be used to describe all vectors in $V$.
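The operations on $P_n$ can be made concrete by representing a polynomial by its coefficient list, with the coefficient of $x^k$ at index $k$ (a representation chosen here for illustration, not taken from the text):

```python
# Polynomials in P_n represented as coefficient lists [a_0, a_1, ..., a_n].

def poly_add(f, g):
    """(f + g)(x) = f(x) + g(x): add coefficients term by term."""
    return [a + b for a, b in zip(f, g)]

def poly_scale(alpha, f):
    """(alpha f)(x) = alpha * f(x): scale every coefficient."""
    return [alpha * a for a in f]

# f(x) = 1 + 2x + 3x^2 and g(x) = 4 - x^2, both in P_2:
f = [1, 2, 3]
g = [4, 0, -1]
print(poly_add(f, g))      # [5, 2, 2], i.e. 5 + 2x + 2x^2, still in P_2
print(poly_scale(2, f))    # [2, 4, 6], i.e. 2 + 4x + 6x^2, still in P_2
```

Closure is again visible in the representation: both operations return a list of the same length, so the result has degree at most $n$.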
With this criterion in mind, we introduce the following two essential concepts from linear algebra:

• A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly independent if the vector equation
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$
is satisfied if and only if $c_1 = c_2 = \cdots = c_n = 0$. In other words, this set of vectors is linearly independent if it is not possible to express any vector in the set as a linear combination of the other vectors in the set. This definition can be generalized in a natural way to an infinite set of vectors. If a set of vectors is not linearly independent, then we say that it is linearly dependent.

• A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ spans a vector space $V$ if, for any vector $\mathbf{v} \in V$, there exist scalars $c_1, c_2, \ldots, c_n$ such that
$$\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n.$$
That is, any vector in $V$ can be expressed as a linear combination of vectors in the set. We define $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ to be the set of all linear combinations of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$. As with linear independence, the notion of span generalizes naturally to an infinite set of vectors.

We then say that a set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots\}$ (which may be finite or infinite) is a basis for a vector space $V$ if it is linearly independent and spans $V$. This definition ensures that any vector in $V$ is a unique linear combination of the vectors in the basis. If a basis for $V$ is finite, then we say that $V$ is finite-dimensional and define the dimension of $V$ to be the number of elements in a basis; all bases of a finite-dimensional vector space must have the same number of elements. If $V$ does not have a finite basis, then we say that $V$ is infinite-dimensional.

Example The function space $P_3$, consisting of polynomials of degree at most 3, has a basis $\{1, x, x^2, x^3\}$. It is clear that any polynomial in $P_3$ can be expressed as a linear combination of these basis functions, as the coefficients of any such polynomial are also the coefficients in the linear combination of these basis functions.
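To see how the coefficients of a polynomial double as the coefficients of its expansion in the basis $\{1, x, x^2, x^3\}$, here is a short Python sketch (the function name is hypothetical) that evaluates the linear combination $c_0 \cdot 1 + c_1 x + c_2 x^2 + c_3 x^3$:

```python
def eval_in_monomial_basis(coeffs, x):
    """Evaluate c_0*1 + c_1*x + c_2*x^2 + ... at x, using Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# p(x) = 2 - x + 3x^3 is the unique linear combination
# 2*(1) + (-1)*x + 0*x^2 + 3*x^3 of the basis functions of P_3.
print(eval_in_monomial_basis([2.0, -1.0, 0.0, 3.0], 2.0))  # 2 - 2 + 24 = 24.0
```

The coefficient list passed to the function is exactly the (unique) coordinate vector of the polynomial with respect to this basis.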
To confirm linear independence, suppose that there exist constants $c_0, c_1, c_2$ and $c_3$ such that
$$c_0(1) + c_1 x + c_2 x^2 + c_3 x^3 = 0$$
for all $x \in \mathbb{R}$. Then certainly this must be the case at $x = 0$, which requires that $c_0 = 0$. Substituting 3 other values of $x$ into the above equation yields a system of 3 linear equations in the remaining 3 unknowns $c_1, c_2$ and $c_3$. It can be shown that the only solution of such a system of equations is the trivial solution $c_1 = c_2 = c_3 = 0$. Therefore the set $\{1, x, x^2, x^3\}$ is linearly independent. An alternative basis consists of the first 4 Chebyshev polynomials, $\{1, x, 2x^2 - 1, 4x^3 - 3x\}$. It can be confirmed using a similar approach that these polynomials are linearly independent. □

Example The function space consisting of all power series that are convergent on the interval $(-1, 1)$ has as a basis the infinite set $\{1, x, x^2, x^3, \ldots\}$. Using an inductive argument, it can be shown that this set is linearly independent. □

Scalar Product

Recall that the dot product of two vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$ is
$$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = \|\mathbf{u}\| \|\mathbf{v}\| \cos\theta,$$
where
$$\|\mathbf{u}\| = \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2}$$
is the magnitude, or length, of $\mathbf{u}$, and $\theta$ is the angle between $\mathbf{u}$ and $\mathbf{v}$, with $0 \le \theta \le \pi$ radians. The dot product has the following properties:

1. $\mathbf{u} \cdot \mathbf{u} = \|\mathbf{u}\|^2$
2. $\mathbf{u} \cdot (\mathbf{v} + \mathbf{w}) = \mathbf{u} \cdot \mathbf{v} + \mathbf{u} \cdot \mathbf{w}$
3. $\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}$
4. $\mathbf{u} \cdot (c\mathbf{v}) = c(\mathbf{u} \cdot \mathbf{v})$

When $\mathbf{u}$ and $\mathbf{v}$ are perpendicular, then $\cos\theta = 0$. It follows that $\mathbf{u} \cdot \mathbf{v} = 0$, and we say that $\mathbf{u}$ and $\mathbf{v}$ are orthogonal.
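The dot product, the norm, and the orthogonality test can be sketched in a few lines of Python (a minimal illustration using the standard library; the function names are my own):

```python
import math

def dot(u, v):
    """Dot product u . v = u_1 v_1 + u_2 v_2 + ... + u_n v_n."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    """Magnitude ||u|| = sqrt(u . u)."""
    return math.sqrt(dot(u, u))

def angle(u, v):
    """Angle theta between u and v, from u . v = ||u|| ||v|| cos(theta)."""
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

u = [1.0, 0.0]
v = [0.0, 2.0]
print(dot(u, v))    # 0.0 -> u and v are orthogonal
print(angle(u, v))  # pi/2, approximately 1.5708
```

Here the orthogonality of $\mathbf{u}$ and $\mathbf{v}$ is detected by the dot product being zero, which by the formula above corresponds to $\theta = \pi/2$.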