
Orthogonal and Orthonormal Bases
With Homework #2

This handout discusses orthogonal and orthonormal bases of a finite-dimensional real vector space. (Later, we will have to consider the case of vector spaces over the complex numbers.) To be definite, we will consider the vector space $\mathbb{R}^n$, where $n$ is a positive integer. You can consider the elements of this space to be signals, and transformations such as the $k$-level Haar Transform $H_k$ to be linear transformations from $\mathbb{R}^n$ to itself.

Some review of linear algebra: Recall that a set $\{x_1, x_2, \ldots, x_k\}$ of elements of $\mathbb{R}^n$ is said to be linearly independent if whenever $\sum_{i=1}^{k} c_i x_i = 0$ for some $c_1, c_2, \ldots, c_k \in \mathbb{R}$, it follows that $c_1 = c_2 = \cdots = c_k = 0$. That is, the only way for a linear combination of $x_1, x_2, \ldots, x_k$ to be zero is for all the coefficients in the linear combination to be zero. If $\{x_1, x_2, \ldots, x_k\}$ is a linearly independent set of vectors in $\mathbb{R}^n$, then $k \le n$, and $k = n$ if and only if $\{x_1, x_2, \ldots, x_k\}$ is a basis of $\mathbb{R}^n$.

Given a basis $B = \{x_1, x_2, \ldots, x_n\}$ of $\mathbb{R}^n$, any element $v \in \mathbb{R}^n$ can be written uniquely as a linear combination $v = \sum_{i=1}^{n} c_i x_i$, where the $c_i$ are real numbers. The numbers $c_i$ are called the components of $v$ relative to the basis $B$.

Finally, we need some facts about scalar products: If $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$, then the scalar product of $x$ and $y$ is defined to be $x \cdot y = \sum_{i=1}^{n} x_i y_i$. An important fact about scalar products is that they are linear in both arguments. Thus, for example, if $s, t \in \mathbb{R}$ and $x, y, z \in \mathbb{R}^n$, then $x \cdot (s y + t z) = s(x \cdot y) + t(x \cdot z)$. The scalar product is also known as the "dot product," because of the usual notation. However, there is another common notation for the scalar product that we might use sometimes: $\langle x, y \rangle$. A third common term for the scalar product is "inner product."

Definition: Two vectors $x$ and $y$ are said to be orthogonal if $x \cdot y = 0$, that is, if their scalar product is zero.

Theorem: Suppose $x_1, x_2, \ldots, x_k$ are non-zero vectors in $\mathbb{R}^n$ that are pairwise orthogonal (that is, $x_i \cdot x_j = 0$ for all $i \ne j$). Then the set $\{x_1, x_2, \ldots, x_k\}$ is a linearly independent set of vectors.

Proof: Suppose that we have $c_1 x_1 + c_2 x_2 + \cdots + c_k x_k = 0$ for some scalars $c_1, c_2, \ldots, c_k \in \mathbb{R}$. Let $i$ be any integer in the range $1 \le i \le k$. We must show that $c_i = 0$. Now, consider the dot product of $x_i$ with $c_1 x_1 + c_2 x_2 + \cdots + c_k x_k$. Since $c_1 x_1 + c_2 x_2 + \cdots + c_k x_k = 0$, we have:
$$0 = x_i \cdot 0 = x_i \cdot (c_1 x_1 + c_2 x_2 + \cdots + c_k x_k) = c_1 (x_i \cdot x_1) + c_2 (x_i \cdot x_2) + \cdots + c_k (x_i \cdot x_k) = c_i (x_i \cdot x_i)$$
where the last equality follows because $x_i \cdot x_j = 0$ for $i \ne j$. Now, since $x_i$ is not the zero vector, we know that $x_i \cdot x_i \ne 0$. So the fact that $0 = c_i (x_i \cdot x_i)$ implies $c_i = 0$, as we wanted to show.

Corollary: Suppose that $B = \{x_1, x_2, \ldots, x_n\}$ is a set of $n$ non-zero vectors in $\mathbb{R}^n$ that are pairwise orthogonal. Then $B$ is a basis of $\mathbb{R}^n$.

Proof: This follows simply because any set of $n$ linearly independent vectors in $\mathbb{R}^n$ is a basis.

Definition: The length or norm of a vector $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ is defined to be $|x| = \sqrt{x \cdot x}$. (Note then that $x \cdot x = |x|^2$.)

Definition: A basis $B = \{x_1, x_2, \ldots, x_n\}$ of $\mathbb{R}^n$ is said to be an orthogonal basis if the elements of $B$ are pairwise orthogonal, that is, $x_i \cdot x_j = 0$ whenever $i \ne j$. If in addition $x_i \cdot x_i = 1$ for all $i$, then the basis is said to be an orthonormal basis. Thus, an orthonormal basis is a basis consisting of unit-length, mutually orthogonal vectors.
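To make these definitions concrete, here is a minimal NumPy sketch (an illustration added to this write-up, not part of the handout itself; the function name is_orthonormal and the tolerance are choices of the sketch). It checks that a family of vectors is pairwise orthogonal and unit length by computing all the scalar products $x_i \cdot x_j$ at once:

```python
import numpy as np

def is_orthonormal(vectors, tol=1e-12):
    """True if the rows of `vectors` are pairwise orthogonal unit vectors."""
    V = np.asarray(vectors, dtype=float)
    G = V @ V.T                # G[i, j] is the scalar product x_i . x_j
    return np.allclose(G, np.eye(len(V)), atol=tol)

# The vectors (1,1)/sqrt(2) and (1,-1)/sqrt(2) form an orthonormal basis of R^2.
x1 = np.array([1.0,  1.0]) / np.sqrt(2.0)
x2 = np.array([1.0, -1.0]) / np.sqrt(2.0)
print(is_orthonormal([x1, x2]))           # True
print(is_orthonormal([[1, 1], [1, -1]]))  # False: orthogonal, but not unit length
```

The matrix $G$ of all scalar products is the Gram matrix of the family, and orthonormality is exactly the statement that $G$ is the identity matrix.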
We introduce the notation $\delta_{ij}$ for integers $i$ and $j$, defined by $\delta_{ij} = 0$ if $i \ne j$ and $\delta_{ii} = 1$. Thus, a basis $B = \{x_1, x_2, \ldots, x_n\}$ is orthonormal if and only if $x_i \cdot x_j = \delta_{ij}$ for all $i, j$.

Given a vector $v \in \mathbb{R}^n$ and an orthonormal basis $B = \{x_1, x_2, \ldots, x_n\}$ of $\mathbb{R}^n$, it is very easy to find the components of $v$ with respect to the basis $B$. In fact, the $i^{\text{th}}$ component of $v$ is simply $v \cdot x_i$. This is the content of the following theorem:

Theorem: Suppose that $B = \{x_1, x_2, \ldots, x_n\}$ is an orthonormal basis of $\mathbb{R}^n$ and $v$ is any vector in $\mathbb{R}^n$. Then
$$v = \sum_{i=1}^{n} (v \cdot x_i)\, x_i$$

Proof: Since $B$ is a basis, we can write $v = \sum_{i=1}^{n} c_i x_i$ for some unique $c_1, c_2, \ldots, c_n \in \mathbb{R}$. We just have to show that $c_i = v \cdot x_i$ for each $i$. But
$$v \cdot x_i = \left( \sum_{j=1}^{n} c_j x_j \right) \cdot x_i = \sum_{j=1}^{n} c_j (x_j \cdot x_i) = \sum_{j=1}^{n} c_j \delta_{ji} = c_i$$

We can look at this in terms of linear transformations. Given any basis $B = \{x_1, x_2, \ldots, x_n\}$ of $\mathbb{R}^n$, we can always define a linear transformation $T_B : \mathbb{R}^n \to \mathbb{R}^n$ by $T_B(v) = (v \cdot x_1, v \cdot x_2, \ldots, v \cdot x_n)$. Now, suppose we are given $(t_1, t_2, \ldots, t_n) \in \mathbb{R}^n$ and we would like to find a vector $v$ such that $T_B(v) = (t_1, t_2, \ldots, t_n)$. That is, we are trying to compute $T_B^{-1}(t_1, t_2, \ldots, t_n)$. We know from the definition of $T_B$ that $v \cdot x_i = t_i$ for all $i$. For a general basis this information does not make it easy to recover $v$. However, for an orthonormal basis, $v \cdot x_i$ is just the $i^{\text{th}}$ component of $v$ relative to the basis. That is, we can write $v = \sum_{i=1}^{n} (v \cdot x_i) x_i = \sum_{i=1}^{n} t_i x_i$. This could also be written as $T_B^{-1}(t_1, t_2, \ldots, t_n) = \sum_{i=1}^{n} t_i x_i$. Another way of looking at this is to say that if we know the transformed vector $T_B(v)$ of an unknown vector $v$, we have a simple explicit formula for reconstructing $v$.

If we apply this to the $k$-level Haar Transform $H_k : \mathbb{R}^N \to \mathbb{R}^N$, we know that the components of $H_k(f)$ can be computed as scalar products of $f$ with the vectors $V_i^k$ for $i = 1, 2, \ldots, N/2^k$ and $W_i^j$ for $j = 1, 2, \ldots, k$ and $i = 1, 2, \ldots, N/2^j$:
$$H_k(f) = \left( a_1^k, a_2^k, \ldots, a_{N/2^k}^k \mid d_1^k, d_2^k, \ldots, d_{N/2^k}^k \mid d_1^{k-1}, d_2^{k-1}, \ldots, d_{N/2^{k-1}}^{k-1} \mid \cdots \mid d_1^1, d_2^1, \ldots, d_{N/2}^1 \right)$$
where
$$a_i^k = f \cdot V_i^k, \quad \text{for } i = 1, 2, \ldots, N/2^k$$
$$d_i^j = f \cdot W_i^j, \quad \text{for } j = 1, 2, \ldots, k \text{ and } i = 1, 2, \ldots, N/2^j$$

Now, it turns out that this set of vectors $V_i^k$ and $W_i^j$ forms an orthonormal basis of $\mathbb{R}^N$. Knowing this, we know automatically how to reconstruct a signal $f$ from its $k$-level Haar transform. Namely, if $f$ is a signal such that
$$H_k(f) = \left( a_1^k, a_2^k, \ldots, a_{N/2^k}^k \mid d_1^k, d_2^k, \ldots, d_{N/2^k}^k \mid d_1^{k-1}, d_2^{k-1}, \ldots, d_{N/2^{k-1}}^{k-1} \mid \cdots \mid d_1^1, d_2^1, \ldots, d_{N/2}^1 \right)$$
then
$$f = \sum_{i=1}^{N/2^k} a_i^k V_i^k + \sum_{j=1}^{k} \sum_{i=1}^{N/2^j} d_i^j W_i^j$$
(Admittedly, the naming and indexing of the basis vectors is sort of strange, but you shouldn't let this obscure the fundamental simplicity of the result.)

The fact that we can so easily reconstruct a signal given its transform shows why it has been considered so useful to construct orthonormal bases of wavelets (and scaling functions). The case of Haar wavelets is relatively straightforward, and its orthonormality has been known and understood for a century. Finding other systems of wavelets that have the orthonormality property has not been so easy. In the 1980s, a general method for constructing orthonormal bases of wavelets was discovered. This is one of the breakthroughs that has incited a lot of the recent interest in wavelet theory.
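As a concrete illustration of the reconstruction formula, here is a short NumPy sketch for the 1-level case (again our own illustration, not part of the handout; the $1/\sqrt{2}$ normalization of the vectors $V_i^1$ and $W_i^1$ and the helper name are assumptions of the sketch). It computes the components $a_i = f \cdot V_i^1$ and $d_i = f \cdot W_i^1$ and then rebuilds $f$ from them:

```python
import numpy as np

def haar_level1_basis(N):
    """Rows of V are the scaling vectors V_i^1; rows of W are the wavelets W_i^1."""
    s = 1.0 / np.sqrt(2.0)
    V = np.zeros((N // 2, N))
    W = np.zeros((N // 2, N))
    for i in range(N // 2):
        V[i, 2 * i], V[i, 2 * i + 1] = s, s    # supported on two adjacent entries
        W[i, 2 * i], W[i, 2 * i + 1] = s, -s
    return V, W

N = 8
f = np.random.randn(N)          # an arbitrary "signal" in R^N
V, W = haar_level1_basis(N)

a = V @ f                       # trend components  a_i = f . V_i^1
d = W @ f                       # detail components d_i = f . W_i^1

f_rec = a @ V + d @ W           # f = sum_i a_i V_i^1 + sum_i d_i W_i^1
print(np.allclose(f, f_rec))    # True, since the V_i^1 and W_i^1 are orthonormal
```

Note that the two reconstruction lines work for any orthonormal basis; nothing about Haar is special here except the particular vectors.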
Homework Exercises — due Friday, February 3

1. Suppose that $B = \{x_1, x_2, \ldots, x_n\}$ is an orthogonal basis of $\mathbb{R}^n$, but not necessarily an orthonormal basis. Suppose that $v \in \mathbb{R}^n$. Find a simple formula for the components of $v$ relative to the basis $B$. That is, suppose that $v = \sum_{i=1}^{n} c_i x_i$. Find a formula for $c_i$. (The formula should use scalar products.) Justify your answer.

2. Show that the vectors $V_i^k$ for $i = 1, 2, \ldots, N/2^k$ and $W_i^j$ for $j = 1, 2, \ldots, k$ and $i = 1, 2, \ldots, N/2^j$ form an orthonormal basis of $\mathbb{R}^N$. This will take some work. A useful concept is the support of a vector $v \in \mathbb{R}^N$. The support of $v = (v_1, v_2, \ldots, v_N)$ is defined to be the set of indices for which $v_i$ is non-zero. That is, $\mathrm{support}(v) = \{\, i \mid v_i \ne 0 \,\}$.
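Since the support concept may be unfamiliar, here is a tiny sketch of it (our own illustration, using 0-based indices where the handout counts from 1):

```python
def support(v):
    """The set of indices i with v_i != 0 (0-based here; the handout is 1-based)."""
    return {i for i, vi in enumerate(v) if vi != 0}

print(support([0, 0.5, 0.5, 0, 0, 0]))  # {1, 2}
```

If two vectors have disjoint supports, then every term $v_i w_i$ of their scalar product is zero, so the vectors are automatically orthogonal; that observation is the reason the concept is useful in Exercise 2.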