
Wavelets for Computer Graphics: A Primer, Part 2

Eric J. Stollnitz    Tony D. DeRose    David H. Salesin
University of Washington

Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for computer graphics: A primer, part 2. IEEE Computer Graphics and Applications, 15(4):75-85, July 1995.

1 Introduction

Wavelets are a mathematical tool for hierarchically decomposing functions. They allow a function to be described in terms of a coarse overall shape, plus details that range from broad to narrow. Regardless of whether the function of interest is an image, a curve, or a surface, wavelets provide an elegant technique for representing the levels of detail present.

In Part 1 of this primer we discussed the simple case of Haar wavelets in one and two dimensions, and showed how they can be used for image compression. In Part 2, we present the mathematical theory of multiresolution analysis, then develop bounded-interval spline wavelets and describe their use in multiresolution curve and surface editing.

2 Multiresolution analysis

The Haar wavelets we discussed in Part 1 are just one of many bases that can be used to treat functions in a hierarchical fashion. In this section, we develop a mathematical framework known as multiresolution analysis for studying wavelets [2, 11]. Our examples will continue to focus on the Haar basis, but the more general mathematical notation used here will come in handy for discussing other wavelet bases in later sections.

Multiresolution analysis relies on many results from linear algebra. Some readers may wish to consult the appendix in Part 1 for a brief review.

As discussed in Part 1, the starting point for multiresolution analysis is a nested set of vector spaces

    V^0 \subset V^1 \subset V^2 \subset \cdots

As j increases, the resolution of functions in V^j increases. The basis functions for the space V^j are known as scaling functions.

The next step in multiresolution analysis is to define wavelet spaces. For each j, we define W^j as the orthogonal complement of V^j in V^{j+1}. This means that W^j includes all the functions in V^{j+1} that are orthogonal to all those in V^j under some chosen inner product. The functions we choose as a basis for W^j are called wavelets.

2.1 A matrix formulation for refinement

The rest of our discussion of multiresolution analysis will focus on wavelets defined on a bounded domain, although we will also refer to wavelets on the unbounded real line wherever appropriate. In the bounded case, each space V^j has a finite basis, allowing us to use matrix notation in much of what follows, as did Lounsbery et al. [10] and Quak and Weyrich [13].

It is often convenient to put the different scaling functions \phi_i^j(x) for a given level j together into a single row matrix,

    \Phi^j(x) := [\phi_0^j(x) \;\cdots\; \phi_{m^j-1}^j(x)],

where m^j is the dimension of V^j. We can do the same for the wavelets:

    \Psi^j(x) := [\psi_0^j(x) \;\cdots\; \psi_{n^j-1}^j(x)],

where n^j is the dimension of W^j. Because W^j is the orthogonal complement of V^j in V^{j+1}, the dimensions of these spaces satisfy m^{j+1} = m^j + n^j.

The condition that the subspaces V^j be nested is equivalent to requiring that the scaling functions be refinable. That is, for all j = 1, 2, \ldots there must exist a matrix of constants P^j such that

    \Phi^{j-1}(x) = \Phi^j(x) \, P^j.                                  (1)

In other words, each scaling function at level j-1 must be expressible as a linear combination of "finer" scaling functions at level j. Note that since V^j and V^{j-1} have dimensions m^j and m^{j-1}, respectively, P^j is an m^j \times m^{j-1} matrix (taller than it is wide).

Since the wavelet space W^{j-1} is by definition also a subspace of V^j, we can write the wavelets \Psi^{j-1}(x) as linear combinations of the scaling functions \Phi^j(x). This means there is an m^j \times n^{j-1} matrix of constants Q^j satisfying

    \Psi^{j-1}(x) = \Phi^j(x) \, Q^j.                                  (2)

Example: In the Haar basis, at a particular level j there are m^j = 2^j scaling functions and n^j = 2^j wavelets. Thus, there must be refinement matrices describing how the two scaling functions in V^1 and the two wavelets in W^1 can be made from the four scaling functions in V^2:

    P^2 = \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{bmatrix}
    \qquad\text{and}\qquad
    Q^2 = \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix}.
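To make the refinement relations concrete, here is a minimal numerical sketch, assuming NumPy. It samples the unnormalized Haar basis functions on a grid and checks Equations (1) and (2) with the matrices P^2 and Q^2 from the example above. The helper names and the sampling grid are illustrative choices, not part of the paper.

```python
import numpy as np

def box(x, i, j):
    """Unnormalized Haar scaling function phi_i^j: 1 on [i/2^j, (i+1)/2^j), else 0."""
    return ((x >= i / 2**j) & (x < (i + 1) / 2**j)).astype(float)

def haar(x):
    """Mother Haar wavelet: 1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return ((x >= 0.0) & (x < 0.5)).astype(float) - ((x >= 0.5) & (x < 1.0)).astype(float)

x = np.linspace(0.0, 1.0, 1024, endpoint=False)

# Sampled row matrices of basis functions: one column per basis function.
Phi2 = np.column_stack([box(x, i, 2) for i in range(4)])      # four level-2 scaling functions
Phi1 = np.column_stack([box(x, i, 1) for i in range(2)])      # two level-1 scaling functions
Psi1 = np.column_stack([haar(2 * x - i) for i in range(2)])   # two level-1 wavelets

P2 = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
Q2 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

# Equation (1): Phi^1(x) = Phi^2(x) P^2, and Equation (2): Psi^1(x) = Phi^2(x) Q^2,
# checked pointwise on the sampling grid.
assert np.allclose(Phi1, Phi2 @ P2)
assert np.allclose(Psi1, Phi2 @ Q2)
print("Refinement relations (1) and (2) hold on the grid.")
```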
Remark: In the case of wavelets constructed on the unbounded real line, the columns of P^j are shifted versions of one another, as are the columns of Q^j. One column therefore characterizes each matrix, so P^j and Q^j are completely determined by sequences (\ldots, p_{-1}, p_0, p_1, \ldots) and (\ldots, q_{-1}, q_0, q_1, \ldots), which also do not depend on j. Equations (1) and (2) therefore often appear in the literature as expressions of the form

    \phi(x) = \sum_i p_i \, \phi(2x - i)
    \psi(x) = \sum_i q_i \, \phi(2x - i).

These equations are referred to as two-scale relations for scaling functions and wavelets, respectively.

Note that Equations (1) and (2) can be expressed as a single equation using block-matrix notation:

    [\Phi^{j-1}(x) \mid \Psi^{j-1}(x)] = \Phi^j(x) \, [P^j \mid Q^j].  (3)

Example: Substituting the matrices from the previous example into Equation (3) along with the appropriate basis functions gives

    [\phi_0^1(x) \;\; \phi_1^1(x) \;\; \psi_0^1(x) \;\; \psi_1^1(x)] =
    [\phi_0^2(x) \;\; \phi_1^2(x) \;\; \phi_2^2(x) \;\; \phi_3^2(x)]
    \begin{bmatrix}
    1 & 0 &  1 &  0 \\
    1 & 0 & -1 &  0 \\
    0 & 1 &  0 &  1 \\
    0 & 1 &  0 & -1
    \end{bmatrix}.
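The remark above says that one sequence characterizes each refinement matrix. As a sketch of that idea, assuming NumPy, the following builds the level-3 Haar matrices P^3 and Q^3 by placing shifted copies of p = (1, 1) and q = (1, -1) down their columns (a shift of two rows per column, as appropriate for the Haar basis) and then checks Equations (1) and (2). The helper functions and grid are my own illustrative choices.

```python
import numpy as np

def shifted_columns(seq, n_cols):
    """Build a (2*n_cols x n_cols) matrix whose k-th column holds `seq` starting at row 2k."""
    M = np.zeros((2 * n_cols, n_cols))
    for k in range(n_cols):
        M[2 * k : 2 * k + len(seq), k] = seq
    return M

P3 = shifted_columns([1.0, 1.0], 4)    # 8 x 4, each column a shifted copy of p
Q3 = shifted_columns([1.0, -1.0], 4)   # 8 x 4, each column a shifted copy of q

def box(x, i, j):
    """Unnormalized Haar scaling function phi_i^j on [0, 1)."""
    return ((x >= i / 2**j) & (x < (i + 1) / 2**j)).astype(float)

def haar(x, i, j):
    """Unnormalized Haar wavelet psi_i^j(x) = psi(2^j x - i)."""
    y = 2**j * x - i
    return ((y >= 0.0) & (y < 0.5)).astype(float) - ((y >= 0.5) & (y < 1.0)).astype(float)

x = np.linspace(0.0, 1.0, 1024, endpoint=False)
Phi3 = np.column_stack([box(x, i, 3) for i in range(8)])
Phi2 = np.column_stack([box(x, i, 2) for i in range(4)])
Psi2 = np.column_stack([haar(x, i, 2) for i in range(4)])

assert np.allclose(Phi2, Phi3 @ P3)   # Equation (1) at level 3
assert np.allclose(Psi2, Phi3 @ Q3)   # Equation (2) at level 3
print("P^3 and Q^3 built from the sequences p and q satisfy the refinement relations.")
```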
It is important to realize that once we have chosen scaling functions and their refinement matrices P^j, the wavelet matrices Q^j are somewhat constrained (though not completely determined). In fact, since all functions in \Psi^{j-1}(x) must be orthogonal to all functions in \Phi^{j-1}(x), we know \langle \phi_k^{j-1} \mid \psi_\ell^{j-1} \rangle = 0 for all k and \ell.

To deal with all these inner products simultaneously, let's define some new notation for a matrix of inner products. We will denote by [\langle \Phi^{j-1} \mid \Psi^{j-1} \rangle] the matrix whose (k, \ell) entry is \langle \phi_k^{j-1} \mid \psi_\ell^{j-1} \rangle. Armed with this notation, we can rewrite the orthogonality condition on the wavelets as

    [\langle \Phi^{j-1} \mid \Psi^{j-1} \rangle] = 0.                  (4)

Substituting Equation (2) into Equation (4) yields

    [\langle \Phi^{j-1} \mid \Phi^j \rangle] \, Q^j = 0.               (5)

A matrix equation with a right-hand side of zero like this one is known as a homogeneous system of equations. The set of all possible solutions is called the null space of [\langle \Phi^{j-1} \mid \Phi^j \rangle], and the columns of Q^j must form a basis for this space. There are a multitude of bases for the null space of a matrix, implying that there are many different wavelet bases for a given wavelet space W^j. Ordinarily, we uniquely determine the Q^j matrices by imposing further constraints in addition to the orthogonality requirement given above. For example, the Haar wavelet matrices can be found by requiring the least number of consecutive nonzero entries in each column.

2.2 The filter bank

The matrix notation developed in the previous section can also be used for the decomposition process outlined in Section 2.1 of Part 1. Consider a function in some approximation space V^j. Let's assume we have the coefficients of this function in terms of some scaling function basis. We can write these coefficients as a column matrix of values C^j = [c_0^j \;\cdots\; c_{m^j-1}^j]^T. The coefficients c_i^j could, for example, be thought of as pixel colors, or alternatively, as the x- or y-coordinates of a curve's control points in \mathbb{R}^2.

Suppose we wish to create a low-resolution version C^{j-1} of C^j with a smaller number of coefficients m^{j-1}. The standard approach for creating the m^{j-1} values of C^{j-1} is to use some form of linear filtering and down-sampling on the m^j entries of C^j. This process can be expressed as a matrix equation

    C^{j-1} = A^j C^j,                                                 (6)

where A^j is an m^{j-1} \times m^j matrix of constants (wider than it is tall).

Since C^{j-1} contains fewer entries than C^j, this filtering process clearly loses some amount of detail. For many choices of A^j, it is possible to capture the lost detail as another column matrix D^{j-1}, computed by

    D^{j-1} = B^j C^j,                                                 (7)

where B^j is an n^{j-1} \times m^j matrix of constants related to A^j. The pair of matrices A^j and B^j are called analysis filters. The process of splitting the coefficients C^j into a low-resolution version C^{j-1} and detail D^{j-1} is called analysis or decomposition.

If A^j and B^j are chosen appropriately, then the original coefficients C^j can be recovered from C^{j-1} and D^{j-1} by using the matrices P^j and Q^j from the previous section:

    C^j = P^j C^{j-1} + Q^j D^{j-1}.                                   (8)

Recovering C^j from C^{j-1} and D^{j-1} is called synthesis or reconstruction. In this context, P^j and Q^j are called synthesis filters.

Example: In the unnormalized Haar basis, the matrices A^2 and B^2 are given by:

    A^2 = \frac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}
    \qquad
    B^2 = \frac{1}{2} \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix}
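The analysis and synthesis steps can be demonstrated with a minimal round trip in the unnormalized Haar basis, assuming NumPy. The matrices A^2, B^2, P^2, and Q^2 are the ones from the examples above; the coefficient vector c2 is an arbitrary illustrative choice.

```python
import numpy as np

P2 = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
Q2 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
A2 = 0.5 * np.array([[1, 1, 0, 0], [0, 0, 1, 1]])
B2 = 0.5 * np.array([[1, -1, 0, 0], [0, 0, 1, -1]])

c2 = np.array([8.0, 4.0, 1.0, 3.0])   # level-2 coefficients

c1 = A2 @ c2                           # Equation (6): low-resolution version (pairwise averages)
d1 = B2 @ c2                           # Equation (7): lost detail (pairwise differences)
c2_back = P2 @ c1 + Q2 @ d1            # Equation (8): synthesis

print(c1)                              # [6. 2.]
print(d1)                              # [ 2. -1.]
assert np.allclose(c2, c2_back)        # perfect reconstruction
```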