
2018 15th Conference on Computer and Robot Vision

Learning Filters for the 2D Wavelet Transform

Daniel Recoskie and Richard Mann
Cheriton School of Computer Science
University of Waterloo
Waterloo, Canada
{dprecosk, mannr}@uwaterloo.ca

Abstract—We propose a new method for learning filters for the 2D discrete wavelet transform. We extend our previous work on the 1D wavelet transform in order to process images. We show that the 2D wavelet transform can be represented as a modified convolutional neural network (CNN). Doing so allows us to learn wavelet filters from data by gradient descent. Our learned wavelets are similar to traditional wavelets, which are typically derived using Fourier methods. For filter comparison, we make use of a cosine measure under all filter rotations. The learned wavelets are able to capture the structure of the training data. Furthermore, we can generate images from our model in order to evaluate the filters. The main finding of this work is that wavelet functions can arise naturally from data, without the need for Fourier methods. Our model requires relatively few parameters compared to traditional CNNs, and is easily incorporated into neural network frameworks.

Keywords—wavelets, convolutional neural networks, filter banks

I. INTRODUCTION

The purpose of this work is to extend our previous 1D wavelet transform model to 2D [1]. The wavelet transform is a linear time-frequency transform that makes use of a multiscale filter bank made up of self-similar wavelet filters. We focus on orthogonal filters so that we can perfectly reconstruct the input signal from its wavelet coefficients. Generally, wavelet filters are designed using Fourier methods. We propose a new method of learning wavelet filters directly from data. We accomplish this by framing the wavelet transform as a variant of a convolutional neural network (CNN). Our learning method makes use of an autoencoder framework [2]. We impose a sparsity constraint on the learned representation in order to learn wavelets that exploit structure in the training data.

A motivation of our previous work was to construct a model that was able to learn directly from raw 1D signals, such as audio. Typical models first perform a fixed feature transform (e.g. the Fourier transform) instead of dealing with the raw data directly [3]. More recently, there has been work showing impressive results on raw audio [4]. In [1] we proposed a novel model based on the wavelet transform that was able to learn useful filters from 1D data. The results are summarized in Figure 1.

Figure 1. Summary of results from [1]. Left column: Learned wavelet (solid) and scaling (dashed) functions. Type of training data is shown above the plots. Middle column: Closest traditional wavelet (solid) and scaling (dashed) functions according to (17). Right column: Plots of the scaling filters from the first two columns with corresponding distance measure.

We extend our 1D model in order to learn wavelet filters from 2D image data. This is in contrast to the more traditional method of using a fixed feature representation such as SIFT [5] or SURF [6]. One of the most notable methods of learning directly from image data is the CNN [7], which learns a set of 2D filters that are applied in a cascade. Our model instead learns 1D filters that are applied along each dimension of the image. Thus, the filters still have a 2D receptive field, but fewer parameters are required. Furthermore, we reuse the filters in each layer of the network, unlike in a traditional CNN where separate filters are used at each layer.
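To see the parameter savings behind this separable design, recall the standard identity that convolving with a length-k 1D filter along the rows and then along the columns equals convolving with the k x k outer-product filter. The sketch below (our illustration, not code from the paper; the filter values are arbitrary) checks this with NumPy/SciPy:

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical length-4 filter; arbitrary values for illustration only.
h = np.array([0.2, 0.6, 0.5, -0.1])
k = len(h)

image = np.random.randn(32, 32)

# Separable filtering: one 1D convolution along rows, then along columns.
tmp = convolve2d(image, h[np.newaxis, :], mode='valid')      # rows
separable = convolve2d(tmp, h[:, np.newaxis], mode='valid')  # columns

# The equivalent non-separable filter is the k x k outer product.
direct = convolve2d(image, np.outer(h, h), mode='valid')

assert np.allclose(separable, direct)
print(f"parameters: {k} (one 1D filter) vs {k * k} (equivalent 2D filter)")
```

A separable model therefore stores O(k) values per filter where a general 2D filter bank needs O(k^2), which is the saving claimed above.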
The wavelet transform has been used in neural networks in the past, such as in the wavelet network [8] and the scattering transform [9]. These models do not learn wavelet filters, but instead use fixed filters. The scattering transform computes coefficients similar to those of SIFT descriptors [10]. Learning wavelet filters using neural networks has been proposed before in [11]. However, the signals considered had domains over the vertices of graphs. Furthermore, second generation wavelets were considered [12]. In this work we consider only first generation wavelets.

II. WAVELET TRANSFORM

A. 1D Wavelet Transform

We will begin our discussion with the 1D wavelet transform. The wavelet transform is a linear time-frequency transform that makes use of a dictionary of wavelets. Wavelets are functions that are localized in time and frequency. In a traditional wavelet transform, each wavelet is a shifted and dilated version of a mother wavelet, $\psi$. We restrict ourselves to the discrete case, where our wavelets are of the form

$$\psi_j[n] = \frac{1}{2^j}\,\psi\!\left(\frac{n}{2^j}\right) \qquad (1)$$

for $n, j \in \mathbb{Z}$. Let $x$ be a 1D discrete signal of length $N$. The discrete wavelet transform is computed by convolving $x$ with the wavelet functions:

$$Wx[n, 2^j] = \sum_{m=0}^{N-1} x[m]\,\psi_j[m - n]. \qquad (2)$$

The wavelet functions are constrained to have zero mean and unit norm. For a fixed shift $n$, the wavelets form an overlapping bandpass filter bank illustrated in Figure 2. In order to cover the entire frequency axis, we require the notion of a scaling function, $\phi$. The scaling function can be thought of as the sum of all wavelet functions above a fixed scale. Formally, it is defined such that its Fourier transform has the following property [13]:

$$|\hat{\phi}(\omega)|^2 = \int_1^{+\infty} \frac{|\hat{\psi}(s\omega)|^2}{s}\,ds \qquad (3)$$

Figure 2. Frequency view of the wavelet functions.

In order to compute the discrete wavelet transform efficiently, we will make use of an iterative algorithm. Let us first define two filters:

$$h[n] = \left\langle \frac{1}{\sqrt{2}}\,\phi\!\left(\frac{t}{2}\right),\; \phi(t - n) \right\rangle \qquad (4)$$

$$g[n] = \left\langle \frac{1}{\sqrt{2}}\,\psi\!\left(\frac{t}{2}\right),\; \phi(t - n) \right\rangle \qquad (5)$$

We call $h$ the scaling (lowpass) filter, and $g$ the wavelet (highpass) filter. We will use $k$ to denote the length of the filters. The discrete wavelet transform can then be computed with the following iterative steps:

$$a_{j+1}[p] = \sum_{n=-\infty}^{+\infty} h[n - 2p]\,a_j[n] \qquad (6)$$

$$d_{j+1}[p] = \sum_{n=-\infty}^{+\infty} g[n - 2p]\,a_j[n] \qquad (7)$$

with $a_0 = x$ (i.e. the input signal). We call $a_j$ and $d_j$ the approximation and detail coefficients respectively. Note that the detail coefficients are exactly the wavelet coefficients from (2). The discrete wavelet transform algorithm makes use of a cascade of convolutions, each followed by downsampling. At each iteration, the signal is split into high and low frequency components. The high frequency components correspond to the wavelet coefficients. The low frequency coefficients are then used in the next iteration.

The original signal can be recovered from the approximation and detail coefficients using:

$$a_j[p] = \sum_{n=-\infty}^{+\infty} h[p - 2n]\,a_{j+1}[n] + \sum_{n=-\infty}^{+\infty} g[p - 2n]\,d_{j+1}[n] \qquad (8)$$

Note that the coefficients are upsampled by a factor of two at each iteration by inserting zeros at even indices.
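To make the cascade in (6)-(8) concrete, here is a minimal sketch of one forward and one inverse step. It is our illustration, not the authors' code: we assume periodic (circular) boundary handling, since the equations above are written for infinite-length signals, and we use the standard Daubechies-2 filter pair rather than learned filters.

```python
import numpy as np

# Daubechies-2 scaling (lowpass) filter; standard published values.
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))
# Wavelet (highpass) filter via the quadrature mirror relation.
g = ((-1) ** np.arange(len(h))) * h[::-1]

def dwt_step(a, h, g):
    """One iteration of (6)-(7): convolve, keep every second sample."""
    N, k = len(a), len(h)
    lo, hi = np.zeros(N // 2), np.zeros(N // 2)
    for p in range(N // 2):
        for m in range(k):            # substitute n = 2p + m in (6)-(7)
            lo[p] += h[m] * a[(2 * p + m) % N]
            hi[p] += g[m] * a[(2 * p + m) % N]
    return lo, hi

def idwt_step(lo, hi, h, g):
    """One iteration of (8): upsample by two and filter both branches."""
    N, k = 2 * len(lo), len(h)
    a = np.zeros(N)
    for n in range(len(lo)):
        for m in range(k):
            a[(2 * n + m) % N] += h[m] * lo[n] + g[m] * hi[n]
    return a

x = np.random.randn(64)
lo, hi = dwt_step(x, h, g)
assert np.allclose(idwt_step(lo, hi, h, g), x)  # perfect reconstruction
```

The model in the paper keeps this cascade structure and replaces the fixed filter values with learned ones.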
B. 2D Wavelet Transform

The discrete wavelet transform can be extended to two dimensions by computing the convolutions along each axis separately [14]. In the 1D case, we computed two components at each iteration of the algorithm (highpass and lowpass). In the 2D case, we will compute four components. Let $L_R$ and $L_C$ correspond to convolving the scaling (lowpass) filter along the rows and columns respectively. We can similarly define $H_R$ and $H_C$ for the wavelet (highpass) filter. At every iteration of the 2D wavelet transform algorithm, we compute the four components as illustrated in Figure 3:

approximation: $L_R(L_C(x))$ (9)
detail horizontal: $L_R(H_C(x))$ (10)
detail vertical: $H_R(L_C(x))$ (11)
detail diagonal: $H_R(H_C(x))$ (12)

where $x$ is now a 2D discrete signal. The three components computed using at least one wavelet filter are kept as the detail coefficients. The single approximation component computed by convolving the scaling filter along both axes is passed to the next iteration of the algorithm. The output of the transform is structured as in Figure 5a. Note that after each convolution, the image is downsampled by a factor of two along the direction of the convolution. Computing the 2D wavelet transform in this fashion is used in the JPEG2000 standard [15].

Figure 3. One iteration of the 2D discrete wavelet transform.

The three components containing the detail coefficients each correspond to a different orientation: horizontal, vertical, and diagonal. Each component responds to changes along its corresponding direction. Figure 4 shows one level of the wavelet transform applied to an image. Note that the three detail coefficient components highlight different edge orientations.

Figure 4. One iteration of the 2D discrete wavelet transform applied to a circular image. Note that the different coefficient components pick up different edge orientations.

Figure 6. Impulse responses of a typical wavelet filter.

As in the 1D case, we subsample by a factor of two after each convolution. Thus, the total number of coefficients is equal to the number of pixels in our original image. Figure 5 shows how the coefficients are typically represented graphically.

Figure 5. (a) Wavelet coefficient matrix after three iterations of the wavelet transform algorithm. (b) The wavelet coefficients are computed from the scaling coefficients of the previous iteration.
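The same 1D step extends directly to one 2D iteration. The sketch below (again ours, reusing dwt_step, h, and g from the 1D example above, with the same periodic-boundary assumption) computes the four components (9)-(12) by filtering every column and then every row of each result:

```python
def dwt2_step(x, h, g):
    """One iteration of the 2D transform, (9)-(12): filter along the
    columns, then along the rows, downsampling by two each time."""
    n_rows, n_cols = x.shape
    # Lowpass/highpass along the columns (L_C and H_C).
    lo_c = np.zeros((n_rows // 2, n_cols))
    hi_c = np.zeros((n_rows // 2, n_cols))
    for j in range(n_cols):
        lo_c[:, j], hi_c[:, j] = dwt_step(x[:, j], h, g)

    def filter_rows(y):
        # Lowpass/highpass along the rows (L_R and H_R).
        lo = np.zeros((y.shape[0], n_cols // 2))
        hi = np.zeros((y.shape[0], n_cols // 2))
        for i in range(y.shape[0]):
            lo[i, :], hi[i, :] = dwt_step(y[i, :], h, g)
        return lo, hi

    approx, det_v = filter_rows(lo_c)  # L_R(L_C(x)) (9), H_R(L_C(x)) (11)
    det_h, det_d = filter_rows(hi_c)   # L_R(H_C(x)) (10), H_R(H_C(x)) (12)
    return approx, det_h, det_v, det_d

img = np.random.randn(64, 64)
approx, det_h, det_v, det_d = dwt2_step(img, h, g)
# Four (32 x 32) components: coefficient count equals the pixel count.
assert sum(c.size for c in (approx, det_h, det_v, det_d)) == img.size
```

A full transform feeds approx back into dwt2_step at the next iteration, producing the nested coefficient layout of Figure 5a.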