
COMPRESSION ARTIFACT REMOVAL WITH STACKED MULTI-CONTEXT CHANNEL-WISE ATTENTION NETWORK

Binglin Li⋆   Jie Liang⋆   Yang Wang†

⋆School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
†Department of Computer Science, University of Manitoba, Winnipeg, MB, Canada
{binglinl, [email protected]}⋆   [email protected]

ABSTRACT

Image compression plays an important role in saving disk storage and transmission bandwidth. Among traditional compression standards, JPEG is one of the most commonly used standards for lossy image compression. However, the decompressed JPEG images usually contain inevitable artifacts due to the quantization step, especially at low bitrates. Many recent works leverage deep learning networks to remove the JPEG artifacts and have achieved notable progress. In this paper, we propose a stacked multi-context channel-wise attention model. The channel-wise attention adaptively integrates features along the channel dimension given a set of feature maps. We apply multiple context-based channel attentions to enable the network to capture features at different resolutions. The entire architecture is trained progressively from the image space of low quality factors to that of high quality factors. Experiments show that we can achieve state-of-the-art performance with lower complexity.

Index Terms— Compression artifact removal, image restoration, hourglass network, attention

1. INTRODUCTION

We consider the problem of artifact removal in lossy image compression. Compared to lossless compression standards such as PNG [13], lossy compression methods (e.g. JPEG [23], JPEG2000 [12] and WebP [8]) can produce smaller compressed files at the expense of a small amount of information loss. JPEG is the most commonly used standard in lossy image compression nowadays. The main components in JPEG include DCT, quantization, and entropy coding. Among them, almost all the information loss is caused by the quantization, which introduces various artifacts (blocking, ringing, blurring) that degrade the reconstructed images at the decoder. Compression artifact removal is a post-filtering process that aims to restore the degraded image as close to the artifact-free image as possible. Recent works show that deep learning is a promising technique for artifact removal. Methods based on deep learning can significantly improve the perceptual and metric similarities between the reconstructed and the original images.

Since JPEG operates at the block level with a block size of 8×8 pixels, it may cause one object to be divided into several blocks during compression and lead to artifacts in the reconstructed image. As different parts of an object are highly correlated and share similar textures, contextual information may help capture patterns in such a region and reduce the artifacts. In this paper, we propose to use attention to capture contextual information in an image for artifact removal. The attention mechanism has been used in other computer vision and image processing tasks. For example, Chu et al. [5] apply spatial attention to human pose estimation. Spatial attention allocates different weights to different spatial positions in an image. In image restoration, however, the evaluation is based on the whole image, where the computed values of all pixels are averaged, so learning an individual importance for each pixel may not help. Besides, spatial attention produces a large number of zeros; when such an attention map is applied to the features of a deep network, most of the features are mapped to zero.

Based on these observations, we propose a deep learning-based stacked multi-context channel-wise attention model and apply it to JPEG compression artifact removal. In our model, feature maps are integrated by a learnt rescaling attention vector along the channel dimension. In [26], the channel attention is incorporated into each residual block with the same architecture. Different from [26], we augment the channel attention with multiple contexts at different scales. This allows our model to effectively exploit multi-scale contextual information. During training, we also take advantage of decompressed images with different quality factors to progressively supervise the network. Experiments show that our method can achieve state-of-the-art performance with lower complexity.
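The channel-wise rescaling described above can be made concrete with a short sketch. The following PyTorch snippet is a minimal, generic squeeze-and-excitation style channel attention gate, shown only to illustrate the idea of a learnt per-channel weight vector; the pooling choice, reduction ratio, and layer sizes are illustrative assumptions and not the exact multi-context design proposed in this paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gate (illustrative, not the paper's exact module)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: one value per channel
        self.fc = nn.Sequential(                         # excitation: learn the rescaling vector
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(self.pool(x))                        # attention vector of shape (N, C, 1, 1)
        return x * w                                     # rescale each feature map


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)                   # a batch of feature maps
    out = ChannelAttention(channels=64)(feats)
    print(out.shape)                                     # torch.Size([2, 64, 32, 32])
```

The gate produces a single weight per channel rather than per pixel, which matches the argument above that per-pixel weighting is less suitable when restoration quality is averaged over the whole image.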
2. RELATED WORK

Several approaches to compression artifact removal have been proposed in recent years. In [6], a 4-layer convolutional network is proposed, where easy-to-hard transfer learning is used to initialize the parameters from a shallow network and to transfer features learnt at high compression quality factors to low quality factors, which facilitates faster convergence than random initialization. The model in [20] includes 8 layers. It predicts the residual map between the input and the ground-truth image, and uses skip connections to help propagate information. It also combines the direct mapping loss with a Sobel edge loss to focus on high-frequency recovery for better perceptual reconstructions. In [3], a 12-layer deep network with hierarchical skip connections and a multi-scale loss is proposed. The architecture has multiple downsampling and upsampling stages, and predicts the reconstructed outputs at different scales. It demonstrates that a deeper network has a better capability to restore images and is also effective in low-level vision tasks.

Some works [10, 7] follow the spirit of the Generative Adversarial Network (GAN). A GAN contains a generator and a discriminator, where the generator produces a candidate image to fool the discriminator, so that it is hard for the discriminator to distinguish whether an image comes from the generator or is a real image. These methods show that they can generate more realistic reconstructions, but may obtain relatively lower PSNR. They also apply an extra perceptual loss, where a pre-trained VGG network is used to encourage similar high-layer features between the predicted and the original images. Dual-domain learning is adopted in [10, 9, 25], where features from both the pixel domain and the DCT domain are integrated to enhance the final reconstruction. However, it is not clear whether the improvement is due to the proposed DCT-domain reconstruction or to the increase in the number of parameters from the extra branch. In [21], a very deep MemNet consisting of many memory blocks is developed. Gate units are applied to control how much of the previous memory blocks and the current state is retained. The densely connected structure helps restore mid/high-frequency signals. In [7, 15], it is shown that image restoration can benefit subsequent high-level computer vision tasks such as detection and segmentation.

Most recent works [14, 26, 11] focus on image super-resolution and achieve superior performance. However, their models have many more parameters (10M+) and require a larger training dataset. Moreover, they are not specifically designed for compression artifact removal. As the noise characteristics in compression and super-resolution are quite different, the techniques in super-resolution do not necessarily achieve satisfactory results when applied to the compression artifact removal task.
Fig. 1. Stacked multi-context channel-wise attention model. There are four stacks of hourglass networks H_1, H_2, H_3, H_4 in our system. H_2 and H_3 have the same architecture as H_1. X̂^1, X̂^2, X̂^3 and X̂^4 are the reconstructed outputs of the four sub-networks, respectively.

3. PROPOSED MODEL

In this section, we propose a deep learning-based multi-context channel-wise attention model to reduce JPEG compression artifacts. Our proposed model is based on the stacked hourglass network [17], which was originally developed for human pose estimation. Fig. 1 gives an overview of our proposed model. We use 4 stacks of hourglass networks {H_1, H_2, H_3, H_4} to allow for iterative reconstructions. Each yellow box in Fig. 1 represents a single residual module, the same as in [17]. For the last few layers in each stack of the hourglass network, we collect the outputs of the residual blocks at different scales and apply the channel-wise attention to each scale (the green boxes in Fig. 1).

Before the first hourglass network, we use a convolution layer and a residual block to obtain high-frequency components. Each hourglass stack produces a 2D residual map, with a channel dimension of 1, between the input decompressed image and the target image. The residual map is added to the input decompressed image X to generate the reconstructed image X̂^i at the current stack. A 1×1 convolution layer remaps the residual map to match the number of feature channels, and the result is added to the output features and the input of this stack to form the input of the next hourglass network. The last hourglass stack outputs X̂^4 without further steps. Given the input decompressed JPEG image X, the final reconstructed image X̂^4 is obtained by

    X̂^4 = X + H_4(H_3(H_2(H_1(X; θ_1); θ_2); θ_3); θ_4).    (1)

Θ = {θ_1, θ_2, θ_3, θ_4} denotes the parameters of the four sub-networks. The model parameters are trained end-to-end. (A code sketch of this stacked composition is given below Table 1.)

3.1. Channel-wise Attention Network

The channel attention network has shown great success in image super-resolution [26]. It adaptively integrates features by con-

Method            Classic5 PSNR   Classic5 SSIM      LIVE1 PSNR   LIVE1 SSIM
JPEG              27.82           0.7595             27.77        0.7730
ARCNN [6]         29.03           0.7929             28.96        0.8076
TNRD [4]          29.28           0.7992             29.15        0.8111
DnCNN [24]        29.40           0.8026             29.19        0.8123
CAS-CNN [3]       -               -                  29.44*       0.8333*
MemNet [21]       29.69           0.8107             29.45        0.8193
hourglass         29.61           0.8100             29.37        0.8182
hourglass(PS)     29.63           0.8109             29.38        0.8186
Ours(PS+atten)    29.70           0.8121 (0.8297*)   29.45        0.8201 (0.8342*)

Table 1. Average PSNR/SSIM on the Classic5 and LIVE1 datasets with quality factor 10.
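To make Eq. (1) and the per-stack outputs concrete, the following PyTorch sketch composes four stacks, each producing a 1-channel residual map that is added to the decompressed input X to form X̂^i. The hourglass internals are reduced to a placeholder block, and the channel width, layer configuration, and the supervision loss shown here are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HourglassStack(nn.Module):
    """Placeholder for one hourglass sub-network H_i (the real model uses an hourglass
    with multi-context channel-wise attention; this stand-in only mimics the interface)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_residual = nn.Conv2d(channels, 1, 3, padding=1)  # 2D residual map (1 channel)
        self.remap = nn.Conv2d(1, channels, 1)                   # 1x1 conv back to feature channels

    def forward(self, feats_in: torch.Tensor):
        feats = self.body(feats_in)
        res = self.to_residual(feats)                  # residual between input image and target
        next_in = feats_in + feats + self.remap(res)   # stack input + output features + remapped residual
        return next_in, res


class StackedModel(nn.Module):
    def __init__(self, channels: int = 64, num_stacks: int = 4):
        super().__init__()
        self.head = nn.Sequential(                     # conv layers before H_1 (the paper uses a
            nn.Conv2d(1, channels, 3, padding=1),      # conv layer plus a residual block here)
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.stacks = nn.ModuleList(HourglassStack(channels) for _ in range(num_stacks))

    def forward(self, x: torch.Tensor):
        feats = self.head(x)
        outputs = []
        for stack in self.stacks:
            feats, res = stack(feats)
            outputs.append(x + res)                    # X_hat^i = X + residual of stack i
        return outputs                                 # [X_hat^1, ..., X_hat^4]


if __name__ == "__main__":
    model = StackedModel()
    x = torch.randn(1, 1, 64, 64)                      # a decompressed single-channel patch
    y = torch.randn(1, 1, 64, 64)                      # its artifact-free target
    x_hats = model(x)
    # Supervision on every stack; the paper's progressive scheme supervises intermediate
    # stacks with decompressed images of different quality factors (not reproduced here).
    loss = sum(F.l1_loss(x_hat, y) for x_hat in x_hats)
    loss.backward()
    print(len(x_hats), loss.item())
```

The final element of the returned list corresponds to Eq. (1): it equals X plus the residual predicted by the last stack after the four sub-networks have been applied in sequence.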