
A Content-Aware Image Prior

Taeg Sang Cho†, Neel Joshi‡, C. Lawrence Zitnick‡, Sing Bing Kang‡, Richard Szeliski‡, William T. Freeman†
† Massachusetts Institute of Technology, ‡ Microsoft Research

Abstract

In image restoration tasks, a heavy-tailed gradient distribution of natural images has been extensively exploited as an image prior. Most image restoration algorithms impose a sparse gradient prior on the whole image, reconstructing an image with piecewise smooth characteristics. While the sparse gradient prior removes ringing and noise artifacts, it also tends to remove mid-frequency textures, degrading the visual quality. We can attribute such degradations to imposing an incorrect image prior. The gradient profile in fractal-like textures, such as trees, is close to a Gaussian distribution, and small gradients from such regions are severely penalized by the sparse gradient prior.

To address this issue, we introduce an image restoration algorithm that adapts the image prior to the underlying texture. We adapt the prior to both low-level local structures and mid-level textural characteristics. Improvements in visual quality are demonstrated on deconvolution and denoising tasks.

[Figure 1: log-scale histograms of orthogonal-gradient and aligned-gradient magnitudes for differently textured image regions.] Figure 1. The colored gradient profiles correspond to the region with the same color mask. The steered gradient profile is spatially variant in most natural images. Therefore, the image prior should adapt to the image content. Insets illustrate how steered gradients adapt to local structures.

1. Introduction

Image enhancement algorithms resort to image priors to hallucinate information lost during the image capture. In recent years, image priors based on image gradient statistics have received much attention. Natural images often
consist of smooth regions with abrupt edges, leading to a heavy-tailed gradient profile. We can parameterize heavy-tailed gradient statistics with a generalized Gaussian distribution or a mixture of Gaussians. Prior works hand-select parameters for the model distribution and fix them for the entire image, imposing the same image prior everywhere [10, 17, 23]. Unfortunately, different textures have different gradient statistics even within a single image; therefore, imposing a single image prior on the entire image is inappropriate (Figure 1).

We introduce an algorithm that adapts the image prior to both low-level local structures and mid-level texture cues, thereby imposing the correct prior for each texture.¹ Adapting the image prior to the image content improves the quality of restored images.

¹ Strictly speaking, an estimate of image statistics made after examining the image is no longer a "prior" probability. But the fitted gradient distributions play the same role as an image prior in image reconstruction equations, and we keep that terminology.

2. Related work

Image prior research revolves around finding a good image transform or basis functions such that the transformed image exhibits characteristics distinct from unnatural images. Transforms derived from signal processing have been exploited in the past, including the Fourier transform [12], the wavelet transform [28], the curvelet transform [6], and the contourlet transform [8].

Basis functions learned from natural images have also been introduced. Most techniques learn filters that lie in the null-space of the natural image manifold [20, 30, 31, 32]. Aharon et al. [1] learn a vocabulary from which a natural image is composed. However, none of these techniques adapt the basis functions to the image under analysis.

Edge-preserving smoothing operators do adapt to local structures while reconstructing images. The anisotropic diffusion operator [5] detects edges, and smoothes along edges but not across them. A similar idea appeared in a probabilistic framework called a Gaussian conditional random field [26]. A bilateral filter [27] is also closely related to anisotropic operators. Elad [9] and Barash [2] discuss relationships between edge-preserving operators.

Some image models adapt to edge orientations as well as magnitudes. Hammond et al. [13] present a Gaussian scale mixture model that captures the statistics of gradients adaptively steered in the dominant orientation in image patches. Roth et al. [21] present a random field model that adapts to oriented structures. Bennett et al. [3] and Joshi et al. [14] exploit a prior on colors: an observation that there are two dominant colors in a small window.

Adapting the image prior to textural characteristics was investigated for gray-scale images consisting of a single texture [23]. Bishop et al. [4] present a variational image restoration framework that breaks an image into square blocks and adapts the image prior to each block independently (i.e., the image prior is fixed within the block). However, Bishop et al. [4] do not address issues with estimating the image prior at texture boundaries.

3. Image characteristics

We analyze the statistics of gradients adaptively steered in the dominant local orientation of an image x. Roth et al. [21] observe that the gradient profile of orthogonal gradients ∇ox is typically of higher variance compared to that of aligned gradients ∇ax, and propose imposing different priors on ∇ox and ∇ax. We show that different textures within the same image also have distinct gradient profiles.

We parameterize the gradient profile using a generalized Gaussian distribution:

    p(∇x) = (γ λ^(1/γ) / (2 Γ(1/γ))) exp(−λ ‖∇x‖^γ)    (1)

where Γ is the Gamma function, and γ, λ are the shape parameters. γ determines the peakiness and λ determines the width of the distribution. We assume that ∇ox and ∇ax are independent: p(∇ox, ∇ax) = p(∇ox) p(∇ax).

3.1. Spatially variant gradient statistics

The local gradient statistics can be different from the global gradient statistics. Figure 1 shows the gradient statistics of the colored regions. Two phenomena are responsible for the spatially variant gradient statistics: the material and the viewing distance. For example, a building is noticeably more piecewise smooth than a gravel path due to material properties, whereas the same gravel path can exhibit different gradient statistics depending on the viewing distance. To account for the spatially variant gradient statistics, we propose adjusting the image prior locally.

3.2. The distribution of γ, λ in natural images

Different textures give rise to different gradient profiles and thus different γ, λ. We study the distribution of the shape parameters γ, λ in natural images. We sample ∼110,000 patches of size 41 × 41 from 500 high-quality natural images. We fit the gradient profile from each patch to a generalized Gaussian distribution to associate each patch with γ, λ. We fit the distribution by searching for γ, λ that minimize the Kullback-Leibler (KL) divergence between the empirical gradient distribution and the model distribution p, which is equivalent to minimizing the negative log-likelihood of the model distribution evaluated over the gradient samples:

    [γ̃, λ̃] = argmin_{γ,λ} { −(1/N) Σ_{i=1}^{N} ln p(∇x_i) }    (2)

[Figure 2: Parzen-window density of (γ, ln λ), with γ on the horizontal axis and ln λ on the vertical axis, for orthogonal and aligned gradients.] Figure 2. The distribution of γ, ln(λ) for ∇ox, ∇ax in natural images. While the density is the highest around γ = 0.5, the density tails off slowly with significant density around γ = 2. We show, as insets, some patches from Figure 1 that are representative of different γ, ln(λ).

Figure 2 shows the Parzen-window fit to sampled γ̃, ln(λ̃) for ∇ox, ∇ax. For orthogonal gradients ∇ox, there exists a large cluster near γ = 0.5, ln(λ) = 2. This cluster corresponds to patches from a smooth region with abrupt edges or a region near texture boundaries. This observation supports the dead leaves image model – an image is a collage of overlapping instances [16, 19]. However, we also observe a significant density even when γ is greater than 1. Samples near γ = 2 with large λ correspond to flat regions such as sky, and samples near γ = 2 with small λ correspond to fractal-like textures such as tree leaves or grass. We observe similar characteristics for aligned gradients ∇ax as well. The distribution of shape parameters suggests that a significant portion of natural images is not piecewise smooth, which justifies adapting the image prior to the image content.

4. Adapting the image prior

The goal of this paper is to identify the correct image prior (i.e., the shape parameters of a generalized Gaussian distribution) for each pixel in the image. One way to identify the image prior is to fit gradients from every sliding window to a generalized Gaussian distribution (Eq. 2), but the required computation would be overwhelming. We introduce a regression-based method to estimate the image prior.

[Figure: panels showing the variance and the fourth moment of gradients.]

4.1. Image model

Let y be an observed degraded image, k be a blur kernel (a point-spread function, or PSF), and x be a latent image.
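The section breaks off after introducing y, k, and x; these are presumably combined as the standard degradation model y = k ⊗ x + n (blur followed by additive noise) in the paper's subsequent equations. A minimal sketch of that standard model, under that assumption (the function and parameter names here are ours, not the paper's):

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(x, k, noise_sigma, rng):
    """Simulate the observation model y = k * x + n: blur by the PSF k,
    then add Gaussian noise of standard deviation noise_sigma."""
    y = fftconvolve(x, k, mode="same")                 # convolution with the PSF
    return y + rng.normal(0.0, noise_sigma, y.shape)   # additive noise n

# Toy usage: a random stand-in latent image blurred by a box PSF.
rng = np.random.default_rng(0)
x = rng.random((64, 64))
k = np.full((5, 5), 1.0 / 25.0)   # normalized 5x5 box PSF (sums to 1)
y = degrade(x, k, noise_sigma=0.01, rng=rng)
```

Keeping the PSF normalized to unit sum preserves the mean intensity of the latent image in the observation.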
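The per-patch fitting procedure of Eqs. (1)–(2) can be sketched directly: evaluate the generalized Gaussian negative log-likelihood on the gradient samples and grid-search over (γ, λ). This is a simplified sketch (the paper minimizes the same objective but does not specify its search strategy in this excerpt); note that gradients drawn from a unit Gaussian correspond to γ = 2, λ = 0.5:

```python
import numpy as np
from scipy.special import gammaln

def ggd_nll(g, gamma, lam):
    """Mean negative log-likelihood of gradient samples g under Eq. (1):
    p(g) = gamma * lam**(1/gamma) / (2 * Gamma(1/gamma)) * exp(-lam * |g|**gamma)."""
    log_norm = (np.log(gamma) + np.log(lam) / gamma
                - np.log(2.0) - gammaln(1.0 / gamma))
    return -log_norm + lam * np.mean(np.abs(g) ** gamma)

def fit_ggd(g, gammas, lams):
    """Eq. (2): grid search for the (gamma, lambda) minimizing the NLL."""
    return min(((ggd_nll(g, ga, la), ga, la)
                for ga in gammas for la in lams))[1:]

rng = np.random.default_rng(1)
g = rng.normal(size=20000)            # synthetic "gradient" samples, N(0, 1)
gamma_hat, lam_hat = fit_ggd(g,
                             np.linspace(0.3, 3.5, 33),
                             np.exp(np.linspace(-2.0, 2.0, 41)))
```

Searching λ on a log-spaced grid mirrors the paper's use of ln(λ) as the natural parameterization in Figure 2.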
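Section 3's steered gradients ∇ox, ∇ax are the derivatives taken across and along the dominant local orientation. A minimal sketch, using a smoothed structure tensor to estimate that orientation (our choice of estimator; the paper does not specify one in this excerpt):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_gradients(img, sigma=2.0):
    """Split the gradient field into components orthogonal to and aligned
    with the dominant local structure."""
    gy, gx = np.gradient(img)
    # Smoothed structure tensor; its principal eigenvector gives the
    # dominant local gradient orientation theta.
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    c, s = np.cos(theta), np.sin(theta)
    grad_o = c * gx + s * gy    # orthogonal gradient (across edges)
    grad_a = c * gy - s * gx    # aligned gradient (along edges)
    return grad_o, grad_a

# A horizontal ramp has gradients purely in x: all energy is orthogonal.
img = np.tile(np.arange(32, dtype=float), (32, 1))
grad_o, grad_a = steered_gradients(img)
```

On the ramp, the estimated orientation is horizontal everywhere, so the orthogonal component carries the constant slope while the aligned component vanishes, matching the paper's intuition that ∇ox has higher variance than ∇ax.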