
DOI 10.1515/jisys-2013-0033 Journal of Intelligent Systems 2013; 22(3): 299–315

Research Article

Rajkumar L. Biradar* and Vinayadatt V. Kohir Texture Inpainting Using Covariance in Wavelet Domain

Abstract: In this article, the covariance of wavelet transform coefficients is used to obtain a texture inpainted image. The covariance is obtained by the maximum likelihood estimate. The integration of wavelet decomposition and maximum likelihood estimate of a group of pixels (texture) captures the best-fitting texture used to fill in the inpainting area. The image is decomposed into four wavelet coefficient images by using wavelet transform. These wavelet coefficient images are divided into small square patches. The damaged region in each coefficient image is filled by searching a similar square patch around the damaged region in that particular wavelet coefficient image by using covariance.

Keywords: Inpainting, texture, maximum likelihood estimate, covariance, wavelet transform.

*Corresponding author: Rajkumar L. Biradar, Electronics and Telematics Department, G. Narayanamma Institute of Technology and Science, Shaikpet, Hyderabad 500008, India, e-mail: [email protected]
Vinayadatt V. Kohir: Poojya Doddappa Appa College of Engineering, Gulbarga 585102, India

1 Introduction

Image inpainting is the technique of filling in damaged regions of an image in a way that is not detectable by an observer who does not know the original damaged image. The concept of digital inpainting was introduced by Bertalmio et al. [3]. In its most conventional form, the user selects an area for inpainting and the algorithm automatically fills in the region with information from its surroundings without loss of perceptual quality. Inpainting techniques are broadly categorized as structure inpainting and texture inpainting. Structure inpainting relies on filling the inner area with information from the structured region at the boundary of the region to be inpainted. Texture inpainting techniques fill in damaged or missing regions using similar neighborhood regions in the image: they try to match the statistics of damaged regions to the statistics of known regions in the neighborhood of the damaged area.

In this article, we address the issue of texture inpainting by integrating the maximum likelihood estimate (MLE) of covariance and the wavelet transform. The original damaged texture image is decomposed into four quarter-size wavelet coefficient images by using the wavelet transform. Then, each wavelet coefficient image is divided into small square patches. The damaged regions in all wavelet coefficient images are filled by searching for a similar patch around the damaged region in that particular image. The covariance is used to find the similarity between two patches. After filling in the damaged region of each wavelet coefficient image in the wavelet domain, the inverse discrete wavelet transform (DWT) is used to obtain the inpainted image. The proposed algorithm achieves better results for removing objects and for inpainting larger damaged regions in texture images.

2 Previous Work

Inpainting techniques can be classified into two broad categories: structure inpainting and texture inpainting.

2.1 Structure Inpainting

In structure inpainting, the damaged region is filled by diffusing the neighboring information of the image into it. The diffusion of neighboring information is achieved using a partial differential equation (PDE). Bertalmio et al. [3] developed a digital inpainting algorithm based on the PDE; it is an extension of the level-lines-based disocclusion method proposed by Masnou and Morel [16]. In Bertalmio et al.'s inpainting, the direction of arrival of isophotes is maintained by computing the direction of the largest spatial change. The direction is obtained by computing a discretized gradient vector. Bertalmio et al. [4] used ideas from computational fluid dynamics to propagate isophote lines continuously from the exterior into the region to be inpainted. Chan and Shen proposed two image inpainting algorithms. The total variation (TV) inpainting model [6, 13] uses Euler–Lagrange modeling. Inside the inpainting domain, this model employs anisotropic diffusion [18] based on the contrast of the isophotes. The curvature-driven diffusion model [7, 8] is an extended TV algorithm in which the geometric information of isophotes is taken into account when defining the strength of the diffusion process, thus allowing the inpainting to proceed over larger inpainting regions.

Zhang [21] introduced fractional-order image inpainting (a projection interpolation method) into metal artifacts reduction in computed tomography (CT) images. They introduced a fast non-iterative method based on the fast marching method and coherence transport for metal artifacts reduction in CT. Zhang [22] also proposed an image inpainting method that combines the TV model with a fractional derivative, called the fractional-order TV image inpainting model. They introduced a new class of fractional-order variational image inpainting models in both the space and wavelet domains. All PDE-based methods require a difficult implementation process. Some steps are numerically unstable, and the inpainting process is slow. For large damaged regions, the results show a blocky effect.

2.2 Texture Inpainting

Texture inpainting is relatively difficult, as texture must be either synthesized or pasted from the surrounding area of the damaged region based on some similarity property. The statistical properties of texture play an important role in synthesizing textures and in measuring the similarity between two textures. The damaged or missing regions are filled in using similar neighborhood texture in the image. The statistics of the boundary of damaged regions should be similar to the statistics of known neighborhood regions. Hirani and Totsuka [15] combined frequency and spatial domain information to fill a given region with a selected texture. Other algorithms, e.g., see refs. [1, 20], can be used as well to recreate a preselected texture to fill in a square region to be inpainted. These algorithms mainly deal with texture synthesis and not with structured backgrounds. Efros and Leung [11] proposed a non-parametric texture synthesis model based on Markov random fields to inpaint textural images. In their method, first a neighborhood around a damaged pixel is selected, and then the known regions of the image are searched to find the region most similar to the selected neighborhood. Finally, the central pixel of the found neighborhood is copied to the damaged pixel. This method is time consuming and does not produce good results in structured regions. Criminisi and Perez [10] modified the model in ref. [11] to achieve better results. Chang [9] showed that the priority function adopted in ref. [11] is not a well-defined function and may become unreliable after a number of iterations. Therefore, they proposed a new function to assign priority to pixels, providing a robust exemplar-based algorithm. Bertalmio et al. [5] proposed to decompose the original image into textural and structural subimages. The structural subimage is reconstructed using a structure algorithm, and the textural subimage is restored by a texture synthesis approach.
Then, the two processed components are combined to obtain an inpainted image. A similar approach was proposed by Grossauer [14], in which, instead of decomposing the image, the original image is segmented into two subregions. Bai et al. [2] proposed completion of missing parts by structure propagation and by synthesizing the regions along the salient structures specified by the user. After structure completion, a finer algorithm is used to fill in the remaining unknown regions. It can prevent erroneous matching blocks and reduce the breaking of salient structures. The proposed algorithm copies the texture in the wavelet domain by using the MLE of covariance in a localized window around the inpainting area.

3 Wavelet Transform

Wavelet transforms are based on small waves, called wavelets, of varying frequency and limited time duration. This allows us to analyze both the time and frequency domain information of a function simultaneously. A wavelet basis starts with two orthogonal functions: the scaling function φ(x), or father wavelet, and the wavelet function ψ(x), or mother wavelet, which allow detecting frequency components and their location in the time or spatial domain. The one-dimensional scaling function φ(x) and wavelet function ψ(x) [12] are defined as

φ(x) = Σ_n hφ(n) √2 φ(2x − n),  (1)

ψ(x) = Σ_n hψ(n) √2 φ(2x − n),  (2)

where hφ(n) and hψ(n) are called the scaling function coefficient vector and the wavelet function coefficient vector, respectively. In two dimensions, a two-dimensional scaling function φ(x, y) and three two-dimensional wavelet functions, ψH(x, y), ψV(x, y), and ψD(x, y), are required. Each is the product of a one-dimensional scaling function φ(·) and the corresponding wavelet function ψ(·). Excluding the products that produce one-dimensional results, such as φ(x)ψ(x), the four remaining products produce the separable scaling function

φ(x, y) = φ(x)φ(y),  (3)

and the separable, directionally sensitive wavelets

ψH(x, y) = ψ(x)φ(y),  (4)

ψV(x, y) = φ(x)ψ(y),  (5)

ψD(x, y) = ψ(x)ψ(y).  (6)

These wavelets measure variations in intensity or gray level of an image along different directions. ψH(·), ψV(·), and ψD(·) measure variations along columns (horizontal edges), rows (vertical edges), and diagonals, respectively. By scaling and translation of these orthogonal functions, we obtain a complete basis set, defined as
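As a concrete illustration (ours, not from the paper), the separable products of Eqs. (3)–(6) can be checked numerically for the Haar filters, where hφ = (1/√2)[1, 1] and hψ = (1/√2)[1, −1]; the four outer products form an orthonormal set of 2 × 2 kernels:

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
h_phi = np.array([s, s])    # Haar scaling (lowpass) coefficient vector
h_psi = np.array([s, -s])   # Haar wavelet (highpass) coefficient vector

# Separable 2D kernels per Eqs. (3)-(6); rows index x, columns index y.
phi_xy  = np.outer(h_phi, h_phi)   # phi(x)phi(y) - approximation
psiH_xy = np.outer(h_psi, h_phi)   # psi(x)phi(y) - horizontal detail
psiV_xy = np.outer(h_phi, h_psi)   # phi(x)psi(y) - vertical detail
psiD_xy = np.outer(h_psi, h_psi)   # psi(x)psi(y) - diagonal detail

kernels = [phi_xy, psiH_xy, psiV_xy, psiD_xy]
# Gram matrix of pairwise inner products: identity iff the set is orthonormal.
gram = np.array([[np.sum(a * b) for b in kernels] for a in kernels])
```

The resulting Gram matrix is the 4 × 4 identity, confirming that the four separable products form an orthonormal basis for 2 × 2 image blocks.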

φ_{j,m,n}(x, y) = 2^{j/2} φ(2^j x − m, 2^j y − n),  (7)

ψ^i_{j,m,n}(x, y) = 2^{j/2} ψ^i(2^j x − m, 2^j y − n),  (8)

where i = {H, V, D} identifies the directional wavelets of Eqs. (4)–(6). The two-dimensional DWT of an image f(x, y) of size N × N is

Wφ(j0, m, n) = (1/N) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) φ_{j0,m,n}(x, y),  (9)

W^i_ψ(j, m, n) = (1/N) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) ψ^i_{j,m,n}(x, y).  (10)

The approximate wavelet coefficient Wφ(j0, m, n) defines the approximation of f(x, y) at starting scale j0. The detail wavelet coefficients W^i_ψ(j, m, n) add horizontal, vertical, and diagonal details for scales j ≥ j0. The inverse two-dimensional DWT, which recovers the image f(x, y) from the wavelet coefficients, is given by

f(x, y) = (1/N) Σ_m Σ_n Wφ(j0, m, n) φ_{j0,m,n}(x, y) + (1/N) Σ_{i=H,V,D} Σ_{j=j0}^{∞} Σ_m Σ_n W^i_ψ(j, m, n) ψ^i_{j,m,n}(x, y).  (11)

The two-dimensional Eqs. (9) and (10) can be expressed in terms of a convolution operation:

D  Wjψφ(1−=,,mn)(Wj,,mn)(∗−hnψψ)(∗−hm), (12) nk=≥2,km02=≥kk,0

V  Wjψφ(1−=,,mn)(Wj,,mn)(∗−hnψφ)(∗−hm), (13) nk=≥2,km02=≥kk,0

H  Wjψφ(1−=,,mn)(Wj,,mn)(∗−hnφψ)(∗−hm), (14) nk=≥2,km02=≥kk,0  Wjφφ(1−=,,mn)(Wj,,mn)(∗−hnφφ)(∗−hm). (15) nk=≥2,km02=≥kk,0 304 R. L. Biradar and V. V. Kohir

In Eqs. (12)–(15), the input image f(x, y) is used as the wavelet coefficient image Wφ(j, m, n) at scale j to yield four quarter-size wavelet coefficient images at scale j − 1: Wφ(j − 1, m, n), W^H_ψ(j − 1, m, n), W^V_ψ(j − 1, m, n), and W^D_ψ(j − 1, m, n), which represent the average or approximated coefficient image, horizontal edge coefficient image, vertical edge coefficient image, and diagonal edge coefficient image of the input image f(x, y), respectively. For simplicity, we denote these images as A(x, y), H(x, y), V(x, y), and D(x, y), respectively, in our further analysis.
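A minimal numerical sketch (our illustration, not the authors' code) of one analysis level of Eqs. (12)–(15) for the Haar wavelet: each 2 × 2 block of the input yields one coefficient in each of the quarter-size images A, H, V, and D, and the synthesis step recovers f(x, y) exactly:

```python
import numpy as np

def haar_dwt2(f):
    """One-level 2D Haar DWT: returns quarter-size A, H, V, D images."""
    a, b = f[0::2, 0::2], f[0::2, 1::2]   # top-left / top-right of each block
    c, d = f[1::2, 0::2], f[1::2, 1::2]   # bottom-left / bottom-right
    A = (a + b + c + d) / 2.0             # lowpass along rows and columns
    H = (a + b - c - d) / 2.0             # highpass along x (horizontal edges)
    V = (a - b + c - d) / 2.0             # highpass along y (vertical edges)
    D = (a - b - c + d) / 2.0             # highpass along both (diagonal edges)
    return A, H, V, D

def haar_idwt2(A, H, V, D):
    """Inverse of haar_dwt2, i.e., Eq. (11) specialized to the Haar basis."""
    f = np.empty((2 * A.shape[0], 2 * A.shape[1]))
    f[0::2, 0::2] = (A + H + V + D) / 2.0
    f[0::2, 1::2] = (A + H - V - D) / 2.0
    f[1::2, 0::2] = (A - H + V - D) / 2.0
    f[1::2, 1::2] = (A - H - V + D) / 2.0
    return f

rng = np.random.default_rng(1)
img = rng.random((8, 8))
A, H, V, D = haar_dwt2(img)     # four 4 x 4 coefficient images
recon = haar_idwt2(A, H, V, D)  # perfect reconstruction
```

Production code would typically use a wavelet library instead of this hand-rolled Haar step, but the block makes the quarter-size decomposition and its perfect reconstruction concrete.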

4 Texture Similarity Measure

Texture is a closely related group of pixels. Hence, the similarity between two textures can be estimated on the basis of how best we understand and model the inter-relations between the pixels of a texture. The inter-relations of the pixels can be estimated if we treat the image as a random process. A stochastic process [17, 19] is a generalization of the random variable concept.

Consider a random variable Z with a set of n observation samples (z1, z2, …, zn). In most image processing applications, these are pixel values in the image. Each sample is characterized by a common probability density function (PDF). The PDF of a normally distributed variable Z is

p(z | μ, σ²) = (1/(2πσ²)^{1/2}) exp(−(z − μ)²/(2σ²)),  (16)

where the mean μ = E[z] and the variance σ² = E[(z − μ)(z − μ)].

4.1 Maximum Likelihood Estimation

For n independent observations (z1, z2, … , zn), the joint distribution is the product of their univariate distributions, i.e.,

p(z1, z2, …, zn | μ, σ²) = p(z1 | μ, σ²) p(z2 | μ, σ²) ⋯ p(zn | μ, σ²)
  = Π_{i=1}^{n} p(zi | μ, σ²)
  = (1/(2πσ²)^{n/2}) exp(−(1/(2σ²)) Σ_{i=1}^{n} (zi − μ)²),  (17)

which is the joint density of the samples given the mean and variance.

4.1.1 Likelihood Function

The likelihood of the samples is defined as

L(μ, σ² | z1, z2, …, zn) = (1/(2πσ²)^{n/2}) exp(−(1/(2σ²)) Σ_{i=1}^{n} (zi − μ)²).  (18)

We want to maximize L given the observed sample values (z1, z2, …, zn); that is, we determine for what values of the mean and variance L is maximum. It is computationally easier to maximize the natural logarithm of L than L itself; therefore,

ln(L(μ, σ² | z1, z2, …, zn)) = ln((1/(2πσ²)^{n/2}) exp(−(1/(2σ²)) Σ_{i=1}^{n} (zi − μ)²))
  = ln(1/(2πσ²)^{n/2}) − (1/(2σ²)) Σ_{i=1}^{n} (zi − μ)²  (19)
  = −(n/2) ln(2π) − (n/2) ln(σ²) − (1/(2σ²)) Σ_{i=1}^{n} (zi − μ)².

4.1.2 Estimate of Mean

Now, to find the value of the mean μ that maximizes the likelihood of the samples, we maximize the log-likelihood function by taking the partial derivative of Eq. (19) with respect to μ and equating it to zero:

∂(ln L)/∂μ = (1/σ²) Σ_{i=1}^{n} (zi − μ) = 0,
  μ = (1/n) Σ_{i=1}^{n} zi = E[zi].  (20)

4.1.3 Estimate of Variance

The second-order statistic, the covariance function [17], is used for searching similarities in images or image parts. To find σ², differentiate Eq. (19) with respect to σ² and equate it to zero:

∂(ln L)/∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σ_{i=1}^{n} (zi − μ)² = 0,
  σ² = (1/n) Σ_{i=1}^{n} (zi − μ)² = (1/n) Σ_{i=1}^{n} (zi − μ)(zi − μ)  (21)
  = E[(zi − μ)²] = Cov(z, z).

Thus, the MLE mean and variance are the same as the mean and variance of the random variable Z.
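The closed-form estimates of Eqs. (20) and (21) can be verified numerically. The sketch below (our illustration; the sample parameters are arbitrary) checks that the sample mean and the 1/n sample variance maximize the log-likelihood of Eq. (19) against nearby parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(5.0, 2.0, size=10_000)   # samples from N(mu=5, sigma^2=4)

# MLE per Eqs. (20) and (21): sample mean and 1/n sample variance.
mu_hat = z.mean()
var_hat = np.mean((z - mu_hat) ** 2)

def log_lik(mu, var):
    """Log-likelihood of Eq. (19) for the normal sample z."""
    n = z.size
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * n * np.log(var)
            - np.sum((z - mu) ** 2) / (2 * var))

best = log_lik(mu_hat, var_hat)
# Perturbing either estimate away from the MLE lowers the likelihood.
worse = max(log_lik(mu_hat + 0.1, var_hat),
            log_lik(mu_hat, var_hat * 1.1),
            log_lik(mu_hat - 0.1, var_hat * 0.9))
```

Here `mu_hat` and `var_hat` land close to the true parameters (5 and 4), and `best > worse` confirms the stationary point of Eq. (19) is a maximum.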

4.1.4 Covariance–Texture Similarity Measure

The similarity between textures can be measured on the basis of the statistical properties of the textures, which can be computed using Eqs. (20) and (21). A second-order similarity measure based on variance can be defined as the covariance:

Cov(f, g) = E[(f(x, y) − μf)(g(x, y) − μg)]
  = (1/N²) Σ_{x=1}^{N} Σ_{y=1}^{N} (f(x, y) − μf)(g(x, y) − μg),  (22)

where (μf, σf²) and (μg, σg²) are the mean and variance of textures f(x, y) and g(x, y), respectively.
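In code, Eq. (22) is a one-liner. The sketch below (our illustration, with arbitrary random test textures) also shows that the covariance of a texture with itself reduces to its variance, matching Eq. (21):

```python
import numpy as np

def texture_cov(f, g):
    """Covariance similarity of two equal-size textures, per Eq. (22)."""
    return np.mean((f - f.mean()) * (g - g.mean()))

rng = np.random.default_rng(2)
f = rng.random((16, 16))
g = rng.random((16, 16))

self_sim  = texture_cov(f, f)   # equals Var(f): the maximum for texture f
cross_sim = texture_cov(f, g)   # near zero for an unrelated texture
```

As expected, `self_sim` equals `f.var()` and exceeds the covariance with an unrelated texture, which is why the algorithm selects the candidate patch with the largest covariance.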

5 Wavelet Domain Texture Inpainting Using Covariance

The texture is an oscillating pattern, and each pattern is a group of pixels that are tightly coupled to one another within the group. In texture inpainting, we need to diffuse a whole group of pixels as a unit. The wavelet transform of the input image is taken, and each of the wavelet coefficient images defines a texture completely or partially. A patch at the boundary of the inpainting area is taken as the reference patch of texture. A search for a similar patch in the wavelet coefficient image is made, and the patch found is then placed adjacent to the reference to fill in the inpainting area. Independent search and placement is carried out in each of the four wavelet coefficient images. The placed patch becomes the new reference patch, a search for a similar patch is carried out with respect to this new reference, and the searched patch is placed adjacent as before. This process is repeated for the entire inpainting area horizontally.

The MLE of covariance is used to measure the similarity between two textures. We measure this similarity for texture patches instead of the entire image in the wavelet domain. Initially, each wavelet coefficient image, of size N/2 × N/2, is divided into small square patches of size NB × NB. The block size varies from 4 × 4 to 7 × 7, depending on the texture element. PA1, PA2, PA3, … are patches of the approximate coefficient image A. Similarly, the patches PH1, PH2, PH3, …; PV1, PV2, PV3, …; and PD1, PD2, PD3, … are patches of the horizontal coefficient image H, vertical coefficient image V, and diagonal coefficient image D, respectively. Equation (22) of covariance is used to measure the similarity between texture images. We define it for texture patches:

Cov(PR, Pr) = (1/NB²) Σ_{m=1}^{NB} Σ_{n=1}^{NB} (PR(m, n) − μR)(Pr(m, n) − μr).  (23)

For two similar patches, the covariance value is maximum. The image is decomposed into four wavelet coefficient images using the DWT, and these coefficient images are divided into small square patches. We determine the covariance and correlation coefficient between the reference patch PR(x, y) and the neighborhood patches Pr(x, y) (r = 1, 2, 3, …). The damaged region in each coefficient image is filled by searching for the best square patch around the damaged region in that particular wavelet coefficient image and then pasting it. The pasted patch Pr(x, y) becomes the new reference patch PR(x, y), and the search for a similar patch in the wavelet coefficient image is carried out again; the searched patch is placed as before. This process is repeated for each row of the inpainting area and for all wavelet coefficient images. The inverse DWT is then taken to obtain the inpainted image.
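The search step can be sketched as follows (our simplified illustration: the patch size, exhaustive scan, and tie-breaking rule are assumptions, and a toy coefficient image with a planted duplicate patch stands in for a real wavelet band). Candidate patches are scored with Eq. (23), and the highest-covariance patch is selected:

```python
import numpy as np

def patch_cov(P_R, P_r):
    """Covariance between a reference patch and a candidate, per Eq. (23)."""
    return np.mean((P_R - P_R.mean()) * (P_r - P_r.mean()))

def find_best_patch(coeff_img, known, ref_tl, nb):
    """Scan fully known nb x nb patches and return the top-left corner of
    the candidate with maximum covariance to the reference patch."""
    r0, c0 = ref_tl
    P_R = coeff_img[r0:r0 + nb, c0:c0 + nb]
    best_score, best_tl = -np.inf, None
    rows, cols = coeff_img.shape
    for i in range(rows - nb + 1):
        for j in range(cols - nb + 1):
            # skip the reference itself and any patch touching unknown pixels
            if (i, j) == (r0, c0) or not known[i:i + nb, j:j + nb].all():
                continue
            score = patch_cov(P_R, coeff_img[i:i + nb, j:j + nb])
            if score > best_score:
                best_score, best_tl = score, (i, j)
    return best_tl, best_score

# Toy coefficient image: zeros with a distinctive patch planted twice.
band = np.zeros((12, 12))
texel = np.array([[3.0, -1.0], [-2.0, 4.0]])
band[2:4, 2:4] = texel          # reference patch
band[8:10, 6:8] = texel         # identical patch elsewhere
known = np.ones_like(band, dtype=bool)
tl, score = find_best_patch(band, known, (2, 2), 2)
```

The search correctly locates the planted duplicate at (8, 6), where the covariance attains its maximum, the variance of the reference patch.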

5.1 Algorithm

The proposed wavelet domain texture inpainting algorithm using the MLE has the following four steps:
–– Step 1: DWT step: A user-defined mask image of size N × N with area of inpainting Ω is created. The input image and the mask image are decomposed using the two-dimensional DWT to obtain four quarter-size coefficient images of size N/2 × N/2: approximate coefficient image A, horizontal coefficient image H, vertical coefficient image V, and diagonal coefficient image D.
–– Step 2: Wavelet coefficient image dividing step: Each of the wavelet coefficient images is divided into small square patches. PA1, PA2, PA3, … are patches in the approximate coefficient image A; similarly, the patches PH1, PH2, PH3, …; PV1, PV2, PV3, …; and PD1, PD2, PD3, … are the patches in the horizontal coefficient image H, vertical coefficient image V, and diagonal coefficient image D, respectively.
–– Step 3: Finding and filling step: The inpainting region in each wavelet coefficient image is filled by searching, in the local neighborhood, for the "best" matching patch with the reference patch using covariance [Eq. (22)]. The searched patch is placed adjacent to the reference. This step is applied to all four wavelet coefficient images. The placed patch becomes the new reference patch. The processes of search and paste are repeated until the inpainting region is completely filled.
–– Step 4: IDWT step: Applying the IDWT (inverse DWT) to the images of step 3, the inpainted image in the spatial domain is obtained.
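The four steps above can be sketched end to end. This is our compact illustration under strong simplifying assumptions, not the authors' implementation: a one-level Haar DWT, a hole aligned to the patch grid, patch-grid-aligned candidates whose right-hand neighbor is fully known, and the reference patch taken immediately left of the hole. On a strictly periodic texture these assumptions let the pipeline recover the damaged region exactly:

```python
import numpy as np

def haar_dwt2(f):
    a, b = f[0::2, 0::2], f[0::2, 1::2]
    c, d = f[1::2, 0::2], f[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(A, H, V, D):
    f = np.empty((2 * A.shape[0], 2 * A.shape[1]))
    f[0::2, 0::2] = (A + H + V + D) / 2
    f[0::2, 1::2] = (A + H - V - D) / 2
    f[1::2, 0::2] = (A - H + V - D) / 2
    f[1::2, 1::2] = (A - H - V + D) / 2
    return f

def cov(p, q):
    return np.mean((p - p.mean()) * (q - q.mean()))   # Eq. (23)

def fill_band(band, hole, nb):
    """Step 3 for one coefficient image: for each nb x nb hole patch, find the
    known grid-aligned patch most similar (maximum covariance) to the patch
    just left of the hole, then paste that match's right-hand neighbour."""
    out, known = band.copy(), ~hole
    rows, cols = band.shape
    for r in range(0, rows, nb):
        for c in range(0, cols, nb):
            if not hole[r:r + nb, c:c + nb].any():
                continue
            ref = out[r:r + nb, c - nb:c]          # reference: left of hole
            best, best_tl = -np.inf, None
            for i in range(0, rows - nb + 1, nb):
                for j in range(0, cols - 2 * nb + 1, nb):
                    # candidate and its right-hand neighbour must be known
                    if not known[i:i + nb, j:j + 2 * nb].all():
                        continue
                    s = cov(ref, band[i:i + nb, j:j + nb])
                    if s > best:
                        best, best_tl = s, (i, j)
            i, j = best_tl
            out[r:r + nb, c:c + nb] = band[i:i + nb, j + nb:j + 2 * nb]
    return out

# Step 1: a 4-periodic texture with a 4 x 4 damaged square.
base = np.array([[1., 5., 2., 7.],
                 [6., 3., 8., 4.],
                 [2., 9., 1., 6.],
                 [7., 4., 5., 3.]])
img = np.tile(base, (4, 4))                  # 16 x 16 texture image
mask = np.zeros((16, 16), dtype=bool)
mask[8:12, 8:12] = True
damaged = img.copy(); damaged[mask] = 0.0

# Steps 1-3: decompose image and mask, fill each band; Step 4: inverse DWT.
bands = haar_dwt2(damaged)
hole = haar_dwt2(mask.astype(float))[0] > 0  # mask support in the band domain
filled = [fill_band(B, hole, 2) for B in bands]
inpainted = haar_idwt2(*filled)
```

Because the texture is exactly periodic and the Haar coefficients are block-local, every filled band matches the band of the undamaged image, so the reconstruction is exact; on real textures the fill is only approximate.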

6 Test Images and Results

Test images were collected from different sources, including digital photographs taken under natural conditions of sunlight. The collected images are divided into two sets based on the damage present in them.
–– Set 1: This set consists of clean textures (Figures 1A–8A); these are artificially degraded by placing one or more objects on them (Figures 1B–8B). The Lena face, an infant face, and the sun are used as objects. These objects are inpainted using covariance (Figures 1D–8D). Inpainting in the spatial domain results in imperfect construction of texture, as shown in Figures 1C–4C. Thus, we use the wavelet domain for the construction of texture, and the results of texture inpainting using covariance are much better compared with spatial domain inpainting.


Figure 1. (A) Original Texture Image. (B) Lena Face is Placed on the Original Image. (C) Inpainted Texture Image in Spatial Domain; Shows Imperfect Texture Construction. (D) Inpainted Image in Wavelet Domain using Covariance.


Figure 2. (A) Original Texture Image. (B) Lena Face is Placed on the Original Image. (C) Inpainted Texture Image in Spatial Domain; Shows Imperfect Texture Construction. (D) Inpainted Image in Wavelet Domain using Covariance.


Figure 3. (A) Original Texture Image. (B) Lena Face is Placed on the Original Image. (C) Inpainted Texture Image in Spatial Domain; Shows Imperfect Texture Reconstruction. (D) Inpainted Image in Wavelet Domain using Covariance.


Figure 4. (A) Original Texture Image. (B) Lena Face is Placed on the Original Image. (C) Inpainted Texture Image in Spatial Domain; Shows Imperfect Texture Construction. (D) Inpainted Image in Wavelet Domain using Covariance.


Figure 5. (A) Original Texture Image. (B) The Infant Face is Placed on the Original Image. (C) Inpainted Image in Wavelet Domain using Covariance. (D) Inpainted Image in Wavelet Domain using Correlation.


Figure 6. (A) Original Texture Image. (B) Sun is Placed on the Original Image. (C) Inpainted Image in Wavelet Domain using Covariance. (D) Inpainted Image in Wavelet Domain using Correlation.


Figure 7. (A) Original Texture Image. (B) Lena Face is Placed on the Original Image. (C) Inpainted Image in Wavelet Domain using Covariance. (D) Inpainted Image in Wavelet Domain using Correlation.


Figure 8. (A) Original Texture Image. (B) Lena and Infant Face are Placed on the Original Image. (C) Inpainted Image in Wavelet Domain using Covariance. (D) Inpainted Image in Wavelet Domain using Correlation.

–– Set 2: This set consists of naturally degraded/damaged texture images. Figures 9A and 10A show damaged textures, and Figures 9B and 10B show the textures inpainted using covariance. Texture inpainting using covariance is used to fill in holes and scratches on archaeological images. We envisage that texture inpainting may aid archaeological work to restore


Figure 9. (A) Original Texture Image with Holes. (B) The Holes are Inpainted using Covariance.


Figure 10. (A) Damaged Window in Coffee Bar. (B) Reconstructed Window using Covariance.


Figure 11. (A) Archaeological Texture Image with a Hole. (B) The Hole is Removed using Covariance.


Figure 12. (A) Archaeological Texture Image with a Hole. (B) The Hole is Removed using Covariance.


Figure 13. (A) Archaeological Texture Image with a Crack. (B) Crack is Removed using Covariance.

old monuments. Most of the archaeological monuments have developed cracks. The restoration work on the monuments may be helped by providing the possible appearance of the monument using texture inpainting. Figures 11A, 12A, and 13A show archaeological images of portions of the Ramalingeswara Temple, Tadapatri, Anantapur District, Andhra Pradesh, India, taken in daylight by the authors. Figures 11B, 12B, and 13B show the corresponding restored archaeological images.

7 Conclusion

The MLE in the wavelet domain is the key to the success of texture inpainting. Wavelet decomposition enables a better estimate of the covariance, as there are four similar images of the original image. However, the investigations carried out can be further fine-tuned by incorporating higher-order statistical measures to measure the similarity between the textures.

Received May 12, 2013; previously published online June 26, 2013.

Bibliography

[1] M. Ashikhmin, Synthesizing natural images, in: Proc. ACM, pp. 217–226, 2001.
[2] B. Bai, Z. Miao and Z. Tang, An improved structure propagation based image inpainting, in: Proc. SPIE 8009, pp. 80091D, 2011.

[3] M. Bertalmio, G. Sapiro, V. Caselles and C. Ballester, Image inpainting, in: Proceedings of SIGGRAPH, Computer Graphics Processing, 2000.
[4] M. Bertalmio, A. L. Bertozzi and G. Sapiro, Navier–Stokes, fluid dynamics and image and video inpainting, in: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 355–362, 2001.
[5] M. Bertalmio, L. Vese, G. Sapiro and S. Osher, Simultaneous structure and texture image inpainting, UCLA CAM report 02-47, 2003.
[6] T. F. Chan and J. Shen, Mathematical models for local deterministic inpainting, UCLA Computational and Applied Mathematics Reports 00-11, 2000.
[7] T. F. Chan and J. Shen, Non-textured inpainting by curvature driven diffusion, J. Vis. Commun. Image R. 12 (2001), 436–449.
[8] T. F. Chan, S. H. Kang and J. Shen, Euler's elastica and curvature-driven diffusion, SIAM J. Appl. Math. 63 (2002), 564–592.
[9] Y.-S. Chang, M. Oliveira, B. Bowen and R. McKenna, Fast digital image inpainting, in: Proc. International Conference on Visualization, Imaging and Image Processing (VIIP 2001), pp. 261–266, 2001.
[10] A. Criminisi and P. Perez, Object removal by exemplar-based inpainting, IEEE Trans. Image Proc. 13 (2004), 1200–1212.
[11] A. Efros and T. Leung, Texture synthesis by non-parametric sampling, in: Proc. IEEE International Conference on Computer Vision, pp. 1033–1038, Corfu, Greece, September 1999.
[12] R. C. Gonzalez and R. E. Woods, Digital image processing, 2nd edition, Dorling Kindersley (India) Private Ltd., India, 2007.
[13] H. Grossauer, Digital inpainting using the complex Ginzburg–Landau equation, in: Scale Space Methods in Computer Vision, Lecture Notes, 2003.
[14] H. Grossauer, A combined PDE and texture synthesis approach to inpainting, in: Computer Vision – ECCV 2004, Lecture Notes in Computer Science, Vol. 3022, pp. 214–224, 2004.
[15] A. Hirani and T. Totsuka, Combining frequency and spatial domain information for fast interactive image noise removal, in: Computer Graphics, Proc. SIGGRAPH 1996, pp. 269–276, 1996.
[16] S. Masnou and J. M. Morel, Level lines based disocclusion, in: 5th IEEE International Conference on Image Processing, Chicago, IL, 1998.
[17] A. Papoulis, Probability, random variables and stochastic processes, 4th edition, McGraw-Hill Publishing Company Ltd., New Delhi, India, 2002.
[18] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Trans. Pattern Anal. 12 (1990), 629–639.
[19] A. Rosenfeld and A. C. Kak, Digital picture processing, 2nd edition, Academic Press, New York, 1982.
[20] E. Simoncelli and J. Portilla, Texture characterization via joint statistics of wavelet coefficient magnitudes, in: 5th IEEE International Conference on Image Processing, Chicago, IL, Oct. 4–7, 1998.
[21] Y. Zhang, Y. F. Pu, J. R. Hu and J. L. Zhou, Fast X-ray CT metal artifacts reduction based on noniterative sinogram inpainting, in: Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2011.
[22] Y. Zhang, Y. F. Pu, J. R. Hu and J. L. Zhou, A class of fractional-order variational image inpainting models, Appl. Math. Inf. Sci. 2 (2012), 299–306.