A Multilevel Iteration Method for Solving a Coupled Model in Image Restoration

Hongqi Yang 1,2 and Bing Zhou 1,2,∗

1 School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China; [email protected]
2 Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou 510275, China
* Correspondence: [email protected]

 Received: 9 February 2020; Accepted: 1 March 2020; Published: 4 March 2020 

Abstract: The problem of out-of-focus image restoration can be modeled as an ill-posed integral equation, which can be regularized into an equation of the second kind using the Tikhonov method. The multiscale collocation method with a compression strategy has already been developed to discretize this well-posed equation. However, computing the integrals and solving the large multiscale collocation integral equation remain two time-consuming processes. To overcome these difficulties, we propose a fully discrete multiscale collocation method using an integral approximation strategy to compute the integrals, which efficiently converts the integral operation into a matrix operation and reduces costs. In addition, we propose a multilevel iteration method (MIM) to solve the fully discrete integral equation obtained from the integral approximation strategy. Herein, the stopping criterion and the computation complexity of the MIM are presented. Furthermore, an a posteriori parameter choice strategy is developed for this method, and the final convergence order is evaluated. We present three numerical experiments to display the performance and computational efficiency of our proposed methods.

Keywords: multiscale collocation method; integral approximation strategy; multilevel iteration method; image restoration

1. Introduction

Continuous integral equations are often used to model practical problems in image processing. However, the corresponding discrete models are often used instead, because discrete models are much easier and more convenient to implement than continuous integral models. Discrete models are piecewise constant approximations of integral equation models, and they introduce a bottleneck model error that no image processing method can remove. To overcome the accuracy deficiency of conventional discrete reconstruction models, we use continuous models directly to restore images, which is more in line with the physical laws. This idea first appeared in [1] and was widely used later, for example in [2–4]. In addition to making more sense physically, continuous models can be discretized with higher-order accuracy. This means that the model error is significantly decreased compared with piecewise constant discretization, especially in the field of image enlargement. Many researchers have made great contributions to the solution of integral equations; however, many difficulties remain. First, integral operators are compact in Banach space since integral kernels are normally smooth. Consequently, the solutions of the relevant integral equations do not depend continuously on the known data. To overcome this problem, the Tikhonov [5] and the Lavrentiev [6] regularization methods were proposed to regularize the ill-posed integral equation

into a second-kind integral equation. Second, the equation obtained from the Tikhonov regularization method contains a composition of integral operators, which greatly increases the computation time. Given this condition, a coupled system that involves only single applications of the integral operator was proposed in [7] to reduce the high computational cost. The collocation method [8] and the Galerkin method [9] were proposed to discretize the coupled system, and the collocation method is much easier to implement. The third problem is that, after the coupled system is discretized by the collocation method, a full coefficient matrix is generated. To overcome this issue, Chen et al. [10] proposed representing the integral operator in a multiscale basis, which yields a sparse coefficient matrix; a matrix compression technique [1,11] is then used to approximate that matrix without affecting the existing convergence order. Finally, appropriately choosing the regularization parameter (see [12–17]) is a crucial process that must balance approximation accuracy against well-posedness. The purpose of this paper is to solve the second-kind integral equation efficiently on the basis of these previous achievements. Although the equivalent coupled system reduces the computational complexity caused by the composition of the original integral operator, this is insufficient, because much computing time is still required to evaluate the integrals. Therefore, inspired by [18], we further propose a fully discrete multiscale collocation method that uses an integral approximation strategy to compute the integrals. The idea of this strategy is to use the Gaussian quadrature formula to compute the sparse coefficient matrix efficiently: by using piecewise Gauss–Legendre quadrature, we turn the calculation of the integrals into matrix operations, which tremendously reduces the computation time. Another challenging issue is that directly solving the large, fully discrete system obtained from the matrix compression strategy is time-consuming. Inspired by [19], we propose a multilevel iteration method (MIM) to solve the large, fully discrete coupled system, and we further present the computation complexity of the MIM. We also propose a stopping criterion for this iteration process, and we prove that this criterion maintains the existing convergence rate. Finally, we adopt an a posteriori choice of the regularization parameter related to the MIM and show that it leads to an optimal convergence order. The remainder of this paper is organized as follows. In Section 2, an overview flowchart is displayed first, and then the integral equation model of the first kind for reconstructing an out-of-focus image is reviewed; following this, an equivalent coupled system is deduced. In Section 3, we present the fast multiscale collocation method to discretize the coupled system using piecewise polynomial spaces. Additionally, a compression strategy is developed to generate a sparse coefficient matrix in order to make the coupled equation easily solvable. Finally, we propose an integral approximation strategy to compute the nonzero entries of the compressed coefficient matrix, which turns the evaluation of the integrals into a matrix computation.
We also provide a convergence analysis of the proposed method. In Section 4, we propose a multilevel iteration method corresponding to the multiscale method to solve the integral equations, and a complete analysis of the convergence rate of the corresponding approximate solution is shown. An a posteriori choice of the regularization parameter, which is related to the multilevel iteration method, is presented in Section 5, and we further prove that the MIM, combined with this a posteriori parameter choice, makes our solution optimal. In Section 6, we report three comparative tests to verify the efficiency of our two proposed methods. The first test compares the computing efficiency for the coefficient matrix between the integral approximation strategy and the numerical quadrature scheme in [1]. The other two tests exhibit the performance of the MIM compared with the Gaussian elimination method and the multilevel augmentation method, respectively. These tests reveal the high efficiency of our proposed methods.

2. The Integral Equation Model for Image Restoration

In this section, we first depict the overall process of reconstructing out-of-focus images and then describe some common notations that are used throughout the paper. Finally, in the second subsection, we introduce the approach of formulating our image restoration problem as an integral equation.

2.1. System Overview

We present an overview flowchart for reconstructing an out-of-focus image in Figure 1. This process includes four main parts: modeling, discretization, the parameter choice rule, and solving the equation. For the input out-of-focus image, we formulate a Tikhonov-regularized integral equation, which is described in the next subsection. Solving this integral equation necessitates discretization; we propose a fully discrete multiscale collocation method based on an integral approximation strategy, which works by converting the calculation of the integral into a matrix operation. The last two parts are the parameter choice rule and the solution of the problem. In practice these two parts execute as a whole, but we describe them separately because the combined process is too complicated to present at once. For a clearer presentation, we first display the multilevel iteration method in Section 4 under the condition that a good regularization parameter has already been selected, and then describe how this regularization parameter is chosen in Section 5.

Figure 1. Overview flowchart.

Some notations are needed. Suppose that $\mathbb{R}^d$ represents d-dimensional Euclidean space; $\Omega \subset \mathbb{R}^2$ and $E \subset \mathbb{R}$ denote the domains used below. $L^\infty$ is the $L^p$ space with $p = \infty$, which is a Banach space. We use x, including x with upper and lower indices, to denote the solution of the equation. Similarly, we use y, including y with upper and lower indices, to denote the known variable of the equation. Furthermore, $\mathcal{K}$ represents the blurring integral operator, and $\mathcal{K}^*$ represents the adjoint operator of $\mathcal{K}$. The notation $A \oplus^\perp B$ denotes the orthogonal direct sum of the spaces A and B when $A \perp B$. The notation $a \sim b$ means that a and b are of the same order. $\mathcal{R}(\cdot)$ represents the range of an operator. Finally, extended and new notations that are not declared here are defined in the context in which they are first used.

2.2. Model Definition

In this subsection, we describe an integral equation model obtained from a reformulation of the Tikhonov regularization equation for image restoration. In addition, an equivalent coupled system is developed for solving this integral equation quickly and efficiently.

Assume that the whole image is the support set. In general, images are rectangular. Let $\Omega \subset \mathbb{R}^2$ denote the image domain, which is also the support set. Image restoration can be modeled by a continuous integral equation of the first kind

$$\mathcal{K}x = y, \tag{1}$$

where y is the blurred image, which we aim to reconstruct into a clearer image $x : \Omega \to \mathbb{R}$; thus, x can also be called the reconstructed image. $\mathcal{K}$ is the linear compact operator defined in terms of a kernel g by

$$(\mathcal{K}x)(\mathbf{u}) := \int_\Omega g(\mathbf{u}, \mathbf{u}')\,x(\mathbf{u}')\,d\mathbf{u}', \quad \mathbf{u} \in \Omega. \tag{2}$$

Let $\Omega = E \times E$, with $E := [0, 1]$. The images may not lie in $[0,1] \times [0,1]$, but they can be mapped onto it by scaling and shifting. According to [20], the out-of-focus image is usually modeled with the kernel

$$g(\mathbf{u}, \mathbf{u}') = \frac{1}{2\pi\sigma^2}\exp\Big(-\frac{(u-u')^2 + (v-v')^2}{2\sigma^2}\Big), \tag{3}$$

where $\mathbf{u} := (u, v)$, $\mathbf{u}' := (u', v') \in \Omega$, and σ is the parameter characterizing the level of blur in the kernel. The blurring kernel (3) is two-dimensional and symmetric, and it factors into the product of two univariate Gaussian kernels. That is to say, solving Equation (1) is equivalent to solving the following system:

$$\frac{1}{\sqrt{2\pi}\,\sigma}\int_0^1 \exp\Big(-\frac{(v-v')^2}{2\sigma^2}\Big)\,x_1(u, v')\,dv' = y(u, v),$$
$$\frac{1}{\sqrt{2\pi}\,\sigma}\int_0^1 \exp\Big(-\frac{(u-u')^2}{2\sigma^2}\Big)\,x_2(u', v')\,du' = x_1(u, v'). \tag{4}$$

In this system, $x_2$ is the final solution of the reconstruction, and $x_1$ is an intermediate result. The two equations mean that we can first deal with all the rows of the image (first equation) and then all the columns (second equation): the procedure of processing a two-dimensional image becomes processing its rows first and then its columns. Furthermore, as can be seen from the two equations in (4), the rows and columns of the image are processed in the same way, and both processes can be formulated as a one-dimensional integral equation:

$$\mathcal{K}x = y. \tag{5}$$
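As a concrete illustration of this separability, the following minimal sketch (our own; the grid size, σ, and the random stand-in image are arbitrary choices, not taken from the experiments below) checks that a discrete analogue of the two one-dimensional passes in system (4) reproduces a direct blur with the two-dimensional product kernel (3).

```python
import numpy as np

# Discrete analogue of the 1D kernel k(u, u') on a uniform grid of E = [0, 1];
# the grid spacing plays the role of the quadrature weight.
n, sigma = 32, 0.05
grid = np.linspace(0.0, 1.0, n)
h = grid[1] - grid[0]
U, Up = np.meshgrid(grid, grid, indexing="ij")
K = np.exp(-(U - Up) ** 2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma) * h

x2 = np.random.rand(n, n)      # stand-in for the sharp image x2
x1 = K @ x2                    # second equation of (4): process the columns
y_two_pass = x1 @ K.T          # first equation of (4): process the rows

# Direct 2D blur with the product kernel g(u, u') = k(u, u') * k(v, v')
G = K[:, None, :, None] * K[None, :, None, :]
y_direct = np.einsum("uvab,ab->uv", G, x2)
assert np.allclose(y_two_pass, y_direct)
```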

For $x \in X := L^\infty(E)$, $\mathcal{K}$ is the linear compact operator defined by

$$\mathcal{K}x(u) := \int_E k(u, u')\,x(u')\,du', \quad u \in E,$$

with

$$k(u, u') = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\Big(-\frac{(u-u')^2}{2\sigma^2}\Big), \quad u, u' \in E,$$

and $y \in X$ is the observed data. Hence, we focus on Equation (5) in the following illustration. Ill-posedness makes Equation (5) difficult to solve, but the Tikhonov regularization method can handle this problem. By using the Tikhonov regularization method, we get the following equation:

$$(\alpha I + \mathcal{K}^*\mathcal{K})\,x_\alpha = \mathcal{K}^* y, \tag{6}$$

where α > 0 is the regularization parameter, and $\mathcal{K}^*$ is the adjoint operator of $\mathcal{K}$. It is easy to prove that the operator $\alpha I + \mathcal{K}^*\mathcal{K}$ is invertible. In addition, the data $y^\delta$ are usually noisy, with

$$\|y - y^\delta\|_\infty \le \delta \tag{7}$$

for a noise level δ. $x_\alpha^\delta$ denotes the solution of Equation (6) when y is replaced by $y^\delta$. Since the kernel k is symmetric, $\mathcal{K}$ is self-adjoint; that is, $\mathcal{K}^* = \mathcal{K}$. Note that Equation (6) includes $\mathcal{K}^*\mathcal{K}$, which is defined by a 2-fold integral; this 2-fold integral is tremendously expensive computationally. By letting

$$v_\alpha^\delta := \frac{y^\delta - \mathcal{K}x_\alpha^\delta}{\sqrt{\alpha}},$$

the authors of [7] split Equation (6) into a coupled system:

$$\begin{cases}\sqrt{\alpha}\,x_\alpha^\delta - \mathcal{K}^* v_\alpha^\delta = 0,\\ \mathcal{K}x_\alpha^\delta + \sqrt{\alpha}\,v_\alpha^\delta = y^\delta.\end{cases} \tag{8}$$

The two formulations were proven to be equivalent in [7], and the advantage is that system (8) involves only single applications of the integral operator. Thus, instead of solving Equation (6), we solve system (8) to reduce the computational difficulties. In the next section, we apply a fully discrete method to solve system (8).
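As a quick sanity check of this equivalence at the linear algebra level, the following sketch (an illustration with a random stand-in matrix for a discretized operator, not the paper's multiscale discretization) solves both the Tikhonov normal equation (6) and the coupled block system (8) and confirms that they produce the same solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 50, 1e-2
K = rng.standard_normal((n, n)) / n     # stand-in for a discretization of K
y = rng.standard_normal(n)

# Normal equation (6): (alpha*I + K^T K) x = K^T y, with the composition K^T K
x_tik = np.linalg.solve(alpha * np.eye(n) + K.T @ K, K.T @ y)

# Coupled system (8): each block applies K (or K^T) once, with no composition
s = np.sqrt(alpha)
A = np.block([[s * np.eye(n), -K.T], [K, s * np.eye(n)]])
xv = np.linalg.solve(A, np.concatenate([np.zeros(n), y]))
assert np.allclose(xv[:n], x_tik)
```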

3. Fully Discrete Multiscale Collocation Method

In this section, a multiscale method is reviewed first, and then we develop a fully discrete formulation of the multiscale collocation method using an integral approximation strategy. By using this strategy, we turn the computation of the integral into a matrix operation, which greatly reduces the calculation time.

3.1. Multiscale Collocation Method

Let us begin with a quick review of the fast collocation method [10,21–24] for solving system (8). We denote by N the set of natural numbers and let $\mathbb{N}_+ := \mathbb{N}\setminus\{0\}$. For $n \in \mathbb{N}_+$, let $\mathbb{Z}_n := \{0, 1, 2, \ldots, n-1\}$. Let $X_n$, n ∈ N, denote subspaces of $L^\infty(E)$, and let $W_0 := X_0$ and $X_{n+1} = X_n \oplus^\perp W_{n+1}$, n ∈ N. More explicitly,

$$X_n = W_0 \oplus^\perp W_1 \oplus^\perp \cdots \oplus^\perp W_n.$$

A basis construction of $X_n$ can be found in [25,26]. Suppose that $\{e_{ij}\}$ is an orthogonal basis of $W_i$. Let $U_n := \{(i,j) : i \in \mathbb{Z}_{n+1}, j \in \mathbb{Z}_{w(i)}\}$, where w(n) is the dimension of $W_n$. According to the above decomposition, we get

$$X_n = \mathrm{span}\{e_{ij} : (i,j) \in U_n\}.$$

The multiscale collocation functionals and collocation points are constructed similarly. Suppose that $\ell_{ij}$ are the collocation functionals, and let $\{v_{ij} : (i,j) \in U_n\}$ be a sequence of collocation points in E. We define the interpolation projection $\mathcal{P}_n : L^\infty(E) \to X_n$, n ∈ N, and the orthogonal projection $\mathcal{Q}_n : L^2(E) \to X_n$, n ∈ N. For n ∈ N, let $\mathcal{K}_n := \mathcal{P}_n\mathcal{K}|_{X_n}$. The multiscale collocation method for system (8) is to find the solutions

$$v_{\alpha,n}^\delta := \sum_{(i,j)\in U_n} v_{ij}^{\alpha,n}\,e_{ij} \quad\text{and}\quad x_{\alpha,n}^\delta := \sum_{(i,j)\in U_n} x_{ij}^{\alpha,n}\,e_{ij}$$

of the system

$$\begin{cases}\sqrt{\alpha}\,x_{\alpha,n}^\delta - \mathcal{K}_n^* v_{\alpha,n}^\delta = 0,\\ \mathcal{K}_n x_{\alpha,n}^\delta + \sqrt{\alpha}\,v_{\alpha,n}^\delta = \mathcal{P}_n y^\delta.\end{cases} \tag{9}$$

For $(i,j), (i',j') \in U_n$, we introduce the three definitions

$$\mathbf{E}_n := [\langle \ell_{i'j'}, e_{ij}\rangle], \quad \mathbf{K}_n := [\langle \ell_{i'j'}, \mathcal{K}e_{ij}\rangle], \quad \mathbf{y}_n^\delta := [\langle \ell_{i'j'}, y^\delta\rangle].$$

Then, Equation (9) has the matrix form

$$\begin{bmatrix}\sqrt{\alpha}\,\mathbf{E}_n & -\mathbf{K}_n\\ \mathbf{K}_n & \sqrt{\alpha}\,\mathbf{E}_n\end{bmatrix}\begin{bmatrix}\mathbf{x}_{\alpha,n}^\delta\\ \mathbf{v}_{\alpha,n}^\delta\end{bmatrix} = \begin{bmatrix}\mathbf{0}\\ \mathbf{y}_n^\delta\end{bmatrix}. \tag{10}$$

Note that $\mathbf{K}_n$ is a dense matrix, so its compression is an important ingredient of our method. Following the compression strategy of [7], we get the compressed matrix $\bar{\mathbf{K}}_n$:

$$(\bar{\mathbf{K}}_n)_{i'j',ij} = \begin{cases}\langle \ell_{i'j'}, \mathcal{K}e_{ij}\rangle, & i + i' \le n,\\ 0, & \text{otherwise}.\end{cases}$$

Thus, substituting $\bar{\mathbf{K}}_n$ for $\mathbf{K}_n$, we finally need to solve the equation

$$\begin{bmatrix}\sqrt{\alpha}\,\mathbf{E}_n & -\bar{\mathbf{K}}_n\\ \bar{\mathbf{K}}_n & \sqrt{\alpha}\,\mathbf{E}_n\end{bmatrix}\begin{bmatrix}\mathbf{x}_{\alpha,n}^\delta\\ \mathbf{v}_{\alpha,n}^\delta\end{bmatrix} = \begin{bmatrix}\mathbf{0}\\ \mathbf{y}_n^\delta\end{bmatrix}. \tag{11}$$

3.2. Integral Approximation Strategy

The computational cost of system (11) mainly lies in the integrals defining the operator matrix $\bar{\mathbf{K}}_n$. Next, we focus on this problem and develop an integral approximation strategy using the Gaussian quadrature formula to solve this coupled system. Note that the integral operator $\mathcal{K}$ acts on a basis function as

$$\mathcal{K}e_{ij}(u) = \int_E k(u, u')\,e_{ij}(u')\,du',$$

and the orthogonal basis functions $\{e_{ij} : (i,j) \in U_n\}$ are all piecewise polynomial functions. Thus, piecewise Gauss–Legendre quadrature is used here.

Integral approximation strategy: In accordance with the supports of these basis functions, we divide E equally into $\mu^n$ parts. Let $E_q := [\frac{q}{\mu^n}, \frac{q+1}{\mu^n}]$ for $q \in \mathbb{Z}_{\mu^n}$; then $E = \bigcup_{q\in\mathbb{Z}_{\mu^n}} E_q$, and each basis function is continuous on each $E_q$, $q \in \mathbb{Z}_{\mu^n}$. We then choose m > 1 Gaussian points in each part $E_q$ and let γ = 2m − 1. All Gaussian points, taken in order, form a set G of piecewise Gaussian points. Let g(n) := |G|; then $g(n) = m\mu^n$. Thus, the integral with accuracy γ on $E_q$ can be written as

$$\mathcal{K}e_{ij}(u)\big|_{E_q} \approx \sum_{t=qm+1}^{(q+1)m} \rho_t\,k(u, g_t)\,e_{ij}(g_t), \quad u \in E_q, \tag{12}$$

where $\{g_t : t = qm+1, \ldots, (q+1)m\}$ are the Gaussian points in $E_q$, and $\rho_t$ is the weight corresponding to $g_t$. Further, the integral with accuracy γ on E is

$$\mathcal{K}e_{ij}(u)\big|_E \approx \sum_{q\in\mathbb{Z}_{\mu^n}} \mathcal{K}e_{ij}(u)\big|_{E_q},$$

which can be written as a vector multiplication

$$\mathcal{K}e_{ij}(u)\big|_E \approx [k(u,g_1), k(u,g_2), \ldots, k(u,g_{g(n)})]\,[w_{ij}(g_1), w_{ij}(g_2), \ldots, w_{ij}(g_{g(n)})]^T.$$

Furthermore, as shown in Figure 2, we can use this strategy to approximate $\mathbf{K}_n$ in the matrix form

$$\mathbf{K}_n \approx \mathbf{L}_n\,\mathbf{K}_{s(n)\times g(n)}\,\mathbf{W}_{g(n)\times s(n)}, \tag{13}$$

where $\mathbf{W}_{g(n)\times s(n)}$ is the weighted basis function matrix: it represents all the basis functions $\{e_{ij} : (i,j) \in U_n\}$ evaluated at all piecewise Gaussian points of G and multiplied by the weights $\rho_t$. $\mathbf{L}_n$ denotes the matrix representation of the point evaluation functionals $\ell_{ij}$; for the details of $\mathbf{L}_n$, we refer to [26].

Figure 2. The components of matrix Kn.
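The following sketch illustrates the piecewise Gauss–Legendre ingredients of this strategy in isolation (our own illustration: the kernel, the single hat-shaped basis function, and all sizes are stand-ins, and no multiscale bookkeeping is performed). It builds the g(n) = m·µⁿ nodes and weights and evaluates the approximation (12)–(13) of $\mathcal{K}e$ at a few evaluation points.

```python
import numpy as np

def piecewise_gauss(num_parts, m):
    """All m * num_parts Gauss-Legendre points and weights on [0, 1],
    obtained by mapping the m-point rule onto each equal subinterval."""
    t, w = np.polynomial.legendre.leggauss(m)     # rule on [-1, 1]
    edges = np.linspace(0.0, 1.0, num_parts + 1)
    mid = (edges[:-1] + edges[1:]) / 2
    half = np.diff(edges) / 2
    pts = (mid[:, None] + half[:, None] * t[None, :]).ravel()
    wts = (half[:, None] * w[None, :]).ravel()
    return pts, wts

sigma = 0.05
k = lambda u, up: np.exp(-(u - up) ** 2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
e = lambda u: np.maximum(1.0 - np.abs(2.0 * u - 1.0), 0.0)  # one piecewise linear basis function

g, rho = piecewise_gauss(num_parts=2**6, m=2)   # m = 2, so gamma = 2m - 1 = 3
u_eval = np.linspace(0.0, 1.0, 5)               # stand-ins for collocation points

# (12)-(13): kernel values at the nodes times the weighted basis values
Ke_approx = k(u_eval[:, None], g[None, :]) @ (rho * e(g))
```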

In order to combine this with the matrix compression strategy, we write Equation (13) in blocks. This yields the matrix representation $\widetilde{\mathbf{K}}_n$, with blocks indexed by $i, i' \in \mathbb{Z}_{n+1}$, which approximates $\bar{\mathbf{K}}_n$ under the integral approximation strategy:

$$(\widetilde{\mathbf{K}}_n)_{i'i} = \begin{cases}(\mathbf{L}_n\,\mathbf{K}_{s(n)\times g(n)}\,\mathbf{W}_{g(n)\times s(n)})_{i'i}, & i + i' \le n,\\ 0, & \text{otherwise}.\end{cases} \tag{14}$$

Then, Equation (10) becomes the fully discrete multiscale collocation equation

$$\begin{bmatrix}\sqrt{\alpha}\,\mathbf{E}_n & -\widetilde{\mathbf{K}}_n\\ \widetilde{\mathbf{K}}_n & \sqrt{\alpha}\,\mathbf{E}_n\end{bmatrix}\begin{bmatrix}\widetilde{\mathbf{x}}_{\alpha,n}^\delta\\ \widetilde{\mathbf{v}}_{\alpha,n}^\delta\end{bmatrix} = \begin{bmatrix}\mathbf{0}\\ \mathbf{y}_n^\delta\end{bmatrix}, \tag{15}$$

where $\widetilde{\mathbf{K}}_n$ is as given in Equation (14). Let $\mathcal{A} := \mathcal{K}^*\mathcal{K}$ and $\mathcal{A}_n := \mathcal{K}_n^*\mathcal{K}_n$. Assume that $\bar{\mathcal{K}}_n$ is the operator corresponding to the matrix representation $\bar{\mathbf{K}}_n$ from the compression strategy, and define $\bar{\mathcal{A}}_n := \bar{\mathcal{K}}_n^*\bar{\mathcal{K}}_n$. Additionally, let $\widetilde{\mathcal{K}}_n$ represent the operator with respect to the matrix representation $\widetilde{\mathbf{K}}_n$ from the integral approximation strategy together with the compression strategy, and define $\widetilde{\mathcal{A}}_n := \widetilde{\mathcal{K}}_n^*\widetilde{\mathcal{K}}_n$. Therefore, corresponding to Equation (11), system (9) becomes

$$\begin{cases}\sqrt{\alpha}\,\widetilde{x}_{\alpha,n}^\delta - \widetilde{\mathcal{K}}_n^*\widetilde{v}_{\alpha,n}^\delta = 0,\\ \widetilde{\mathcal{K}}_n\widetilde{x}_{\alpha,n}^\delta + \sqrt{\alpha}\,\widetilde{v}_{\alpha,n}^\delta = \mathcal{P}_n y^\delta.\end{cases} \tag{16}$$

By adopting the integral approximation strategy, the computation of the integral becomes the matrix multiplication (14), which greatly reduces the calculation cost. Next, we estimate the convergence order of the fast multiscale collocation method using the integral approximation strategy. We note that the generic constant c below may differ at each occurrence, unless explicitly stated.

We assume that there exists a constant M such that $\|\mathcal{K}\|_{L^2(E)\to L^\infty(E)} \le M$. According to [5,12], for any α > 0, $\alpha I + \mathcal{K}^*\mathcal{K}$ is invertible. We also have the inequalities

$$\|(\alpha I + \mathcal{K}^*\mathcal{K})^{-1}\|_\infty \le \frac{\sqrt{\alpha} + \frac{M}{2}}{\alpha^{3/2}} \quad\text{and}\quad \|(\alpha I + \mathcal{K}^*\mathcal{K})^{-1}\mathcal{K}^*\|_{L^2(E)\to L^\infty(E)} \le \frac{M}{\alpha}. \tag{17}$$

Assume that $\hat{x}$ is the exact solution of Equation (1) and $y \in \mathcal{R}(\mathcal{K})$.

(H1) If $\hat{x} \in \mathcal{R}(\mathcal{A}^\nu\mathcal{K}^*)$ with 0 < ν ≤ 1, then there exists an $\omega \in L^\infty(E)$ such that $\hat{x} = \mathcal{A}^\nu\mathcal{K}^*\omega$.

Following [5,27], if hypothesis (H1) holds, then we have the convergence rate

$$\|\hat{x} - x_\alpha\|_\infty = O(\alpha^\nu). \tag{18}$$

Suppose that $c_0 > 0$. In order to estimate the convergence rate of the integral approximation strategy, we propose the following hypothesis:

(H2) For all n ∈ N and some positive constant r,
$$\|(I-\mathcal{P}_n)\mathcal{K}\|_\infty \le c_0\mu^{-rn}, \quad \|(I-\mathcal{P}_n)\mathcal{K}^*\|_\infty \le c_0\mu^{-rn}, \quad \|\mathcal{K}(I-\mathcal{Q}_n)\|_\infty \le c_0\mu^{-rn},$$
$$\|\mathcal{K}^*(I-\mathcal{Q}_n)\|_\infty \le c_0\mu^{-rn}, \quad \|(I-\mathcal{P}_n)\mathcal{K}\|_{L^2(E)\to L^\infty(E)} \le c_0\mu^{-rn}, \quad \|(I-\mathcal{P}_n)\mathcal{K}^*\|_{L^2(E)\to L^\infty(E)} \le c_0\mu^{-rn}.$$

When $\mathcal{K} = \mathcal{K}^*$ is a Fredholm integral operator whose kernel has continuous derivatives up to order r, this hypothesis holds. Following [28], we state the remark below.

Remark 1. Let $E_j = [\frac{j}{\mu^n}, \frac{j+1}{\mu^n}]$ for $j \in \mathbb{Z}_{\mu^n}$. The Gauss–Legendre quadrature of a function $f \in H^{2m}[0,1]$ on $E_j$ is

$$\int_{E_j} f(\mu)\,d\mu \approx \sum_{i=1}^m \rho_i\,f(\mu_i), \quad \mu_i \in E_j, \ i = 1, \ldots, m,$$

where $\mu_i$, 1 ≤ i ≤ m, are the Gaussian points on $E_j$. The remainder of the Gauss–Legendre quadrature is given as

$$r_m(f) = \frac{f^{(2m)}(\xi)}{(2m)!}\int_{E_j}\omega^2(\mu)\,d\mu,$$

where $\omega(\mu) := (\mu - \mu_1)(\mu - \mu_2)\cdots(\mu - \mu_m)$ and $\big|\frac{f^{(2m)}(\xi)}{(2m)!}\big| \le M$ for $\xi \in E$. We can conclude that

$$|r_m(f)| \le \frac{M}{\mu^n}\,\mu^{-2mn}.$$
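As an aside, the exactness degree behind this remainder estimate is easy to verify numerically; the following small check (our own, with an arbitrary m) confirms that the m-point rule integrates every polynomial of degree up to 2m − 1 exactly.

```python
import numpy as np

m = 3
t, w = np.polynomial.legendre.leggauss(m)
for deg in range(2 * m):                          # degrees 0 .. 2m - 1
    exact = (1 - (-1) ** (deg + 1)) / (deg + 1)   # integral of t^deg over [-1, 1]
    assert np.isclose(w @ t**deg, exact)
```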

Note that when m Gaussian points are used, the accuracy of the Gauss–Legendre quadrature is γ = 2m − 1. Corresponding to Remark 1, we give the following proposition.

Proposition 1. Assume that the kernel smoothness order r and the number of Gaussian points m satisfy

$$r > 2m. \tag{19}$$

Then, we can obtain the conclusion that

$$\|\bar{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n\|_\infty \le c\mu^{-(\gamma+1)n}. \tag{20}$$

Proof. Since r > 2m, the kernel of $\mathcal{K}$ belongs to $H^{2m}[0,1]$. Because polynomial functions are infinitely differentiable, we get that $\mathcal{K}e_{ij} \in H^{2m}[0,1]$ for $(i,j) \in U_n$. From Remark 1, we obtain the following inequality:

$$\sup_{u\in E}\Big|\int_{E_q} k(u,u')\,e_{ij}(u')\,du' - \sum_{t=qm+1}^{(q+1)m}\rho_t\,k(u,g_t)\,e_{ij}(g_t)\Big| \le \frac{c}{\mu^n}\,\mu^{-(\gamma+1)n}, \quad q \in \mathbb{Z}_{\mu^n}. \tag{21}$$

Furthermore, summing inequality (21) over all $\mu^n$ subintervals, we infer that

$$\|\bar{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n\|_\infty \le c\mu^{-(\gamma+1)n}. \tag{22}$$

Thus, the proof has been completed.

Note that the Gaussian function is infinitely differentiable; thus, assumption (19) is easy to satisfy.

Theorem 1. Let $c_0 > 0$ and $\beta(n,\gamma) = \max\{\mu^{-(\gamma+1)n}, n\mu^{-rn/2}\}$. If hypothesis (H2) and condition (19) hold, then for n ∈ N,

$$\|\mathcal{A} - \widetilde{\mathcal{A}}_n\|_\infty \le c_0\,\beta(n,\gamma). \tag{23}$$

Proof. First, $\|\mathcal{A} - \widetilde{\mathcal{A}}_n\|_\infty \le \|\mathcal{A} - \bar{\mathcal{A}}_n\|_\infty + \|\bar{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n\|_\infty$; we estimate the two terms separately. If hypothesis (H2) holds, then according to Lemma 3.2 in [22], for all n ∈ N,

$$\|\mathcal{A} - \bar{\mathcal{A}}_n\|_\infty \le cn\mu^{-rn/2} \quad\text{and}\quad \|\mathcal{K}_n - \bar{\mathcal{K}}_n\|_\infty \le cn\mu^{-rn}.$$

From condition (19) and Proposition 1, we have $\|\bar{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n\|_\infty \le c\mu^{-(\gamma+1)n}$ and, at the same time, $\|\bar{\mathcal{K}}_n^* - \widetilde{\mathcal{K}}_n^*\|_\infty \le c\mu^{-(\gamma+1)n}$. Since $\|\bar{\mathcal{K}}_n^*\|_\infty$ and $\|\widetilde{\mathcal{K}}_n\|_\infty$ are uniformly bounded, the identity

$$\bar{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n = \bar{\mathcal{K}}_n^*\bar{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n^*\widetilde{\mathcal{K}}_n = \bar{\mathcal{K}}_n^*(\bar{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n) + (\bar{\mathcal{K}}_n^* - \widetilde{\mathcal{K}}_n^*)\widetilde{\mathcal{K}}_n$$

yields $\|\bar{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n\|_\infty \le c\mu^{-(\gamma+1)n}$. Because $\beta(n,\gamma) = \max\{\mu^{-(\gamma+1)n}, n\mu^{-rn/2}\}$, we get the inequality

$$\|\mathcal{A} - \widetilde{\mathcal{A}}_n\|_\infty \le \|\mathcal{A} - \bar{\mathcal{A}}_n\|_\infty + \|\bar{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n\|_\infty \le c_0\,\beta(n,\gamma).$$

This completes the proof.

Lemma 1. Assume that hypothesis (H2) holds, and let $c_0$ be the same parameter as in Theorem 1. If the parameters n and γ are chosen so that

$$\beta(n,\gamma) \le \frac{1}{2c_0}\cdot\frac{\alpha^{3/2}}{\sqrt{\alpha} + M/2}, \tag{24}$$

then we can conclude that $\alpha I + \widetilde{\mathcal{A}}_n$ is invertible. In addition,

$$\|(\alpha I + \widetilde{\mathcal{A}}_n)^{-1}\|_\infty \le \frac{2\sqrt{\alpha} + M}{\alpha^{3/2}}. \tag{25}$$

Proof. According to a known result in [29], we conclude that $\alpha I + \widetilde{\mathcal{A}}_n$ is invertible and

$$\|(\alpha I + \widetilde{\mathcal{A}}_n)^{-1}\|_\infty \le \frac{\|(\alpha I + \mathcal{A})^{-1}\|_\infty}{1 - \|(\alpha I + \mathcal{A})^{-1}\|_\infty\,\|\mathcal{A} - \widetilde{\mathcal{A}}_n\|_\infty}. \tag{26}$$

Estimate (25) follows from the above bound, condition (24), estimates (17), and Theorem 1.

Next, we give two lemmas. The proofs are similar to those in [22], so they are omitted here.

Lemma 2. Assume that hypothesis (H2) and assumption (19) hold, $\hat{x} \in \mathcal{R}(\mathcal{K}^*)$, and inequality (24) is satisfied. Then, for n ∈ N and a parameter c > 0,

$$\|x_\alpha - \widetilde{x}_{\alpha,n}\|_\infty \le c\,\frac{n\mu^{-rn}}{\alpha^{3/2}}.$$

Lemma 3. Suppose that hypothesis (H2) holds and inequality (24) is satisfied. Then, for n ∈ N and c > 0,

$$\|\widetilde{x}_{\alpha,n} - \widetilde{x}_{\alpha,n}^\delta\|_\infty \le c\Big(\frac{\delta}{\alpha} + \frac{\mu^{-rn}\delta}{\alpha^{3/2}}\Big).$$

For the remainder of this section, we estimate the error bound for $\|\hat{x} - \widetilde{x}_{\alpha,n}^\delta\|_\infty$.

Theorem 2. Suppose that hypotheses (H1) and (H2) and assumption (19) hold, $\hat{x} \in \mathcal{R}(\mathcal{K}^*)$, and inequality (24) is satisfied. Then, for $c_1 > 0$,

$$\|\hat{x} - \widetilde{x}_{\alpha,n}^\delta\|_\infty \le c_1\Big(\alpha^\nu + \frac{\delta}{\alpha} + \frac{n\mu^{-rn}}{\alpha^{3/2}}\Big). \tag{27}$$

Proof. From the triangle inequality, we have

$$\|\hat{x} - \widetilde{x}_{\alpha,n}^\delta\|_\infty \le \|\hat{x} - x_\alpha\|_\infty + \|x_\alpha - \widetilde{x}_{\alpha,n}\|_\infty + \|\widetilde{x}_{\alpha,n} - \widetilde{x}_{\alpha,n}^\delta\|_\infty.$$

The estimate in this theorem follows directly from the above bound, inequality (18), and Lemmas 2 and 3.

4. Multilevel Iteration Method

In general, we solve Equation (16) while choosing the regularization parameter: once the parameter selection finishes, the equation has been solved. In practice, the two processes occur simultaneously, but in order to describe them more clearly, we split them into two parts. In this section, we present the multilevel iteration method (MIM) for a fixed α, assuming that this parameter is already well selected; the regularization parameter choice rule is shown in the next section. We first describe the multilevel iteration method, then present the computation complexity of this algorithm, and finally prove the error estimates.

4.1. Multilevel Iteration Method

After obtaining the matrices $\mathbf{E}_n$, $\widetilde{\mathbf{K}}_n$, and $\mathbf{y}_n^\delta$, we begin to solve Equation (15). Inverting this equation directly would require considerable time; thus, we use a MIM instead of direct inversion to obtain a fast algorithm. First, we introduce the MIM for the coupled system (16). We now assume that the fixed parameter α has already been selected according to the rule in Section 5. With the decomposition of the solution domain, for n = k + m with $k, m \in \mathbb{N}_+$, we can write the solutions $\widetilde{x}_{\alpha,n}^\delta \in X$ and $\widetilde{v}_{\alpha,n}^\delta \in X$ of system (16) as two block vectors

$$\widetilde{x}_{\alpha,n}^\delta = [(\widetilde{x}_{\alpha,n}^\delta)_0, (\widetilde{x}_{\alpha,n}^\delta)_1, \ldots, (\widetilde{x}_{\alpha,n}^\delta)_m]^T, \qquad \widetilde{v}_{\alpha,n}^\delta = [(\widetilde{v}_{\alpha,n}^\delta)_0, (\widetilde{v}_{\alpha,n}^\delta)_1, \ldots, (\widetilde{v}_{\alpha,n}^\delta)_m]^T,$$

where $(\widetilde{x}_{\alpha,n}^\delta)_0, (\widetilde{v}_{\alpha,n}^\delta)_0 \in X_k$ and $(\widetilde{x}_{\alpha,n}^\delta)_j, (\widetilde{v}_{\alpha,n}^\delta)_j \in W_{k+j}$ for j = 1, 2, ..., m. The operator $\widetilde{\mathcal{K}}_{k+m}$ also has the matrix form

$$\widetilde{\mathcal{K}}_{k+m} := \begin{bmatrix} \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_k & \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}\\ \mathcal{P}_{k+1}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_k & \mathcal{P}_{k+1}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_{k+1}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}\\ \vdots & \vdots & \ddots & \vdots\\ \mathcal{P}_{k+m}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_k & \mathcal{P}_{k+m}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_{k+m}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m} \end{bmatrix}.$$

The MIM for solving the coupled system (16) is given as Algorithm 1. Let

$$\widetilde{\mathcal{K}}_{k+m}^L := \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m} \quad\text{and}\quad \widetilde{\mathcal{K}}_{k+m}^H := (\mathcal{P}_{k+m} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}. \tag{28}$$

We split the operator $\widetilde{\mathcal{K}}_{k+m}$ into two parts, $\widetilde{\mathcal{K}}_{k+m} = \widetilde{\mathcal{K}}_{k+m}^L + \widetilde{\mathcal{K}}_{k+m}^H$, the lower- and higher-frequency parts, respectively. Similarly, we obtain $\widetilde{\mathcal{K}}_{k+m}^{*L}$ and $\widetilde{\mathcal{K}}_{k+m}^{*H}$ by splitting $\widetilde{\mathcal{K}}_{k+m}^*$ in an analogous way. Accordingly, the coupled system (16) can be written as

$$\begin{cases}\sqrt{\alpha}\,\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{v}_{\alpha,k+m}^\delta = \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{v}_{\alpha,k+m}^\delta,\\ \widetilde{\mathcal{K}}_{k+m}^L\widetilde{x}_{\alpha,k+m}^\delta + \sqrt{\alpha}\,\widetilde{v}_{\alpha,k+m}^\delta = y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^H\widetilde{x}_{\alpha,k+m}^\delta.\end{cases} \tag{29}$$

Algorithm 1: Multilevel Iteration Method (MIM).

Step 1: Initialization. For fixed k > 0 and fixed α > 0, solve (16) with n = k exactly to obtain $\widetilde{x}_{\alpha,k}^\delta, \widetilde{v}_{\alpha,k}^\delta \in X_k$.

Step 2: Projection. Set $\widetilde{x}_{\alpha,k+m}^{\delta,0} := [\widetilde{x}_{\alpha,k}^\delta, 0]^T \in X_{k+m}$ and $\widetilde{v}_{\alpha,k+m}^{\delta,0} := [\widetilde{v}_{\alpha,k}^\delta, 0]^T \in X_{k+m}$; then compute $\widetilde{\mathcal{K}}_{k+m}^L$, $\widetilde{\mathcal{K}}_{k+m}^H$, $\widetilde{\mathcal{K}}_{k+m}^{*L}$, and $\widetilde{\mathcal{K}}_{k+m}^{*H}$ separately.

Step 3: Update. For each $\ell \in \mathbb{N}_+$, find $\widetilde{x}_{\alpha,k+m}^{\delta,\ell} = [(\widetilde{x}_{\alpha,k+m}^{\delta,\ell})_0, (\widetilde{x}_{\alpha,k+m}^{\delta,\ell})_1, \ldots, (\widetilde{x}_{\alpha,k+m}^{\delta,\ell})_m]^T \in X_{k+m}$ and $\widetilde{v}_{\alpha,k+m}^{\delta,\ell} = [(\widetilde{v}_{\alpha,k+m}^{\delta,\ell})_0, (\widetilde{v}_{\alpha,k+m}^{\delta,\ell})_1, \ldots, (\widetilde{v}_{\alpha,k+m}^{\delta,\ell})_m]^T \in X_{k+m}$, with $(\widetilde{x}_{\alpha,k+m}^{\delta,\ell})_0, (\widetilde{v}_{\alpha,k+m}^{\delta,\ell})_0 \in X_k$ and $(\widetilde{x}_{\alpha,k+m}^{\delta,\ell})_j, (\widetilde{v}_{\alpha,k+m}^{\delta,\ell})_j \in W_{k+j}$, j = 1, 2, ..., m, from the iteration

$$\begin{cases}\sqrt{\alpha}\,\widetilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{v}_{\alpha,k+m}^{\delta,\ell} = \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{v}_{\alpha,k+m}^{\delta,\ell-1},\\ \widetilde{\mathcal{K}}_{k+m}^L\widetilde{x}_{\alpha,k+m}^{\delta,\ell} + \sqrt{\alpha}\,\widetilde{v}_{\alpha,k+m}^{\delta,\ell} = y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^H\widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}.\end{cases}$$

Step 4: Stopping criterion. Stop the iteration when $\|\widetilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}\|_\infty < \delta$ and $\|\widetilde{v}_{\alpha,k+m}^{\delta,\ell} - \widetilde{v}_{\alpha,k+m}^{\delta,\ell-1}\|_\infty < \delta$.
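The following sketch (our own, purely structural: a random dense stand-in replaces the sparse multiscale matrices, and for brevity the whole low-frequency block system is inverted instead of only the k-level block $\mathbf{R}_k$) shows the shape of Algorithm 1: the high-frequency parts are lagged to the right-hand side, one fixed matrix is factored once, and the iteration stops when successive iterates are close.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fine, n_coarse, alpha, tol = 128, 32, 0.25, 1e-10
K = rng.standard_normal((n_fine, n_fine)) / n_fine   # stand-in discrete operator
y = rng.standard_normal(n_fine)

# Low/high frequency split in the spirit of (28): K_L keeps the rows of the
# coarse space X_k, K_H the remaining (finer-level) rows.
K_L = np.zeros_like(K)
K_L[:n_coarse] = K[:n_coarse]
K_H = K - K_L

s = np.sqrt(alpha)
B = np.block([[s * np.eye(n_fine), -K_L.T], [K_L, s * np.eye(n_fine)]])
B_inv = np.linalg.inv(B)        # factored once and reused in every iteration

x, v = np.zeros(n_fine), np.zeros(n_fine)
for it in range(500):
    # Step 3: high-frequency terms from the previous iterate on the right
    rhs = np.concatenate([K_H.T @ v, y - K_H @ x])
    x_new, v_new = np.split(B_inv @ rhs, 2)
    # Step 4: stopping criterion; tol plays the role of the noise level delta
    done = max(np.abs(x_new - x).max(), np.abs(v_new - v).max()) < tol
    x, v = x_new, v_new
    if done:
        break
```

In this sketch, α is taken large enough that the lagged high-frequency terms act as contractions, mirroring conditions (40) and (41) below.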

4.2. Computation Complexity

We now study the computation complexity of this algorithm; specifically, we estimate the number of multiplications used in the method. To this end, we write the iterative equation in Algorithm 1 in matrix representation form. First, we introduce the block matrix $\widetilde{\mathbf{K}}_{i'i} := [\widetilde{K}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')}, j \in \mathbb{Z}_{w(i)}]$ and define $\widetilde{\mathbf{K}}_n := [\widetilde{\mathbf{K}}_{i'i} : i', i \in \mathbb{Z}_{n+1}]$. Moreover, for a fixed k ∈ N, we define the blocks $\widetilde{\mathbf{K}}_{0,0}^k := \widetilde{\mathbf{K}}_k$. Additionally, for $s, s' \in \mathbb{N}$, we define $\widetilde{\mathbf{K}}_{0,s}^k := [\widetilde{\mathbf{K}}_{i'i} : i' \in \mathbb{Z}_{k+1}, i = k+s]$, $\widetilde{\mathbf{K}}_{s,0}^k := [\widetilde{\mathbf{K}}_{i'i} : i \in \mathbb{Z}_{k+1}, i' = k+s]$, and $\widetilde{\mathbf{K}}_{s',s}^k := \widetilde{\mathbf{K}}_{k+s',k+s}$. From these definitions, we write $\widetilde{\mathbf{K}}_{k+m} = [\widetilde{\mathbf{K}}_{i',i}^k : i', i \in \mathbb{Z}_{m+1}]$. We also partition the matrix $\mathbf{E}_n$ in the same way, which we omit here. Then, the matrix representations of the operators $\widetilde{\mathcal{K}}_{k+m}^L$ and $\widetilde{\mathcal{K}}_{k+m}^H$ are

$$\widetilde{\mathbf{K}}_{k,m}^L := \begin{bmatrix}\widetilde{\mathbf{K}}_{0,0}^k & \widetilde{\mathbf{K}}_{0,1}^k & \cdots & \widetilde{\mathbf{K}}_{0,m}^k\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0}\end{bmatrix} \quad\text{and}\quad \widetilde{\mathbf{K}}_{k,m}^H := \begin{bmatrix}\mathbf{0} & \mathbf{0} & \cdots & \mathbf{0}\\ \widetilde{\mathbf{K}}_{1,0}^k & \widetilde{\mathbf{K}}_{1,1}^k & \cdots & \widetilde{\mathbf{K}}_{1,m}^k\\ \vdots & \vdots & \ddots & \vdots\\ \widetilde{\mathbf{K}}_{m,0}^k & \widetilde{\mathbf{K}}_{m,1}^k & \cdots & \widetilde{\mathbf{K}}_{m,m}^k\end{bmatrix}, \ \text{respectively.} \tag{30}$$

We also write the matrix representations of $\widetilde{x}_{\alpha,k+m}^{\delta,\ell}$ and $\widetilde{v}_{\alpha,k+m}^{\delta,\ell}$ at the ℓth iteration as $\widetilde{\mathbf{x}}_{k,m}^\ell := [\widetilde{\mathbf{x}}_i^\ell : i \in \mathbb{Z}_{m+1}]$ and $\widetilde{\mathbf{v}}_{k,m}^\ell := [\widetilde{\mathbf{v}}_i^\ell : i \in \mathbb{Z}_{m+1}]$. Furthermore, we define $\mathbf{y}_{k+m}^\delta := [\mathbf{y}_i^\delta : i \in \mathbb{Z}_{m+1}]$. Using this block form of the matrices, the solutions of the iterative equation in Algorithm 1 become

$$\widetilde{\mathbf{x}}_i^\ell = \frac{1}{\sqrt{\alpha}}\sum_{j=0}^m \widetilde{\mathbf{K}}_{i,j}^k\,\widetilde{\mathbf{v}}_j^{\ell-1} - \sum_{j=i+1}^m \mathbf{E}_{i,j}^k\,\widetilde{\mathbf{x}}_j^\ell, \quad i = m, m-1, \ldots, 1, \tag{31}$$

$$\widetilde{\mathbf{v}}_i^\ell = \frac{1}{\sqrt{\alpha}}\mathbf{y}_i^\delta - \frac{1}{\sqrt{\alpha}}\sum_{j=0}^m \widetilde{\mathbf{K}}_{i,j}^k\,\widetilde{\mathbf{x}}_j^{\ell-1} - \sum_{j=i+1}^m \mathbf{E}_{i,j}^k\,\widetilde{\mathbf{v}}_j^\ell, \quad i = m, m-1, \ldots, 1. \tag{32}$$

For i = 0,

$$\begin{bmatrix}\sqrt{\alpha}\,\mathbf{E}_k & -\widetilde{\mathbf{K}}_k\\ \widetilde{\mathbf{K}}_k & \sqrt{\alpha}\,\mathbf{E}_k\end{bmatrix}\begin{bmatrix}\widetilde{\mathbf{x}}_0^\ell\\ \widetilde{\mathbf{v}}_0^\ell\end{bmatrix} = \begin{bmatrix}\mathbf{0}\\ \mathbf{y}_0^\delta\end{bmatrix} + \sum_{j=1}^m \begin{bmatrix}-\sqrt{\alpha}\,\mathbf{E}_{0,j}^k & \widetilde{\mathbf{K}}_{0,j}^k\\ -\widetilde{\mathbf{K}}_{0,j}^k & -\sqrt{\alpha}\,\mathbf{E}_{0,j}^k\end{bmatrix}\begin{bmatrix}\widetilde{\mathbf{x}}_j^\ell\\ \widetilde{\mathbf{v}}_j^\ell\end{bmatrix}. \tag{33}$$

For a matrix $\mathbf{A}$, we denote by $\mathcal{N}(\mathbf{A})$ the number of nonzero entries of $\mathbf{A}$. Let

$$\mathbf{R}_k = \begin{bmatrix}\sqrt{\alpha}\,\mathbf{E}_k & -\widetilde{\mathbf{K}}_k\\ \widetilde{\mathbf{K}}_k & \sqrt{\alpha}\,\mathbf{E}_k\end{bmatrix}.$$

For ℓ > 0, we need $2\mathcal{N}(\widetilde{\mathbf{K}}_{k+m}) + 2\mathcal{N}(\mathbf{E}_{k+m})$ multiplications, together with an application of the inverse operation $\mathbf{R}_k^{-1}$, to obtain $\widetilde{\mathbf{x}}_i^\ell$ and $\widetilde{\mathbf{v}}_i^\ell$, i = m, m−1, ..., 1, 0 from Equations (31)–(33). However, across all iterations, the inverse operation $\mathbf{R}_k^{-1}$ only needs to be computed once; thus, we assume that the inverse operation $\mathbf{R}_k^{-1}$ needs M(k) multiplications. Hence, the number of multiplications for computing $\widetilde{\mathbf{x}}_i^\ell$ and $\widetilde{\mathbf{v}}_i^\ell$ from $\widetilde{\mathbf{x}}_i^{\ell-1}$ and $\widetilde{\mathbf{v}}_i^{\ell-1}$, i = m, m−1, ..., 1, 0, is

$$N_{k,m}^\ell = 2\mathcal{N}(\widetilde{\mathbf{K}}_{k+m}) + 2\mathcal{N}(\mathbf{E}_{k+m}) + M(k). \tag{34}$$

In addition, in the first step of Algorithm 1, we only need to apply the same inverse operation $\mathbf{R}_k^{-1}$ to obtain $\widetilde{\mathbf{x}}_{k,m}^0$ and $\widetilde{\mathbf{v}}_{k,m}^0$. Therefore, we are now ready to summarize the above discussion in a proposition.

Proposition 2. The total number of multiplications required to obtain $\widetilde{\mathbf{x}}_{k,m}^\ell$ and $\widetilde{\mathbf{v}}_{k,m}^\ell$ is given by

$$M(k) + \ell\big[2\mathcal{N}(\widetilde{\mathbf{K}}_{k+m}) + 2\mathcal{N}(\mathbf{E}_{k+m})\big]. \tag{35}$$
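To get a feel for the count in (35), a trivial helper follows (the numbers plugged in are invented, purely for illustration):

```python
def mim_multiplications(M_k, nnz_K, nnz_E, ell):
    """Total multiplications (35): one-time cost M(k) for the inverse of R_k,
    plus ell iterations costing 2*N(K) + 2*N(E) multiplications each."""
    return M_k + ell * (2 * nnz_K + 2 * nnz_E)

print(mim_multiplications(M_k=2_000, nnz_K=50_000, nnz_E=10_000, ell=3))  # 362000
```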

Note that solving the coupled system (16) directly requires computing the inverse operation $\mathbf{R}_{k+m}^{-1}$, whereas the MIM only requires the inverse operation $\mathbf{R}_k^{-1}$ at the coarse level k. This is the key factor that leads to a fast algorithm. In the next subsection, we estimate the error $\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty$.

4.3. Error Estimation

From the triangle inequality

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le \|\hat{x} - \widetilde{x}_{\alpha,k+m}^\delta\|_\infty + \|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty,$$

we only need to estimate $\|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty$. Let

$$D_{k,m} := \widetilde{\mathcal{K}}_n^*\widetilde{\mathcal{K}}_{k+m}^H + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^L, \qquad F_{k,m}(\alpha) := \alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^L.$$

Then, $\alpha I + \widetilde{\mathcal{K}}_n^*\widetilde{\mathcal{K}}_n = D_{k,m} + F_{k,m}(\alpha)$.

Lemma 4. Suppose that hypotheses (H1) and (H2) and condition (24) hold. Then $\|D_{k,m}\|_\infty \to 0$ as $k \to \infty$, for $m \in \mathbb{N}_+$. Moreover, for $N \in \mathbb{N}_+$, $k > N$, and $m \in \mathbb{N}_+$,

$$\|D_{k,m}\|_\infty < \frac{\alpha^{3/2}}{2\sqrt{\alpha}+M} \quad\text{and}\quad \|F_{k,m}^{-1}(\alpha)\|_\infty \le \frac{1}{\dfrac{\alpha^{3/2}}{2\sqrt{\alpha}+M} - \|D_{k,m}\|_\infty}. \tag{36}$$

Proof. If these two hypotheses hold, then there exists a constant $c_2 > 0$ such that

$$\|(I - \mathcal{P}_k)\widetilde{\mathcal{K}}_n\|_\infty \le \|(I-\mathcal{P}_k)\mathcal{K}_n\|_\infty + \|(I-\mathcal{P}_k)(\mathcal{K}_n - \bar{\mathcal{K}}_n)\|_\infty + \|(I-\mathcal{P}_k)(\bar{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n)\|_\infty \le c_2\,\beta(k,\gamma) \le \frac{\alpha^{3/2}}{2\sqrt{\alpha}+M},$$

and the same bound holds for $\|(I-\mathcal{P}_k)\widetilde{\mathcal{K}}_n^*\|_\infty$. We rewrite $\|D_{k,m}\|_\infty$ as

$$\|D_{k,m}\|_\infty = \|(I-\mathcal{P}_k)\widetilde{\mathcal{K}}_n^*\mathcal{P}_k\widetilde{\mathcal{K}}_n + \widetilde{\mathcal{K}}_n^*(I-\mathcal{P}_k)\widetilde{\mathcal{K}}_n\|_\infty \le \frac{\alpha^{3/2}}{2\sqrt{\alpha}+M} \to 0, \quad\text{as } k \to \infty. \tag{37}$$

From the definition of $F_{k,m}(\alpha)$, (17), and Theorem 1, for any $u \in X_{k+m}$ we have

$$\|F_{k,m}(\alpha)u\|_\infty \ge \|(\alpha I + \widetilde{\mathcal{K}}_n^*\widetilde{\mathcal{K}}_n)u\|_\infty - \|D_{k,m}u\|_\infty \ge \Big(\frac{\alpha^{3/2}}{2\sqrt{\alpha}+M} - \|D_{k,m}\|_\infty\Big)\|u\|_\infty. \tag{38}$$

This, together with Equation (37), proves that the first part of (36) is true; the second part then follows.

A corollary (cf. Corollary 9.9, page 337 of [10]) confirms that the condition numbers $\mathrm{cond}(\alpha I + \widetilde{\mathcal{K}}_n^{*L}\widetilde{\mathcal{K}}_n^L)$ and $\mathrm{cond}(\alpha I + \widetilde{\mathcal{A}}_n)$ have the same order. In other words, the MIM does not ruin the well-conditioning of the original multiscale method. However, selecting an appropriate k is important, since it influences the convergence of the iteration process. It follows from the iterative equation in Algorithm 1 that

$$(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^L)\,\widetilde{x}_{\alpha,k+m}^{\delta,\ell} = \widetilde{\mathcal{K}}_{k+m}^{*L}\big(y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^H\widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}\big) + \widetilde{\mathcal{K}}_{k+m}^{*H}\big(y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^H\widetilde{x}_{\alpha,k+m}^{\delta,\ell-2} - \widetilde{\mathcal{K}}_{k+m}^L\widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}\big). \tag{39}$$

Generally, in order to make the iteration process converge, we choose k such that

$$\|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^L)^{-1}\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^H\|_\infty < 1 \tag{40}$$

and

$$\|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^L)^{-1}\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^H\|_\infty < 1. \tag{41}$$

Theorem 3. If hypothesis (H2) and condition (24) hold, let

$$\gamma_{\alpha,k,m,\ell} := c_3\,t_k^{\ell/2}(\alpha)\Big(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\Big), \tag{42}$$

where

$$t_k(\alpha) = \frac{\mu^{-rk}}{\dfrac{\alpha^{3/2}}{2\sqrt{\alpha}+M} - \|D_{k,m}\|_\infty}.$$

Then, for $N_0 \in \mathbb{N}_+$, $k > N_0$, and $m \in \mathbb{N}_+$, we obtain the result

$$\|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le \gamma_{\alpha,k,m,\ell}. \tag{43}$$

Proof. We prove this result by induction on ℓ. First, we prove that Equation (43) holds when ℓ = 0. It is apparent that $\widetilde{x}_{\alpha,k+m}^{\delta,0} = \widetilde{x}_{\alpha,k}^\delta$. Therefore, from Lemmas 2 and 3, we can conclude that there exists a constant c such that

$$\|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k}^\delta\|_\infty \le \|x_\alpha^\delta - \widetilde{x}_{\alpha,k+m}^\delta\|_\infty + \|x_\alpha^\delta - \widetilde{x}_{\alpha,k}^\delta\|_\infty \le c\Big(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\Big), \tag{44}$$

and then we obtain Equation (43) when ℓ = 0. Meanwhile, following from Equation (29), we can obtain

$$(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^L + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^H + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^L + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^H)\,\widetilde{x}_{\alpha,k+m}^\delta = \widetilde{\mathcal{K}}_{k+m}^*y_{k+m}^\delta. \tag{45}$$

Subtracting (39) from (45), we have

$$\widetilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{x}_{\alpha,k+m}^\delta = F_{k,m}^{-1}(\alpha)\sum_{i=1}^2 w_i, \tag{46}$$

where

$$w_1 = (\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^H + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^L)(\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}) \quad\text{and}\quad w_2 = \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^H(\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell-2}).$$

On the basis of Lemma 4, for $N_0 \in \mathbb{N}_+$ and $k > N_0$,

$$\|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le \|F_{k,m}^{-1}(\alpha)\|_\infty\sum_{i=1}^2\|w_i\|_\infty. \tag{47}$$

Next, we estimate $\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^H$, $\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^H$, and $\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^L$. From hypothesis (H2), we have

$$\|\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^H\|_\infty = \|(\mathcal{P}_{k+m}-\mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}^*\mathcal{Q}_{k+m}(\mathcal{P}_{k+m}-\mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}\|_\infty \le \|\mathcal{P}_{k+m}(I-\mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}^*\|_\infty\,\|\mathcal{P}_{k+m}(I-\mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\|_\infty \le c\mu^{-2rk} \tag{48}$$

and

$$\|\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^H\|_\infty = \|\mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}^*\mathcal{Q}_{k+m}(\mathcal{P}_{k+m}-\mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}\|_\infty \le M\|\mathcal{P}_{k+m}(I-\mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\|_\infty \le c\mu^{-rk}. \tag{49}$$

Then, we get

$$\|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le c\mu^{-rk}\|F_{k,m}^{-1}(\alpha)\|_\infty(\gamma_{\alpha,k,m,\ell-1} + \gamma_{\alpha,k,m,\ell-2}) \le c\,t_k(\alpha)(\gamma_{\alpha,k,m,\ell-1} + \gamma_{\alpha,k,m,\ell-2}). \tag{50}$$

We assume that Equation (43) holds for ℓ = q − 1, $q \in \mathbb{N}_+$, and prove that it also holds for ℓ = q. Following from Equations (40) and (41), we can obtain

$$t_k(\alpha) = \frac{\mu^{-rk}}{\dfrac{\alpha^{3/2}}{2\sqrt{\alpha}+M} - \|D_{k,m}\|_\infty} < 1. \tag{51}$$

From Equation (50) and inequality (51), we obtain

$$\|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,q}\|_\infty \le c\,t_k(\alpha)\big[c\,t_k^{(q-1)/2}(\alpha) + c\,t_k^{(q-2)/2}(\alpha)\big]\Big(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\Big) = c\big[c\,t_k^{(q+1)/2}(\alpha) + c\,t_k^{q/2}(\alpha)\big]\Big(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\Big) \le c_3\,t_k^{q/2}(\alpha)\Big(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\Big) = \gamma_{\alpha,k,m,q}, \tag{52}$$

which completes the proof of this theorem.

Theorem 4. If hypotheses (H1) and (H2) and assumption (19) hold, the parameters n and γ are chosen to satisfy condition (24), and the integer k is chosen to satisfy Equations (40) and (41), then for $N_0 \in \mathbb{N}_+$, $k > N_0$, and $m \in \mathbb{N}_+$,

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le c_4\Big[\alpha^\nu + \frac{\delta}{\alpha} + \frac{n\mu^{-rn}}{\alpha^{3/2}} + t_k^{\ell/2}(\alpha)\,\frac{k\mu^{-rk}}{\alpha^{3/2}}\Big]. \tag{53}$$

Moreover, if the parameters k and m are chosen on the basis of n := k + m, and the number of iterations ℓ satisfies

$$\frac{n\mu^{-rn}}{\alpha^{3/2}} + t_k^{\ell/2}(\alpha)\,\frac{k\mu^{-rk}}{\alpha^{3/2}} \le \frac{2\delta}{\alpha}, \tag{54}$$

then

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le 3c_4\Big(\alpha^\nu + \frac{\delta}{\alpha}\Big). \tag{55}$$

Proof. From the triangle inequality, we have

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le \|\hat{x} - \widetilde{x}_{\alpha,k+m}^\delta\|_\infty + \|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty.$$

The combination of Theorems 2 and 3 implies inequality (53). If the three parameters n, γ, and k are chosen to satisfy conditions (40), (41), and (54), together with m := n − k, then inequality (53) yields

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le c_4\Big(\alpha^\nu + 3\frac{\delta}{\alpha}\Big) \le 3c_4\Big(\alpha^\nu + \frac{\delta}{\alpha}\Big), \tag{56}$$

which completes the proof.

Note that

$$\|\widetilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}\|_\infty \le \|\widetilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{x}_{\alpha,k+m}^\delta\|_\infty + \|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}\|_\infty \le \gamma_{\alpha,k,m,\ell} + \gamma_{\alpha,k,m,\ell-1},$$

and $\gamma_{\alpha,k,m,\ell}$ decreases exponentially in ℓ. Therefore, the stopping criterion is reached in a finite number of steps. With the next remark, we show that the stopping criterion of Algorithm 1 preserves the convergence rate of Equation (55).

Remark 2. Suppose that hypotheses (H1) and (H2) and assumption (19) hold. The parameters n and γ are selected to satisfy condition (24), and n satisfies $\frac{n\mu^{-rn}}{\alpha^{3/2}} \le \frac{\delta}{\alpha}$. If the iteration stops when

$$\|\widetilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell-1}\|_\infty < \delta \quad\text{and}\quad \|\widetilde{v}_{\alpha,k+m}^{\delta,\ell} - \widetilde{v}_{\alpha,k+m}^{\delta,\ell-1}\|_\infty < \delta, \tag{57}$$

then the estimation error is

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le c\Big(\alpha^\nu + \frac{\delta}{\alpha}\Big). \tag{58}$$

Proof. From the triangle inequality, we get

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le \|\hat{x} - \widetilde{x}_{\alpha,k+m}^\delta\|_\infty + \|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty.$$

On the one hand, following from the iterative equation in Algorithm 1, we can write $\widetilde{x}_{\alpha,k+m}^{\delta,\ell}$ as

$$\widetilde{x}_{\alpha,k+m}^{\delta,\ell} = (\alpha I + \widetilde{\mathcal{K}}_{k+m}^*\widetilde{\mathcal{K}}_{k+m})^{-1}\big[\sqrt{\alpha}\,\widetilde{\mathcal{K}}_{k+m}^{*H}(\widetilde{v}_{\alpha,k+m}^{\delta,\ell-1} - \widetilde{v}_{\alpha,k+m}^{\delta,\ell}) - \widetilde{\mathcal{K}}_{k+m}^*\widetilde{\mathcal{K}}_{k+m}^H(\widetilde{x}_{\alpha,k+m}^{\delta,\ell-1} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}) + \widetilde{\mathcal{K}}_{k+m}^*y^\delta\big]. \tag{59}$$

On the other hand,

$$\widetilde{x}_{\alpha,k+m}^\delta = (\alpha I + \widetilde{\mathcal{K}}_{k+m}^*\widetilde{\mathcal{K}}_{k+m})^{-1}\widetilde{\mathcal{K}}_{k+m}^*y^\delta. \tag{60}$$

Combining Equations (59) and (60), we have

$$\|\widetilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{x}_{\alpha,k+m}^\delta\|_\infty \le \sqrt{\alpha}\,\|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^*\widetilde{\mathcal{K}}_{k+m})^{-1}\widetilde{\mathcal{K}}_{k+m}^{*H}\|_\infty\,\|\widetilde{v}_{\alpha,k+m}^{\delta,\ell-1} - \widetilde{v}_{\alpha,k+m}^{\delta,\ell}\|_\infty + \|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^*\widetilde{\mathcal{K}}_{k+m})^{-1}\widetilde{\mathcal{K}}_{k+m}^*\widetilde{\mathcal{K}}_{k+m}^H\|_\infty\,\|\widetilde{x}_{\alpha,k+m}^{\delta,\ell-1} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty.$$

From the right inequality of (17) and the error bound (57), we can obtain

$$\|\widetilde{x}_{\alpha,k+m}^\delta - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le c\,\frac{\delta}{\alpha}. \tag{61}$$

According to Theorem 2, the above inequality, and the condition $\frac{n\mu^{-rn}}{\alpha^{3/2}} \le \frac{\delta}{\alpha}$, we get the final result:

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le c\Big(\alpha^\nu + \frac{\delta}{\alpha}\Big). \tag{62}$$

5. Regularization Parameter Choice Strategies

Choosing an appropriate regularization parameter α is an important step in solving the ill-posed integral equation. We present an a posteriori parameter choice strategy [14] for our proposed method. For any given α > 0, we choose the parameters k(α), m(α), n(α), and γ according to Theorem 4. Following [14], we assume that there exist two increasing continuous functions,

$$\varphi(\alpha) := 3c_4\alpha^\nu \quad\text{and}\quad \psi(\alpha) := \frac{\alpha}{3c_4}, \tag{63}$$

with φ(0) = 0, such that we can write Equation (55) as

$$\|\hat{x} - \widetilde{x}_{\alpha,k+m}^{\delta,\ell}\|_\infty \le \varphi(\alpha) + \frac{\delta}{\psi(\alpha)}. \tag{64}$$

Then, $\alpha = \alpha_{\mathrm{opt}} := (\varphi\psi)^{-1}(\delta)$ would be the best choice. For constants $q_0 > 1$ and ρ > 0, we let the positive integer N be determined by

$$\rho\delta q_0^{N-1} \le 1 < \rho\delta q_0^N, \tag{65}$$

and then define the set $\Delta_N := \{\alpha_i := \rho\delta q_0^i : i = 0, 1, \ldots, N\}$ and $M(\Delta_N) := \{\alpha_i \in \Delta_N : \varphi(\alpha_i) \le \frac{\delta}{\psi(\alpha_i)}\}$. Obviously, $\alpha^* := \max\{\alpha_i : \alpha_i \in M(\Delta_N)\}$ could serve as an approximation of the regularization parameter, but the function φ(α) involves the unknown smoothness order ν of the integral operator. Therefore, it is infeasible to use $M(\Delta_N)$ directly, and a small modification is necessary. We next present a rule for choosing the regularization parameter α.

Rule 1. As suggested in [14], we choose the parameter $\alpha := \alpha_+ = \max\{\alpha_j : \alpha_j \in M^+(\Delta_N)\}$ as an approximation of $\alpha_{\mathrm{opt}}$, where

$$M^+(\Delta_N) := \Big\{\alpha_j \in \Delta_N : \|\widetilde{x}_{\alpha_j,n(\alpha_j)}^{\delta,\ell} - \widetilde{x}_{\alpha_i,n(\alpha_i)}^{\delta,\ell}\|_\infty \le \frac{4\delta}{\psi(\alpha_i)},\ i = 0, 1, \ldots, j\Big\}.$$
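A compact sketch of this rule follows (our own rendering: `solve` stands for running the MIM on the fully discrete system (16) at a given α and is assumed to be supplied by the caller; ρ, q₀, and c₄ are illustrative defaults).

```python
import numpy as np

def choose_alpha(solve, delta, rho=1.0, q0=2.0, c4=1.0):
    # (65): the largest N with rho * delta * q0**(N - 1) <= 1
    N = 0
    while rho * delta * q0**N <= 1.0:
        N += 1
    alphas = [rho * delta * q0**i for i in range(N + 1)]   # the grid Delta_N
    psi = lambda a: a / (3.0 * c4)                         # (63)
    sols = [solve(a) for a in alphas]

    # Rule 1: alpha_+ is the largest alpha_j whose solution stays within
    # 4 * delta / psi(alpha_i) of the solutions at all smaller alpha_i.
    alpha_plus = alphas[0]
    for j in range(len(alphas)):
        if all(np.max(np.abs(sols[j] - sols[i])) <= 4.0 * delta / psi(alphas[i])
               for i in range(j + 1)):
            alpha_plus = alphas[j]
    return alpha_plus
```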

This is an a posteriori parameter choice strategy, which does not involve the smoothness order ν. We next present a crucial lemma, which can be found in Theorem 2.1 of [14].

Lemma 5. Suppose that assumption (64) holds, and $k(\alpha_+)$ and $m(\alpha_+)$ are the integers corresponding to $\alpha_+$. If, for $\alpha_i \in \Delta_N$, i = 1, 2, ..., N, we have $M(\Delta_N) \ne \emptyset$, $\Delta_N\setminus M(\Delta_N) \ne \emptyset$, and $\psi(\alpha_i) \le c\,\psi(\alpha_{i-1})$ for c > 0, then we can get the result

$$\|\hat{x} - \widetilde{x}_{\alpha_+,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}\|_\infty \le 6c\,\varphi((\varphi\psi)^{-1}(\delta)). \tag{66}$$

Proof. This lemma can be obtained from [7] with a slight modification. Thus, we omit the details of the proof.

Then, we estimate the convergence order when we choose the parameter α := α+.

Theorem 5. Suppose that hypotheses (H1) and (H2) and assumption (19) are true. If we choose the regularization parameter according to Rule 1, then the convergence order of the approximate solution $\widetilde{x}_{\alpha_+,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}$ is

$$\|\hat{x} - \widetilde{x}_{\alpha_+,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}\|_\infty = O\big(\delta^{\frac{\nu}{\nu+1}}\big), \quad\text{as } \delta \to 0. \tag{67}$$

Proof. Substituting Equation (63) into Equation (66) gives Equation (67), so it suffices to verify the hypotheses of Lemma 5. From the definition of $\Delta_N$, we get $\psi(\alpha_i) = q_0\psi(\alpha_{i-1})$. Note that $\varphi(\alpha)\psi(\alpha) = \alpha^{\nu+1}$. On the one hand, $\varphi(\alpha_0)\psi(\alpha_0) = (\rho\delta q_0^0)^{\nu+1} = (\rho\delta)^{\nu+1}$, and since $\rho^{\nu+1}\delta^\nu \le 1$ for small δ, we have $(\rho\delta)^{\nu+1} \le \delta$; then we obtain $\varphi(\alpha_0) \le \frac{\delta}{\psi(\alpha_0)}$. On the other hand, $\varphi(\alpha_N)\psi(\alpha_N) = (\rho\delta q_0^N)^{\nu+1} > 1 \ge \delta$ since δ ≤ 1, and then $\varphi(\alpha_N) > \frac{\delta}{\psi(\alpha_N)}$. In sum, we can conclude that $\alpha_0 \in M(\Delta_N)$ and $\alpha_N \notin M(\Delta_N)$; therefore, $M(\Delta_N) \ne \emptyset$, and at the same time, $\Delta_N\setminus M(\Delta_N) \ne \emptyset$. So far, we have shown that the conditions of Lemma 5 are met. Therefore, there exists a constant $c_5$ such that

$$\|\hat{x} - \widetilde{x}_{\alpha_+,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}\|_\infty \le 6q_0\,\varphi((\varphi\psi)^{-1}(\delta)) = c_5\,\delta^{\frac{\nu}{\nu+1}}. \tag{68}$$

The proof has been completed.

6. Numerical Experiments

In this section, three numerical examples are presented for the restoration of out-of-focus images to verify the performance of the fully discrete multiscale collocation method and the multilevel iteration method. Matlab was used to conduct our simulations, and all examples below were run on a computer with a 3.00 GHz CPU and 8 GB of memory. Using the integral equation necessitates transforming the discrete matrix (the observed image) into a continuous function; we used the method in [1] directly. Assume that the size of the image is $ro \times co$ and the pixels of the image lie on the grid $\{(i/ro, j/co) : i = 0, 1, \ldots, ro;\ j = 0, 1, \ldots, co\}$. The function formulating the image is

$$h(u,v) := \sum_{i=0}^{ro}\sum_{j=0}^{co} h_{ij}\,\phi_{i,ro}(u)\,\phi_{j,co}(v), \tag{69}$$

where $h_{ij}$ is the old pixel value. Assume that s is a positive integer. Then, for l = 0, 1, ..., s,

$$\phi_{l,s}(t) = \begin{cases} st - (l-1), & (l-1)/s < t \le l/s,\\ (l+1) - st, & l/s < t \le (l+1)/s,\\ 0, & \text{otherwise}. \end{cases} \tag{70}$$

Then Equation (5) becomes

$$y(u,v) = \sum_{i=0}^{ro}\sum_{j=0}^{co} h_{ij}\int_0^1\!\!\int_0^1 \frac{1}{2\pi\sigma^2}\,e^{-\frac{(u-u')^2 + (v-v')^2}{2\sigma^2}}\,\phi_{i,ro}(u')\,\phi_{j,co}(v')\,du'\,dv'. \tag{71}$$

The noise level is defined as $\delta = \|y\|_\infty \cdot e/100$, where e is the noise percentage. Note that, for simplicity, we employ the piecewise linear basis functions of [26] in the following examples.
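Before turning to the examples, here is a small sketch of the intensity function (69)–(70) (our own; the 5 × 5 random pixel array is just a stand-in for an observed image).

```python
import numpy as np

def phi(l, s, t):
    """Hat function (70): rises on ((l-1)/s, l/s], falls on (l/s, (l+1)/s]."""
    t = np.asarray(t, dtype=float)
    rise = (s * t - (l - 1)) * (((l - 1) / s < t) & (t <= l / s))
    fall = ((l + 1) - s * t) * ((l / s < t) & (t <= (l + 1) / s))
    return rise + fall

def h(u, v, pixels):
    """Continuous image (69): h(u, v) = sum_ij h_ij phi_{i,ro}(u) phi_{j,co}(v)."""
    ro, co = pixels.shape[0] - 1, pixels.shape[1] - 1
    pu = np.array([phi(i, ro, u) for i in range(ro + 1)])
    pv = np.array([phi(j, co, v) for j in range(co + 1)])
    return pu @ pixels @ pv

img = np.random.rand(5, 5)      # stand-in pixel values h_ij on the grid
value = h(0.37, 0.52, img)      # intensity at an off-grid point (u, v)
```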

Example 1. In our first experiment, we verified the effectiveness of the integral approximation strategy for the coupled integral equation. We set n = 0 as the initial level and measured, in seconds, the computing time for generating the coefficient matrix $\mathbf{K}_n$ of the coupled operator Equation (9) for n = 5 to 10; see Table 1. For comparison, we repeated the experiment with the numerical quadrature scheme in [1]. In this example, both integration methods have the same accuracy, γ = 3. For the different values of n, TN denotes the computing time of the numerical quadrature scheme in [1], and TI denotes the computing time of the proposed integral approximation strategy. As we can see in Table 1, the proposed integral approximation strategy uses less time to generate the coefficient matrix $\mathbf{K}_n$, which shows that the integral approximation strategy is an efficient fast algorithm.

Table 1. Computing time (in seconds) for generating the matrices Kn.

n     5       6       7       8       9       10
TN    0.31    0.58    1.17    2.42    5.45    13.98
TI    0.002   0.004   0.013   0.047   0.190   2.14

Example 2. The second simulation was executed to verify the efficiency of the proposed multilevel iteration method. Figure 3a is the original clear 'building' image with size 256 × 384, and Figure 3b is the blurred image with σ = 0.01 in the blurring kernel and noise level δ = 0.01, which we recover using the MIM. For comparison, we also conducted experiments using the Gaussian elimination method (GEM) in [28] and the multilevel augmentation method (MAM) in [10]. TGEM, TMAM, and TMIM represent the computing time of the Gaussian elimination method, the multilevel augmentation method, and the multilevel iteration method, respectively. The value n listed in Table 2 ranges from 5 to 10, and in this case, the continuous intensity function (69) is needed. As shown in Section 2, two one-dimensional integral equations are solved to recover the blurred image; therefore, the computing time here denotes the total time needed to solve the coupled Equation (8) twice. In our MIM experiments, all processes stopped at the second iteration, which is very fast because of the good selection of the initial values $\widetilde{x}_{\alpha,k}^\delta$ and $\widetilde{v}_{\alpha,k}^\delta$ (see Step 1 of Algorithm 1). Figure 3c–e show the reconstructed images of (b) using the GEM, the MAM, and the MIM. Meanwhile, Table 2 and Figure 4 exhibit the computing time of these three methods. On the whole, the computing time of the MIM is the lowest among these methods. All results show that the proposed multilevel iteration method requires fewer computational resources; the difference is obvious, especially when the indicator of the extended dimension n is large.

Table 2. The computing time (in seconds) of the GEM, the MAM, and the MIM.

n       5        6        7        8        9         10
TGEM    0.0131   0.0645   0.3535   2.4250   15.8140   100.6399
TMAM    0.0103   0.0528   0.3454   2.2584   13.8898   82.9874
TMIM    0.0084   0.0500   0.3478   1.8875   11.0880   64.6845

Figure 3. (a) The original clear 'building' image. (b) Blurred image with σ = 0.01 and δ = 0.01. (c–e) Reconstructed images of (b) with the GEM, the MAM, and the MIM, respectively.

Figure 4. The computing time of the GEM, the MAM and the MIM.

Example 3. In this example, we demonstrate that the performance of the MIM is as good as that of the alternative method. As shown in Figure 5b–d, we consider the restoration of the 'Lena' image, which has size 257 × 257. We blur the image with σ = 0.01 in the blurring kernel and different noise levels δ; note that when δ = 0, the image is noise free. We introduce the peak signal-to-noise ratio (PSNR) to evaluate the restored and blurred images. For comparison, we solved the corresponding integral equation using the proposed multilevel iteration method and the Gaussian elimination method with the piecewise linear polynomial basis functions at n = 8. Tables 3 and 4 list $\alpha_+$ obtained from the parameter choice using Rule 1, the PSNR value of the blurred image (PB), the PSNR value of the reconstructed image (PR), and the corresponding time to solve the equation using the GEM and the MIM separately, where the noise level δ ranges from 0 to 0.15. Figure 5e–j show the reconstructed images corresponding to the different methods at noise levels 0, 0.03, and 0.1. In general, comparing the numerical results in Tables 3 and 4, we conclude that there is almost no difference in the PSNR value of the reconstructed image between the GEM and the MIM; more specifically, the MIM performs better in most cases.

Figure 5. (a) Original clear 'Lena' image. (b–d) Blurred images with σ = 0.01 in the kernel and noise levels δ = 0, 0.03, 0.1, respectively. (e–g) Reconstructed images of (b–d), respectively, using the GEM. (h–j) Reconstructed images of (b–d), respectively, using the MIM.

Table 3. Performance of the Gaussian elimination method.

δ      0              0.01      0.03      0.05      0.10      0.15
α+     4.3980×10⁻⁷    0.0067    0.0167    0.0256    0.04      0.05
PB     23.0173        22.9552   22.4792   21.6726   19.1102   16.7175
PR     31.9072        25.8306   24.5039   23.6348   21.9135   20.7536

Table 4. Performance of the multilevel iteration method.

δ      0              0.01      0.03      0.05      0.10      0.15
α+     6.6570×10⁻⁶    0.009     0.0199    0.0240    0.0394    0.0526
PB     23.0173        22.9528   22.4801   21.6692   19.1201   16.7044
PR     30.9030        25.9479   24.5537   23.6281   21.9105   20.7557

Example 2, combined with Example 3, shows that the MIM uses less computation time than the alternative methods while performing equally well. Therefore, we can conclude that the multilevel iteration method is an effective fast algorithm for solving the integral equation in image restoration.

7. Conclusions

In this paper, we formulate the problem of image restoration as an integral equation. In order to solve this integral equation, we propose two fast algorithms. The first is the fully discrete multiscale collocation method, which converts the calculation of the integral into a matrix operation. The second is the multilevel iteration method, which guarantees that the solution has an optimal convergence order. All examples verify that the proposed methods are accurate and efficient when compared with alternative strategies. In the future, we will continue to focus on finding faster and more efficient methods.

Author Contributions: B.Z. completed the analysis and designed the paper; H.Y. managed the project and reviewed the paper. All authors have read and agreed to the published version of the manuscript.

Funding: This work is supported in part by NSFC under grant 11571386.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Lu, Y.; Shen, L.; Xu, Y. Integral equation models for image restoration: High accuracy methods and fast algorithms. Inverse Probl. 2010, 26, 045006.
2. Chan, R.H.; Chan, T.F.; Shen, L.; Shen, Z. Wavelet algorithms for high-resolution image reconstruction. SIAM J. Sci. Comput. 2003, 24, 1408–1432.
3. Jiang, Y.; Li, S.; Xu, Y. A Higher-Order Polynomial Method for SPECT Reconstruction. IEEE Trans. Med. Imaging 2019, 38, 1271–1283.
4. Liu, Y.; Shen, L.; Xu, Y.; Yang, H. A collocation method solving integral equation models for image restoration. J. Integral Eq. Appl. 2016, 28, 263–307.
5. Groetsch, C.W. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind; Research Notes in Mathematics; Pitman (Advanced Publishing Program): Boston, MA, USA, 1984.
6. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Mathematics and Its Applications; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1996.
7. Chen, Z.; Ding, S.; Xu, Y.; Yang, H. Multiscale collocation methods for ill-posed integral equations via a coupled system. Inverse Probl. 2012, 28, 025006.
8. Russell, R.D.; Shampine, L.F. A collocation method for boundary value problems. Numer. Math. 1972, 19, 1–28.
9. Krasnosel'skii, M.A.; Vainikko, G.M.; Zabreiko, P.P.; Rutitskii, Y.B.; Stetsenko, V.Y. Approximate Solution of Operator Equations; Wolters-Noordhoff: Groningen, The Netherlands, 1972.
10. Chen, Z.; Micchelli, C.A.; Xu, Y. Multiscale Methods for Fredholm Integral Equations; Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: Cambridge, UK, 2015.
11. Micchelli, C.A.; Xu, Y.; Zhao, Y. Wavelet Galerkin methods for second-kind integral equations. J. Comput. Appl. Math. 1997, 86, 251–270.
12. Groetsch, C.W. Convergence analysis of a regularized degenerate kernel method for Fredholm integral equations of the first kind. Integral Eq. Oper. Theory 1990, 13, 67–75.

13. Hämarik, U.; Raus, T. About the balancing principle for choice of the regularization parameter. Numer. Funct. Anal. Optim. 2009, 30, 951–970.
14. Pereverzev, S.; Schock, E. On the adaptive selection of the parameter in regularization of ill-posed problems. SIAM J. Numer. Anal. 2005, 43, 2060–2076.
15. Bauer, F.; Kindermann, S. The quasi-optimality criterion for classical inverse problems. Inverse Probl. 2008, 24, 035002.
16. Jin, Q.; Wang, W. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule. Inverse Probl. 2018, 34, 035001.
17. Rajan, M.P. Convergence analysis of a regularized approximation for solving Fredholm integral equations of the first kind. J. Math. Anal. Appl. 2003, 279, 522–530.
18. Ma, Y.; Xu, Y. Computing integrals involved the Gaussian function with a small standard deviation. J. Sci. Comput. 2019, 78, 1744–1767.
19. Luo, X.; Fan, L.; Wu, Y.; Li, F. Fast multi-level iteration methods with compression technique for solving ill-posed integral equations. J. Comput. Appl. Math. 2014, 256, 131–151.
20. Gonzalez, R.C.; Wintz, P. Digital Image Processing; Addison-Wesley Publishing Co.: Reading, MA, USA; London, UK; Amsterdam, The Netherlands, 1977.
21. Chen, Z.; Micchelli, C.A.; Xu, Y. A construction of interpolating wavelets on invariant sets. Math. Comp. 1999, 68, 1569–1587.
22. Chen, Z.; Xu, Y.; Yang, H. Fast collocation methods for solving ill-posed integral equations of the first kind. Inverse Probl. 2008, 24, 065007.
23. Chen, Z.; Xu, Y.; Yang, H. A multilevel augmentation method for solving ill-posed operator equations. Inverse Probl. 2006, 22, 155–174.
24. Chen, Z.; Micchelli, C.A.; Xu, Y. Fast collocation methods for second kind integral equations. SIAM J. Numer. Anal. 2002, 40, 344–375.
25. Micchelli, C.A.; Xu, Y. Reconstruction and decomposition algorithms for biorthogonal multiwavelets. Multidimens. Syst. Signal Process. 1997, 8, 31–69.
26. Fang, W.; Lu, M. A fast collocation method for an inverse boundary value problem. Int. J. Numer. Methods Eng. 2004, 59, 1563–1585.
27. Groetsch, C.W. Uniform convergence of regularization methods for Fredholm equations of the first kind. J. Aust. Math. Soc. Ser. A 1985, 39, 282–286.
28. Kincaid, D.; Cheney, W. Numerical Analysis: Mathematics of Scientific Computing; Brooks/Cole Publishing Co.: Pacific Grove, CA, USA, 1991.
29. Taylor, A.E.; Lay, D.C. Introduction to Functional Analysis, 2nd ed.; Robert E. Krieger Publishing Co., Inc.: Melbourne, FL, USA, 1986.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).