
Article

Convergence Rate of Runge-Kutta-Type Regularization for Nonlinear Ill-Posed Problems under Logarithmic Source Condition

Pornsarp Pornsawad 1,2, Elena Resmerita 3,* and Christine Böckmann 4,5

1 Department of Mathematics, Faculty of Science, Silpakorn University, 6 Rachamakka Nai Rd., Nakhon Pathom 73000, Thailand; [email protected]
2 Centre of Excellence in Mathematics, Mahidol University, Rama 6 Rd., Bangkok 10400, Thailand
3 Institute of Mathematics, Alpen-Adria University of Klagenfurt, Universitätsstr. 65-67, A-9020 Klagenfurt, Austria
4 Institute of Mathematics, University of Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany; [email protected]
5 Helmholtz Centre for Polar and Marine Research, Alfred Wegener Institute, Telegrafenberg A45, 14473 Potsdam, Germany
* Correspondence: [email protected]

Abstract: We prove the logarithmic convergence rate of the families of usual and modified iterative Runge-Kutta methods for nonlinear ill-posed problems between Hilbert spaces under the logarithmic source condition, and numerically verify the obtained results. The iterative regularization is terminated by the a posteriori discrepancy principle.

Keywords: nonlinear inverse problem; ill-posed problem; iterative regularization; Runge-Kutta methods; logarithmic source condition; discrepancy principle; convergence rate

Citation: Pornsawad, P.; Resmerita, E.; Böckmann, C. Convergence Rate of Runge-Kutta-Type Regularization for Nonlinear Ill-Posed Problems under Logarithmic Source Condition. Mathematics 2021, 9, 1042. https://doi.org/10.3390/math9091042

Communicated by: Denis N. Sidorov. Received: 12 March 2021; Accepted: 1 May 2021; Published: 4 May 2021.

1. Introduction

Let $X$ and $Y$ be infinite-dimensional real Hilbert spaces with inner products $\langle \cdot, \cdot \rangle$ and norms $\|\cdot\|$. Let us consider a nonlinear ill-posed operator equation

$$F(w) = g, \qquad (1)$$

where $F : D(F) \subset X \to Y$ is a nonlinear operator between the Hilbert spaces $X$ and $Y$. We assume that (1) has a solution $w^+$ for exact data (which need not be unique). We have approximate data $g^\varepsilon$ with

$$\|g^\varepsilon - g\| \le \varepsilon, \qquad \varepsilon > 0. \qquad (2)$$

Besides the classical Tikhonov–Phillips regularization, a plethora of interesting variational and iterative approaches for ill-posed problems can be found, e.g., in Morozov [1], Tikhonov and Arsenin [2], Bakushinsky and Kokurin [3], and Kaltenbacher et al. [4]. We focus here on iterative methods, as they are also very popular and effective to use in applications. The simplest iterative regularization is the Landweber method; see, e.g., Hanke et al. [5], where the analysis of convergence rates is done under a Hölder-type source condition. A more effective method often used in applications is the Levenberg–Marquardt method

$$w_{k+1}^\varepsilon = w_k^\varepsilon + \big(\alpha_k I + F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)\big)^{-1} F'(w_k^\varepsilon)^* \big(g^\varepsilon - F(w_k^\varepsilon)\big). \qquad (3)$$

This was investigated in [6–8] under the Hölder-type source condition (HSC) and an a posteriori discrepancy principle (DP). Jin [7] proved optimal convergence rates for an a priori chosen geometric step size $\alpha_k$, whereas Hochbruck and Hönig [6] showed convergence with the optimal rate for quite general step sizes including the
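To make the role of the damping parameters $\alpha_k$ in (3) concrete, here is a minimal one-dimensional sketch (our illustration, not the paper's implementation; the toy operator $F(w) = w^3$, the noise level, and the geometric step sizes are assumed for the example):

```python
def levenberg_marquardt(F, dF, g_eps, w0, alphas):
    """One-dimensional sketch of iteration (3): each step applies the
    regularized inverse (alpha_k + F'(w)^2)^(-1) F'(w) to the residual."""
    w = w0
    for a in alphas:
        s = dF(w)                                # scalar stand-in for F'(w_k)
        w = w + s * (g_eps - F(w)) / (a + s * s)
    return w

# Toy problem F(w) = w^3 with exact solution w+ = 2 and noisy data g^eps.
F = lambda w: w ** 3
dF = lambda w: 3 * w ** 2
eps = 1e-3
g_eps = 2.0 ** 3 + eps
# a priori chosen geometric step sizes alpha_k, in the spirit of Jin [7]
alphas = [2.0 * 0.7 ** k for k in range(60)]
w_rec = levenberg_marquardt(F, dF, g_eps, 1.0, alphas)
```

For this well-posed toy problem the iterate approaches the noise-perturbed solution $(8+\varepsilon)^{1/3}$; in the ill-posed setting it is exactly this damping that stabilizes the inversion.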


geometric sequence. Later, Hanke [8] avoided any constraints on the rate of decay of the regularization parameter to show the optimal convergence rate. Tautenhahn [9] proved that asymptotic regularization, i.e., the approximation of problem (1) by a solution of the Showalter differential equation (SDE)

$$\frac{d}{dt}w^\varepsilon(t) = F'(w^\varepsilon(t))^*\big[g^\varepsilon - F(w^\varepsilon(t))\big], \quad 0 < t \le T, \quad w^\varepsilon(0) = \bar{w}, \qquad (4)$$

where the regularization parameter $T$ is chosen according to the DP under the HSC, is a stable method to solve nonlinear ill-posed problems. Solving the SDE by the family of Runge-Kutta (RK) methods delivers a family of RK-type iterative regularization methods

$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k b^T \big(\delta + \tau_k A\, F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)\big)^{-1}\mathbb{1}\, F'(w_k^\varepsilon)^*\big(g^\varepsilon - F(w_k^\varepsilon)\big), \quad k \in \mathbb{N}_0, \qquad (5)$$

where $\mathbb{1}$ denotes the $(s \times 1)$ vector of identity operators, while $\delta$ is the $(s \times s)$ diagonal matrix of bounded linear operators with the identity operator on the entire diagonal and the zero operator outside of the main diagonal with respect to the appropriate spaces. The parameter $\tau_k = 1/\alpha_k$ in (5) is the step length, also called the relaxation parameter. The $(s \times s)$ matrix $A$ and the $(s \times 1)$ vector $b$ are the given parameters that correspond to the specific RK method, building the so-called Butcher tableau (succession of stages). Different choices of the RK parameters generate various iterative methods. Böckmann and Pornsawad [10] showed convergence for the whole RK-type family (including the well-known Landweber and Levenberg–Marquardt methods). That paper also emphasized advantages of using some procedures from the mentioned family, e.g., regarding implicit A-stable Butcher tableaux. For instance, the Landweber method needs a lot of iteration steps, while those implicit methods need only a few iteration steps, thus minimizing the rounding errors. Later, Pornsawad and Böckmann [11] filled in the missing results on optimal convergence rates, but only for particular first-stage methods under HSC and using DP. Our current study considers further the unifying RK-framework described above, as well as a modified version presented below, showing optimality of the RK-regularization schemes under logarithmic source conditions. An additional term $\alpha_k(w_k^\varepsilon - \xi)$, as in the iteratively regularized Gauss–Newton method (see, e.g., [4]),
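The role of the Butcher tableau in (5) can be made concrete through the rational function $f(t) = b^T(I - At)^{-1}\mathbb{1}$ it induces (this function reappears in Assumption 2 below). The following sketch (our illustration; exact rational arithmetic is used only for transparency) evaluates $f$ for a given tableau and confirms that the explicit Euler tableau reproduces the Landweber filter, while the implicit Euler tableau reproduces the Levenberg–Marquardt filter with $\alpha = 1/\tau$:

```python
from fractions import Fraction

def rk_filter(A, b, t):
    """Evaluate f(t) = b^T (I - A t)^{-1} 1 for an s-stage Butcher tableau
    (A, b) at a scalar argument t (the spectral view of the step in (5))."""
    s = len(b)
    M = [[Fraction(int(i == j)) - t * A[i][j] for j in range(s)] for i in range(s)]
    x = [Fraction(1)] * s
    for col in range(s):                      # Gauss-Jordan elimination
        piv = next(r for r in range(col, s) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        x[col], x[piv] = x[piv], x[col]
        for r in range(s):
            if r != col and M[r][col] != 0:
                fac = M[r][col] / M[col][col]
                for c in range(s):
                    M[r][c] -= fac * M[col][c]
                x[r] -= fac * x[col]
    return sum(b[i] * x[i] / M[i][i] for i in range(s))

# Explicit Euler (A = 0, b = 1): f is constant 1, so (5) is the Landweber method.
f_expl = rk_filter([[Fraction(0)]], [Fraction(1)], Fraction(-3))
# Implicit Euler (A = 1, b = 1): f(t) = 1/(1 - t), so tau*f(-tau*lambda)
# equals (alpha + lambda)^{-1} with alpha = 1/tau: the Levenberg-Marquardt filter.
f_impl = rk_filter([[Fraction(1)]], [Fraction(1)], Fraction(-3))
```

For instance, with $\tau = 3$ and spectral value $\lambda = 1$ the implicit Euler weight $\tau f(-\tau\lambda) = 3/4$ coincides with $(\alpha + \lambda)^{-1}$ for $\alpha = 1/3$.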

$$w_{k+1}^\varepsilon = w_k^\varepsilon - \big(F'(w_k^\varepsilon)^* F'(w_k^\varepsilon) + \alpha_k I\big)^{-1}\big(F'(w_k^\varepsilon)^*(F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k(w_k^\varepsilon - \xi)\big),$$

was added to a modified Landweber method. Thus, Scherzer [12] proved a convergence rate result under HSC without particular assumptions on the nonlinearity of the operator $F$. Moreover, in Pornsawad and Böckmann [13], the additional term was included in the whole family of iterative RK-type methods (which contains the modified Landweber iteration),

$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k b^T \Pi^{-1}\mathbb{1}\, F'(w_k^\varepsilon)^*\big(g^\varepsilon - F(w_k^\varepsilon)\big) - \tau_k^{-1}(w_k^\varepsilon - \zeta), \qquad (6)$$

where $\zeta \in X$ and $\Pi = \delta + \tau_k A\, F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)$. Using a priori and a posteriori stopping rules, the convergence rate results of the RK-type family are obtained under HSC if the Fréchet derivative is properly scaled. Due to the minimal assumptions for the convergence analysis of the modified iterative RK-type methods, an additional term was added to the SDE

$$\frac{d}{dt}w^\varepsilon(t) = F'(w^\varepsilon(t))^*\big[g^\varepsilon - F(w^\varepsilon(t))\big] - (w^\varepsilon(t) - \bar{w}), \quad 0 < t \le T, \quad w^\varepsilon(0) = \bar{w}. \qquad (7)$$

Pornsawad et al. [14] investigated this continuous version of the modified iterative RK-type methods for nonlinear inverse ill-posed problems. The convergence analysis yields the optimal rate of convergence under a modified DP and an exponential source condition.

Recently, a second-order asymptotic regularization for the linear problem $\hat{A}x = y$ was investigated in [15],

$$\ddot{x}(t) + \mu\dot{x}(t) + \hat{A}^*\hat{A}x(t) = \hat{A}^* y^\delta, \quad x(0) = \bar{x}, \quad \dot{x}(0) = \dot{\bar{x}},$$

under HSC using DP. Define

$$\varphi = \varphi_p, \qquad \varphi_p(\lambda) := \begin{cases} \left(\ln\dfrac{e}{\lambda}\right)^{-p} & \text{for } 0 < \lambda \le 1, \\[4pt] 0 & \text{for } \lambda = 0, \end{cases} \qquad (8)$$

with $p > 0$, and the usual logarithmic sourcewise representation

$$w^+ - w_0 = \varphi\big(F'(w^+)^* F'(w^+)\big)v, \qquad v \in X, \qquad (9)$$

where $\|v\|$ is sufficiently small and $w_0 \in D(F)$ is an initial guess that may incorporate a priori knowledge of the solution. In numerous applications, e.g., heat conduction and scattering theory, which are severely ill-posed problems, the Hölder source condition is far too strong. Therefore, Hohage [16] proved convergence and logarithmic convergence rates for the iteratively regularized Gauss–Newton method in a Hilbert space setting, provided a logarithmic source condition (9) is satisfied and DP is used as the stopping rule. Deuflhard et al. [17] showed a convergence rate result for the Landweber iteration using DP and (9) under a Newton–Mysovskii condition on the nonlinear operator; sufficient conditions for the convergence rate, which is logarithmic in the data noise level, are given there. Hohage [18] provides a systematic study of convergence rates for regularization methods under (9), including the case of operator approximations, for a priori and a posteriori stopping rules. A logarithmic source condition is considered by Pereverzyev et al. [19] for a derivative-free method, by Mahale and Nair [20] for a simplified generalized Gauss–Newton method, and by Böckmann et al. [21] for the Levenberg–Marquardt method using DP as the stopping rule. Pornsawad et al. [22] solved the inverse potential problem, which is exponentially ill-posed, employing the modified Landweber method, and proved a convergence rate under the logarithmic source condition via DP for this method. To the best of our knowledge, convergence rates are established here for the first time both for the whole family of RK-methods and for the modified version when applied to severely ill-posed problems (i.e., under the logarithmic source condition).

The structure of this article is as follows. Section 2 provides assumptions and technical estimates. We derive the convergence rate of the RK-type method (5) in Section 3 and of the modified RK-type method (6) in Section 4 under the logarithmic source conditions (8) and (9). In Section 5, numerical experiments confirm the theoretical results.
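To see how weak the logarithmic source condition is compared with a Hölder-type one, a quick numerical sketch of the index function (8) (our illustration; the sample values of $\lambda$, $p$, and the Hölder exponent are arbitrary):

```python
import math

def phi_p(lam, p):
    """Index function (8): phi_p(lambda) = (ln(e/lambda))^(-p) for
    lambda in (0, 1], with phi_p(0) = 0."""
    if lam == 0:
        return 0.0
    return math.log(math.e / lam) ** (-p)

# At a tiny spectral value, the logarithmic index function is many orders of
# magnitude larger than a Hoelder index function lambda**nu, which is why (9)
# is the realistic smoothness assumption for severely ill-posed problems.
lam = 1e-12
log_val = phi_p(lam, 1.0)   # = 1/(1 + 12 ln 10), about 0.035
holder_val = lam ** 0.5     # = 1e-6
```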

2. Preliminary Results

Lemma 1. Let $K$ be a linear operator with $\|K\| \le 1$. For $k \in \mathbb{N}$ with $k > 1$ and $e_0 := \varphi(K^*K)v$ with $\varphi$ given by (8) and $p > 0$, there exist positive constants $c_1$ and $c_2$ such that

$$\|(I - K^*K)^k e_0\| \le c_1(\ln(k+e))^{-p}\|v\| \qquad (10)$$

and

$$\|K(I - K^*K)^k e_0\| \le c_2(k+1)^{-1/2}(\ln(k+e))^{-p}\|v\|. \qquad (11)$$

Proof. By spectral theory, (8), and (A1) and (A2) in [22], we have

$$\|(I - K^*K)^k e_0\| \le \|(I - K^*K)^k \varphi(K^*K)\|\,\|v\| \le \sup_{\lambda \in (0,1]}\big|(1-\lambda)^k(1 - \ln\lambda)^{-p}\big|\,\|v\| \le c_1(\ln(k+e))^{-p}\|v\|, \qquad (12)$$

for some constant $c_1 > 0$. Similarly, spectral theory, (8), and (A3) and (A4) in [22] provide

$$\|K(I - K^*K)^k e_0\| \le \|(I - K^*K)^k (K^*K)^{1/2}\varphi(K^*K)\|\,\|v\| \le \sup_{\lambda \in (0,1]}\big|(1-\lambda)^k \lambda^{1/2}(1 - \ln\lambda)^{-p}\big|\,\|v\| \le c_2(k+1)^{-1/2}(\ln(k+e))^{-p}\|v\|, \qquad (13)$$

for some constant c2 > 0.
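The supremum estimated in (12) can be checked numerically. The following sketch (ours; the grid resolution and the sampled values of $k$ are arbitrary) approximates $\sup_{\lambda\in(0,1]}(1-\lambda)^k(1-\ln\lambda)^{-p}$ and verifies that its ratio to the majorant $(\ln(k+e))^{-p}$ stays bounded in $k$, as Lemma 1 asserts:

```python
import math

def sup_factor(k, p, n=20000):
    """Grid approximation of sup over lambda in (0,1] of
    (1 - lambda)^k * (1 - ln(lambda))^(-p), the quantity bounded in (12)."""
    best = 0.0
    for i in range(1, n + 1):
        lam = i / n
        best = max(best, (1 - lam) ** k * (1 - math.log(lam)) ** (-p))
    return best

p = 1.0
# Ratios against the claimed majorant (ln(k+e))^{-p}; boundedness of these
# ratios in k is exactly the content of estimate (12), with c1 their bound.
ratios = [sup_factor(k, p) * math.log(k + math.e) ** p for k in (1, 10, 100, 1000)]
```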

Assumption 1. There exist positive constants $c_L$, $c_r$, and $\hat{c}_R$ and a linear bounded operator $R_w : Y \to Y$ such that for $w \in B_\rho(w_0)$ the following conditions hold:

$$F'(w) = R_w F'(w^+) \qquad (14)$$
$$\|R_w - I\| \le c_L\|w - w^+\| \qquad (15)$$

$$\|R_w\| \le c_r \qquad (16)$$
$$\big|\,\|R_w\| - \|I\|\,\big| \ge \hat{c}_R, \qquad (17)$$

where $w^+$ is the exact solution of (1).

Let $e_k := w^+ - w_k^\varepsilon$ be the error of the $k$th iterate $w_k^\varepsilon$, $S_k := F'(w_k^\varepsilon)$, and $S := F'(w^+)$.

Proposition 1. Let the conditions (14) and (15) in Assumption 1 hold. Then,

$$\|F(w_k^\varepsilon) - F(w^+) - F'(w^+)(w_k^\varepsilon - w^+)\| \le \frac{1}{2}c_L\|e_k\|\,\|Se_k\| \qquad (18)$$

for w ∈ Bρ(w0).

The proof is given in [22] using the mean value theorem.

Assumption 2. Let K be a linear operator and τ be a positive number. There exist positive constants D1 and D2 such that

$$\|I - \tau f(-\tau KK^*)\| \le D_1 \qquad (19)$$

and
$$\|I + \tau f(-\tau KK^*)\| \le D_2, \qquad (20)$$
with $f(t) := b^T(I - At)^{-1}\mathbb{1}$.

We note that the explicit Euler method provides $\|I - \tau f(-\tau KK^*)\| \le |1 - \tau|$ and $\|I + \tau f(-\tau KK^*)\| \le |1 + \tau|$. Thus, the conditions (19) and (20) hold if $\tau$ is bounded. For the implicit Euler method, we have

$$\|I - \tau f(-\tau KK^*)\| \le \sup_{0 < \lambda \le \lambda_0}\big|1 - \tau(1 + \tau\lambda)^{-1}\big|$$

and
$$\|I + \tau f(-\tau KK^*)\| \le \sup_{0 < \lambda \le \lambda_0}\big|1 + \tau(1 + \tau\lambda)^{-1}\big|,$$

for some positive number $\lambda_0$. We observe from Figure 1 that the conditions (19) and (20) hold for $\tau > 0$.
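The same observation can be checked numerically in place of the plots (our sketch; the $\lambda$ grid and the sampled values of $\tau$ are arbitrary). For the implicit Euler filter $f(t) = (1-t)^{-1}$, both quantities $|1 \mp \tau(1+\tau\lambda)^{-1}|$ are bounded by $1+\tau$ for every $\lambda > 0$, so $D_1$ and $D_2$ in (19) and (20) exist for each fixed $\tau > 0$:

```python
def sup_on_grid(tau, sign, lambdas):
    # sup over the lambda grid of |1 + sign * tau * (1 + tau*lambda)^{-1}|
    return max(abs(1 + sign * tau / (1 + tau * lam)) for lam in lambdas)

lambdas = [i / 1000 for i in range(1, 1001)]        # grid on (0, 1]
sups = {tau: (sup_on_grid(tau, -1, lambdas), sup_on_grid(tau, +1, lambdas))
        for tau in (0.5, 5.0, 500.0)}
```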


Figure 1. Plots of (a) $z = |1 - \tau(1 + \tau\lambda)^{-1}|$ and (b) $z = |1 + \tau(1 + \tau\lambda)^{-1}|$ for $0 < \lambda \le 1$ and $0 < \tau \le 500$.

Finally, we need a technical result for the next two sections.

Lemma 2. Let Assumptions 1 and 2 hold for the operator $S := F'(w^+)$. Then, there exists a positive number $c_R$ such that

$$\text{i)}\quad \|I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\| \le c_R\|w^+ - w_k^\varepsilon\| \qquad (21)$$
and
$$\text{ii)}\quad \|(1 - \alpha_k)I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\| \le c_R\|w^+ - w_k^\varepsilon\|. \qquad (22)$$

Proof. (i) Following the proof technique of Theorem 1 in [22] and using (17), we have

$$1 \le \hat{c}_R^{-1}\|R_w - I\| \qquad (23)$$

and
$$\|I + R_w^*\| \le \hat{c}_R^{-1}\|I - R_w^*\|\,\|I + R_w^*\|. \qquad (24)$$
Using Assumption 2; the estimates in Equations (15), (16), (19), (20), and (24); and the triangle inequality, we obtain

$$\begin{aligned}
\|I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\|
&= \Big\|\frac{1}{2}\big[(I + R_{w_k^\varepsilon}^*)(I - \tau_k f(-\tau_k S_k S_k^*))\big] + \frac{1}{2}\big[(I - R_{w_k^\varepsilon}^*)(I + \tau_k f(-\tau_k S_k S_k^*))\big]\Big\| \\
&\le \frac{1}{2}\|I + R_{w_k^\varepsilon}^*\|\,\|I - \tau_k f(-\tau_k S_k S_k^*)\| + \frac{1}{2}\|I - R_{w_k^\varepsilon}^*\|\,\|I + \tau_k f(-\tau_k S_k S_k^*)\| \\
&\le \frac{1}{2}\big[\hat{c}_R^{-1} D_1\|I + R_{w_k^\varepsilon}^*\| + D_2\big]\|I - R_{w_k^\varepsilon}^*\| \\
&\le c_R\|w^+ - w_k^\varepsilon\|, \qquad (25)
\end{aligned}$$

with positive number $c_R = \frac{1}{2}\big[\hat{c}_R^{-1}D_1(\|I\| + c_r) + D_2\big]c_L$.

(ii) Denote $A_k = \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)$. We have

$$\|(1-\alpha_k)I - A_k\| \le \frac{1}{2}\big\|[1 - (1+\alpha_k)](I + A_k) + [1 + (1-\alpha_k)](I - A_k)\big\| \le \frac{|\alpha_k|}{2}\|I + A_k\| + \frac{|2-\alpha_k|}{2}\|I - A_k\|. \qquad (26)$$

Part (i) ensures an upper bound for the second term of the last formula. Hence, a similar upper bound for $\|I + A_k\|$ remains to be determined. To this end, we will use the identity $I + PQ = \frac{1}{2}[(I-P)(I-Q) + (I+P)(I+Q)]$ applied to $P = R_{w_k^\varepsilon}^*$ and $Q = \tau_k f(-\tau_k S_k S_k^*)$. Thus, by using (19) and (20), we obtain

$$\begin{aligned}
\|I + A_k\| &= \frac{1}{2}\big\|(I - R_{w_k^\varepsilon}^*)(I - \tau_k f(-\tau_k S_k S_k^*)) + (I + R_{w_k^\varepsilon}^*)(I + \tau_k f(-\tau_k S_k S_k^*))\big\| \\
&\le \frac{D_1}{2}\|I - R_{w_k^\varepsilon}^*\| + \frac{D_2}{2}\|I + R_{w_k^\varepsilon}^*\| \\
&\le c\|w^+ - w_k^\varepsilon\|,
\end{aligned}$$

for some positive $c$, where the last inequality follows as in (24) and (25). Now, (26) combined with part (i) and the last inequality yields (22).

3. Convergence Rate for the Iterative RK-Type Regularization

To investigate the convergence rate of the RK-type regularization method (5) under the logarithmic source condition, the nonlinear operator $F$ has to satisfy the local property in an open ball $B_\rho(w_0)$ of radius $\rho$ around $w_0$,

$$\|F(w) - F(\tilde{w}) - F'(w)(w - \tilde{w})\| \le \eta\|F(w) - F(\tilde{w})\|, \qquad \eta < \frac{1}{2}, \qquad (27)$$

with $w, \tilde{w} \in B_\rho(w_0) \subset D(F)$. In addition, the regularization parameter $k_*$ is chosen according to the generalized discrepancy principle, i.e., the iteration is stopped after $k_*$ steps with

$$\|g^\varepsilon - F(w_{k_*}^\varepsilon)\| \le \gamma\varepsilon < \|g^\varepsilon - F(w_k^\varepsilon)\|, \qquad 0 \le k < k_*, \qquad (28)$$

where $\gamma > \frac{2-\eta}{1-\eta}$ is a positive number. Note that the triangle inequality yields

$$\frac{1}{1+\eta}\|F'(w)(w - \tilde{w})\| \le \|F(w) - F(\tilde{w})\| \le \frac{1}{1-\eta}\|F'(w)(w - \tilde{w})\|. \qquad (29)$$

In the sequel, we establish an error estimate that will be useful in deriving the logarithmic convergence rate.

Theorem 1. Let Assumptions 1 and 2 be valid. Assume that problem (1) has a solution $w^+$ in $B_{\rho/2}(w_0)$ and $g^\varepsilon$ fulfills (2). Furthermore, assume that the Fréchet derivative of $F$ is scaled such that $\|F'(w)\| \le 1$ for $w \in B_{\rho/2}(w_0)$ and that the source conditions (8) and (9) are fulfilled. Thereby, the iterative RK-type regularization is stopped according to the discrepancy principle (28). If $\|v\|$ is sufficiently small, then there exists a constant $c$ depending only on $p$ and $\|v\|$ such that for any $0 \le k < k_*$,

$$\|w^+ - w_k^\varepsilon\| \le c(\ln k)^{-p} \qquad (30)$$

and

$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le 4c(k+1)^{-1/2}(\ln k)^{-p}.$$

Proof. Using (5), we can show that

$$\begin{aligned}
e_{k+1} &= w^+ - w_k^\varepsilon + \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*(F(w_k^\varepsilon) - g^\varepsilon) \\
&= (I - S^*S)e_k + S^*Se_k + \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*(F(w_k^\varepsilon) - g^\varepsilon) \\
&= (I - S^*S)e_k + S^*[F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] \\
&\quad + S^*[F(w^+) - g^\varepsilon + g^\varepsilon - F(w_k^\varepsilon)] + \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*(F(w_k^\varepsilon) - g^\varepsilon) \\
&= (I - S^*S)e_k + S^*[F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] + S^*(g - g^\varepsilon) \\
&\quad + S^*[g^\varepsilon - F(w_k^\varepsilon)] + \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*(F(w_k^\varepsilon) - g^\varepsilon) \\
&= (I - S^*S)e_k + S^*[F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] + S^*(g - g^\varepsilon) \\
&\quad + \big[S^* - \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*\big](g^\varepsilon - F(w_k^\varepsilon)). \qquad (31)
\end{aligned}$$

Using the spectral theory and (14), we have

$$f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^* = F'(w_k^\varepsilon)^* f(-\tau_k S_k S_k^*) = S^* R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*). \qquad (32)$$

Consequently, Equation (31) can be rewritten as

$$\begin{aligned}
e_{k+1} &= (I - S^*S)e_k + S^*[F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] + S^*(g - g^\varepsilon) \\
&\quad + S^*\big[I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\big](g^\varepsilon - F(w_k^\varepsilon)) \\
&= (I - S^*S)e_k + S^*(g - g^\varepsilon) + S^* z_k, \qquad (33)
\end{aligned}$$

where

$$z_k = F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+) + \big[I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\big](g^\varepsilon - F(w_k^\varepsilon)). \qquad (34)$$

By recurrence and Equation (33), we obtain

$$e_k = (I - S^*S)^k e_0 + \sum_{j=1}^{k}(I - S^*S)^{j-1}S^*(g - g^\varepsilon) + \sum_{j=1}^{k}(I - S^*S)^{j-1}S^* z_{k-j}. \qquad (35)$$

Moreover, it holds that

$$Se_k = (I - SS^*)^k Se_0 + \sum_{j=1}^{k}(I - SS^*)^{j-1}SS^*(g - g^\varepsilon) + \sum_{j=1}^{k}(I - SS^*)^{j-1}SS^* z_{k-j}. \qquad (36)$$

We will prove by mathematical induction that

$$\|e_k\| \le c(\ln(k+e))^{-p} \qquad (37)$$

and
$$\|Se_k\| \le c(k+1)^{-1/2}(\ln(k+e))^{-p} \qquad (38)$$

hold for all $0 \le k < k_*$ with a positive constant $c$ independent of $k$. Using the discrepancy principle (28), the triangle inequality, and $\gamma > \frac{2-\eta}{1-\eta}$, we can show that

$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le 2\|g^\varepsilon - F(w_k^\varepsilon)\| - \gamma\varepsilon \le 2\|g - F(w_k^\varepsilon)\| \le \frac{2}{1-\eta}\|Se_k\|. \qquad (39)$$

Using Proposition 1, Lemma 2, (34), and (39), it follows that

$$\begin{aligned}
\|z_k\| &\le \|F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)\| + \|I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\|\,\|g^\varepsilon - F(w_k^\varepsilon)\| \\
&\le \frac{1}{2}c_L\|e_k\|\|Se_k\| + \frac{2c_R}{1-\eta}\|e_k\|\|Se_k\| \\
&\le \hat{c}_1\|e_k\|\|Se_k\|, \qquad (40)
\end{aligned}$$

with $\hat{c}_1 \ge \frac{1}{2}c_L + \frac{2c_R}{1-\eta}$. By the assumption $\|S\| \le 1$ (see Vainikko and Veterennikov [23], as cited in Hanke et al. [5]), we have
$$\Big\|\sum_{j=0}^{k-1}(I - S^*S)^j S^*\Big\| \le \sqrt{k} \qquad (41)$$

and
$$\|(I - S^*S)^j S^*\| \le (j+1)^{-1/2}, \qquad j \ge 1. \qquad (42)$$
Therefore,

$$\Big\|\sum_{j=1}^{k}(I - S^*S)^{j-1}S^*(g - g^\varepsilon)\Big\| = \Big\|\sum_{j=0}^{k-1}(I - S^*S)^{j}S^*(g - g^\varepsilon)\Big\| \le \sqrt{k}\,\varepsilon \qquad (43)$$

and

$$\Big\|\sum_{j=1}^{k}(I - S^*S)^{j-1}S^* z_{k-j}\Big\| = \Big\|\sum_{j=0}^{k-1}(I - S^*S)^{j}S^* z_{k-j-1}\Big\| \le \sum_{j=0}^{k-1}(j+1)^{-1/2}\|z_{k-j-1}\|. \qquad (44)$$

Using Lemma 1, (40), (43), and (44), Equation (35) becomes

$$\begin{aligned}
\|e_k\| &\le c_1(\ln(k+e))^{-p}\|v\| + \sqrt{k}\,\varepsilon + \sum_{j=0}^{k-1}(j+1)^{-1/2}\|z_{k-j-1}\| \\
&\le c_1(\ln(k+e))^{-p}\|v\| + \sqrt{k}\,\varepsilon + \hat{c}_1\sum_{j=0}^{k-1}(j+1)^{-1/2}\|e_{k-j-1}\|\|Se_{k-j-1}\|. \qquad (45)
\end{aligned}$$

Employing the induction hypothesis (37) and (38) in the third term of (45), we obtain

$$\begin{aligned}
\sum_{j=0}^{k-1}&(j+1)^{-1/2}\|e_{k-j-1}\|\|Se_{k-j-1}\| \le c^2\sum_{j=0}^{k-1}(j+1)^{-1/2}(k-j)^{-1/2}(\ln(k-j-1+e))^{-2p} \\
&= c^2\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1/2}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\frac{1}{k+1}\,(\ln(k-j-1+e))^{-2p} \\
&\le c^2(\ln(k+e))^{-p}\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1/2}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\frac{1}{k+1}\Big(\frac{\ln(k+e)}{\ln(k-j-1+e)}\Big)^{2p}. \qquad (46)
\end{aligned}$$

Similar to Equation (45) in [22], we have

$$\frac{\ln(k+e)}{\ln(k-j-1+e)} \le E\Big[1 + \ln\Big(\frac{k+1}{k-j-1+e}\Big)\Big] \qquad (47)$$

with a generic constant E < 2, which does not depend on k ≥ 1. Using (47), we can estimate (46) as

$$\begin{aligned}
\sum_{j=0}^{k-1}&(j+1)^{-1/2}\|e_{k-j-1}\|\|Se_{k-j-1}\| \\
&\le c^2 E^{2p}(\ln(k+e))^{-p}\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1/2}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\frac{1}{k+1}\Big(1 + \ln\frac{k+1}{k-j-1+e}\Big)^{2p} \\
&\le c^2 E^{2p}(\ln(k+e))^{-p}\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1/2}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\frac{1}{k+1}\Big(1 - \ln\frac{k-j}{k+1}\Big)^{2p}. \qquad (48)
\end{aligned}$$

The sum $\sum_{j=0}^{k-1}(\cdot)$ is bounded because the integral

$$\int_s^{1-s} x^{-1/2}(1-x)^{-1/2}(1 - \ln(1-x))^{2p}\,dx$$

with $s := \frac{1}{2(k+1)}$ is bounded from above by a positive constant $E_p$ independent of $k$. Thus, (45) becomes

$$\begin{aligned}
\|e_k\| &\le c_1(\ln(k+e))^{-p}\|v\| + \sqrt{k}\,\varepsilon + \hat{c}_1 c^2 E^{2p} E_p(\ln(k+e))^{-p} \\
&= \big[c_1\|v\| + c_p c^2\big](\ln(k+e))^{-p} + \sqrt{k}\,\varepsilon, \qquad (49)
\end{aligned}$$
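The boundedness claim for this integral can also be checked numerically (our sketch; midpoint rule with an arbitrary resolution and $p = 1$): as $s = \frac{1}{2(k+1)} \to 0$ the value increases but settles well below a fixed constant, consistent with the existence of $E_p$.

```python
import math

def bounding_integral(k, p, n=100000):
    """Midpoint-rule value of int_s^{1-s} x^(-1/2) (1-x)^(-1/2)
    (1 - ln(1-x))^(2p) dx with s = 1/(2(k+1)), the integral bounding (46)."""
    s = 1.0 / (2 * (k + 1))
    h = (1.0 - 2 * s) / n
    total = 0.0
    for i in range(n):
        x = s + (i + 0.5) * h
        total += x ** -0.5 * (1 - x) ** -0.5 * (1 - math.log(1 - x)) ** (2 * p)
    return total * h

# The values grow with k (the domain expands) but stay uniformly bounded.
vals = [bounding_integral(k, 1.0) for k in (10, 100, 1000)]
```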

with $c_p = \hat{c}_1 E^{2p} E_p$. By the assumption $\|S\| \le 1$ (see Vainikko and Veterennikov [23], as cited in Hanke et al. [5]), we have
$$\Big\|\sum_{j=0}^{k-1}(I - SS^*)^j SS^*\Big\| \le \|I - (I - SS^*)^k\| \le 1 \qquad (50)$$
and
$$\|(I - SS^*)^j SS^*\| \le (j+1)^{-1}. \qquad (51)$$
Thus,

$$\Big\|\sum_{j=1}^{k}(I - SS^*)^{j-1}SS^*(g - g^\varepsilon)\Big\| = \Big\|\sum_{j=0}^{k-1}(I - SS^*)^{j}SS^*(g - g^\varepsilon)\Big\| \le \varepsilon \qquad (52)$$

and

$$\Big\|\sum_{j=1}^{k}(I - SS^*)^{j-1}SS^* z_{k-j}\Big\| = \Big\|\sum_{j=0}^{k-1}(I - SS^*)^{j}SS^* z_{k-j-1}\Big\| \le \sum_{j=0}^{k-1}(j+1)^{-1}\|z_{k-j-1}\|. \qquad (53)$$

Using Lemma 1, (40), (52), and (53), Equation (36) can be estimated as

$$\|Se_k\| \le c_2(k+1)^{-1/2}(\ln(k+e))^{-p}\|v\| + \varepsilon + \hat{c}_1\sum_{j=0}^{k-1}(j+1)^{-1}\|e_{k-j-1}\|\|Se_{k-j-1}\|. \qquad (54)$$

Using (47) and the induction hypothesis (37) and (38) in the third term of (54), we obtain

$$\begin{aligned}
\sum_{j=0}^{k-1}&(j+1)^{-1}\|e_{k-j-1}\|\|Se_{k-j-1}\| \le c^2\sum_{j=0}^{k-1}(j+1)^{-1}(k-j)^{-1/2}(\ln(k-j-1+e))^{-2p} \\
&= c^2(k+1)^{-1/2}(\ln(k+e))^{-p}\,(\ln(k+e))^{-p}\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\Big(\frac{\ln(k+e)}{\ln(k-j-1+e)}\Big)^{2p}\frac{1}{k+1} \\
&\le c^2 E^{2p}(k+1)^{-1/2}(\ln(k+e))^{-p}\sum_{j=0}^{k-1}\Big(\frac{j+1}{k+1}\Big)^{-1}\Big(\frac{k-j}{k+1}\Big)^{-1/2}\Big(1 - \ln\frac{k-j}{k+1}\Big)^{2p}\frac{1}{k+1}. \qquad (55)
\end{aligned}$$

The summation in (55) is bounded because, with $s := \frac{1}{2(k+1)}$, the integral

$$\int_s^{1-s} x^{-1}(1-x)^{-1/2}(1 - \ln(1-x))^{2p}\,dx \le \widetilde{E}_p,$$

for some positive constant $\widetilde{E}_p$ independent of $k$. Thus, (54) becomes

$$\|Se_k\| \le c_2(k+1)^{-1/2}(\ln(k+e))^{-p}\|v\| + \varepsilon + \hat{c}_1 c^2 E^{2p}\widetilde{E}_p(k+1)^{-1/2}(\ln(k+e))^{-p} = \big[c_2\|v\| + \tilde{c}_p c^2\big](k+1)^{-1/2}(\ln(k+e))^{-p} + \varepsilon, \qquad (56)$$

with $\tilde{c}_p = \hat{c}_1 E^{2p}\widetilde{E}_p$. Setting $c_* = \max\{c_1, c_2\}$, we have
$$\|e_k\| \le \big[c_*\|v\| + c_p c^2\big](\ln(k+e))^{-p} + \sqrt{k}\,\varepsilon \qquad (57)$$

and
$$\|Se_k\| \le \big[c_*\|v\| + \tilde{c}_p c^2\big](k+1)^{-1/2}(\ln(k+e))^{-p} + \varepsilon. \qquad (58)$$
The discrepancy principles (28) and (29) provide

$$\gamma\varepsilon \le \|g^\varepsilon - F(w_k^\varepsilon)\| \le \varepsilon + \frac{1}{1-\eta}\|Se_k\|, \qquad 0 \le k < k_*.$$

Using (58), we obtain

$$(1-\eta)(\gamma-1)\varepsilon \le \|Se_k\| \le \big[c_*\|v\| + \tilde{c}_p c^2\big](k+1)^{-1/2}(\ln(k+e))^{-p} + \varepsilon, \qquad 0 \le k < k_*. \qquad (59)$$

Setting ω = (1 − η)(γ − 1) − 1 > 0, (59) leads to

$$\varepsilon \le \frac{1}{\omega}\big[c_*\|v\| + \tilde{c}_p c^2\big](k+1)^{-1/2}(\ln(k+e))^{-p}, \qquad 0 \le k < k_*. \qquad (60)$$
Applying (60) to (57), we obtain
$$\|e_k\| \le \Big(1 + \frac{1}{\omega}\Big)\big[c_*\|v\| + \hat{c}_p c^2\big](\ln(k+e))^{-p}, \qquad 0 \le k < k_*, \qquad (61)$$

with $\hat{c}_p = \max\{c_p, \tilde{c}_p\}$.

Applying (60) to (58), we obtain
$$\|Se_k\| \le \Big(1 + \frac{1}{\omega}\Big)\big[c_*\|v\| + \hat{c}_p c^2\big](k+1)^{-1/2}(\ln(k+e))^{-p}, \qquad 0 \le k < k_*. \qquad (62)$$

We choose a sufficiently small $\|v\|$ such that $\big(1 + \frac{1}{\omega}\big)\big[c_*\|v\| + \hat{c}_p c^2\big] \le c$. Thus, the induction is completed. Using (37), we can show that

$$\|e_k\| \le c\Big(\frac{\ln k}{\ln(k+e)}\Big)^p(\ln k)^{-p} \le c(\ln k)^{-p}. \qquad (63)$$

The second assertion is obtained by using (39) as follows:

$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le \frac{2}{1-\eta}\,c(k+1)^{-1/2}\Big(\frac{\ln k}{\ln(k+e)}\Big)^p(\ln k)^{-p} \le 4c(k+1)^{-1/2}(\ln k)^{-p}. \qquad (64)$$

We are now in a position to show the logarithmic convergence rate for the iterative RK-type regularization under a logarithmic source condition, when the iteration is stopped according to the discrepancy principle (28).

Theorem 2. Under the assumptions of Theorem 1 and for $1 \le p \le 2$, one has

$$k_*^{1/2}(\ln(k_*))^p = O(1/\varepsilon), \qquad (65)$$

$$\|e_{k_*}\| = O\big((-\ln\varepsilon)^{-p}\big). \qquad (66)$$

Proof. From (60), it follows that

$$\varepsilon \le c(k+1)^{-1/2}(\ln(k+e))^{-p} \le c(k+1)^{-1/2}(\ln(k+1))^{-p}, \qquad 0 \le k < k_*, \qquad (67)$$

for some positive constant $c$. By taking $k = k_* - 1$, one obtains (65). Furthermore, Lemma (A4) in [22] applied to (65) yields $k_* = O\big((-\ln\varepsilon)^{-2p}/\varepsilon^2\big)$. For showing the second inequality, we use $e_0 = \varphi(S^*S)v$ in (30) and proceed as in the proof of Theorem 2 in [22].

4. Convergence Rate for the Modified Version of the Iterative RK-Type Regularization

The paper [13] contains a study of the modified iterative Runge-Kutta regularization method

$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k b^T \Pi^{-1}\mathbb{1}\,F'(w_k^\varepsilon)^*\big(g^\varepsilon - F(w_k^\varepsilon)\big) - \tau_k^{-1}(w_k^\varepsilon - \zeta), \qquad (68)$$

where $\zeta \in D(F)$ and $\Pi = \delta + \tau_k A\, F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)$. More precisely, it presents a detailed convergence analysis and derives Hölder-type convergence rates. The aim in this section is to show convergence rates for (68) with the natural choice $\zeta = w_0$. We consider here the logarithmic source condition (9) with $\varphi$ defined by (8), where $\|v\|$ is small enough. That is, we deal with the following method:

$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k b^T \Pi^{-1}\mathbb{1}\,F'(w_k^\varepsilon)^*\big(g^\varepsilon - F(w_k^\varepsilon)\big) - \tau_k^{-1}(w_k^\varepsilon - w_0). \qquad (69)$$

We work further under assumptions (27) and (28) with an appropriately chosen constant $\gamma$ (compare inequality (2.10) in [13]). For the sake of completeness, we recall below the convergence result adapted to the choice $\zeta = w_0$ (compare to Proposition 2.1 and Theorem 2.1 in [13]).

Theorem 3. Let $w^+$ be a solution of (1) in $B_{\rho/8}(w_0)$ with $w_0^\varepsilon = w_0$ and assume that $g^\varepsilon$ fulfills (2). If the parameters $\alpha_k$, $\forall k \in \mathbb{N}_0$, with $\sum_{k=0}^{\infty}\alpha_k < \infty$ are small enough and if the termination index $k_*$ is defined by (28), then $w_{k_*}^\varepsilon \to w^+$ as $\varepsilon \to 0$.

We state below a result on essential upper bounds for the errors in (69).

Theorem 4. Let Assumptions 1 and 2 hold for the operator $S := F'(w^+)$. Assume that problem (1) has a solution $w^+$ in $B_{\rho/8}(w_0)$ and $g^\varepsilon$ fulfills (2). Assume that the Fréchet derivative of $F$ is scaled such that $\|F'(w)\| \le 1$ for $w \in B_{\rho/8}(w_0)$ and that the parameters $\alpha_k$, $\forall k \in \mathbb{N}_0$, with $\sum_{k=0}^{\infty}\alpha_k < \infty$ are small enough. Furthermore, assume that the source condition (9) is fulfilled and that the modified RK-type regularization method (69) is stopped according to (28). If $\|v\|$ is sufficiently small, then there exists a constant $c$ depending only on $p$ and $\|v\|$ such that for any $0 \le k < k_*$,

$$\|w^+ - w_k^\varepsilon\| \le c(\ln k)^{-p} \qquad (70)$$

and

$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le 4c(k+1)^{-1/2}(\ln k)^{-p}. \qquad (71)$$

Proof. First, we deduce an explicit formula for $e_k = w^+ - w_k^\varepsilon$. We proceed similarly to the proof of Theorem 1, but this time we need to take into account the additional term $-\alpha_k(w_k^\varepsilon - w_0)$. Thus, the proof steps are as follows.

I. We establish an explicit formula for the error $e_k$:

$$\begin{aligned}
e_{k+1} &= e_k + \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*(F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k(w_k^\varepsilon - w_0) \\
&= (1-\alpha_k)e_k + \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*(F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k(w^+ - w_0) \\
&= (1-\alpha_k)(I - S^*S)e_k + (1-\alpha_k)S^*[F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] \\
&\quad + (1-\alpha_k)S^*(g - g^\varepsilon) + (1-\alpha_k)S^*(g^\varepsilon - F(w_k^\varepsilon)) + \alpha_k(w^+ - w_0) \\
&\quad - \tau_k f(-\tau_k S_k^* S_k)F'(w_k^\varepsilon)^*(g^\varepsilon - F(w_k^\varepsilon)) \\
&= (1-\alpha_k)(I - S^*S)e_k + (1-\alpha_k)S^* q_k + (1-\alpha_k)S^*(g - g^\varepsilon) \\
&\quad + S^*\big(I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\big)(g^\varepsilon - F(w_k^\varepsilon)) \\
&\quad + \alpha_k S^*(F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k(w^+ - w_0), \qquad (72)
\end{aligned}$$

where the last equality follows from (32). We denote $q_k = F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)$. Thus, (72) can be rewritten compactly as

$$e_{k+1} = (1-\alpha_k)(I - S^*S)e_k + (1-\alpha_k)S^*(g - g^\varepsilon) + S^* z_k + \alpha_k(w^+ - w_0) \qquad (73)$$

with
$$z_k = (1-\alpha_k)q_k + \big[(1-\alpha_k)I - \tau_k R_{w_k^\varepsilon}^* f(-\tau_k S_k S_k^*)\big](g^\varepsilon - F(w_k^\varepsilon)). \qquad (74)$$
Therefore, we obtain the following closed formula:

$$\begin{aligned}
e_k &= \Bigg[\prod_{j=0}^{k-1}(1-\alpha_j)\,(I - S^*S)^k + \sum_{j=0}^{k-1}\alpha_{k-j-1}(I - S^*S)^j\prod_{l=1}^{j}(1-\alpha_{k-l})\Bigg]e_0 \\
&\quad + \Bigg[\sum_{j=1}^{k}(I - S^*S)^{j-1}\prod_{l=1}^{j}(1-\alpha_{k-l})\Bigg]S^*(g - g^\varepsilon) \\
&\quad + \sum_{j=0}^{k-1}\prod_{l=k-j}^{k-1}(1-\alpha_l)\,(I - S^*S)^j S^* z_{k-j-1}. \qquad (75)
\end{aligned}$$

From Proposition 1, (34), Lemma 2, (39), and (74), it follows that

$$\|z_k\| \le \hat{c}\,\|e_k\|\|Se_k\|,$$

for any $0 \le k \le k_*$, where $\hat{c}$ is a positive constant.

II. Following the technical steps of the proof of Theorem 1 in [22], one can similarly show by induction that there is a positive number $c$ such that the following inequalities hold for any $0 \le k \le k_*$:
$$\|e_k\| \le c(\ln(k+e))^{-p}, \qquad (76)$$
$$\|Se_k\| \le c(k+1)^{-1/2}(\ln(k+e))^{-p}. \qquad (77)$$
Then, one can eventually obtain (70) and (71) as in the mentioned proof. Note that Theorem 1 in [22] is based on Proposition 1 in [22], which requires small enough parameters $\alpha_k$ such that $\sum_{k=0}^{\infty}\alpha_k < \infty$ and $\alpha_{n-k+1} \le \big(\frac{\ln(k+e)}{\ln(k+1+e)}\big)^p$ for all $n \ge k \ge 1$ (compare to (16) on page 4 in [22]). Since the smallest value of $\frac{\ln(k+e)}{\ln(k+1+e)}$ is about 0.84 (when $k = 1$), one can clearly find $\alpha_k \in (0,1)$ small enough so as to satisfy the imposed inequalities, e.g., a harmonic-type sequence such as $\alpha_k = 1/(k+2)^r$ for some $r > 1$.

One can show convergence rates for the modified Runge-Kutta regularization method, as done in the previous section for the unmodified version.

Theorem 5. Under the assumptions of Theorem 4 and for $1 \le p \le 2$, one has

$$k_*^{1/2}(\ln(k_*))^p = O(1/\varepsilon),$$

$$\|e_{k_*}\| = O\big((-\ln\varepsilon)^{-p}\big).$$

5. Numerical Example

The purpose of the following numerical example is to verify the error estimates shown above. Define the nonlinear operator $F : L^2[0,1] \to L^2[0,1]$ as

$$[F(w)](s) = \exp\left(\int_0^1 k(s,t)\,w(t)\,dt\right), \qquad (78)$$

with the kernel function

$$k(s,t) = \begin{cases} s(1-t) & \text{if } s < t, \\ t(1-s) & \text{if } t \le s. \end{cases} \qquad (79)$$

The noisy data are given by $g^\varepsilon(s) = \exp(\sin(\pi s)/\pi^2) + \varepsilon\cos(100s)$, $s \in [0,1]$, and the exact solution is $w_*(t) = \sin(\pi t)$. In order to demonstrate the results in Theorems 1 and 4, we consider the Landweber, Levenberg–Marquardt (LM), Lobatto IIIC, and Radau IIA methods; see Table 1 for the Butcher tableaux. The implementation in this section is the same as the one reported in [10,13]. The number of basis functions is 65 and the number of equidistant grid points is 150, while the parameter $\tau_k$ is the harmonic sequence term $(k+1)^{1.1}$. As expected, the results in Figure 2 show that the curve of $\ln\|w^+ - w_k^\varepsilon\|$ lies below a straight line with slope $-p$, as suggested by (30) in Theorem 1 and (70) in Theorem 4.
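A much-simplified stand-in for this experiment can be sketched as follows (our code, not the basis-function implementation of [10,13]: a midpoint-rule discretization, the Landweber member of the family with a constant ad hoc relaxation $\tau = 10$, starting guess $w_0 = 0$, and $\gamma = 2$), with the iteration stopped by the discrepancy principle (28):

```python
import math

m = 40
h = 1.0 / m
s = [(i + 0.5) * h for i in range(m)]            # midpoint grid on [0, 1]

def kern(x, t):
    # kernel (79)
    return x * (1 - t) if x < t else t * (1 - x)

K = [[kern(s[i], s[j]) for j in range(m)] for i in range(m)]

def F(w):
    # [F(w)](s_i) = exp(int_0^1 k(s_i, t) w(t) dt), midpoint rule, as in (78)
    return [math.exp(h * sum(K[i][j] * w[j] for j in range(m))) for i in range(m)]

def F_adj(w, r):
    # discrete adjoint: (F'(w)^* r)(t_j) = int k(s, t_j) [F(w)](s) r(s) ds
    Fw = F(w)
    return [h * sum(K[i][j] * Fw[i] * r[i] for i in range(m)) for j in range(m)]

def norm(v):
    return math.sqrt(h * sum(x * x for x in v))

eps = 1e-3
g = [math.exp(math.sin(math.pi * x) / math.pi ** 2) for x in s]
g_eps = [g[i] + eps * math.cos(100 * s[i]) for i in range(m)]
delta = norm([g_eps[i] - g[i] for i in range(m)])  # measured noise level
w_exact = [math.sin(math.pi * x) for x in s]

# Landweber member of (5) with discrepancy-principle stopping (28);
# tau = 10 is an ad hoc constant satisfying tau*||F'||^2 << 1 here.
tau, gamma = 10.0, 2.0
w = [0.0] * m
k_star = 0
while k_star < 2000:
    Fw = F(w)
    r = [g_eps[i] - Fw[i] for i in range(m)]
    if norm(r) <= gamma * delta:
        break
    step = F_adj(w, r)
    w = [w[j] + tau * step[j] for j in range(m)]
    k_star += 1

err = norm([w[j] - w_exact[j] for j in range(m)])
```

Even this crude version drives the residual down to the noise level $\gamma\delta$ in well under a thousand steps and reduces the reconstruction error to a small fraction of $\|w_*\|$.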

Table 1. Butcher tableaux for (a) explicit Euler or Landweber, (b) implicit Euler or Levenberg–Marquardt, (c) Lobatto IIIC, and (d) Radau IIA methods.

(a) Explicit Euler:        (b) Implicit Euler:
  0 | 0                      1 | 1
    | 1                        | 1

(c) Lobatto IIIC:          (d) Radau IIA:
  0 | 1/2  -1/2              1/3 | 5/12  -1/12
  1 | 1/2   1/2              1   | 3/4    1/4
    | 1/2   1/2                  | 3/4    1/4



Figure 2. The plot of $\ln\|w^+ - w_k^\varepsilon\|$ versus $\ln(\ln(k))$ for (a) one-step RK methods and (b) two-step methods with $\varepsilon = 10^{-3}$. (c,d) Results with $\varepsilon = 10^{-4}$. The parameters $\gamma$ are (a) 1.1, (b) 25, (c) 1.1, and (d) 100. For (a–d), a harmonic sequence $\tau_k = (k+1)^{1.1}$ and $w_0 = \varepsilon w_*$ are used.

6. Summary and Outlook

Up to now, the logarithmic convergence rate under the logarithmic source condition has only been investigated for particular examples, namely, the Levenberg–Marquardt method (Böckmann et al. [21]) and the modified Landweber method (Pornsawad et al. [22]). Here, we extended the results to the whole family of Runge-Kutta-type methods with and without modification. For the future, it remains open to prove the optimal convergence rate under the Hölder source condition for the whole family without modification.

Author Contributions: The authors P.P., E.R., and C.B. carried out this research work jointly and drafted the manuscript together. All authors validated the article and read the final version. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Acknowledgments: The authors are grateful to the referees for the interesting comments and suggestions that helped to improve the quality of the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Morozov, V.A. On the solution of functional equations by the method of regularization. Sov. Math. Dokl. 1966, 7, 414–417.
2. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V. H. Winston and Sons: New York, NY, USA, 1977.
3. Bakushinsky, A.B.; Kokurin, M.Y. Iterative Methods for Approximate Solution of Inverse Problems; Springer: Berlin/Heidelberg, Germany, 2004.
4. Kaltenbacher, B.; Neubauer, A.; Scherzer, O. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; Walter de Gruyter: Berlin, Germany, 2008.
5. Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37.
6. Hochbruck, M.; Hönig, M. On the convergence of a regularizing Levenberg-Marquardt scheme for nonlinear ill-posed problems. Numer. Math. 2010, 115, 71–79.

7. Jin, Q. On a regularized Levenberg-Marquardt method for solving nonlinear inverse problems. Numer. Math. 2010, 115, 229–259.
8. Hanke, M. The regularizing Levenberg-Marquardt scheme is of optimal order. J. Integral Equ. Appl. 2010, 22, 259–283.
9. Tautenhahn, U. On the asymptotical regularization of nonlinear ill-posed problems. Inverse Probl. 1994, 10, 1405–1418.
10. Böckmann, C.; Pornsawad, P. Iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Inverse Probl. 2008, 24, 025002.
11. Pornsawad, P.; Böckmann, C. Convergence rate analysis of the first-stage Runge-Kutta-type regularizations. Inverse Probl. 2010, 26, 035005.
12. Scherzer, O. A modified Landweber iteration for solving parameter estimation problems. Appl. Math. Optim. 1998, 38, 45–68.
13. Pornsawad, P.; Böckmann, C. Modified iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2016, 37, 1562–1589.
14. Pornsawad, P.; Sapsakul, N.; Böckmann, C. A modified asymptotical regularization of nonlinear ill-posed problems. Mathematics 2019, 7, 419.
15. Zhang, Y.; Hofmann, B. On the second order asymptotical regularization of linear ill-posed inverse problems. Appl. Anal. 2018, 1–26.
16. Hohage, T. Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and an inverse scattering problem. Inverse Probl. 1997, 13, 1279–1299.
17. Deuflhard, P.; Engl, W.; Scherzer, O. A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Inverse Probl. 1998, 14, 1081–1106.
18. Hohage, T. Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optim. 2000, 21, 439–464.
19. Pereverzyev, S.S.; Pinnau, R.; Siedow, N. Regularized fixed-point iterations for nonlinear inverse problems. Inverse Probl. 2005, 22, 1–22.
20. Mahale, P.; Nair, M.T. A simplified generalized Gauss-Newton method for nonlinear ill-posed problems. Math. Comput. 2009, 78, 171–184.
21. Böckmann, C.; Kammanee, A.; Braunß, A. Logarithmic convergence rate of Levenberg–Marquardt method with application to an inverse potential problem. J. Inverse Ill-Posed Probl. 2011, 19, 345–367.
22. Pornsawad, P.; Sungcharoen, P.; Böckmann, C. Convergence rate of the modified Landweber method for solving inverse potential problems. Mathematics 2020, 8, 608.
23. Vainikko, G.; Veterennikov, A.Y. Iteration Procedures in Ill-Posed Problems; Nauka: Moscow, Russia, 1986.