
A mollifier approach to regularize a Cauchy problem for the inhomogeneous Helmholtz equation

Pierre Maréchal∗, Walter Cedric Simo Tao Lee†, Faouzi Triki‡

May 7, 2021

Abstract

The Cauchy problem for the inhomogeneous Helmholtz equation with non-uniform refraction index is considered. The ill-posedness of this problem is tackled by means of the variational form of mollification. This approach is proved to be consistent, and the proposed numerical simulations are quite promising.

1 Introduction

Let V be a C^{3,1} bounded domain of ℝ³ with boundary ∂V. For x′ ∈ ∂V, we denote by ν(x′) the unit normal vector to ∂V pointing outward V. Let Γ be a nonempty open subset of ∂V. We consider the Cauchy problem for the inhomogeneous Helmholtz equation

  Δu(x) + k² η(x) u(x) = S(x),  x ∈ V,  (1)
  ∂_ν u(x′) = f(x′),  x′ ∈ Γ,  (2)
  u(x′) = g(x′),  x′ ∈ Γ.  (3)

Here, u = u(x) is the unknown amplitude of the incident field, η ∈ L^∞(V) is the refraction index, k is a positive wave number, S ∈ L²(V) is the source, and f ∈ L²(Γ) and g ∈ L²(Γ) are empirically known boundary conditions.

arXiv:2105.02665v1 [math.AP] 6 May 2021

∗Institut de Mathématiques, Université Paul Sabatier, 31062 Toulouse, France. Email: [email protected]
†Institut de Mathématiques, Université Paul Sabatier, 31062 Toulouse, France. Email: [email protected]
‡Laboratoire Jean Kuntzmann, UMR CNRS 5224, Université Grenoble-Alpes, 700 Avenue Centrale, 38401 Saint-Martin-d'Hères, France. Email: [email protected]. This work is supported in part by the grant ANR-17-CE40-0029 of the French National Research Agency ANR (project MultiOnde).

The Helmholtz equation arises in a large range of applications related to the propagation of acoustic and electromagnetic waves in the time-harmonic regime. In this paper, we consider the inverse problem of reconstructing an acoustic or electromagnetic field from partial data given on an open part of the boundary of a given domain. This problem, called the Cauchy problem for the Helmholtz equation, is known to be ill-posed if Γ does not occupy the whole boundary ∂V [13, 9, 18, 1]. In [17], the above system was considered in the particular case where the refraction index is constant. However, in practice, the emitted wave travels through an environment in which the refraction index fails to be constant, and we need to investigate the corresponding problem.

We are facing a linear inverse problem. Our aim is to derive a stable approximation method for this problem, one that yields a stable and computationally amenable scheme. Our main focus will be on mollification, in the variational sense of the term, which turns out to be both flexible and numerically efficient. Mollifiers were introduced in partial differential equations by K.O. Friedrichs [27, 11]. The term mollification has been used in the field of inverse problems since the eighties. In the original works on the subject, mollifiers were used to smooth the data prior to inversion. In his book, D.A. Murio [23] provides an overview of this approach and its application to some classical inverse problems. Let us also mention the paper by D.N. Hao [14], which provides a wide framework for the mollification approach in this initial meaning. In [21], A.K. Louis and P. Maass proposed another approach, based on inner product duality. This approach has been subsequently referred to as the method of approximate inverses [24]. Approximate inverses are particularly well adapted to problems in which the adjoint equation has explicit solutions. A third approach, based on a variational formulation, appeared in the same period. In [20], A. Lannes et al. gave such a formulation while studying the problems of Fourier extrapolation and deconvolution. This variational formulation was not studied further until the papers by N. Alibaud et al. [2] and by X. Bonnefond and P. Maréchal [7], where its convergence properties were considered. A definite advantage of the variational approach to mollification lies in the fact that it offers a quite flexible framework, just like Tikhonov regularization, while being more respectful of the initial model equation than the latter.

The paper is organized as follows. In Section 2, we introduce the linear operator associated with the Cauchy problem for the inhomogeneous Helmholtz equation.
We propose a regularized variational formulation of the ill-posed problem based on mollification. Under an additional assumption on the targeted solution, we show in Theorem 7 that the unique minimizer converges strongly to the minimum-norm least squares solution. Section 3 is devoted to numerical experiments. We consider two numerical examples in order to illustrate the efficiency of our regularization approach.

2 Functional setting and regularization

We shall work in a functional space which enables us to interpret the ideal (exact) data as the image of u under a bounded linear operator. We observe that:

(I) the Laplacian Δ is a bounded operator from H²(V) to L²(V), so that, since η ∈ L^∞(V), the operator T₁u = (Δ + k²η)u is also bounded from H²(V) to L²(V) [12];

(II) ∇u ∈ (H¹(V))³, so that u ↦ ∂_ν u ∈ H^{1/2}(Γ) is a continuous linear operator, which implies in turn that the operator T₂u = ∂_ν u|_Γ is compact from H²(V) to L²(Γ);

(III) the trace operator u ↦ u|_{∂V} maps H²(V) to H^{3/2}(∂V) continuously, so that the operator T₃u = u|_Γ is compact from H²(V) to L²(Γ).

Therefore, a natural choice for our workspace is H²(V). We can then write our system in the form

  T u = v,  (4)

in which

  T : H²(V) → L²(V) × L²(Γ) × L²(Γ)
      u ↦ ( Δu + k²ηu, ∂_ν u|_Γ, u|_Γ ),

and v = (S, f, g) ∈ G := L²(V) × L²(Γ) × L²(Γ). We first show that T is injective.

Proposition 1. The bounded linear map T defined by (4) is injective.

Proof. For a proof of this classical result, we refer to the book [19]. More recent proofs based on Carleman estimates can be found in [18, 9, 1, 26]. The principal idea is to show that a part of Γ is non-characteristic with respect to the Helmholtz operator, which in turn leads to the existence of a small neighborhood of that part of the boundary in V where the solution is identically zero. Since η ∈ L^∞(V), the Helmholtz operator T₁ possesses the unique continuation property in V, and hence the solution is identically zero in the whole domain V, which completes the proof.

We are now going to set up our approach to the regularization of the problem. We consider the mollifier

  ϕ_α(x) = (1/α³) ϕ(x/α),  α ∈ (0,1],  x ∈ ℝ³,

in which ϕ is an integrable function such that

(1) supp ϕ ⊂ B_r := { x ∈ ℝ³ : ‖x‖ ≤ r } for some positive r,

(2) ∫ ϕ(x) dx = 1.

Desirable additional properties of ϕ are, as usual, nonnegativity, isotropy, smoothness, and radial decrease.

We denote by C_α the operator of convolution by ϕ_α: for every u ∈ L²(ℝ³), C_α u := ϕ_α ∗ u. Our regularization principle will control smoothness by means of C_α, and α will play the role of the regularization parameter. One difficulty lies in the fact that convolving u by ϕ_α entails extrapolating u from V to the larger set V + αB_r. Zero padding is obviously forbidden here, since we wish to preserve H² regularity. It is then necessary to introduce an extension operator that preserves the properties of the solution on V.

For s ∈ [0,1), we denote by H₀^{2+s}(ℝ³) the set of functions in H^{2+s}(ℝ³) having compact support. Our objective is to derive an extension operator E : H^{2+s}(V) → H₀^{2+s}(ℝ³) that, in addition to satisfying Eu|_V = u for all u ∈ H^{2+s}(V), is bounded and invertible. Note that there are many extension operators satisfying these properties. We next provide a complete characterization of a useful extension operator suited to C^{3,1} smooth bounded domains. For other extension operators on Sobolev spaces under weaker regularity assumptions on the domain V, see, for example, [10, 22, 8, 25].

For ε ∈ (0,1) small enough, define the tubular domains

  V_ε^± := { x′ ± tν(x′) : (x′, t) ∈ ∂V × (0, ε) },  and  V_ε := V ∪ ∂V ∪ V_ε^+.  (5)

We first notice that V_ε^− ⊂ V ⊂ V_ε for all ε ∈ (0,1). Due to the regularity of ∂V, the function defined by

  φ(x′, t) = x′ − tν(x′),  (6)

is a C^{2,1}-diffeomorphism from ∂V × (0, ε) onto V_ε^− for ε small enough.

Let u be fixed in H²(V). The first step is to construct an extension u_ε of u to H²(V_{ε/2}). For α, β ∈ ℝ, set

  ũ(x′ + tν(x′)) = α u(x′ − tν(x′)) + β u(x′ − 2tν(x′)),  for (x′, t) ∈ ∂V × (0, ε/2).  (7)

Since ν is a C² vector field, ũ lies in H²(V_{ε/2}^+), and verifies

  ũ(x′) = (α + β) u(x′),  and  ∂_ν ũ(x′) = (−α − 2β) ∂_ν u(x′),  x′ ∈ ∂V.  (8)

Let uε be defined by

  u_ε(x) = ũ(x) if x ∈ V_{ε/2}^+,  and  u_ε(x) = u(x) if x ∈ V.  (9)

By construction, we have u_ε ∈ H²(V) ∪ H²(V_{ε/2}^+). Considering the traces (8) and taking α = 3 and β = −2 implies that u_ε and its first derivatives have no jumps across ∂V, and thus u_ε ∈ H²(V_{ε/2}).

Let χ_ε ∈ C₀^∞(ℝ³) be a cut-off function satisfying

  χ_ε = 1 on V_{ε/8},  and  χ_ε = 0 on ℝ³ \ V_{3ε/8}.  (10)

Now we are ready to introduce the operator E. For u ∈ H²(V), define

  Eu = χ_ε u_ε,

where u_ε and χ_ε are respectively given in (9) and (10).

Proposition 2. Let s ∈ [0,1) be fixed. The extension operator E : H^{2+s}(V) → H₀^{2+s}(ℝ³) is bounded, invertible, and satisfies

  ‖u‖_{H^{2+s}(V)} ≤ ‖Eu‖_{H^{2+s}(ℝ³)} ≤ C_s ‖u‖_{H^{2+s}(V)},  (11)

where C_s > 1 is a constant that depends only on s, ε and V. In addition, Supp(Eu) ⊂ V_ε.

Proof. The left-hand inequality is straightforward. The functions ψ_j : V_{ε/j}^+ → V_{ε/j}^−, j = 1, 2, defined by

  ψ_j(x′ + tν(x′)) = x′ − j t ν(x′),  (x′, t) ∈ ∂V × (0, ε),  j = 1, 2,  (12)

are C^{2,1}-diffeomorphisms. Forward calculations give

  ∫_{ℝ³\V} ( |Eu|² + |∇Eu|² + |∇²Eu|² ) dx
    ≤ ‖χ_ε‖²_{C^{2,1}(ℝ³)} Σ_{j=1}^{2} ∫_{V_{ε/2}^+} ( |ũ|² + |∇ũ|² + |∇²ũ|² ) dx
    ≤ κ ‖χ_ε‖²_{C^{2,1}(ℝ³)} Σ_{j=1}^{2} ( ‖ψ_j‖²_{C^{2,1}(V_{ε/j}^+)} + 1 ) ‖Dψ_j^{−1}‖²_{C^1(V_{ε/j}^−)} ∫_{V_ε^−} ( |u|² + |∇u|² + |∇²u|² ) dx,

where Dψ_j^{−1} is the gradient of the vector field ψ_j^{−1}, and κ > 0 is a universal constant. Therefore

  ‖Eu‖_{H²(ℝ³)} ≤ C ‖u‖_{H²(V)},

where

  C² = κ ‖χ_ε‖²_{C^{2,1}(ℝ³)} Σ_{j=1}^{2} ( ‖ψ_j‖²_{C^{2,1}(V_{ε/j}^+)} + 1 ) ‖Dψ_j^{−1}‖²_{C^1(V_{ε/j}^−)}.

Similarly, tedious calculations lead to the following estimate of the seminorm

  |D_β Eu|²_{s,ℝ³} = ∫_{ℝ³} ∫_{ℝ³} |D_β Eu(x) − D_β Eu(y)|² / ‖x − y‖^{3+2s} dx dy ≤ C̃_s² ( |D_β u|²_{s,V} + ‖u‖²_{H²(V)} ),

for all β ∈ ℕ³ satisfying |β| = 2, where C̃_s > 0 depends on s, ε and V. Taking C_s = max(1, C, C̃_s) completes the proof of the proposition.

The extension operator E thus defined opens the way to the following variational formulation of mollification:

  Minimize J_α(u, v, T) := ‖v − T u‖²_G + ‖(I − C_α)Eu‖²_{H²(ℝ³)}  (P)
  subject to u ∈ H²(V).

Our aim is now to prove

1. the well-posedness of the above variational problem, that is, that the solution uα depends continuously on the data v;

2. the consistency of the regularization, that is, that u_α converges to T†v in some sense as α ↓ 0, where T† is the pseudo-inverse of T.

Lemma 3. Let K, s > 0. Let ϕ ∈ L¹(ℝ³) be such that

  ϕ̂(0) = 1  and  1 − ϕ̂(ξ) ∼ K‖ξ‖^s as ξ → 0.

Assume in addition that ϕ̂(ξ) ≠ 1 for every ξ ∈ ℝ³ \ {0}. Define

  m_α = min_{‖ξ‖=1} |1 − ϕ̂(αξ)|²  and  M_α = max_{‖ξ‖=1} |1 − ϕ̂(αξ)|².

Then,

(i) for every α > 0, 0 < m_α ≤ M_α ≤ (1 + ‖ϕ‖₁)²;

(ii) sup_{α>0} M_α/m_α < ∞ and M_α → 0 as α ↓ 0;

(iii) there exist ν◦, µ◦ > 0 such that, for every α ∈ (0,1] and every ξ ∈ ℝ³ \ {0},

  ν◦ ( ‖ξ‖^{2s} 1_{B_{1/α}}(ξ) + (1/M_α) 1_{B_{1/α}^c}(ξ) ) ≤ |1 − ϕ̂(αξ)|² / |1 − ϕ̂(αξ/‖ξ‖)|² ≤ µ◦ ‖ξ‖^{2s}.

Proof. See [2, Lemma 12].
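As a concrete illustration (ours, not from the paper), consider the Gaussian kernel used later in the numerical experiments. With the Fourier convention ϕ̂(ξ) = ∫ ϕ(x) e^{−2iπ⟨x,ξ⟩} dx, a standard Gaussian has ϕ̂(ξ) = e^{−2π²‖ξ‖²}, so that 1 − ϕ̂(ξ) ∼ 2π²‖ξ‖² as ξ → 0; that is, the hypothesis of Lemma 3 holds with s = 2 and K = 2π². (The Gaussian does not have compact support, so it violates condition (1) on mollifiers, but Lemma 3 itself only requires ϕ ∈ L¹.) Radial symmetry then makes |1 − ϕ̂(αξ)| constant on the unit sphere:

```latex
m_\alpha \;=\; M_\alpha \;=\; \bigl(1 - e^{-2\pi^2\alpha^2}\bigr)^2,
\qquad
\sup_{\alpha>0}\frac{M_\alpha}{m_\alpha} \;=\; 1,
\qquad
M_\alpha \;\sim\; 4\pi^4\alpha^4 \;\longrightarrow\; 0
\quad\text{as } \alpha \downarrow 0,
```

in agreement with items (i) and (ii), since here M_α ≤ 1 ≤ (1 + ‖ϕ‖₁)². For Theorem 7 as stated, s ∈ (0,1) is required; Remark 8 lifts this restriction.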

Corollary 4. Let ϕ, m_α, M_α, ν◦, µ◦ be as in Lemma 3, and let C_α be the operator of convolution with ϕ_α. For every u ∈ L²(ℝ³),

  ‖(I − C_α)u‖²_{L²(ℝ³)} ≥ ν◦ m_α ∫_{ℝ³} ( ‖ξ‖^{2s} 1_{B_{1/α}}(ξ) + (1/M_α) 1_{B_{1/α}^c}(ξ) ) |û(ξ)|² dξ.  (13)

Proof. From Lemma 3, we have:

  ‖(I − C_α)u‖²_{L²(ℝ³)} = ∫_{ℝ³} |1 − ϕ̂(αξ)|² |û(ξ)|² dξ
    = ∫_{ℝ³} |1 − ϕ̂(αξ/‖ξ‖)|² ( |1 − ϕ̂(αξ)|² / |1 − ϕ̂(αξ/‖ξ‖)|² ) |û(ξ)|² dξ
    ≥ m_α ν◦ ∫_{ℝ³} ( ‖ξ‖^{2s} 1_{B_{1/α}}(ξ) + (1/M_α) 1_{B_{1/α}^c}(ξ) ) |û(ξ)|² dξ.

Lemma 5. Let C_α be the operator of convolution with ϕ_α, where ϕ is as in Lemma 3. There exists a positive constant A◦, depending on V only, such that for every s ≥ 0 and every u ∈ H^s(V),

  A◦ m_α ‖u‖²_{H^s(ℝ³)} ≤ ‖(I − C_α)u‖²_{H^s(ℝ³)}.  (14)

Proof. We start with the case s = 0. From (13), we have:

  ‖(I − C_α)u‖²_{L²(ℝ³)} ≥ m_α ν◦ ∫_{ℝ³} ( ‖ξ‖^{2s} 1_{B_{1/α}\B₁}(ξ) + (1/M_α) 1_{B_{1/α}^c}(ξ) ) |û(ξ)|² dξ
    ≥ m_α ν◦ (1 + ‖ϕ‖₁)^{−2} ∫_{ℝ³} 1_{B₁^c}(ξ) |û(ξ)|² dξ
    ≥ m_α ν◦ (1 + ‖ϕ‖₁)^{−2} ‖T_{B₁^c} u‖²_{L²(ℝ³)},

in which T_{B₁^c} := 1_{B₁^c} F is the operator of Fourier truncation to B₁^c. Thus

  A◦ = ν◦ (1 + ‖ϕ‖₁)^{−2} ‖T_{B₁^c}^{−1}‖^{−2}

is suitable. Notice that, for s = 0, Parseval's identity enables us to rewrite (14) in the form

  A◦ m_α ‖û‖²_{L²(ℝ³)} ≤ ‖(1 − ϕ̂(α·)) û‖²_{L²(ℝ³)},  (15)

in which û = Fu and F denotes the Fourier–Plancherel operator. Now, let s > 0 and assume that u ∈ H^s(V). We readily see that F^{−1}[(1 + ‖ξ‖²)^{s/2} û(ξ)] belongs to L²(ℝ³). Applying (15) to the latter function yields

  A◦ m_α ‖(1 + ‖·‖²)^{s/2} û‖²_{L²(ℝ³)} ≤ ‖(1 − ϕ̂(α·))(1 + ‖·‖²)^{s/2} û‖²_{L²(ℝ³)},

and (14) follows.

Lemma 6. Let C_α be as in the previous lemma, s > 0, and s◦ ≥ 0. If u ∈ H^{s◦+s}(ℝ³), then

  ‖(I − C_α)u‖²_{H^{s◦}(ℝ³)} ≤ µ◦ M_α ‖u‖²_{H^{s◦+s}(ℝ³)},

where µ◦ is the positive constant provided by Lemma 3.

Proof. We have:

  ‖(I − C_α)u‖²_{H^{s◦}(ℝ³)} = ∫_{ℝ³} |1 − ϕ̂(αξ/‖ξ‖)|² ( |1 − ϕ̂(αξ)|² / |1 − ϕ̂(αξ/‖ξ‖)|² ) (1 + ‖ξ‖²)^{s◦} |û(ξ)|² dξ
    ≤ µ◦ M_α ∫_{ℝ³} ‖ξ‖^{2s} (1 + ‖ξ‖²)^{s◦} |û(ξ)|² dξ
    ≤ µ◦ M_α ∫_{ℝ³} (1 + ‖ξ‖²)^{s◦+s} |û(ξ)|² dξ
    = µ◦ M_α ‖u‖²_{H^{s◦+s}(ℝ³)},

in which the first inequality stems from Lemma 3.

Theorem 7. Assume u† ∈ H^{2+s}(ℝ³), with s ∈ (0,1), and v = T u†|_V, so that u†|_V = T†v. Let u_α be the solution to Problem (P). Then u_α → T†v in H²(V) as α ↓ 0.

Proof. We follow the steps of the proof of Theorem 11 in [2], which we adapt to the present context. The main differences lie in that the regularization term uses a Sobolev norm and in that we make use of the extension operator E in order to cope with boundary constraints. In Step 1, we show that the family (u_α) is bounded in H²(V), thus weakly compact; in Step 2, we establish the weak convergence of u_α to T†v; and finally, in Step 3, we use a compactness argument to show that the convergence is, in fact, strong.

Step 1. By construction, we have:

  ‖(I − C_α)Eu_α‖²_{H²(ℝ³)} ≤ J_α(u_α, v, T) ≤ J_α(u†|_V, v, T) = ‖(I − C_α)Eu†|_V‖²_{H²(ℝ³)}.  (16)

Using Lemma 5 and Lemma 6, we obtain:

  ‖u_α‖²_{H²(V)} ≤ ‖Eu_α‖²_{H²(ℝ³)} ≤ (µ◦/A◦)(M_α/m_α) ‖Eu†|_V‖²_{H^{2+s}(ℝ³)},

and Lemma 3(ii) then shows that the set (u_α)_{α∈(0,1]} is bounded in H²(V), and is therefore weakly compact.

Step 2. Denote by ‖·‖_G the natural norm of the Hilbert space G. Now, let (α_n)_n be a sequence converging to 0. There then exists a subsequence (u_{α_{n_k}})_k which converges weakly in H²(V). Let ũ be the weak limit of this subsequence. We then have:

  ‖v − T u_{α_{n_k}}‖²_G ≤ J_{α_{n_k}}(u_{α_{n_k}}, v, T)
    ≤ J_{α_{n_k}}(u†|_V, v, T)
    = ‖(I − C_{α_{n_k}})Eu†|_V‖²_{H²(ℝ³)}
    ≤ µ◦ M_{α_{n_k}} ‖Eu†|_V‖²_{H^{2+s}(ℝ³)}.

Since M_{α_{n_k}} goes to zero as k → ∞, so does ‖v − T u_{α_{n_k}}‖²_G. By weak lower semicontinuity of the norm on the Hilbert space G, we see that the weak limit ũ satisfies:

  ‖v − T ũ‖²_G ≤ lim inf_{k→∞} ‖v − T u_{α_{n_k}}‖²_G = lim_{k→∞} ‖v − T u_{α_{n_k}}‖²_G = 0.

Therefore T ũ = v, and the injectivity of T (Proposition 1) implies that ũ = u†|_V.

Step 3. We will show that, for every multi-index β ∈ ℕ³ such that |β| ≤ 2,

  D_β Eu_α → D_β Eu†|_V in L²(ℝ³) as α ↓ 0,  (17)

which will imply the announced strong convergence. Observe first that, by the previous step and the continuity of E,

  Eu_α ⇀ Eu†|_V in H²(ℝ³) as α ↓ 0,

so that, for every multi-index β ∈ ℕ³ such that |β| ≤ 2,

  D_β Eu_α ⇀ D_β Eu†|_V in L²(ℝ³) as α ↓ 0.  (18)

Fix β ∈ { β′ ∈ ℕ³ : |β′| ≤ 2 } and let (α_n)_n be a sequence converging to 0, as in the previous step. For convenience, let u_n := u_{α_n}, C_n := C_{α_n}, M_n := M_{α_n} and m_n := m_{α_n}. Since Eu_n has compact support, it is obvious that

  lim_{R→∞} sup_n ∫_{‖x‖≥R} |D_β Eu_n(x)|² dx = 0.  (19)

Now, for every h ∈ ℝ³ and every function u, let T_h u denote the translated function x ↦ u(x − h). We proceed to show that

  sup_n ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)} → 0 as ‖h‖ → 0.  (20)

Together with (18) and (19), this will establish (17) via the Fréchet–Kolmogorov theorem (see e.g. [16, Theorem 3.8, page 175]). We have:

  ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)} = ‖F(T_h D_β Eu_n − D_β Eu_n)‖²_{L²(ℝ³)}
    = ∫_{ℝ³} |e^{−2iπ⟨h,ξ⟩} − 1|² |F D_β Eu_n(ξ)|² dξ
    = I₁ + I₂,

in which

  I₁ := ∫_{‖ξ‖≤1/α_n} |e^{−2iπ⟨h,ξ⟩} − 1|² |F D_β Eu_n(ξ)|² dξ,
  I₂ := ∫_{‖ξ‖>1/α_n} |e^{−2iπ⟨h,ξ⟩} − 1|² |F D_β Eu_n(ξ)|² dξ.

We now bound I1 and I2. On the one hand,

  I₁ = ∫_{0<‖ξ‖≤1/α_n} ( |e^{−2iπ⟨h,ξ⟩} − 1|² / ‖ξ‖^{2s} ) ‖ξ‖^{2s} |F D_β Eu_n(ξ)|² dξ
     ≤ 4^{1−s} (2π‖h‖)^{2s} ∫_{‖ξ‖≤1/α_n} ‖ξ‖^{2s} |F D_β Eu_n(ξ)|² dξ,

since |e^{iθ} − 1|² ≤ min(4, θ²) ≤ 4^{1−s} |θ|^{2s} for s ∈ (0,1) and |⟨h,ξ⟩| ≤ ‖h‖‖ξ‖.

Since (u_n) is bounded in H²(V) independently of n, so is (‖F D_β Eu_n‖²_{L²(ℝ³)}). Moreover, by using Corollary 4 with D_β Eu_n in place of u, we get

  ∫_{‖ξ‖≤1/α_n} ‖ξ‖^{2s} |F D_β Eu_n(ξ)|² dξ ≤ (1/(ν◦ m_n)) ‖(I − C_n) D_β Eu_n‖²_{L²(ℝ³)}
    = (1/(ν◦ m_n)) ‖D_β (I − C_n) Eu_n‖²_{L²(ℝ³)}
    ≤ (1/(ν◦ m_n)) ‖(I − C_n) Eu_n‖²_{H²(ℝ³)}
    ≤ (1/(ν◦ m_n)) ‖(I − C_n) Eu†|_V‖²_{H²(ℝ³)}
    ≤ (µ◦/ν◦)(M_n/m_n) ‖Eu†|_V‖²_{H^{2+s}(ℝ³)},

in which the last two inequalities are respectively due to inequality (16) and Lemma 6 (with s◦ = 2). It follows that I₁ = O(‖h‖^{2s}). On the other hand, using again Corollary 4 with D_β Eu_n in place of u, we have:

  I₂ ≤ 4 ∫_{‖ξ‖>1/α_n} |F D_β Eu_n(ξ)|² dξ
    ≤ 4 (M_n/(ν◦ m_n)) ‖(I − C_n) D_β Eu_n‖²_{L²(ℝ³)}
    = 4 (M_n/(ν◦ m_n)) ‖D_β (I − C_n) Eu_n‖²_{L²(ℝ³)}
    ≤ 4 (M_n/(ν◦ m_n)) ‖(I − C_n) Eu_n‖²_{H²(ℝ³)}
    ≤ 4 (M_n/(ν◦ m_n)) ‖(I − C_n) Eu†|_V‖²_{H²(ℝ³)}
    ≤ 4 (µ◦/ν◦)(M_n²/m_n) ‖Eu†|_V‖²_{H^{2+s}(ℝ³)},

in which the last two inequalities are respectively due to inequality (16) and Lemma 6 (with s◦ = 2). It follows that I₂ = O(M_n). Gathering the obtained bounds on I₁ and I₂, we see that there exists a positive constant K such that

  ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)} ≤ K ( ‖h‖^{2s} + M_n ).  (21)

Now, fix ε > 0. There exists n◦ ∈ ℕ* such that for every n ≥ n◦, M_n ≤ ε. From (21), we see that

  sup_n ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)} ≤ max( max_{1≤n≤n◦} ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)}, K(‖h‖^{2s} + ε) ).

By the L²-continuity of translation, we have:

  ∀n ∈ ℕ*,  ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)} → 0 as ‖h‖ → 0.

Consequently,

  max_{1≤n≤n◦} ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)} → 0 as ‖h‖ → 0,

so that

  lim sup_{‖h‖→0} sup_n ‖T_h D_β Eu_n − D_β Eu_n‖²_{L²(ℝ³)} ≤ Kε.

Since ε > 0 was arbitrary, (20) is established, which completes the proof.

Remark 8. Notice that the real s > 0 in Theorem 7 can be taken arbitrarily large. The proof is similar; we only need to consider an extension operator E that is bounded from H^{2+s}(V) into H₀^{2+s}(ℝ³).

3 Numerical experiments

In this section, we consider two numerical examples in order to illustrate the accuracy and the efficiency of our regularization approach for the inhomogeneous Helmholtz equation with non-constant refraction index. In order to reduce the computational complexity, we consider the system (1)-(2)-(3) in two dimensions (as in [15]) on a rectangular domain, as follows:

  u_xx(x,y) + u_yy(x,y) + k² η(x,y) u(x,y) = S(x,y),  (x,y) ∈ [a,b] × [0,1],  (22)
  u_y(x,0) = f(x),  x ∈ [a,b],  (23)
  u(x,0) = g(x),  x ∈ [a,b].  (24)

Given the boundary data f and g at y = 0, we aim at approximating the solution u(·,y) for y ∈ (0,1].

Example 1: For the first example, we set [a,b] = [−1,1], k = 3, and define the refraction index η and the exact solution u as:

  η(x,y) = 2 − ( x²/0.8² + (2y−1)²/0.8² )^{1/2}  if  x²/0.8² + (2y−1)²/0.8² ≤ 1,  and  η(x,y) = 1 otherwise,

and

  u(x,y) = (x − 2y + 1) sin( (k/√2)(x + 2y − 1) ).

The source term S and the boundary data f and g are defined accordingly:

  S(x,y) = −(5/2) k² (x − 2y + 1) sin( (k/√2)(x + 2y − 1) ) − 3k√2 cos( (k/√2)(x + 2y − 1) ) + k² η(x,y) u(x,y),
  f(x) = k√2 (x + 1) cos( (k/√2)(x − 1) ) − 2 sin( (k/√2)(x − 1) ),
  g(x) = (x + 1) sin( (k/√2)(x − 1) ).

Example 2: For the second example, we consider a simpler setting where [a,b] = [−1.5, 1.5], k = 1, and define the refraction index η (depending only on y) and the exact solution u as:

  η(x,y) = 1 + y²,  and  u(x,y) = ( 4(1 + y)/√(2π) ) e^{−8x²}.

The source term S and the boundary data f and g are defined accordingly:

  S(x,y) = ( 4(1 + y)/√(2π) ) e^{−8x²} ( 256x² − 15 + y² ),
  f(x) = g(x) = ( 4/√(2π) ) e^{−8x²}.
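The manufactured solutions above can be sanity-checked numerically. The snippet below (our own check, not part of the paper) verifies for Example 1 that the stated source term indeed satisfies S = Δu + k²ηu, by comparing it with a centered finite-difference Laplacian of u at a few interior points:

```python
import numpy as np

k = 3.0
c = k / np.sqrt(2.0)

def eta(x, y):
    # Refraction index of Example 1: 2 - sqrt(r2) inside the ellipse r2 <= 1, and 1 outside.
    r2 = (x / 0.8) ** 2 + ((2 * y - 1) / 0.8) ** 2
    return 2.0 - np.sqrt(r2) if r2 <= 1.0 else 1.0

def u(x, y):
    # Exact solution of Example 1.
    return (x - 2 * y + 1) * np.sin(c * (x + 2 * y - 1))

def S(x, y):
    # Stated source term: -(5/2) k^2 (x - 2y + 1) sin(...) - 3 sqrt(2) k cos(...) + k^2 eta u.
    return (-2.5 * k**2 * (x - 2 * y + 1) * np.sin(c * (x + 2 * y - 1))
            - 3 * np.sqrt(2.0) * k * np.cos(c * (x + 2 * y - 1))
            + k**2 * eta(x, y) * u(x, y))

h = 1e-4  # step of the centered finite-difference Laplacian
for (x, y) in [(0.3, 0.4), (-0.5, 0.7), (0.1, 0.2)]:
    lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4 * u(x, y)) / h**2
    # The PDE (22) reads u_xx + u_yy + k^2 eta u = S.
    assert abs(lap + k**2 * eta(x, y) * u(x, y) - S(x, y)) < 1e-4
print("Example 1: S is consistent with Delta u + k^2 eta u")
```

The test points are chosen inside the ellipse, away from its boundary, so that η is smooth where the finite differences are taken.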

In both cases, we consider a Gaussian convolution kernel ϕ_α, i.e.

  ϕ_α(x,y) = ( 1/(2πα²) ) e^{−(x²+y²)/(2α²)},  (25)

which satisfies the Lévy-kernel condition of Lemma 3 with s = 2.
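The two defining facts about this kernel can be checked directly. The sketch below (ours; it assumes the Fourier convention ϕ̂(ξ) = ∫ ϕ(x) e^{−2iπ⟨x,ξ⟩} dx) verifies the unit mass of (25) on a grid and the quadratic behavior of 1 − ϕ̂_α near the origin, which is what gives s = 2:

```python
import numpy as np

alpha = 0.1

def phi_alpha(x, y):
    # Gaussian mollifier (25): unit-mass kernel of width alpha.
    return np.exp(-(x**2 + y**2) / (2 * alpha**2)) / (2 * np.pi * alpha**2)

# Normalization: the kernel integrates to 1 (condition (2) on mollifiers).
h = alpha / 50
grid = np.arange(-8 * alpha, 8 * alpha, h)
X, Y = np.meshgrid(grid, grid)
mass = phi_alpha(X, Y).sum() * h**2
assert abs(mass - 1.0) < 1e-4

# With the convention phi_hat(xi) = integral of phi(x) exp(-2 i pi <x, xi>) dx,
# the Gaussian symbol is phi_hat_alpha(xi) = exp(-2 pi^2 alpha^2 |xi|^2), hence
# 1 - phi_hat_alpha(xi) ~ 2 pi^2 alpha^2 |xi|^2 as xi -> 0, i.e. s = 2 in Lemma 3.
xi = 1e-3
symbol = np.exp(-2 * np.pi**2 * alpha**2 * xi**2)
assert abs((1 - symbol) / (2 * np.pi**2 * alpha**2 * xi**2) - 1.0) < 1e-3
print("Gaussian kernel: unit mass; 1 - phi_hat ~ 2 pi^2 alpha^2 |xi|^2 near 0 (s = 2)")
```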

Discretization setting

For the discretization of the system (22)-(23)-(24), we use a finite difference method of order 2, described as follows.

We first define the uniform grid Γ_j^n on the bounded domain [a,b] × [0,1]:

  Γ_j^n = (x_j, y_n),  with  x_j = a + (j − 1)Δx, j = 1, …, n_x,  and  y_n = (n − 1)Δy, n = 1, …, n_y,

where Δx and Δy are the discretization steps given by

  Δx = (b − a)/(n_x − 1),  Δy = 1/(n_y − 1).

We then approximate the second derivatives u_xx and u_yy by means of the five-point finite-difference stencil and u_y using a central finite difference, and derive the discrete system:

  ( u_j^{n+1} − 2u_j^n + u_j^{n−1} )/Δy² + ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/Δx² + k² η_j^n u_j^n = S_j^n  (26)
  ( u_j^2 − u_j^0 )/(2Δy) = f_j  (27)
  u_j^1 = g_j,  (28)

where

  u_j^n ≈ u(x_j, y_n),  η_j^n := η(x_j, y_n),  S_j^n := S(x_j, y_n),  f_j := f(x_j),  and  g_j := g(x_j).

Notice that in (27), by using the central difference to approximate u_y(x_j, 0), we introduce additional nodes Γ_j^0 = (x_j, −Δy) lying outside the initial domain [a,b] × [0,1], and consequently additional unknowns u_j^0 ≈ u(x_j, −Δy).

In equations (27) and (28), the index j runs from 1 to n_x, while in (26), j ranges from 2 to n_x − 1 and n ranges from 1 to n_y − 1. At the boundary nodes along the x-direction (i.e. j = 1 and j = n_x), u_xx(x_j, y_n) is approximated by the second order scheme:

  u_xx(x_1, y_n) ≈ ( 2u_1^n − 5u_2^n + 4u_3^n − u_4^n )/Δx²,
  u_xx(x_{n_x}, y_n) ≈ ( 2u_{n_x}^n − 5u_{n_x−1}^n + 4u_{n_x−2}^n − u_{n_x−3}^n )/Δx².  (29)

Hence at the boundary nodes j = 1 and j = n_x, equation (26) is replaced respectively by

  ( u_1^{n+1} − 2u_1^n + u_1^{n−1} )/Δy² + ( 2u_1^n − 5u_2^n + 4u_3^n − u_4^n )/Δx² + k² η_1^n u_1^n = S_1^n,  (30)
  ( u_{n_x}^{n+1} − 2u_{n_x}^n + u_{n_x}^{n−1} )/Δy² + ( 2u_{n_x}^n − 5u_{n_x−1}^n + 4u_{n_x−2}^n − u_{n_x−3}^n )/Δx² + k² η_{n_x}^n u_{n_x}^n = S_{n_x}^n.  (31)

In summary, we obtain the following iterative scheme:

  u_j^1 = g_j  (32)
  u_j^2 − u_j^0 = 2Δy f_j  (33)
  u_1^{n+1} − (Λ_1^n − 4γ) u_1^n + γ( −u_4^n + 4u_3^n − 5u_2^n ) + u_1^{n−1} = Δy² S_1^n  (34)
  u_j^{n+1} − Λ_j^n u_j^n + u_j^{n−1} + γ( u_{j+1}^n + u_{j−1}^n ) = Δy² S_j^n  (35)
  u_{n_x}^{n+1} − (Λ_{n_x}^n − 4γ) u_{n_x}^n + γ( −u_{n_x−3}^n + 4u_{n_x−2}^n − 5u_{n_x−1}^n ) + u_{n_x}^{n−1} = Δy² S_{n_x}^n  (36)

where Λ_j^n = 2 + 2γ − k² Δy² η_j^n and γ = Δy²/Δx².

In (34), (35), (36), the index n runs from 1 to n_y − 1. In (35), the index j runs from 2 to n_x − 1.

By defining the column vector

  U^n = ( u_1^n, u_2^n, …, u_{n_x}^n )^⊤,  n = 0, 1, …, n_y,

we can rewrite the discrete system (32)-(36) in the matrix form:

  U^1 = G  (37)
  U^2 − U^0 = 2Δy F  (38)
  U^2 + A_1 U^1 + U^0 = Δy² S^1,  (39)
  U^{n+1} + A_n U^n + U^{n−1} = Δy² S^n,  n = 2, …, n_y − 1,  (40)

where

  G := ( g(x_1), g(x_2), …, g(x_{n_x}) )^⊤,  F := ( f(x_1), f(x_2), …, f(x_{n_x}) )^⊤,  S^n := ( S(x_1, y_n), S(x_2, y_n), …, S(x_{n_x}, y_n) )^⊤,

and A_n is the nearly tridiagonal matrix defined by

  A_n = [ −Λ_1^n + 4γ   −5γ        4γ        −γ         0     ⋯   0
              γ        −Λ_2^n      γ          0         ⋯
              0          γ       −Λ_3^n       γ         0     ⋯
              ⋱          ⋱          ⋱         ⋱         ⋱
                         γ    −Λ_{n_x−1}^n    γ
              0     ⋯    0         −γ         4γ       −5γ   −Λ_{n_x}^n + 4γ ].  (41)

From (38) and (39), we can get rid of the additional unknown vector U^0 and get the system

  U^1 = G,
  2U^2 + A_1 U^1 = Δy² S^1 + 2Δy F,  (42)
  U^{n+1} + A_n U^n + U^{n−1} = Δy² S^n,  n = 2, …, n_y − 1.

In order to model the noise in the measured data f and g, we consider the noisy versions G^ε and F^ε of the vectors G and F defined by

  G^ε = G + εϑ,  and  F^ε = F + εϑ,  (43)

where ϑ is an n_x-column vector with zero-mean entries drawn from the normal distribution. From (42), we can rewrite our discrete system into a single matrix equation:

  A U = B,

where U and B are n_x n_y-column vectors and A is the n_x n_y × n_x n_y block tridiagonal matrix respectively defined by

  U = ( U^1, U^2, U^3, …, U^{n_y} )^⊤,
  B = ( G, Δy²S^1 + 2ΔyF, Δy²S^2, Δy²S^3, …, Δy²S^{n_y−2}, Δy²S^{n_y−1} )^⊤,

and

  A = [ I_{n_x}     0         0        ⋯                 0
        A_1      2I_{n_x}     0        ⋯
        I_{n_x}    A_2      I_{n_x}    0     ⋯
        0        I_{n_x}     A_3     I_{n_x}  ⋯
        ⋱          ⋱          ⋱        ⋱
                 I_{n_x}   A_{n_y−2} I_{n_x}   0
        0    ⋯     0       I_{n_x}  A_{n_y−1} I_{n_x} ],

where I_{n_x} is the square identity matrix of size n_x and the matrices A_n are the sub-matrices defined in (41).

The regularized solution U_α^ε is defined as the solution of the minimization problem

  U_α^ε = argmin_{U ∈ ℝ^{n_x n_y}}  ‖A U − B‖² + ‖(I − C_α)E U‖² + ‖D_x(I − C_α)E U‖²
            + ‖D_y(I − C_α)E U‖² + ‖D_xx(I − C_α)E U‖²
            + ‖D_yy(I − C_α)E U‖² + 2‖D_xy(I − C_α)E U‖²,  (44)

where D_x, D_y, D_xx, D_yy, D_xy are discrete versions of the partial differential operators ∂_x, ∂_y, ∂_xx, ∂_yy, ∂_xy, C_α is the matrix approximating the convolution with the function ϕ_α defined in (25), and E is the matrix modeling the extension operator. From (44), we compute U_α^ε as the solution of the matrix equation

  ( A^⊤A + E^⊤(I_{n_x n_y} − C_α)^⊤ D (I_{n_x n_y} − C_α) E ) U_α^ε = A^⊤B,  (45)

where α is the regularization parameter, I_{n_x n_y} is the square identity matrix of size n_x n_y, and D is the matrix defined by

  D = I_{n_x n_y} + D_x^⊤D_x + D_y^⊤D_y + D_xx^⊤D_xx + D_yy^⊤D_yy + 2 D_xy^⊤D_xy.
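Being the identity plus a sum of Gram terms, D is symmetric positive definite (all eigenvalues are at least 1), which makes the normal-equation system (45) well suited to iterative solvers. The sketch below (ours) checks this with simple forward-difference matrices on a small grid; the actual discrete operators used in the paper may differ.

```python
import numpy as np

nx, ny, dx, dy = 8, 6, 0.1, 0.1
N = nx * ny

def d1(n, h):
    # Forward first-difference matrix on n points (last row left at zero).
    M = np.zeros((n, n))
    for i in range(n - 1):
        M[i, i], M[i, i + 1] = -1 / h, 1 / h
    return M

# 2-D operators via Kronecker products: x varies within a grid line, y across lines.
Dx = np.kron(np.eye(ny), d1(nx, dx))
Dy = np.kron(d1(ny, dy), np.eye(nx))
Dxx, Dyy, Dxy = Dx @ Dx, Dy @ Dy, Dx @ Dy

# D = I + Dx^T Dx + Dy^T Dy + Dxx^T Dxx + Dyy^T Dyy + 2 Dxy^T Dxy, as in the text.
D = (np.eye(N) + Dx.T @ Dx + Dy.T @ Dy
     + Dxx.T @ Dxx + Dyy.T @ Dyy + 2 * Dxy.T @ Dxy)

assert np.allclose(D, D.T)                         # symmetric
assert np.linalg.eigvalsh(D).min() >= 1.0 - 1e-8   # >= 1 thanks to the identity term
print("D is symmetric positive definite")
```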

Selection of the regularization parameter

The choice of the regularization parameter α is a crucial step of the regularization. Indeed, the reconstruction error U − U_α^ε has two components: the regularization error U − U_α (corresponding to exact data) and the propagated data error U_α − U_α^ε. The former is generally monotonically increasing with respect to α and attains its minimum at α = 0, while the latter blows up as α goes to 0 and decreases as α gets larger. Consequently, the reconstruction error norm ‖U − U_α^ε‖ is minimal in some region (depending on the noise level ε in the data) where both error terms have approximately the same magnitude. Outside that region, the reconstruction error is dominated by one of the two error terms, which leads to an undesirable approximate solution U_α^ε.

In the following, we consider the heuristic selection rule (46)-(47), which is similar to the discrete quasi-optimality rule [3, 4, 5, 6] except for the denominator, which in our case is not equal to one.

Let (αn)n be a sample of the regularization parameter α on a discrete grid defined as

  α_n := α_0 qⁿ,  α_0 ∈ (0, ‖T‖²],  0 < q < 1,  n = 1, …, N_0,  (46)

and we consider the parameter α(ε)* defined by

  α(ε)* = α_{n*},  with  n* = argmin_{n ∈ {1,…,N_0}} ‖U_{α_n}^ε − U_{α_{n+1}}^ε‖ / (α_n − α_{n+1}).  (47)

The heuristic behind the rule (47) is the following. We aim at approximating the best regularization parameter α(ε) over the chosen grid (α_n)_n, i.e. the one which minimizes the reconstruction error norm ‖U − U_α^ε‖ over the grid:

  α(ε) = α_{n(opt)}  with  n(opt) = argmin_n K(α_n),  where  K(α_n) := ‖U − U_{α_n}^ε‖.

Given that minimizers of a differentiable function are critical points of that function, provided the function K is differentiable, α_{n(opt)} can be characterized as a minimizer of the absolute value of the derivative of the function K, that is,

  n(opt) ≈ argmin_n |K′(α_n)|.

By approximating the derivative K′(α_n) of the function K at α_n by its growth rate over the grid (α_n)_n, we get that

  n(opt) ≈ argmin_n |K(α_{n+1}) − K(α_n)| / |α_{n+1} − α_n|.

However, since the exact solution u is unknown, we cannot evaluate the function K. In such a setting, we search for a tight upper bound of the function K and aim at minimizing that upper bound. Using the triangle inequality, we have

  |K(α_{n+1}) − K(α_n)| = | ‖U − U_{α_{n+1}}^ε‖ − ‖U − U_{α_n}^ε‖ | ≤ ‖U_{α_{n+1}}^ε − U_{α_n}^ε‖.  (48)

Hence from (48), we get an upper bound of the unknown term |K(α_{n+1}) − K(α_n)| which is actually computable.

By replacing |K(α_{n+1}) − K(α_n)| by its upper bound in (48), we get

  n(opt) ≈ argmin_n ‖U_{α_{n+1}}^ε − U_{α_n}^ε‖ / (α_n − α_{n+1}),

which is precisely the definition of our heuristic selection rule (47). To illustrate the efficiency of the selection rule (47), Figure 1 (resp. Figure 2) exhibits the curve of the reconstruction error along with the selected parameter α(ε)* for each noise level for Example 1 (resp. Example 2).
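The selection mechanic above can be sketched on a toy problem (our own stand-in, not the paper's Helmholtz system): the family of candidate solutions below comes from a small Tikhonov-regularized least-squares problem, and the rule picks the grid parameter minimizing the discrete difference quotient of (47):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward operator: a smoothing (Gaussian-kernel) matrix, hence an ill-posed problem.
m = 40
A = np.array([[np.exp(-0.5 * (i - j) ** 2) for j in range(m)] for i in range(m)])
x_true = np.sin(np.linspace(0, np.pi, m))
b = A @ x_true + 1e-3 * rng.standard_normal(m)      # noisy data

alphas = 0.5 * 0.8 ** np.arange(1, 30)              # geometric grid, as in (46)
sols = [np.linalg.solve(A.T @ A + a * np.eye(m), A.T @ b) for a in alphas]

# Rule (47): minimize ||U_{a_n} - U_{a_{n+1}}|| / (a_n - a_{n+1}) over the grid.
ratios = [np.linalg.norm(sols[n] - sols[n + 1]) / (alphas[n] - alphas[n + 1])
          for n in range(len(alphas) - 1)]
n_star = int(np.argmin(ratios))
rel_err = np.linalg.norm(sols[n_star] - x_true) / np.linalg.norm(x_true)
print("selected alpha:", alphas[n_star], "relative reconstruction error:", rel_err)
assert rel_err < 0.5
```

The rule requires no knowledge of the exact solution; only the computable differences between consecutive regularized solutions are used.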

[Figure 1 panels: relative noise levels ∼10⁻², ∼10⁻³, ∼10⁻⁴; each plots the reconstruction error ‖U − U_{α(ε)*}^ε‖₂/‖U‖₂ against the regularization parameter α (scale 10⁻³), with the selected value α(ε)* marked.]
Figure 1: Reconstruction error versus regularization parameter α, along with the selected regularization parameter α(ε)*, for Example 1.

[Figure 2 panels: relative noise levels ∼10⁻², ∼10⁻³, ∼10⁻⁴; each plots the reconstruction error ‖U − U_{α(ε)*}^ε‖₂/‖U‖₂ against the regularization parameter α (scale 10⁻³), with the selected value α(ε)* marked.]
Figure 2: Reconstruction error versus regularization parameter α, along with the selected regularization parameter α(ε)*, for Example 2.

Results and comments

In the simulations, we consider three noise levels ε_i (i = 2, 3, 4) such that the relative error in the data (Red) satisfies

  Red = ‖F − F^{ε_i}‖₂/‖F‖₂ ≈ ‖G − G^{ε_i}‖₂/‖G‖₂ ≈ 10^{−i}.

We choose Matlab as the coding environment, and we solve equation (45) using a generalized minimal residual method (GMRES) with orthonormalization based on Householder reflections. We choose as initial guess the solution from the Matlab direct solver mldivide.

Figure 3 (resp. Figure 6) compares the exact solution u to the reconstruction u_{α(ε)*}^ε for each noise level for Example 1 (resp. Example 2). From these figures, we observe that the reconstruction gets better as the noise level decreases.

In Figures 4 and 5 (resp. 7 and 8), we compare the exact solution u and the regularized solution u_{α(ε)*}^ε at y = 0.75 and y = 1 for Example 1 (resp. Example 2) for each noise level. Table 1 (resp. Table 2) presents the numerical values of the relative errors

  ‖u(·,y) − u_{α(ε)*}^ε(·,y)‖₂ / ‖u(·,y)‖₂

for y = 0, 0.25, 0.5, 0.75 and y = 1 for Example 1 (resp. Example 2) for each noise level. From these tables, we observe that the reconstruction error gets smaller as y approaches 0 and as the noise level decreases. From Figures 3 to 8 and Tables 1 and 2, we can see that our mollifier regularization approach yields quite good results. Moreover, as expected, the reconstruction gets better when the noise level decreases and when we get closer to the boundary side where the boundary data are given.

Acknowledgement. The authors are grateful to N. Alibaud for interesting comments and discussions during the development of the proposed methodology. They also wish to thank T. Le Minh for nice and fruitful exchanges.

[Figure 3 panels: the exact solution U and the reconstructions U_{α(ε)*}^ε at relative noise levels ∼10⁻², ∼10⁻³, ∼10⁻⁴, plotted as surfaces over (x, y) ∈ [−1, 1] × [0, 1].]
Figure 3: Comparison of the exact solution u of Example 1 and the regularized solution u_{α(ε)*}^ε for each noise level.

[Figure 4 panels: relative noise levels ∼10⁻², ∼10⁻³, ∼10⁻⁴; each plots u(·, 0.75) and its reconstruction against x ∈ [−1, 1].]
Figure 4: Comparison of the exact solution u(·,y) (black curve) of Example 1 and the regularized solution u_{α(ε)*}^ε(·,y) (blue curve) at y = 0.75 for each noise level.

[Figure 5 panels: relative noise levels ∼10⁻², ∼10⁻³, ∼10⁻⁴; each plots u(·, 1) and its reconstruction against x ∈ [−1, 1].]
Figure 5: Comparison of the exact solution u(·,y) (black curve) of Example 1 and the regularized solution u_{α(ε)*}^ε(·,y) (blue curve) at y = 1 for each noise level.

Relative errors $\|u(\cdot, y) - u_{\alpha(\delta)^*}(\cdot, y)\|_2 / \|u(\cdot, y)\|_2$:

  y      Red $\sim 10^{-4}$      Red $\sim 10^{-3}$      Red $\sim 10^{-2}$
  0      $8.2467 \times 10^{-5}$   $7.4086 \times 10^{-4}$   $6.8610 \times 10^{-3}$
  0.25   $2.0430 \times 10^{-3}$   $4.1855 \times 10^{-3}$   $5.1843 \times 10^{-2}$
  0.5    $6.7946 \times 10^{-3}$   $1.5606 \times 10^{-2}$   $2.2334 \times 10^{-1}$
  0.75   $5.5590 \times 10^{-2}$   $8.6677 \times 10^{-2}$   $5.0774 \times 10^{-1}$
  1      $1.4908 \times 10^{-1}$   $1.8441 \times 10^{-1}$   $4.4632 \times 10^{-1}$

Table 1: Relative $L^2$ error between the exact solution $u(\cdot, y)$ of Example 1 and the regularized solution $u_{\alpha(\delta)^*}(\cdot, y)$ for $y = 0, 0.25, 0.5, 0.75, 1$.

[Figure 6: four surface plots over the $(x, y)$ grid — the exact solution $u$ and the reconstructions $u_{\alpha(\delta)^*}$ for relative noise levels $\sim 10^{-2}$, $10^{-3}$ and $10^{-4}$.]

Figure 6: Comparison of the exact solution $u$ of Example 2 and the regularized solution $u_{\alpha(\delta)^*}$ for each noise level.

[Figure 7: three panels, one per relative noise level $\sim 10^{-2}$, $10^{-3}$, $10^{-4}$, each plotting two curves over $x \in [-1, 1]$.]

Figure 7: Comparison of the exact solution $u(\cdot, y)$ (black curve) of Example 2 and the regularized solution $u_{\alpha(\delta)^*}(\cdot, y)$ (blue curve) at $y = 0.75$ for each noise level.

[Figure 8: three panels, one per relative noise level $\sim 10^{-2}$, $10^{-3}$, $10^{-4}$, each plotting two curves over $x \in [-1, 1]$.]

Figure 8: Comparison of the exact solution $u(\cdot, y)$ (black curve) of Example 2 and the regularized solution $u_{\alpha(\delta)^*}(\cdot, y)$ (blue curve) at $y = 1$ for each noise level.

Relative errors $\|u(\cdot, y) - u_{\alpha(\delta)^*}(\cdot, y)\|_2 / \|u(\cdot, y)\|_2$:

  y      Red $\sim 10^{-4}$      Red $\sim 10^{-3}$      Red $\sim 10^{-2}$
  0      $4.6134 \times 10^{-5}$   $4.7109 \times 10^{-4}$   $1.8718 \times 10^{-3}$
  0.25   $3.8348 \times 10^{-4}$   $1.6691 \times 10^{-3}$   $2.9628 \times 10^{-3}$
  0.5    $2.7372 \times 10^{-3}$   $8.3065 \times 10^{-3}$   $1.0682 \times 10^{-2}$
  0.75   $1.7155 \times 10^{-2}$   $4.0734 \times 10^{-2}$   $4.6936 \times 10^{-2}$
  1      $1.1859 \times 10^{-1}$   $2.1162 \times 10^{-1}$   $2.3045 \times 10^{-1}$

Table 2: Relative $L^2$ error between the exact solution $u(\cdot, y)$ of Example 2 and the regularized solution $u_{\alpha(\delta)^*}(\cdot, y)$ for $y = 0, 0.25, 0.5, 0.75, 1$.

References

[1] G. Alessandrini, L. Rondi, E. Rosset, and S. Vessella. The stability for the Cauchy problem for elliptic equations. Inverse Problems, 25(12):123004, 2009.

[2] N. Alibaud, P. Maréchal, and Y. Saesor. A variational approach to the inversion of truncated Fourier operators. Inverse Problems, 25(4):045002, 2009.

[3] F. Bauer. Some considerations concerning regularization and parameter choice algo- rithms. Inverse Problems, 23(2):837, 2007.

[4] F. Bauer and S. Kindermann. The quasi-optimality criterion for classical inverse prob- lems. Inverse Problems, 24(3):035002, 2008.

[5] F. Bauer and S. Kindermann. Recent results on the quasi-optimality principle. Journal of Inverse and Ill-posed Problems, 17(1):5–18, 2009.

[6] F. Bauer and M. Reiß. Regularization independent of the noise level: an analysis of quasi-optimality. Inverse Problems, 24(5):055009, 2008.

[7] X. Bonnefond and P. Maréchal. A variational approach to the inversion of some compact operators. Pacific Journal of Optimization, 5(1):97–110, 2009.

[8] A. Calderón. Lebesgue spaces of differentiable functions. In Proc. Sympos. Pure Math., volume 4, pages 33–49, 1961.

[9] M. Choulli. Applications of elliptic Carleman inequalities to Cauchy and inverse prob- lems. Springer, 2016.

[10] C. Fefferman, A. Israel, and G. Luli. Sobolev extension by linear operators. Journal of the American Mathematical Society, 27(1):69–145, 2014.

[11] K. O. Friedrichs. The identity of weak and strong extensions of differential operators. Transactions of the American Mathematical Society, 55(1):132–151, 1944.

[12] D. Gilbarg and N. S. Trudinger. Elliptic partial differential equations of second order, volume 224. Springer, 2015.

[13] J. Hadamard. Lectures on Cauchy’s problem in linear partial differential equations. Courier Corporation, 2003.

[14] D. N. Hào. A mollification method for ill-posed problems. Numerische Mathematik, 68:469–506, 1994.

[15] P. T. Hieu and P. H. Quan. On regularization and error estimates for the Cauchy problem of the modified inhomogeneous Helmholtz equation. Journal of Inverse and Ill-posed Problems, 24(5):515–526, 2016.

[16] F. Hirsch and G. Lacombe. Elements of functional analysis, volume 192. Springer Science & Business Media, 2012.

[17] P. L. Hong, T. Le Minh, and Q. P. Hoang. On a three dimensional Cauchy problem for inhomogeneous Helmholtz equation associated with perturbed wave number. Journal of Computational and Applied Mathematics, 335:86–98, 2018.

[18] V. Isakov. Inverse problems for partial differential equations, volume 127. Springer, 2006.

[19] F. John. Partial Differential Equations, 1952–1953. Courant Institute of Mathematical Sciences, New York University, 1953.

[20] A. Lannes, S. Roques, and M.-J. Casanove. Stabilized reconstruction in signal and image processing: I. Partial deconvolution and spectral extrapolation with limited field. Journal of Modern Optics, 34(2):161–226, 1987.

[21] A. K. Louis and P. Maass. A mollifier method for linear operator equations of the first kind. Inverse Problems, 6(3):427, 1990.

[22] V. G. Maz'ja. On continuity and boundedness of functions in Sobolev spaces. In Sobolev Spaces, pages 270–295. Springer, 1985.

[23] D. A. Murio. The mollification method and the numerical solution of ill-posed problems. John Wiley & Sons, 2011.

[24] T. Schuster. The method of approximate inverse: theory and applications, volume 1906. Springer, 2007.

[25] E. M. Stein. Singular integrals and differentiability properties of functions, volume 2. Princeton University Press, 1970.

[26] F. Triki and Q. Xue. Hölder stability of quantitative photoacoustic tomography based on partial data. arXiv preprint arXiv:2103.16677, 2021.

[27] Wikipedia contributors. Mollifier — Wikipedia, the free encyclopedia, 2020. [Online; accessed 17-April-2020].
