Numerical Solution of Inverse Problems by Weak Adversarial Networks

Inverse Problems 36 (2020) 115003 (31pp)
https://doi.org/10.1088/1361-6420/abb447

Gang Bao¹, Xiaojing Ye², Yaohua Zang¹,* and Haomin Zhou³

¹ School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
² Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303, United States of America
³ School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, United States of America
* Author to whom any correspondence should be addressed.

E-mail: [email protected], [email protected], [email protected] and [email protected]

Received 26 February 2020; revised 22 August 2020; accepted for publication 1 September 2020; published October 2020

Abstract

In this paper, a weak adversarial network approach is developed to numerically solve a class of inverse problems, including electrical impedance tomography and dynamic electrical impedance tomography problems. The weak formulation of the partial differential equation for the given inverse problem is leveraged, where the solution and the test function are parameterized as deep neural networks. The weak formulation and the boundary conditions then induce a minimax problem of a saddle function of the network parameters. As the parameters are alternately updated, the network gradually approximates the solution of the inverse problem. Theoretical justifications are provided for the convergence of the proposed algorithm. The proposed method is completely mesh-free, without any spatial discretization, and is particularly suitable for problems with high dimensionality and low regularity of the solutions. Numerical experiments on a variety of test inverse problems demonstrate the promising accuracy and efficiency of this approach.

Keywords: inverse problem, deep learning, weak formulation, adversarial network, stochastic gradient

(Some figures may appear in colour only in the online journal)

1. Introduction

Inverse problems (IPs) are ubiquitous in a vast number of scientific disciplines, including geophysics [50], signal processing and imaging [7], computer vision [41], remote sensing and control [57], statistics [35], and machine learning [23]. Let Ω be an open and bounded set in ℝ^d; then an IP defined on Ω can be presented in the general form

    𝒜[u, γ] = 0,   in Ω,    (1a)
    ℬ[u, γ] = 0,   on ∂Ω,   (1b)

where 𝒜[u, γ] specifies a differential equation, in which u is the solution and γ is the coefficient in the inverse medium problem or the source function in the inverse source problem. 𝒜 can be an ordinary differential equation, a partial differential equation (PDE), or an integro-differential equation that (u, γ) needs to satisfy (almost) everywhere inside the region Ω. The boundary value (and initial value, if applicable) is given by ℬ[u, γ] on ∂Ω. Depending on the specific application, partial information about u and/or γ may be available in the interior of Ω. The IP (1) is then to find (u, γ) that satisfies both (1a) and (1b).

To instantiate our approach, we mostly use the classical inverse conductivity problem in electrical impedance tomography (EIT) [11, 37] as an example to present our main idea and the derivations in this paper. However, our methodology can be readily applied to other classes of IPs with modifications. An example of a dynamic EIT problem will be shown in section 4. The goal of EIT is to determine the electrical conductivity distribution γ(x) of an unknown medium defined on Ω based on the potential u, measurements of the current −γ∂_n u, and knowledge of γ (and hence ∂_n u) on or near the boundary ∂Ω of the domain Ω:

    −∇ · (γ∇u) − f = 0,   in Ω,    (2a)
    u − u_b = 0,   γ − γ_b = 0,   ∂_n u − u_n = 0,   on ∂Ω,   (2b)

where u_b is the measured voltage, γ_b is the conductivity near the surface of the object, and u_n ≔ ∇u · n with n the unit outer normal of ∂Ω. Note that our approach is not to estimate the Dirichlet-to-Neumann (DtN) map associated with the conductivity function, as in classical methods specific to the EIT problem [12, 21, 40]. Instead, our goal is to directly solve a general class of IPs (1) numerically using the given data, with the EIT problem (2) serving as a prototype whose special structure (e.g., the DtN map) is not exploited. To keep our presentation concise and focused, we only consider IPs whose 𝒜[u, γ] in (1a) is characterized by a PDE, and we assume that the given IP is well-defined and admits at least one (weak) solution.
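The weak formulation that drives our method, together with the induced minimax problem, is derived in section 3. As a concrete preview, the following is a minimal PyTorch sketch of a Monte Carlo estimate of the weak-form pairing for (2a),

    ⟨𝒜[u, γ], ϕ⟩ = ∫_Ω (γ∇u · ∇ϕ − fϕ) dx,

obtained by multiplying (2a) by a test function ϕ that vanishes on ∂Ω and integrating by parts. The objects u_net, gamma_net, phi_net (each mapping a batch of coordinates to scalar values) and the source term f are hypothetical placeholders, not the implementation used in the paper.

```python
import torch

def grad(v, x):
    # ∇v at the sample points via automatic differentiation; create_graph=True
    # keeps the graph so the result can itself be differentiated later
    return torch.autograd.grad(v, x, grad_outputs=torch.ones_like(v),
                               create_graph=True)[0]

def weak_residual(u_net, gamma_net, phi_net, f, x):
    # Monte Carlo estimate (up to the constant factor |Ω|) of
    # ∫_Ω (γ ∇u·∇ϕ − f ϕ) dx over interior samples x of shape (N, d);
    # x must be created with requires_grad=True
    u, gamma, phi = u_net(x), gamma_net(x), phi_net(x)
    integrand = gamma * (grad(u, x) * grad(phi, x)).sum(dim=1, keepdim=True) \
                - f(x) * phi
    return integrand.mean()
```

This pairing vanishes for all admissible ϕ exactly when (u, γ) is a weak solution of (2a); only first derivatives of u appear, which is the source of the lower regularity requirement noted below.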
Our approach is to train deep neural networks that represent the solution (u, γ) of a given IP, with substantial improvements over classical numerical methods, especially for problems with high dimensionality. More specifically, we leverage the weak formulation of the PDE (1a) and convert the IP into an operator-norm minimization problem in u and γ. We then parameterize the solution u, the unknown coefficient γ, and the test function ϕ as deep neural networks u_θ, γ_θ, and ϕ_η, respectively, with network parameters (θ, η), and form a minimax problem of a saddle function of the parameters (θ, η). Finally, we apply the stochastic gradient descent (SGD) method to alternately update the network parameters so that (u_θ, γ_θ) gradually approximates the solution of the IP. The parameterization of (u, γ) by deep neural networks requires no discretization of the spatial and temporal domains, and hence is completely mesh-free. This makes it a promising alternative to the classical finite difference method (FDM) and finite element methods, which suffer from the so-called curse of dimensionality, a term first used in [6]. Moreover, our approach couples the training of the weak solution (the primal network) (u, γ) with that of the test function (the adversarial network) ϕ through the weak formulation of the PDE, which requires less regularity of the solution (u, γ) and can be more advantageous in many real-world applications where the solution has singularities.
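The following is a minimal sketch of this alternating scheme for the EIT prototype (2), reusing weak_residual from the sketch above. The saddle objective here, the squared weak residual normalized by ‖ϕ‖², is a simplified stand-in for the operator-norm objective derived in section 3; the network architecture, the penalty weight lam, the sampling scheme, and the stand-in boundary "measurements" are all illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

def mlp(d_in, width=40, depth=4):
    # small fully connected network mapping coordinates to a scalar
    layers, k = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(k, width), nn.Tanh()]
        k = width
    return nn.Sequential(*layers, nn.Linear(k, 1))

def op_norm_sq(u_net, gamma_net, phi_net, f, x):
    # squared weak residual normalized by ||ϕ||²: a Monte Carlo surrogate for
    # the operator norm; without the normalization, the maximizer could grow
    # the residual simply by scaling ϕ
    r = weak_residual(u_net, gamma_net, phi_net, f, x)
    return r ** 2 / ((phi_net(x) ** 2).mean() + 1e-8)

d = 2                                        # spatial dimension (illustrative)
u_net, gamma_net, phi_net = mlp(d), mlp(d), mlp(d)
opt_min = torch.optim.Adam(
    list(u_net.parameters()) + list(gamma_net.parameters()), lr=1e-3)
opt_max = torch.optim.Adam(phi_net.parameters(), lr=1e-3)
f = lambda x: torch.ones(x.shape[0], 1)      # stand-in source term
lam = 100.0                                  # boundary penalty weight (illustrative)

def sample_boundary(n):
    # placeholder sampler for Ω = (0, 1)^d: random points on the faces, paired
    # with stand-in "measurements"; real (u_b, γ_b) come from the experiment
    x = torch.rand(n, d)
    side = torch.randint(0, 2 * d, (n,))
    for i in range(d):
        x[side == 2 * i, i] = 0.0
        x[side == 2 * i + 1, i] = 1.0
    return x, torch.zeros(n, 1), torch.ones(n, 1)

for it in range(5000):
    # descent in θ: fit the weak form of (2a) and the boundary data in (2b)
    x_in = torch.rand(1000, d, requires_grad=True)
    x_bd, u_b, g_b = sample_boundary(500)
    loss = op_norm_sq(u_net, gamma_net, phi_net, f, x_in) \
        + lam * (((u_net(x_bd) - u_b) ** 2).mean()
                 + ((gamma_net(x_bd) - g_b) ** 2).mean())
    opt_min.zero_grad()
    loss.backward()
    opt_min.step()

    # ascent in η: the test function tries to expose any remaining residual
    x_in = torch.rand(1000, d, requires_grad=True)
    opt_max.zero_grad()
    (-op_norm_sq(u_net, gamma_net, phi_net, f, x_in)).backward()
    opt_max.step()
```

Strictly, the test function should vanish on ∂Ω; a common remedy is to multiply phi_net by a fixed cutoff function that is zero on the boundary, omitted here for brevity.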
The remainder of this paper is organized as follows. We first review recent work on deep learning based solutions to forward and inverse problems in section 2. In section 3, we provide the detailed derivation of our method and a series of theoretical results supporting the validity of the proposed approach. We discuss several implementation techniques that improve practical performance and conduct a series of numerical experiments demonstrating the effectiveness of the proposed approach in section 4. Section 5 concludes this paper with some general remarks.

2. Related work

The past few years have witnessed an emerging trend of using deep learning based methods to solve forward and inverse problems. These methods can be roughly classified into two categories. The first category includes methods that approximate the solution of a given problem using supervised learning; they require a large number of input–output pairs, obtained through numerical simulation or experiments, to train the desired networks. In this category, deep neural networks are used to generate approximate intermediate results from measurement data for further refinement [20, 45, 48, 53, 54, 58], to improve the solutions of classical numerical methods in a post-processing phase [5, 26, 27, 29, 33, 38, 46, 56], or to approximate the mapping from the given parameters of an IP to its solution, which requires spatial discretization and cannot be applied to high-dimensional problems [2, 32, 43].

The second category features unsupervised learning methods that directly solve the forward or inverse problem based on the problem formulation rather than additional training data, which can be more advantageous in practice than the methods in the first category. For example, feed-forward neural networks are used to parameterize the coefficient functions and are trained by minimizing the performance function in [13]. In [39], a neural network architecture called SwitchNet is proposed to solve the inverse scattering problem through the mapping between the scatterers and the scattered field. In [18], a deep learning approach specific to 2D and 3D EIT problems is developed to represent the DtN map by a compact neural network architecture. The backward stochastic differential equation (BSDE) corresponding to the PDE of a forward problem is parameterized in part by neural networks, such that the solution of the PDE can be obtained by integrating the BSDE to a target point in the domain [9, 16, 28]. In [17], the solution of a forward problem is parameterized as a deep neural network, which is trained by minimizing a loss function composed of the energy functional associated with the PDE and a penalty term on the boundary condition. Another mesh-free framework, called physics-informed neural networks (PINN), for solving both forward and inverse problems using deep neural networks based on the strong formulation of PDEs is proposed in [51], where a constant coefficient function is considered for the IP part. Specifically, PINN parameterizes the unknowns of a given PDE using deep neural networks, which are trained by minimizing a loss function formed as the least-squares violation of the PDE at sampled points in the domain and of the boundary conditions. An empirical study of PINN is conducted in [14]. Solutions of IPs based on PINN with data given in the problem domain are considered in [34], and refinement of solutions using adaptively sampled collocation points is proposed in [4]. In [59], the weak formulation of the PDE is leveraged as the objective function, where the solution of the PDE and the test function are both parameterized as deep neural networks that try to minimize and maximize the objective function, respectively. In [36], a similar variational form is used, but the test functions are fixed basis functions instead of neural networks to be learned. In [47], three neural networks, one for low-fidelity data and the other two for the linear and nonlinear parts of high-fidelity data, are used following the PINN approach. A PINN with a multi-fidelity network structure is also proposed for stochastic PDEs, where polynomial chaos expansions are used to express the solutions, i.e., as linear combinations of random bases with coefficient functions to be learned [10].
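To make the contrast with the weak formulation concrete, the following is a minimal sketch of the strong-form least-squares loss that PINN-type methods minimize for equation (2a), reusing the grad helper from the sketch in section 1. It differentiates through u_net twice, which is precisely what the weak formulation avoids; in a full PINN setup, least-squares penalties on the boundary conditions (2b) would be added in the same way as in the training loop sketched earlier.

```python
def pinn_loss(u_net, gamma_net, f, x):
    # least-squares violation of the strong form −∇·(γ∇u) − f = 0 at the
    # sampled interior points x (created with requires_grad=True)
    flux = gamma_net(x) * grad(u_net(x), x)            # γ∇u, shape (N, d)
    div = sum(grad(flux[:, i:i + 1], x)[:, i:i + 1]    # ∇·(γ∇u), second-order in u
              for i in range(x.shape[1]))
    return ((-div - f(x)) ** 2).mean()
```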
