
Full Waveform Inversion of Crosswell Seismic Data

Using Automatic Differentiation

Wenyuan Liao, Department of Math. & Stat., University of Calgary

Danping Cao, School of Geosciences, China University of Petroleum

CREWES Tech Talk February 1, 2013

Outline

1. Introduction
2. Adjoint State Method
3. Automatic Differentiation (AD)
4. FWI using AD
5. Model Test
6. Conclusions

Introduction

Crosswell acquisition: fixed receivers, varying sources.

Inverse Problem: PDE-constrained optimization (I)

Define the objective function

    J(m) = (1/2) ∫₀^tf Σᵢ₌₁^Nr (d_obsⁱ − d_calⁱ(m))² dt + κ‖m‖

where

- m : model parameter (the wave velocity Vp)
- d_obsⁱ : observational data at receiver i (the data measurement, or the extra boundary condition)
- d_calⁱ(m) : synthetic seismogram based on m through the wave equation
- κ‖m‖ : regularity term, included for the well-posedness of the inverse problem (optional, depending on prior knowledge)

and u is the state solution of the elastic wave equation based on the parameter m.

The inverse problem is thus formulated as the following constrained optimization problem:

    min_{m ∈ H(Ω)} J(m)
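Once d_cal(m) is available from a forward solver, the discretized objective above is only a few lines of code. A minimal sketch (the `misfit` helper is hypothetical, not the authors' implementation; the time integral becomes a simple Riemann sum):

```python
import numpy as np

def misfit(d_obs, d_cal, dt, m, kappa=0.0):
    """Discretized objective:
    J(m) = 1/2 * sum_{t,i} (d_obs - d_cal)^2 * dt + kappa * ||m||.
    d_obs, d_cal: arrays of shape (n_time, n_receivers); m: model vector."""
    residual = d_obs - d_cal
    data_term = 0.5 * np.sum(residual**2) * dt
    return data_term + kappa * np.linalg.norm(m)

d = np.random.rand(100, 5)      # stand-in receiver data
m = np.ones(10)                 # stand-in model
print(misfit(d, d, dt=1e-4, m=m, kappa=0.1))  # zero residual: 0.1*||m|| = 0.1*sqrt(10)
```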

Inverse Problem: PDE-constrained optimization (II)

The derivative (gradient) of the objective function with respect to m is calculated as

    ∂J/∂m = ∫₀^tf Σᵢ₌₁^Nr (d_obsⁱ − d_calⁱ) · (∂d_calⁱ/∂u) · (∂u/∂m) dt + κ ∂‖m‖/∂m

with

    J(m) = J(u(m), m),    L(u(m), m) = 0.

Direct computation of ∂u/∂m is difficult and expensive! The adjoint-state method is an effective way to resolve this issue. Writing p for the model parameter, ξ = u_p is the adjoint variable, which can be obtained by solving the adjoint PDE

    dL/dp = L_p + L_u ξ + L_{u_t} ξ_t + L_{D¹u} D¹ξ + L_{D²u} D²ξ = 0

with properly chosen initial condition at t = T and boundary conditions. Here L is a differential operator such that the forward elastic wave equation is stated as

    L(u, u_t, D¹u, D²u, D¹p, p, t) = 0.

Adjoint State Method

Define the cost functional as

    J(m) = J(u(m), m),

which may depend on the model parameter implicitly if there is no regularity term. The governing PDE (the acoustic wave equation in this case) is stated as

    L(u(m), m) = 0.

Here L is an operator defining the initial-boundary value problem of the wave equation. The adjoint variable ξ = u_p can be obtained by solving the adjoint PDEs with properly chosen initial condition at t = T and boundary conditions.

Adjoint State Method: Perturbation Theory

Introduce a perturbation to the parameter: δm ⇒ δu, with

    L(u, m) = 0,    L(u + δu, m + δm) = 0.

The first-order change of the cost functional is

    δJ = (∂J(u,m)/∂u) δu + (∂J(u,m)/∂m) δm,

while linearizing the constraint gives (∂L(u,m)/∂u) δu = −(∂L(u,m)/∂m) δm. Defining the adjoint variable ξ by

    ξ = [(∂L(u,m)/∂u)*]⁻¹ ∂J(u,m)/∂u,

the δu term can be eliminated:

    δJ = (∂J(u,m)/∂m − ⟨ξ, ∂L(u,m)/∂m⟩) δm,

so the gradient follows from one adjoint solve, without ever forming ∂u/∂m.


Adjoint State Method: Lagrange Multipliers

Redefine a new cost functional as

    J̃(u, m, ξ) = J(u, m) − ⟨ξ, L(u, m)⟩.

Since L(u, m) = 0, J̃ coincides with J. Choose ξ so that ∂J̃/∂u = 0:

    ∂J̃/∂u = ∂J(u,m)/∂u − (∂L(u,m)/∂u)* ξ = 0   ⇒   ξ = [(∂L(u,m)/∂u)*]⁻¹ ∂J(u,m)/∂u.

Solving the resulting unconstrained optimization problem we obtain the gradient as

    ∂J̃/∂m = ∂J(u,m)/∂m − ⟨ξ, ∂L(u,m)/∂m⟩.

Two routes lead to a computable scheme: Adjoint → Discretization, or Discretization → Adjoint.
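The same two formulas apply verbatim to a discrete constraint, which makes them easy to sanity-check. In the sketch below (illustrative only), the wave equation is replaced by the hypothetical diagonal system A(m) u = b with A(m) = diag(m); the gradient from one adjoint solve matches central finite differences:

```python
import numpy as np

def forward(m, b):
    # state equation L(u, m) = A(m) u - b = 0 with A(m) = diag(m)
    return b / m

def objective(m, b, d):
    u = forward(m, b)
    return 0.5 * np.sum((u - d)**2)

def adjoint_gradient(m, b, d):
    # solve (dL/du)^* xi = dJ/du, then dJ/dm_k = -<xi, (dA/dm_k) u>
    u = forward(m, b)
    xi = (u - d) / m           # A is diagonal, so A^T = A
    return -xi * u

rng = np.random.default_rng(0)
m, b, d = 1.0 + rng.random(5), rng.random(5), rng.random(5)

g_adj = adjoint_gradient(m, b, d)
eps = 1e-6
g_fd = np.array([(objective(m + eps*np.eye(5)[k], b, d) -
                  objective(m - eps*np.eye(5)[k], b, d)) / (2*eps) for k in range(5)])
print(np.max(np.abs(g_adj - g_fd)))   # tiny: adjoint gradient matches finite differences
```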

Automatic differentiation

Automatic Differentiation (AD), sometimes alternatively called algorithmic differentiation, is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. It comes in two main flavours: forward mode AD and reverse mode AD.

AD tools (implementation language):

- AD Model Builder (C++)
- ADC (C/C++)
- ADF (Fortran77, Fortran95)
- ADIC (C/C++)
- ADIFOR (Fortran77)
- ADiMat (MATLAB)
- ADMAT / ADMIT (MATLAB)
- ADOL-C (C/C++)
- ADOL-F (Fortran95)
- APMonitor (Interpreted)
- AUTODIF (C/C++)
- AutoDiff .NET (.NET)
- AUTO_DERIV (Fortran77/95)
- ColPack (C/C++)
- COSY INFINITY (Fortran77/95, C/C++)
- CppAD (C/C++)
- CTaylor (C/C++)
- FAD (C/C++)
- FADBAD/TADIFF (C/C++)
- FFADLib (C/C++)
- GRESS (Fortran77)
- HSL_AD02 (Fortran95)
- INTLAB (MATLAB)
- NAGWare Fortran 95 (Fortran77, Fortran95)
- OpenAD (C/C++, Fortran77/95)
- PCOMP (Fortran77)
- pyadolc (Python)
- pycppad (Python)
- Rapsodia (C/C++, Fortran95)
- Sacado (C/C++)
- TAF (Fortran77, Fortran95)
- TAMC (Fortran77)
- TAPENADE (C/C++, Fortran77/95)
- TaylUR (Fortran95)
- The Taylor Center (independent)
- TOMLAB/MAD (MATLAB)
- TOMLAB/TomSym (MATLAB)
- Treeverse / Revolve (C/C++, Fortran77/95)
- YAO (C/C++)
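For intuition, forward-mode AD fits in a few lines using dual numbers: every value carries its derivative, and each operation propagates both via the chain rule. A toy sketch (not one of the tools listed above):

```python
import math

class Dual:
    """Forward-mode AD: propagate (value, derivative) through each operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    # the "computer program" whose derivative we want: f(x) = x^2 sin(x) + 3x
    return x * x * sin(x) + 3 * x

x = Dual(1.5, 1.0)   # seed dx/dx = 1
y = f(x)
print(y.dot)         # f'(1.5) = 2x sin(x) + x^2 cos(x) + 3, exact to machine precision
```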

Workflow of FWI

Starting from an initial model and field data d_obs, FWI iterates the following loop:

1. Forward modeling: compute synthetic data d_cal(m).
2. Misfit calculation:

       J(m) = (1/2) (d_obs − d_cal(m))ᵀ (d_obs − d_cal(m))

3. Gradient calculation.
4. Model updating with a Newton-type step:

       Δm = −[∂²J(m₀)/∂m²]⁻¹ ∂J(m₀)/∂m

5. Converged? If not, return to forward modeling; if yes, the current model is the FWI solution.
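For a quadratic misfit, the Newton-type update above reaches the minimizer in a single step, which makes for a quick sanity check. A toy 2×2 sketch (hypothetical Hessian, not an FWI-scale problem):

```python
import numpy as np

# Toy quadratic misfit J(m) = 1/2 (m - m_true)^T H (m - m_true)
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])             # Hessian d^2J/dm^2 (symmetric positive definite)
m_true = np.array([2.0, -1.0])

def grad(m):
    return H @ (m - m_true)            # dJ/dm

m0 = np.zeros(2)
dm = -np.linalg.solve(H, grad(m0))     # dm = -[d^2J/dm^2]^{-1} dJ/dm
m1 = m0 + dm
print(m1)                              # recovers m_true = [2, -1] in one step
```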

Forward modeling:
- 2nd-order in time and 4th-order in space
- Staggered-grid finite difference
- PML absorbing boundary

Model updating:
- L-BFGS selected
- Limited memory required
- Quasi-Newton method
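An L-BFGS update needs only J(m) and its gradient; it builds a limited-memory quasi-Newton approximation of the inverse Hessian internally. A sketch using `scipy.optimize.minimize` with a hypothetical elementwise-squaring `forward` operator in place of a wave-equation solver:

```python
import numpy as np
from scipy.optimize import minimize

d_obs = np.array([1.0, 2.0, 3.0])

def forward(m):
    return m**2                 # hypothetical stand-in for the forward modeling operator

def J(m):
    r = forward(m) - d_obs
    return 0.5 * r @ r

def gradJ(m):
    return 2.0 * m * (forward(m) - d_obs)   # chain rule through the toy forward model

res = minimize(J, np.ones(3), jac=gradJ, method="L-BFGS-B")
print(res.x)                    # approaches sqrt(d_obs)
```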

Gradient calculation:
- Adjoint state method
- AD tools used
- TAPENADE tested

FWI workflow with AD

In the AD-based workflow the hand-derived gradient code disappears: the AD tool solves for the gradient according to the forward modeling program.

1. Initial model
2. Forward modeling
3. Misfit calculation
4. Gradient by Automatic Differentiation
5. Model updating
6. Converged? If not, return to forward modeling.

Benefits of FWI with AD

- Simplifies the gradient calculation
- Lets us focus on forward modeling and the optimization method
- A high-efficiency forward modeling program leads to high-efficiency gradient calculation code
- The FWI workflow is simplified
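The efficiency point is exactly what reverse-mode AD delivers: the generated gradient code has the same structure (and cost, up to a constant factor) as the forward code. A toy tape-based reverse mode, vastly simpler than TAPENADE but built on the same idea:

```python
class Var:
    """Reverse-mode AD: the forward sweep records each operation and its
    local partial derivatives on a tape; the reverse sweep accumulates
    adjoints from the output back to the inputs."""
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, parents, 0.0
    def __add__(self, other):
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.val * other.val, [(self, other.val), (other, self.val)])

def backward(out):
    order, seen = [], set()
    def visit(v):                      # topological order of the tape
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            order.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(order):          # reverse sweep
        for p, w in v.parents:
            p.grad += w * v.grad

x, y = Var(2.0), Var(3.0)
z = x * y + x * x                      # z = xy + x^2 = 10
backward(z)
print(x.grad, y.grad)                  # dz/dx = y + 2x = 7, dz/dy = x = 2
```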

Forward modeling → AD gradient → Model update

Accuracy of Gradient Calculation

[Figure: gradient computed by TAPENADE vs. gradient by central difference quotient.]

Gradient calculation test setup:
- True model
- Synthetic record
- Initial model
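The comparison in the figure can be reproduced in miniature: an AD-generated derivative is exact to machine precision, so it agrees with a central difference quotient up to the quotient's O(h²) truncation error. A hypothetical one-dimensional check:

```python
import math

def f(x):
    return math.exp(-x) * math.sin(3.0 * x)

def df_exact(x):
    # the derivative an AD tool would produce, exact to machine precision
    return math.exp(-x) * (3.0 * math.cos(3.0 * x) - math.sin(3.0 * x))

def df_central(x, h=1e-5):
    # central difference quotient: O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.7
print(abs(df_exact(x) - df_central(x)))   # small: the two derivatives agree closely
```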


Model test 1

[Figure: true model for test 1, 101 × 101 grid.]
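Both model tests use a Ricker wavelet source with a 180 Hz main frequency and 0.1 ms time sampling; a minimal sketch of generating one (the `ricker` helper and its delay convention are illustrative, not the authors' code):

```python
import numpy as np

def ricker(f0, dt, nt):
    """Ricker wavelet with main (peak) frequency f0 [Hz], sampled at dt [s];
    delayed by 1/f0 so the wavelet starts near zero."""
    t = np.arange(nt) * dt - 1.0 / f0
    a = (np.pi * f0 * t)**2
    return (1.0 - 2.0 * a) * np.exp(-a)

w = ricker(f0=180.0, dt=1e-4, nt=400)   # 180 Hz source, 0.1 ms sampling
print(w.max())                          # peak amplitude close to 1
```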

Model: 101 × 101
Spatial sample: 1 m
Time sample: 0.1 ms
Source: Ricker wavelet, main frequency 180 Hz
Boundary: PML

Inversion result - 1 shot

[Figure: inversion results for 1 shot. Panels: true model, initial model, and models after 10, 30, 50, 100, 300, and 1000 iterations.]

Inversion result - different shots

[Figure: inversion results using different shot counts. Panels: initial model and results with 1, 3, 5, 7, and 11 shots.]

Model test 2

[Figure: true model for test 2, 101 × 101 grid.]

Model: 101 × 101
Spatial sample: 1 m
Time sample: 0.1 ms
Source: Ricker wavelet, main frequency 180 Hz
Boundary: PML

Inversion result - different shots

[Figure: inversion results for test 2 using different shot counts. Panels: true model, initial model, and results with 1, 3, 5, 7, 11, and 21 shots.]

Inversion result - 1 shot

[Figure: inversion results for test 2 with 1 shot. Panels: initial model and models after 10, 30, 50, 100, and 1000 iterations.]

Conclusion

• Automatic differentiation (AD) is a promising, though not yet widely adopted, approach in geoscience.

• The gradient calculated through AD is accurate.

• Using an AD tool simplifies the full waveform inversion workflow.

• Model tests show that full waveform inversion with AD is effective and efficient for the inversion of crosswell seismic data.

Future work

• Improve the forward modeling: finite difference 4th-order in time
• Test large-scale data inversion using checkpointing technology
• Test other AD tools and optimization algorithms
• Test surface seismic inversion
• Address inverse modeling issues under the current framework
• Test other types of wave equations

Acknowledgements

• Gary Margrave
• CREWES
• POTSI
• NSERC
• Department of Math. & Stat.
• Chinese Scholarship Council
• National Natural Science Foundation of China