Nonlinear Optics


Nonlinear Optics: A Very Brief Introduction to the Phenomena

Paul Tandy
ADVANCED OPTICS (PHYS 545), Spring 2010
Dr. Mendes

INTRO

So far we have considered only the case where the polarization density is proportional to the electric field, the field itself generally being supplied by a laser source:

$$P = \varepsilon_0 \chi E$$

where $P$ is the polarization density, $E$ is the electric field, $\chi$ is the susceptibility, and $\varepsilon_0$ is the permittivity of free space. We can imagine that the electric field induces a polarization in the material; as the field increases, the polarization increases in turn. But this process cannot continue indefinitely. As the field grows very large, it becomes comparable to the electric fields between the atoms of the material (medium). When this occurs, the material itself is slightly altered and the behavior is no longer linear but becomes weakly nonlinear. We can represent this with a Taylor-series model in the electric field:

$$P = \sum_{n=1}^{\infty} A_n E^n = A_1 E + A_2 E^2 + A_3 E^3 + \cdots$$

This model has become so popular that the coefficient scaling has been standardized, using the letters $d$ and $\chi^{(3)}$ as the parameters of the quadratic and cubic terms. Higher-order terms would play a role too if the fields were extremely high:

$$P = \varepsilon_0 \chi E + 2 d E^2 + 4 \chi^{(3)} E^3 + \cdots$$

To get an idea of how large a typical field must be before we see a nonlinear effect, let us scale the above equation by the first-order term:

$$\frac{P}{\varepsilon_0 \chi E} = 1 + \frac{2d}{\varepsilon_0 \chi}\,E + \frac{4 \chi^{(3)}}{\varepsilon_0 \chi}\,E^2 + \cdots$$

When we plot this function for two cases, ADP (non-centrosymmetric, quadratic nonlinearity dominant) and CS2 (centrosymmetric, cubic nonlinearity dominant), we see that in both cases the nonlinear contribution reaches about size 1, the same size as the first-order term.
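The scaling argument can be sketched numerically. The nonlinear coefficients below are illustrative placeholders, not measured values for ADP or CS2; the point is only that each correction term becomes comparable to the first-order term at a characteristic field strength.

```python
# Sketch of the scaled expansion
#   P/(eps0*chi*E) = 1 + (2d/(eps0*chi)) E + (4chi3/(eps0*chi)) E^2 + ...
# Coefficient values are illustrative placeholders, not material data.
eps0 = 8.854e-12   # F/m, permittivity of free space
chi = 1.0          # linear susceptibility (assumed)
d = 1e-24          # quadratic coefficient (assumed)
chi3 = 1e-36       # cubic coefficient (assumed)

def scaled_polarization(E):
    """Polarization divided by the linear term eps0*chi*E."""
    return 1.0 + (2 * d / (eps0 * chi)) * E + (4 * chi3 / (eps0 * chi)) * E**2

# Field at which the quadratic correction matches the linear term:
E_crit = eps0 * chi / (2 * d)
print(f"quadratic correction ~ 1 at E = {E_crit:.2e} V/m")
```

With real tabulated coefficients for a given crystal, the same function reproduces the plotted curves described above.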
This starts to occur when the electric field is on the order of the linear term, at $10^{12}$ V/m in the case of ADP (left) and $10^{12}$ V/m in the case of carbon disulfide (right).

[Figure: the respective scaled polarization curves for ADP (linear) and CS2 (quadratic).]

These nonlinearities are not seen until the electric field is very large. Until very strong light sources became available (usually finely focused laser beams), this effect had gone essentially unnoticed.

Derivation of a nonlinear wave equation for the electric field

We start our derivation, as usual, with the standard wave equation for an arbitrary homogeneous dielectric medium:

$$\nabla^2 E - \frac{1}{c_0^2} \frac{\partial^2 E}{\partial t^2} = \mu_0 \frac{\partial^2 P}{\partial t^2}$$

As noted earlier, the polarization density can be expressed as a Taylor series. We truncate it at third order and substitute it into the equation above:

$$\nabla^2 E - \frac{1}{c_0^2} \frac{\partial^2 E}{\partial t^2} = \mu_0 \frac{\partial^2}{\partial t^2}\left[ \varepsilon_0 \chi E + 2 d E^2 + 4 \chi^{(3)} E^3 \right] = \varepsilon_0 \mu_0 \chi \frac{\partial^2 E}{\partial t^2} + 2 d \mu_0 \frac{\partial^2 (E^2)}{\partial t^2} + 4 \chi^{(3)} \mu_0 \frac{\partial^2 (E^3)}{\partial t^2}$$

Absorbing the linear susceptibility term into the left-hand side defines the speed of light in the medium:

$$\nabla^2 E - \frac{1}{c_0^2} \frac{\partial^2 E}{\partial t^2} - \varepsilon_0 \mu_0 \chi \frac{\partial^2 E}{\partial t^2} = \nabla^2 E - \frac{1}{c^2} \frac{\partial^2 E}{\partial t^2}$$

Carrying out the time derivatives of $E^2$ and $E^3$ then gives

$$\nabla^2 E - \frac{1}{c^2} \frac{\partial^2 E}{\partial t^2} = 4 d \mu_0 \left[ E \frac{\partial^2 E}{\partial t^2} + \left( \frac{\partial E}{\partial t} \right)^2 \right] + 12 \chi^{(3)} \mu_0 \left[ E^2 \frac{\partial^2 E}{\partial t^2} + 2 E \left( \frac{\partial E}{\partial t} \right)^2 \right]$$

We get back our original equation plus four extra terms, and these are not "good" terms: we can no longer easily solve the wave equation. We immediately see that the new wave equation is nonlinear in the electric field and in its first time derivative. We are lucky in the sense that it is not nonlinear in the highest-order derivative. The extra terms are very small compared to the original ones, which means that we could set up a perturbation scheme and obtain an approximate analytic solution. A numerical solution would also be possible using a finite-difference or finite-element scheme. Our textbook mentions two other approaches, the Born approximation and coupled-wave theory. For the sake of this very brief introduction, suffice it to say that we can, with some effort, solve this approximately.
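The substitution step uses two chain-rule identities, $\partial_t^2(E^2) = 2[E\,\partial_t^2 E + (\partial_t E)^2]$ and $\partial_t^2(E^3) = 3[E^2\,\partial_t^2 E + 2E(\partial_t E)^2]$, which produce the four extra terms. A symbolic spot-check, assuming SymPy is available:

```python
import sympy as sp

t = sp.symbols('t')
E = sp.Function('E')(t)
Et = sp.diff(E, t)       # dE/dt
Ett = sp.diff(E, t, 2)   # d2E/dt2

# d2/dt2 (E^2) = 2*(E*E'' + E'^2)
lhs2 = sp.diff(E**2, t, 2)
rhs2 = 2 * (E * Ett + Et**2)

# d2/dt2 (E^3) = 3*(E^2*E'' + 2*E*E'^2)
lhs3 = sp.diff(E**3, t, 2)
rhs3 = 3 * (E**2 * Ett + 2 * E * Et**2)

print(sp.simplify(lhs2 - rhs2), sp.simplify(lhs3 - rhs3))  # both zero
```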
But instead of solving it, let's look at a few properties of this equation and note some applications that exploit them.

Second-order nonlinear optics

Second-harmonic generation (SHG)

For a material whose nonlinear terms above second order are negligible, our wave equation simplifies:

$$\nabla^2 E - \frac{1}{c^2} \frac{\partial^2 E}{\partial t^2} = 4 d \mu_0 \left[ E \frac{\partial^2 E}{\partial t^2} + \left( \frac{\partial E}{\partial t} \right)^2 \right]$$

If we view the equation as having a linear part on the left-hand side and a nonlinear part on the right-hand side, let's examine what occurs when the nonlinear part is exposed to a simple plane-wave electric field. The nonlinear term is

$$P_{NL} = 2 d E^2, \qquad E(t) = \mathrm{Re}\{ E(\omega) e^{j \omega t} \}$$

$$P_{NL} = 2d \cdot \tfrac{1}{4} \left[ E(\omega) e^{j\omega t} + E^*(\omega) e^{-j\omega t} \right]^2 = \frac{d}{2} \left[ E^2(\omega) e^{j2\omega t} + 2 E(\omega) E^*(\omega) + E^{*2}(\omega) e^{-j2\omega t} \right]$$

$$= d\, E(\omega) E^*(\omega) + d\, \mathrm{Re}\{ E^2(\omega) e^{j 2\omega t} \} = P_{NL}(0, 2\omega)$$

We immediately see that the nonlinear term induces a new harmonic at double the incoming frequency. It also creates a time-independent contribution. This time-independent term can be viewed as optical rectification, in the sense that it is a constant field where none was present before the wave passed through the nonlinear medium. This field is measurable but obviously small. The generation of the second harmonic is called SHG.

Electro-optic effect

One can easily see that if we send different optical signals through this medium, we can expect a wide variety of outputs. By the same token, we can send a plane wave mixed with a constant electric field.
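Both second-order effects can be previewed numerically. The sketch below (numpy, with arbitrary test values for $d$, the carrier frequency, and the bias field) squares the field and inspects its spectrum: with no bias, only the DC and $2\omega$ components of the SHG result appear, while a bias field $E(0)$ adds a component at $\omega$ with amplitude $4dE(0)$.

```python
import numpy as np

def pnl_spectrum(E0, d=1.0, f=5.0, n=1000):
    """One-sided amplitude spectrum of P_NL = 2*d*E^2 for E(t) = E0 + cos(2 pi f t)."""
    t = np.linspace(0, 1, n, endpoint=False)
    E = E0 + np.cos(2 * np.pi * f * t)
    spec = np.fft.rfft(2 * d * E**2) / n
    freqs = np.fft.rfftfreq(n, 1 / n)
    amps = np.abs(spec)
    amps[1:] *= 2          # fold negative frequencies into one-sided amplitudes
    return freqs, amps

# Plain SHG: no bias field, so only DC and the second harmonic survive.
freqs, amps = pnl_spectrum(E0=0.0)
shg_peaks = freqs[amps > 1e-9]
print("E0 = 0, components at (Hz):", shg_peaks)   # 0 and 2f = 10

# Electro-optic case: a bias field adds a component at f with amplitude 4*d*E0.
E0 = 3.0
freqs, amps = pnl_spectrum(E0=E0)
amp_f = amps[np.argmin(np.abs(freqs - 5.0))]
print("E0 = 3, amplitude at f:", amp_f, " predicted 4*d*E0:", 4 * 1.0 * E0)
```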
$$P_{NL} = 2 d E^2, \qquad E(t) = E(0) + \mathrm{Re}\{ E(\omega) e^{j\omega t} \}$$

$$P_{NL} = 2d \left[ E(0) + \mathrm{Re}\{ E(\omega) e^{j\omega t} \} \right]^2 = 2d \left[ E^2(0) + 2 E(0)\, \mathrm{Re}\{ E(\omega) e^{j\omega t} \} + \tfrac{1}{2} E(\omega) E^*(\omega) + \tfrac{1}{2} \mathrm{Re}\{ E^2(\omega) e^{j2\omega t} \} \right]$$

$$= d \left[ 2 E^2(0) + E(\omega) E^*(\omega) \right] + 4 d E(0)\, \mathrm{Re}\{ E(\omega) e^{j\omega t} \} + d\, \mathrm{Re}\{ E^2(\omega) e^{j2\omega t} \} = P_{NL}(0, \omega, 2\omega)$$

Not only do we get a second harmonic in this case, but we also get another interesting effect. If the constant field is significantly larger than the magnitude of the oscillating field, we can control the size of the term $4 d E(0)\, \mathrm{Re}\{ E(\omega) e^{j\omega t} \}$, since the other time-varying terms are small. The polarization then reduces to a plane wave at the input frequency riding on a constant offset, and as we change the constant field we control the amplitude of the time-varying part. This is called the electro-optic effect, or Pockels effect. It is analogous to a load-line calculation in amplifier theory.

Three-wave mixing

Continuing with a third and last application, one could imagine what would happen if we had a mixture of two plane waves of different frequencies.
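Before expanding the square analytically, a numerical preview with numpy (arbitrary test frequencies $f_1 = 5$, $f_2 = 8$): the spectrum of the squared two-tone field contains DC, both second harmonics, and the sum and difference frequencies.

```python
import numpy as np

# Spectrum of P_NL = 2*d*E^2 for a two-tone field
#   E(t) = cos(2 pi f1 t) + cos(2 pi f2 t).
# f1, f2 and d are arbitrary test values.
d = 1.0
f1, f2 = 5.0, 8.0
n = 2000
t = np.linspace(0, 1, n, endpoint=False)
E = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(2 * d * E**2)) / n
freqs = np.fft.rfftfreq(n, 1 / n)

peaks = sorted(float(f) for f in freqs[spectrum > 1e-9])
print("frequencies present (Hz):", peaks)   # 0, f2-f1, 2*f1, f1+f2, 2*f2
```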
$$P_{NL} = 2 d E^2, \qquad E(t) = \mathrm{Re}\{ E(\omega_1) e^{j\omega_1 t} \} + \mathrm{Re}\{ E(\omega_2) e^{j\omega_2 t} \}$$

$$P_{NL} = 2d \left[ \mathrm{Re}\{ E(\omega_1) e^{j\omega_1 t} \}^2 + \mathrm{Re}\{ E(\omega_2) e^{j\omega_2 t} \}^2 + 2\, \mathrm{Re}\{ E(\omega_1) e^{j\omega_1 t} \}\, \mathrm{Re}\{ E(\omega_2) e^{j\omega_2 t} \} \right]$$

The two squared terms give the DC and second-harmonic contributions found above, while the cross term generates the sum and difference frequencies:

$$P_{NL} = d \left[ E(\omega_1) E^*(\omega_1) + E(\omega_2) E^*(\omega_2) \right] + d\, \mathrm{Re}\{ E^2(\omega_1) e^{j 2\omega_1 t} \} + d\, \mathrm{Re}\{ E^2(\omega_2) e^{j 2\omega_2 t} \}$$
$$\qquad + 2 d\, \mathrm{Re}\{ E(\omega_1) E(\omega_2) e^{j(\omega_1 + \omega_2) t} \} + 2 d\, \mathrm{Re}\{ E(\omega_1) E^*(\omega_2) e^{j(\omega_1 - \omega_2) t} \} = P_{NL}(0, 2\omega_1, 2\omega_2, \omega_1 + \omega_2, \omega_1 - \omega_2)$$

Not only do we get frequency doubling of both signals, but also the sum and difference of the two frequencies. This is called three-wave mixing and is the foundation of the optical parametric amplifier: the input is mixed with a strong pump, and the pump and sum frequencies are then filtered out, leaving only an amplified signal at the input frequency. The third-order analysis for centrosymmetric materials is performed similarly, with the second-order term neglected.

References

• Saleh & Teich, Fundamentals of Photonics
• Jacobs, Sargent & Scully, High Energy Lasers and Their Applications
• Silfvast, Laser Fundamentals
• Young, Optics and Lasers