Math Tripos Part IA: Differential Equations


Michael Li

December 17, 2014

Schedule

Basic calculus

Informal treatment of differentiation as a limit, the chain rule, Leibniz's rule, Taylor series, informal treatment of $O$ and $o$ notation and l'Hôpital's rule; integration as an area, fundamental theorem of calculus, integration by substitution and parts. [3]

Informal treatment of partial derivatives, geometrical interpretation, statement (only) of symmetry of mixed partial derivatives, chain rule, implicit differentiation. Informal treatment of differentials, including exact differentials. Differentiation of an integral with respect to a parameter. [2]

First-order linear differential equations

Equations with constant coefficients: exponential growth, comparison with discrete equations, series solution; modelling examples including radioactive decay. Equations with non-constant coefficients: solution by integrating factor. [2]

Nonlinear first-order equations

Separable equations. Exact equations. Sketching solution trajectories. Equilibrium solutions, stability by perturbation; examples, including logistic equation and chemical kinetics. Discrete equations: equilibrium solutions, stability; examples including the logistic map. [4]

Higher-order linear differential equations

Complementary function and particular integral, linear independence, Wronskian (for second-order equations), Abel's theorem. Equations with constant coefficients and examples including radioactive sequences, comparison in simple cases with difference equations, reduction of order, resonance, transients, damping. Homogeneous equations. Response to step and impulse function inputs; introduction to the notions of the Heaviside step-function and the Dirac delta-function. Series solutions including statement only of the need for the logarithmic solution. [8]

Multivariate functions: applications

Directional derivatives and the gradient vector. Statement of Taylor series for functions on $\mathbb{R}^n$. Local extrema of real functions, classification using the Hessian matrix. Coupled first-order systems: equivalence to single higher-order equations; solution by matrix methods. Non-degenerate phase portraits local to equilibrium points; stability. Simple examples of first- and second-order partial differential equations, solution of the wave equation in the form $f(x + ct) + g(x - ct)$. [5]

Contents

1 Differentiation
  1.1 Differentiation
  1.2 Small o and big O notations
  1.3 Methods of Differentiation
  1.4 L'Hôpital's Rule
2 Integration
  2.1 Integration
  2.2 Methods of Integration
  2.3 Integration by Parts
3 Partial Differentiation
  3.1 Partial Differentiation
  3.2 Chain Rule
  3.3 Implicit Differentiation
  3.4 Differentiation of an Integral w.r.t. a Parameter in the Integrand
4 First-order Differential Equations
  4.1 The Exponential Function
  4.2 First-order Linear Homogeneous Ordinary Differential Equations
    4.2.1 Discrete Equation
  4.3 Forced (Inhomogeneous) Equations
  4.4 Non-constant Coefficients
  4.5 Nonlinear First-order Equations
    4.5.1 Separable Equations
    4.5.2 Exact Equations
  4.6 Solution Curves (Trajectories)
    4.6.1 Stability of Fixed Points
    4.6.2 Perturbation Analysis
    4.6.3 Autonomous Systems
  4.7 Logistic Equation
    4.7.1 Fighting for Limited Resources
  4.8 Discrete Equations (Difference Equations)
    4.8.1 Behaviour of the Logistic Map
5 Second-order Differential Equations
  5.1 Constant Coefficients
    5.1.1 Complementary Function
    5.1.2 Detuning
    5.1.3 Method for Second Complementary Function
    5.1.4 Phase Space
  5.2 Particular Integrals
    5.2.1 Guessing
    5.2.2 Detuning
    5.2.3 Variation of Parameters
  5.3 Homogeneous Equations, a.k.a. Linear Equidimensional Equations
    5.3.1 Solving
  5.4 Difference Equations
    5.4.1 Guesswork
  5.5 Transients and Damping
    5.5.1 Free (Natural) Response, f = 0
    5.5.2 Underdamped
    5.5.3 Critically Damped
    5.5.4 Overdamped
    5.5.5 Forcing
  5.6 Impulses and Point Forces
  5.7 Heaviside Step Function
    5.7.1 Solving Differential Equations with Discontinuous Functions
6 Series Solutions
  6.1 Behaviour near x = x0
7 Directional Derivatives
  7.0.1 The Gradient Vector of f(x, y)
  7.1 Stationary Points
  7.2 Taylor Series for Multivariable Functions
  7.3 Classification of Stationary Points
    7.3.1 Determination of Definiteness
    7.3.2 Contours of f(x, y)
8 Systems of Linear Differential Equations
  8.1 Phase-space Trajectories
  8.2 Nonlinear Dynamical Systems
    8.2.1 Equilibrium (Fixed) Points
    8.2.2 Stability
9 Partial Differential Equations (PDEs)
  9.1 First-order Wave Equation
  9.2 Second-order Wave Equation
10 The Diffusion Equation

1 Differentiation

1.1 Differentiation

Definition. The derivative of a function $f(x)$ with respect to an independent variable $x$ is the rate of change of $f(x)$ with $x$; symbolically,

$$\frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.$$

$f(x)$ is differentiable at a point $x = a$ iff the limit exists there. (For example, $|x|$ is not differentiable at $x = 0$.)

Note. $\frac{df}{dx}$, $f'(x)$ and $\frac{d}{dx} f(x)$ all mean the same thing. Higher derivatives follow the same rule, e.g. $\frac{d^2 f}{dx^2} = f''(x) = \frac{d}{dx}\left(\frac{d}{dx} f(x)\right)$. However, $f'$ always denotes the derivative with respect to the argument of $f$.
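To see the limit definition in action numerically, here is a minimal sketch (added for illustration, not part of the original notes; all names are my own): the difference quotient is evaluated for shrinking $h$, and for $f(x) = x^2$ at $x = 3$ it approaches $f'(3) = 6$.

```python
# A minimal numerical check of the limit definition of the derivative:
# the difference quotient (f(x+h) - f(x)) / h approaches f'(x) as h -> 0.

def difference_quotient(f, x, h):
    """Forward-difference approximation to f'(x)."""
    return (f(x + h) - f(x)) / h

f = lambda t: t**2   # f(x) = x^2, so f'(x) = 2x exactly
x = 3.0              # exact derivative at this point is 6

for h in (1.0, 0.1, 0.01, 0.001, 1e-6):
    print(f"h = {h:g}\tdifference quotient = {difference_quotient(f, x, h):.8f}")
# The printed values converge to 6.00000000 as h shrinks.
```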
1.2 Small o and big O notations

Definition.

(i) $f(x) = o(g(x))$ as $x \to x_0$ if $\lim_{x \to x_0} \frac{f(x)}{g(x)} = 0$; that is, $f(x)$ is much smaller than $g(x)$.

(ii) $f(x) = O(g(x))$ as $x \to x_0$ if $\frac{f(x)}{g(x)}$ is bounded as $x \to x_0$; that is, $f(x)$ is about the size of $g(x)$.

Note. Bounded does not mean that the limit exists. Also, $f(x) = o(g(x)) \Rightarrow f(x) = O(g(x))$.

Corollary. $f(x_0 + h) = f(x_0) + f'(x_0)h + o(h)$.

Proof. From the definition of the derivative and the small o notation,

$$f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} + \frac{o(h)}{h},$$

and the result follows on multiplying through by $h$ and rearranging.

1.3 Methods of Differentiation

Theorem (Chain rule). Given $f(x) = F(g(x))$, then

$$\frac{df}{dx} = \frac{dF}{dg} \frac{dg}{dx}.$$

Proof. Assuming that $\frac{dg}{dx}$ exists and is therefore finite, we have

$$\frac{df}{dx} = \lim_{h \to 0} \frac{F(g(x+h)) - F(g(x))}{h}$$
$$= \lim_{h \to 0} \frac{F(g(x)) + \big(hg'(x) + o(h)\big)F'(g(x)) + o\big(hg'(x) + o(h)\big) - F(g(x))}{h}$$
$$= g'(x)F'(g(x)) = \frac{dF}{dg}\frac{dg}{dx}.$$

Theorem (Leibniz's rule). Given $f = uv$, then

$$f^{(n)}(x) = \sum_{r=0}^{n} \binom{n}{r} u^{(r)} v^{(n-r)},$$

where $f^{(n)}$ is the $n$-th derivative of $f$.

1.4 L'Hôpital's Rule

Theorem. If $f(x)$ and $g(x)$ are differentiable at $x_0$, and $\lim_{x \to x_0} f(x) = \lim_{x \to x_0} g(x) = 0$, then

$$\lim_{x \to x_0} \frac{f(x)}{g(x)} = \lim_{x \to x_0} \frac{f'(x)}{g'(x)}.$$

Proof. From Taylor's theorem, $f(x) = f(x_0) + (x - x_0)f'(x_0) + o(x - x_0)$, and similarly for $g(x)$. Thus, using $f(x_0) = g(x_0) = 0$,

$$\lim_{x \to x_0} \frac{f(x)}{g(x)} = \lim_{x \to x_0} \frac{f(x_0) + (x - x_0)f'(x_0) + o(x - x_0)}{g(x_0) + (x - x_0)g'(x_0) + o(x - x_0)} = \lim_{x \to x_0} \frac{f'(x_0) + \frac{o(x - x_0)}{x - x_0}}{g'(x_0) + \frac{o(x - x_0)}{x - x_0}} = \lim_{x \to x_0} \frac{f'(x)}{g'(x)}.$$
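As a quick cross-check of l'Hôpital's rule (my addition, not from the notes), sympy can compare the limit of $f/g$ with the limit of $f'/g'$ for the classic case $f(x) = \sin x$, $g(x) = x$ at $x_0 = 0$; both come out to 1.

```python
# Verify an instance of l'Hopital's rule symbolically with sympy:
# lim_{x->0} sin(x)/x  should equal  lim_{x->0} cos(x)/1 = 1.
import sympy as sp

x = sp.symbols('x')
f, g = sp.sin(x), x          # f(0) = g(0) = 0, so the hypothesis holds

direct = sp.limit(f / g, x, 0)                            # limit of f/g
via_rule = sp.limit(sp.diff(f, x) / sp.diff(g, x), x, 0)  # limit of f'/g'

print(direct, via_rule)      # prints: 1 1
assert direct == via_rule == 1
```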
2 Integration

2.1 Integration

Definition. An integral is the limit of an (infinite) sum, the "area under the graph":

$$\int_a^b f(x)\,dx = \lim_{\Delta x \to 0} \sum_{n=0}^{N} f(x_n)\,\Delta x.$$

If $f$ is differentiable, the total area under the graph from $a$ to $b$ is

$$\lim_{N \to \infty} \left[ \sum_{n=0}^{N-1} f(x_n)\,\Delta x + N \cdot O(\Delta x^2) \right] = \lim_{N \to \infty} \sum_{n=0}^{N-1} f(x_n)\,\Delta x + O(\Delta x) = \int_a^b f(x)\,dx,$$

since each strip of the sum misses a "triangle" of area $O(\Delta x^2)$, and the $N = O(1/\Delta x)$ strips together miss only $O(\Delta x)$, which vanishes in the limit.

Theorem (Fundamental Theorem of Calculus). Let $F(x) = \int_a^x f(t)\,dt$. Then $F'(x) = f(x)$.

Proof.

$$\frac{d}{dx} F(x) = \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t)\,dt = \lim_{h \to 0} \frac{1}{h} \left[ f(x)h + O(h^2) \right] = f(x).$$

Similarly, we have $\frac{d}{dx} \int_x^b f(t)\,dt = -f(x)$ and $\frac{d}{dx} \int_a^{g(x)} f(t)\,dt = f(g(x))\,g'(x)$.

Note. We write $\int f(x)\,dx = \int^x f(t)\,dt$; the unspecified lower limit gives rise to the arbitrary constant $+C$.

2.2 Methods of Integration

Useful hints for trigonometric and hyperbolic substitutions:

  Useful identity                         Part of integrand    Substitution
  $\cos^2\theta + \sin^2\theta = 1$       $\sqrt{1 - x^2}$     $x = \sin\theta$
  $1 + \tan^2\theta = \sec^2\theta$       $1 + x^2$            $x = \tan\theta$
  $\cosh^2 u - \sinh^2 u = 1$             $\sqrt{x^2 - 1}$     $x = \cosh u$
  $\cosh^2 u - \sinh^2 u = 1$             $\sqrt{1 + x^2}$     $x = \sinh u$
  $1 - \tanh^2 u = \mathrm{sech}^2 u$     $1 - x^2$            $x = \tanh u$

2.3 Integration by Parts

Integrating the product rule $(uv)' = u'v + uv'$, we have

$$\int uv'\,dx = uv - \int vu'\,dx.$$

Example 1 (THE example for integration by parts). Consider $\int \log x\,dx$. Let $u = \log x$ and $v' = 1$. Then $u' = \frac{1}{x}$ and $v = x$, so

$$\int \log x\,dx = x \log x - \int dx = x \log x - x + C.$$

3 Partial Differentiation

3.1 Partial Differentiation

Definition (Partial derivative). Given a function of several variables $f(x, y)$, the partial derivative of $f$ with respect to $x$ is the rate of change of $f$ as $x$ varies, keeping $y$ constant.
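To make the definition concrete, here is a short symbolic sketch (added for illustration, not part of the original notes): sympy's diff treats every symbol other than the differentiation variable as a constant, which is exactly "varying $x$ while keeping $y$ fixed". It also lets us confirm, for this example, the symmetry of mixed partial derivatives stated in the schedule. The function $f$ is an arbitrary choice of mine.

```python
# Partial derivatives with sympy: differentiate w.r.t. one variable
# while the other is held constant, per the definition above.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(x * y)   # an arbitrary example function f(x, y)

df_dx = sp.diff(f, x)  # 2*x*y + y*cos(x*y): y treated as a constant
df_dy = sp.diff(f, y)  # x**2 + x*cos(x*y): x treated as a constant

print(df_dx)
print(df_dy)

# Symmetry of mixed partials for this f: f_xy == f_yx.
assert sp.simplify(sp.diff(f, x, y) - sp.diff(f, y, x)) == 0
```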