Fast-Phase Space Computation of Multiple Arrivals


S. Fomel and J. A. Sethian*

Department of Mathematics, University of California, Berkeley, CA 94720

Edited by Cathleen S. Morawetz, New York University, New York, NY, and approved January 28, 2002 (received for review September 7, 2001)

*To whom reprint requests should be addressed. E-mail: [email protected]. This paper was submitted directly (Track II) to the PNAS office.

www.pnas.org/cgi/doi/10.1073/pnas.102476599 · PNAS · May 28, 2002 · vol. 99 · no. 11 · 7329–7334

We present a fast, general computational technique for computing the phase-space solution of static Hamilton–Jacobi equations. Starting with the Liouville formulation of the characteristic equations, we derive "Escape Equations" which are static, time-independent Eulerian PDEs. They represent all arrivals to the given boundary from all possible starting configurations. The solution is numerically constructed through a "one-pass" formulation, building on ideas from semi-Lagrangian methods, Dijkstra-like methods for the Eikonal equation, and Ordered Upwind Methods. To compute all possible trajectories corresponding to all possible boundary conditions, the technique is of computational order O(N log N), where N is the total number of points in the computational phase-space domain; any particular set of boundary conditions is then extracted through rapid post-processing. Suggestions are made for speeding up the algorithm in the case when the particular distribution of sources is provided in advance. As an application, we apply the technique to the problem of computing first, multiple, and most energetic arrivals to the Eikonal equation.

We present a fast, general computational technique for computing phase-space solutions of static Hamilton–Jacobi equations. We derive a set of "Escape Equations" that are static, time-independent Eulerian partial differential equations which represent all arrivals to the given boundary from all possible starting configurations. Following the strategy proposed in (1), we solve these Escape Equations by systematically constructing (space marching) the solution in increasing order, using a "one-pass" formulation. This means that the solution at each point in the computational mesh is computed only k times, where k does not depend on the number of points in the mesh.

The algorithm combines ideas of semi-Lagrangian methods, Dijkstra-like methods for the Eikonal equation, and Ordered Upwind Methods. The method is unconditionally stable, with no time-step restriction, and can be made higher-order accurate. We demonstrate the applicability of this technique by computing multiple arrivals to the Eikonal equation in a variety of settings.

The methods presented here are efficient. The Escape Equations are posed as time-independent Eulerian equations in phase space, whose solution gives the exit time and location for all possible trajectories, starting from all interior points, initialized in all directions. The computational speed depends on whether one wants to obtain results for all possible boundary conditions or only for a particular subset of possibilities. To illustrate, consider a two-dimensional problem consisting of a region and its boundary; we discretize the region with a square mesh with N points on each side. Thus, the physical space corresponding to the interior consists of N² points, with N points on the boundary (we ignore constants). In the most general form of boundary conditions, such as those which occur in applications such as tomography and seismic migration, one needs to solve multiple boundary problems with H(x, ∇u) = 0 and the point-source boundary condition u(x) = 0 for x = s, with a set of sources s distributed on the surface of the observational domain. In this case, the solutions span three-dimensional space, composed of x and s. Because of our use of a fast-ordering scheme, we can find all possible exit times and locations for all possible trajectories in O(N³ log N). One can use the output of such a computation either for extracting multiple arrivals for a particular set of sources or directly, as in the method of angle-gather migration (2, S. Brandsberg-Dahl, M.V. de Hoop & B. Ursin, unpublished work). In contrast with our approach, we note that to obtain the exit time and position from each of the N² interior grid points using a Lagrangian approach would require integration from each of N³ starting values; a typical integration would require N steps, giving an operation count of N⁴. Analogously, for a problem in three-dimensional physical space with N points on each edge of a computational cube in physical space, we find all possible exit trajectories for all possible boundary conditions in O(N⁵ log N). In the case where particular boundary conditions are known in advance, computational speedup is possible; this is discussed after the algorithm is introduced.

Formulation of Problem

Consider the static Hamilton–Jacobi equation H(x, ∇u) = 0. A nonlinear Hamiltonian H may not yield a unique solution, even with smooth boundary data and smooth H. A particular, viscosity-type solution can be selected (3, 4), corresponding to the earliest arrival from the given boundary. Fast algorithms for computing these viscosity-satisfying first-arrival solutions have been developed in recent years. Tsitsiklis developed the first such method (5) for solving the Eikonal equation in a Dijkstra-like setting based on an optimal control perspective; Fast Marching Methods (6, 7) take a finite difference perspective to obtain higher order schemes and schemes on unstructured meshes (see also ref. 8 for a comparison of a similar algorithm with a volume-of-fluid approach); and Sethian and Vladimirsky (ref. 9, and J. Sethian & A. Vladimirsky, unpublished work) developed so-called "Ordered Upwind Methods" to compute solutions of general convex static Hamilton–Jacobi equations which arise in anisotropic front propagation and optimal control. These first arrivals are of considerable importance in a large collection of problems, such as computing seismic travel times (10); see refs. 7 and 11 for reviews.

However, later arrivals may carry additional valuable information, and it is often desirable to compute all possible solutions. For example, in geophysical simulations, first arrivals may not correspond to the most energetic arrivals, and this can cause problems in seismic imaging (12, 13).

There are two approaches to multiple arrivals.

• The first is the Lagrangian (ray tracing) approach (14, 15) and its variations (16, 17). Here, the phase space characteristic equations are integrated, often from a source point, resulting in a Lagrangian structure which fans out over the domain. This is a valuable and common approach; however, it can face difficulties either in low-ray-density zones where there are very few rays or near caustics where rays cross. In addition, the use of an irregular computational grid is often inconvenient.

• A different approach is to work with an Eulerian description of the problem, in either the physical domain or phase space, and attempt to extract multiple arrivals. In recent years, this has led to many fascinating and clever Eulerian partial differential equation based approaches to computing multiple arrivals, including slowness-matching algorithms (18), dynamic surface extension algorithms (19) and its modification (20), segment projection methods, and "big-ray tracing" (21) (see also ref. 22). We note that the regularity of the phase space has been utilized previously in theoretical studies of asymptotic wave propagation (23, 24).

As an example, consider a one-dimensional closed curve bounding a region in the plane, and suppose one has a collection of sources located along the entire boundary; the goal is to consider a front propagating inwards from this boundary. The Lagrangian approach is to work in phase space and discretize this boundary into a set of marker points, whose motion is determined by solving the characteristic equations. The curve then evolves in three-dimensional phase space, and the projection of the curve back into two-dimensional physical space produces the multiple arrivals. Where projected branches of this curve collide, the solution in the physical space becomes multi-valued, and interpolating it onto a regular x grid presents a difficult computational problem (26). An Eulerian formulation of this same approach was pursued by Engquist, Runborg, and Tornberg (35) by using the Vlasov equation to describe the motion of this curve. In their Segment Projection Method, the curve moving in three-dimensional phase space is viewed from several different coordinate systems, so that it always remains locally a graph. A version of this approach using two-level set functions was performed by Osher, Cheng, Kang, Shim, and Tsai (unpublished work). The approach presented in this paper computes the solution […]

Liouville Formulation of Phase-Space Solution

We now convert the phase-space approach into a set of Liouville equations; these have been used extensively in different applications by Chorin, Hald, and Kupferman (27, 28). Eqs. 3 and 4 form a system of coupled ordinary differential equations, starting with a particular set of initial conditions. The Liouville equation is a partial differential equation for the same solution, with the differentiation performed with respect to the initial conditions; it describes the local change in the solution in response to changes in the initial conditions.

To simplify notation, let us denote the phase-space vector (x, p) by y, the right-hand side of the system given in Eq. 3 by the vector function R(y), and the right-hand side of Eq. 4 by the function r(y). In this notation, the Hamilton–Jacobi system takes the form

    ∂y(y₀, σ)/∂σ = R(y);    ∂u(y₀, σ)/∂σ = r(y),    [5]

and is initialized at σ = 0 as y = y₀ and u = 0.†

In the Appendix, we show that the solution of system Eq.
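The excerpt ends just as the Appendix is invoked, but the step from the characteristic system of Eq. 5 to static Escape Equations can be sketched. The following is a hedged reconstruction of the idea, not a quotation of the paper's Appendix: any escape quantity E(y₀) that depends only on where the trajectory through y₀ exits satisfies E(y₀) = E(y(y₀, σ)) for all σ, so differentiating with respect to σ and using Eq. 5 yields a time-independent linear advection equation in phase space:

```latex
% Constancy of an escape quantity E along the characteristic flow of Eq. 5:
%   0 = \frac{d}{d\sigma} E\big(y(y_0,\sigma)\big) = R(y)\cdot\nabla_y E(y),
% so each escape quantity solves the static PDE
R(y)\cdot\nabla_y E(y) = 0,
% with E prescribed where trajectories leave the phase-space domain.
% The escape time T instead accumulates the source term r of Eq. 5,
% since T(y(y_0,\sigma)) = T(y_0) - u(\sigma) along the flow:
R(y)\cdot\nabla_y T(y) = -\,r(y).
```

These are exactly the kind of static, time-independent Eulerian PDEs described in the abstract, which the one-pass ordering then solves in increasing order of T.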
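For concreteness, the characteristic system abbreviated by Eqs. 3 and 4 can also be integrated directly, which is precisely the Lagrangian computation the paper contrasts with its Eulerian one-pass scheme. Below is a minimal sketch for the Eikonal case, assuming the Hamiltonian H = (c(x)²|p|² − 1)/2; this choice, the forward Euler integrator, and the step size are our illustrative assumptions, since the paper's Eqs. 3 and 4 are not reproduced in this excerpt. A single ray is traced from an interior phase-space point until it escapes a rectangular domain, returning the exit position, exit direction, and travel time, i.e. the pointwise values that the Escape Equations deliver for all (x, p) at once.

```python
import math

def trace_escape(c, grad_c, x0, p0, box, dt=1e-3, max_steps=1_000_000):
    """Integrate the characteristics of H = (c(x)^2 |p|^2 - 1)/2:

        dx/ds =  c^2 p,   dp/ds = -|p|^2 c grad_c(x),   du/ds = 1,

    (du/ds = c^2 |p|^2 = 1 on the level set H = 0, so s is travel time),
    starting from the phase-space point (x0, p0) with c(x0)*|p0| = 1,
    until x leaves the rectangular box; return (exit_x, exit_p, u).
    """
    (xmin, xmax), (ymin, ymax) = box
    x, p, u = list(x0), list(p0), 0.0
    for _ in range(max_steps):
        if not (xmin <= x[0] <= xmax and ymin <= x[1] <= ymax):
            return x, p, u                      # escape position, direction, time
        cx, g = c(x), grad_c(x)
        p2 = p[0] * p[0] + p[1] * p[1]
        # forward Euler step; a small dt keeps the sketch simple
        x = [x[0] + dt * cx * cx * p[0], x[1] + dt * cx * cx * p[1]]
        p = [p[0] - dt * p2 * cx * g[0], p[1] - dt * p2 * cx * g[1]]
        u += dt
    raise RuntimeError("ray did not escape the box")
```

In a constant medium (c ≡ 1) a ray started at the center of the unit square with p = (1, 0) travels straight right and escapes at x ≈ (1, 0.5) after time ≈ 0.5, which illustrates the cost argument above: one ODE solve per initial phase-space point, versus one ordered sweep for the whole phase space.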
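The Dijkstra-like one-pass ordering that the paper lifts into phase space is easiest to see in its physical-space ancestor, the Fast Marching Method for first arrivals of the Eikonal equation |∇u| = 1/c(x). The sketch below is a generic first-order textbook scheme on a 2D grid, not the authors' escape-equation algorithm: grid points are accepted in increasing travel time from a heap, so each point is finalized a bounded number of times, giving the O(N log N) operation count quoted in the abstract.

```python
import heapq
import math

def fast_marching(speed, sources, h=1.0):
    """First-arrival times u with |grad u| = 1/speed on a 2D grid of spacing h.

    Dijkstra-like ordering: accept points in increasing travel time,
    updating neighbors with a first-order upwind quadratic. `speed` is a
    2D list of wave speeds, `sources` a list of (i, j) zero-time points.
    """
    ni, nj = len(speed), len(speed[0])
    INF = float("inf")
    u = [[INF] * nj for _ in range(ni)]
    accepted = [[False] * nj for _ in range(ni)]
    heap = []
    for i, j in sources:
        u[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # Smallest neighbor value along each grid axis
        ux = min(u[i - 1][j] if i > 0 else INF, u[i + 1][j] if i < ni - 1 else INF)
        uy = min(u[i][j - 1] if j > 0 else INF, u[i][j + 1] if j < nj - 1 else INF)
        a, b = sorted((ux, uy))
        f = h / speed[i][j]
        if b - a >= f:                 # one-sided update
            return a + f
        # two-sided: solve (t - a)^2 + (t - b)^2 = f^2
        return 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)
        if accepted[i][j]:
            continue
        accepted[i][j] = True          # t is final: no later arrival can improve it
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m, n = i + di, j + dj
            if 0 <= m < ni and 0 <= n < nj and not accepted[m][n]:
                t_new = update(m, n)
                if t_new < u[m][n]:
                    u[m][n] = t_new
                    heapq.heappush(heap, (t_new, m, n))
    return u
```

By construction this keeps only the viscosity-type first arrival at each point; the escape-equation formulation recovers the later arrivals by applying the same ordered, one-pass idea in the full (x, p) phase space.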