Chapter 8 Stability Theory


We discuss properties of solutions of a first order two dimensional system, and stability theory for a special class of linear systems. We denote the independent variable by t in place of x, and x, y denote the dependent variables. Let I ⊆ R be an interval, and Ω ⊆ R^2 a domain. Consider the system

    dx/dt = F(t, x, y),
    dy/dt = G(t, x, y),                                          (8.1)

where the functions F and G are defined on I × Ω and are locally Lipschitz w.r.t. the variable (x, y) ∈ Ω.

Definition 8.1 (Autonomous system) A system of ODE having the form (8.1) is called an autonomous system if the functions F(t, x, y) and G(t, x, y) are constant w.r.t. the variable t. That is,

    dx/dt = F(x, y),
    dy/dt = G(x, y).                                             (8.2)

Definition 8.2 A point (x0, y0) ∈ Ω is said to be a critical point of the autonomous system (8.2) if

    F(x0, y0) = G(x0, y0) = 0.                                   (8.3)

A critical point is also called an equilibrium point or a rest point.

Definition 8.3 Let (x(t), y(t)) be a solution of the two-dimensional (planar) autonomous system (8.2). The trace of (x(t), y(t)) as t varies is a curve in the plane. This curve is called a trajectory.

Remark 8.4 (On solutions of autonomous systems)

(i) Two different solutions may represent the same trajectory. Indeed:

(1) If (x1(t), y1(t)), defined on an interval J, is a solution of the autonomous system (8.2), then for every arbitrary but fixed s ∈ R the pair of functions

    (x2(t), y2(t)) := (x1(t − s), y1(t − s)),  t ∈ s + J,        (8.4)

is a solution on the interval s + J.

(2) However, the traces of the two solutions on their respective intervals coincide. If the independent variable t is interpreted as time, the two solutions visit every point on the trajectory with a time lag s. See also Example 8.5.

(ii) Trajectories do not cross. Indeed:

(1) Suppose two trajectories γ1 and γ2 cross at a point (x0, y0) ∈ Ω.
Let (x1(t), y1(t)), defined on an interval I1, be a solution whose trace is γ1, and (x2(t), y2(t)), defined on an interval I2, a solution whose trace is γ2. By the assumption of crossing, there exist t1 ∈ I1 and t2 ∈ I2 such that

    (x1(t1), y1(t1)) = (x0, y0) = (x2(t2), y2(t2)).              (8.5)

Define a pair of functions (x3(t), y3(t)) on the interval t1 − t2 + I2 by

    (x3(t), y3(t)) := (x2(t − t1 + t2), y2(t − t1 + t2)).        (8.6)

It is easily checked, via the chain rule, that (x3(t), y3(t)) is a solution of (8.2) on the interval t1 − t2 + I2. Note that

    (x3(t1), y3(t1)) = (x2(t1 − t1 + t2), y2(t1 − t1 + t2)) = (x2(t2), y2(t2)) = (x0, y0).   (8.7)

Thus (x3(t), y3(t)) and (x1(t), y1(t)) are solutions of the same initial value problem, contradicting the uniqueness of solutions to IVPs. Therefore two trajectories do not cross each other.

(iii) The trajectories fill the domain Ω, since through every point a trajectory passes. This is a consequence of the existence theorem.

(iv) Through every point in the phase space Ω, exactly one trajectory passes. This is a consequence of uniqueness of solutions to IVPs.

(v) From the last two remarks it follows that the trajectories partition the phase space Ω. In fact, defining a relation on Ω by saying that two points (x1, y1), (x2, y2) ∈ Ω are related if they lie on the same trajectory, it is easy to verify that this relation is an equivalence relation, thereby giving rise to a partition of Ω into equivalence classes. Each equivalence class is a trajectory.

(vi) Note that trajectories consisting of a single point correspond to critical points.

(vii) Types of trajectories: For autonomous systems with a two dimensional phase space, three types of trajectories are possible: a trajectory consisting of a single point (corresponding to an equilibrium solution); and, if a trajectory has more than one point, either a closed curve (corresponding to a periodic solution) or a curve without self-intersection.
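Remarks (i) and (ii) can be illustrated numerically. The sketch below is our own illustrative choice: it integrates the autonomous pendulum-type system dx/dt = y, dy/dt = −sin x with a hand-rolled classical Runge–Kutta step (the helper names `f` and `rk4` are hypothetical, not from the notes), restarts the integration from a state reached at a later time, and observes that the restarted solution retraces the same trajectory with a time lag, exactly as in (8.4).

```python
import math

def f(x, y):
    # Illustrative autonomous system: dx/dt = y, dy/dt = -sin x.
    return y, -math.sin(x)

def rk4(x, y, h, n):
    """Integrate the autonomous system with n classical Runge-Kutta steps."""
    pts = [(x, y)]
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2*k1[0], y + h/2*k1[1])
        k3 = f(x + h/2*k2[0], y + h/2*k2[1])
        k4 = f(x + h*k3[0], y + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        pts.append((x, y))
    return pts

h, n = 0.01, 400
run1 = rk4(1.0, 0.0, h, n)       # solution through (1, 0)
k = 100                          # restart from the state reached after k steps
x_s, y_s = run1[k]
run2 = rk4(x_s, y_s, h, n - k)   # time-shifted solution, as in (8.4)

# The shifted solution visits the same points of the trajectory, with a lag:
err = max(abs(a[0]-b[0]) + abs(a[1]-b[1]) for a, b in zip(run1[k:], run2))
print(err)  # 0.0: both runs perform identical floating-point steps
```

Because the system is autonomous, the restarted run repeats the arithmetic of the original run step for step, so the two traces coincide exactly, not merely approximately.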
(viii) For linear autonomous systems, the special class of systems (8.2) in which F and G are linear in x, y, saturated solutions are global, i.e., they are defined on the entire real line R. Hence for linear autonomous systems we do not mention the interval on which a given solution is defined.

The above remark is illustrated by the following example.

Example 8.5 Consider the system

    dx/dt = y,   dy/dt = −4x.                                    (8.8)

Note that (x1(t), y1(t)) := (cos 2t, −2 sin 2t) is a solution of (8.8). The trajectory passing through the point (1, 0) ∈ R^2 is the ellipse x^2 + y^2/4 = 1, travelled clockwise. Consider the solution (x2(t), y2(t)) := (cos 2(t − π/2), −2 sin 2(t − π/2)). This solution satisfies (x2(π/2), y2(π/2)) = (1, 0), and hence has the same trajectory as (x1(t), y1(t)). Draw figure.

MA 417: Ordinary Differential Equations — Sivaji Ganesh Sista

8.1 Solving linear planar systems with constant coefficients

Consider the system of ODE

    (x′, y′)^T = A (x, y)^T,   where A = [a b; c d].             (8.9)

8.1.1 Fundamental matrix

Definition 8.6 (Fundamental matrix) A matrix valued function Φ whose columns are solutions of the system of ODE (8.9) is called a solution matrix. A solution matrix Φ is called a fundamental matrix if the columns of Φ form a fundamental pair of solutions for the system (8.9). A fundamental matrix Φ is called the standard fundamental matrix if Φ(0) is the identity matrix.

Remark 8.7 Since the columns of a solution matrix Φ are solutions of (8.9), the matrix valued function Φ satisfies the system of ODE

    Φ′ = AΦ.                                                     (8.10)

Exercise 8.8 Prove that a solution matrix is a fundamental matrix if and only if its determinant is not zero.

Exercise 8.9 Prove that if Ψ is a fundamental matrix then ΨC is also a fundamental matrix for every constant invertible matrix C. Prove that all fundamental matrices occur this way.
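A quick numerical sketch ties Example 8.5 to Definitions 8.6 and Exercise 8.9. The second column below, (sin 2t, 2 cos 2t), is an additional explicit solution of (8.8) chosen by us for illustration; together with (cos 2t, −2 sin 2t) it yields a solution matrix whose determinant is the constant 2, hence a fundamental matrix, and multiplying on the right by an invertible constant C gives another one.

```python
import numpy as np

# Matrix form of system (8.8): x' = y, y' = -4x.
A = np.array([[0.0, 1.0],
              [-4.0, 0.0]])

def Phi(t):
    # Columns: two explicit solutions of (8.8).
    return np.array([[np.cos(2*t),    np.sin(2*t)],
                     [-2*np.sin(2*t), 2*np.cos(2*t)]])

def dPhi(t):
    # Analytic derivative of Phi.
    return np.array([[-2*np.sin(2*t), 2*np.cos(2*t)],
                     [-4*np.cos(2*t), -4*np.sin(2*t)]])

t = 0.3
print(np.allclose(dPhi(t), A @ Phi(t)))   # Phi' = A Phi, i.e. (8.10): True
print(np.linalg.det(Phi(t)))              # constant determinant 2, never zero

# Exercise 8.9: Phi C is again a fundamental matrix for constant invertible C.
C = np.array([[1.0, 1.0],
              [0.0, 3.0]])
print(np.allclose(dPhi(t) @ C, A @ (Phi(t) @ C)))  # columns of Phi C solve (8.9)
```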
Computation of a fundamental matrix

By definition of a fundamental matrix, finding a fundamental pair of solutions to the system of ODE (8.9) is equivalent to finding a fundamental matrix. In view of Exercise 8.9, a fundamental matrix is not unique, but the standard fundamental matrix is unique.

(1) Observe that e^{λt} (a, b)^T is a non-trivial solution of (8.9) if and only if

    (a, b)^T ≠ (0, 0)^T   and   A (a, b)^T = λ (a, b)^T.         (8.11)

That is, λ is an eigenvalue of A and (a, b)^T is an eigenvector corresponding to λ.

(2) Question: Is it possible to find a fundamental pair, both members of which are of the form e^{λt} (a, b)^T?

Answer: Supposing that φ1(t) = e^{λ1 t} (a, b)^T and φ2(t) = e^{λ2 t} (c, d)^T are two solutions of (8.9), then φ1, φ2 form a fundamental pair if and only if

    det [a c; b d] ≠ 0,                                          (8.12)

since the above determinant is the Wronskian of φ1, φ2 at t = 0. That is, the matrix A should have two linearly independent eigenvectors. Note that this is equivalent to saying that A is diagonalisable.

(3) Question: What if the matrix A does not have two linearly independent eigenvectors? This can happen when A has only one eigenvalue, of multiplicity two. Inspired by a similar situation in the context of constant coefficient second order linear ODE, we are tempted to try φ1(t) = e^{λ1 t} (a, b)^T and φ2(t) = t e^{λ1 t} (a, b)^T as a fundamental pair. But φ1, φ2 do not form a fundamental pair, since their Wronskian at t = 0 is zero; moreover, φ2 is not even a solution of the linear system (8.9). Nevertheless, we can find a solution having the form of φ1, and therefore we try a variant of the above suggestion to find another solution that together with φ1 constitutes a fundamental pair. Let

    φ2(t) = t e^{λ1 t} (a, b)^T + e^{λ1 t} (c, d)^T.             (8.13)

Then φ2(t) solves the system (8.9) if and only if

    (A − λ1 I) (c, d)^T = (a, b)^T.                              (8.14)

One can easily verify that (a, b)^T and (c, d)^T are linearly independent.
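For the diagonalisable case of step (2), the recipe "columns v_i e^{λ_i t} built from the eigenpairs" can be sketched directly with numpy. The matrix A below is an arbitrary diagonalisable example of our own choosing (eigenvalues 3 and −2), not one from the notes.

```python
import numpy as np

# Illustrative diagonalisable matrix: eigenvalues 3 and -2.
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

lam, V = np.linalg.eig(A)        # columns of V are eigenvectors of A

def Phi(t):
    """Fundamental matrix: column i is v_i * exp(lam_i * t)."""
    return V * np.exp(lam * t)   # broadcasting scales column i by e^{lam_i t}

# Each column solves (8.9): d/dt (v e^{lam t}) = lam v e^{lam t} = A v e^{lam t}.
t = 0.7
dPhi = V * (lam * np.exp(lam * t))           # analytic derivative of Phi at t
print(np.allclose(dPhi, A @ Phi(t)))         # True: Phi' = A Phi
print(abs(np.linalg.det(Phi(0.0))) > 1e-12)  # True: Wronskian at 0 is nonzero
```

With two independent eigenvectors the determinant of Φ(0) = V is nonzero, so the columns form a fundamental pair, matching criterion (8.12).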
Thus φ1, φ2 defined by

    φ1(t) = e^{λ1 t} (a, b)^T,   φ2(t) = t e^{λ1 t} (a, b)^T + e^{λ1 t} (c, d)^T,   (8.15)

where (a, b)^T and (c, d)^T are related by the equation (8.14), is a fundamental pair.

(4) In case the matrix A does not have real eigenvalues, the eigenvalues are complex conjugates of each other. In this case,

    (λ, v) is an eigenpair of A if and only if its conjugate (λ̄, v̄) is also an eigenpair of A.   (8.16)

Denoting λ = r + iq (note q ≠ 0) and v = (α + iβ, γ + iδ)^T, define

    φ1(t) = e^{rt} (α cos qt − β sin qt, γ cos qt − δ sin qt)^T,
    φ2(t) = e^{rt} (α sin qt + β cos qt, γ sin qt + δ cos qt)^T.
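The real pair φ1, φ2 are the real and imaginary parts of the complex solution e^{λt} v. A minimal sketch of case (4), reusing the matrix of system (8.8) (eigenvalues ±2i, so r = 0 and q = 2) as our test case:

```python
import numpy as np

# Matrix of system (8.8): x' = y, y' = -4x; eigenvalues are +-2i.
A = np.array([[0.0, 1.0],
              [-4.0, 0.0]])

lam, V = np.linalg.eig(A)
i = int(np.argmax(lam.imag))     # pick the eigenvalue lam = r + iq with q > 0
r, q = lam[i].real, lam[i].imag
v = V[:, i]
al, be = v.real, v.imag          # (alpha, gamma)^T and (beta, delta)^T

def phi1(t):                     # Re(e^{lam t} v)
    return np.exp(r*t) * (al*np.cos(q*t) - be*np.sin(q*t))

def phi2(t):                     # Im(e^{lam t} v)
    return np.exp(r*t) * (al*np.sin(q*t) + be*np.cos(q*t))

# Differentiating gives phi1' = r phi1 - q phi2 and phi2' = r phi2 + q phi1,
# so phi1, phi2 solve (8.9) exactly when A phi matches these combinations:
t = 0.4
print(np.allclose(A @ phi1(t), r*phi1(t) - q*phi2(t)))   # True
print(np.allclose(A @ phi2(t), r*phi2(t) + q*phi1(t)))   # True
# Nonzero Wronskian at t = 0, so phi1, phi2 form a fundamental pair:
print(abs(np.linalg.det(np.column_stack([phi1(0.0), phi2(0.0)]))) > 1e-12)
```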