Qualitative Analysis of Differential Equations
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Math 2374: Multivariable Calculus and Vector Analysis
Part 25, Fall 2012. Extreme points.
Definition. If f : U ⊂ ℝⁿ → ℝ is a scalar function,
• a point x₀ ∈ U is called a local minimum of f if there is a neighborhood V of x₀ such that for all points x ∈ V, f(x) ≥ f(x₀),
• a point x₀ ∈ U is called a local maximum of f if there is a neighborhood V of x₀ such that for all points x ∈ V, f(x) ≤ f(x₀),
• a point x₀ ∈ U is called a local, or relative, extremum of f if it is either a local minimum or a local maximum,
• a point x₀ ∈ U is called a critical point of f if either f is not differentiable at x₀ or if ∇f(x₀) = Df(x₀) = 0,
• a critical point that is not a local extremum is called a saddle point.
First Derivative Test for Local Extremum. Theorem: If U ⊂ ℝⁿ is open, the function f : U ⊂ ℝⁿ → ℝ is differentiable, and x₀ ∈ U is a local extremum, then Df(x₀) = 0; that is, x₀ is a critical point.
Proof. Suppose that f achieves a local maximum at x₀; then for all h ∈ ℝⁿ, the function g(t) = f(x₀ + th) has a local maximum at t = 0. Thus, from one-variable calculus, g′(0) = 0. By the chain rule, g′(0) = [Df(x₀)]·h = 0 for all h, so Df(x₀) = 0. The same proof works for a local minimum.
Examples. Ex-1: Find the maxima and minima of the function f(x, y) = x² + y².
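As a quick check of Ex-1, the sketch below (a minimal illustration assuming SymPy is available, not part of the course notes) finds the critical points of f(x, y) = x² + y² by solving ∇f = 0 and then inspects the Hessian.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2          # the function from Ex-1

# Critical points: solve Df = grad f = 0.
grad = [sp.diff(f, v) for v in (x, y)]
print(sp.solve(grad, (x, y), dict=True))   # [{x: 0, y: 0}]

# The Hessian is positive definite, so (0, 0) is a local minimum.
print(sp.hessian(f, (x, y)).eigenvals())   # {2: 2}, i.e. both eigenvalues are +2
```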
- A Stability Boundary Based Method for Finding Saddle Points on Potential Energy Surfaces
Journal of Computational Biology, Volume 13, Number 3, 2006, © Mary Ann Liebert, Inc., pp. 745–766. Chandan K. Reddy and Hsiao-Dong Chiang.
ABSTRACT: The task of finding saddle points on potential energy surfaces plays a crucial role in understanding the dynamics of a micromolecule as well as in studying the folding pathways of macromolecules like proteins. The problem of finding the saddle points on a high dimensional potential energy surface is transformed into the problem of finding decomposition points of its corresponding nonlinear dynamical system. This paper introduces a new method based on TRUST-TECH (TRansformation Under STability reTained Equilibria CHaracterization) to compute saddle points on potential energy surfaces using stability boundaries. Our method explores the dynamic and geometric characteristics of stability boundaries of a nonlinear dynamical system. A novel trajectory adjustment procedure is used to trace the stability boundary. Our method was successful in finding the saddle points on different potential energy surfaces of various dimensions. A simplified version of the algorithm has also been used to find the saddle points of symmetric systems with the help of some analytical knowledge. The main advantages and effectiveness of the method are clearly illustrated with some examples. Promising results of our method are shown on various problems with varied degrees of freedom. Key words: potential energy surfaces, saddle points, stability boundary, minimum gradient point, computational chemistry.
I. INTRODUCTION. Recently, there has been a lot of interest across various disciplines to understand a wide variety of problems related to bioinformatics.
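The abstract describes TRUST-TECH only at a high level, so nothing here can stand in for the actual method. Purely as a point of reference, the hedged sketch below locates the saddle point of an assumed toy potential V(x, y) = (x² − 1)² + y² directly, by solving ∇V = 0 with SymPy and checking the sign pattern of the Hessian eigenvalues.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Toy potential energy surface (an assumption for illustration, not from the paper):
# two minima at (+-1, 0) separated by a saddle point at the origin.
V = (x**2 - 1)**2 + y**2

grad = [sp.diff(V, v) for v in (x, y)]
H = sp.hessian(V, (x, y))

for pt in sp.solve(grad, (x, y), dict=True):
    eigs = list(H.subs(pt).eigenvals())
    if all(e > 0 for e in eigs):
        kind = "local minimum"
    elif all(e < 0 for e in eigs):
        kind = "local maximum"
    else:
        kind = "saddle point"   # Hessian eigenvalues of both signs
    print(pt, eigs, kind)
```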
- Lesson 21 – Resultants
Elimination theory may be considered the origin of algebraic geometry; its history can be traced back to Newton (for special equations) and to Euler and Bezout. The classical method for computing varieties has been through the use of resultants, the focus of today's discussion. The alternative of using Groebner bases is more recent, becoming fashionable with the onset of computers. Calculations with Groebner bases may reach their limitations quite quickly, however, so resultants remain useful in elimination theory. The proof of the Extension Theorem provided in the textbook (see pages 164-165) also makes good use of resultants.
I. The Resultant. Let R be a UFD and let f, g be polynomials in R[x]. The definition and motivation for the resultant of f and g lies in the following very simple exercise.
Exercise 1. Show that two polynomials f and g in R[x] have a nonconstant common divisor in R[x] if and only if there exist nonzero polynomials u, v such that uf + vg = 0, where deg u < deg g and deg v < deg f.
The equation uf + vg = 0 can be turned into a numerical criterion for f and g to have a common factor. Actually, we use the equation uf = vg instead, because the criterion turns out to be a little cleaner (as there are fewer negative signs). Observe that uf + vg = 0 iff uf = (−v)g, so uf + vg = 0 has a solution iff uf = vg has a solution.
Exercise 2 (to be done outside of class). Given polynomials f, g ∈ R[x] of positive degree, say f = a_0 x^l + ⋯ + a_l and g = b_0 x^m + ⋯ + b_m, define u = c_0 x^(m−1) + ⋯ + c_(m−1) and v = d_0 x^(l−1) + ⋯ + d_(l−1), where the l + m coefficients c_0, c_1, …, c_(m−1), d_0, d_1, …, d_(l−1) are treated as unknowns. Substitute the formulas for f, g, u and v into the equation uf = vg and compare coefficients of powers of x to produce the following system of linear equations with unknowns c_i, d_j and coefficients a_i, b_j in R.
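To make Exercise 1 concrete, here is a small check (an illustration assuming SymPy; the lesson itself does not rely on software): the resultant vanishes exactly when the two polynomials share a nonconstant common factor.

```python
import sympy as sp

x = sp.symbols('x')

# f and g share the nonconstant factor (x - 1), so Res(f, g) = 0.
f = (x - 1) * (x + 2)
g = (x - 1) * (x - 3)
print(sp.resultant(f, g, x))             # 0

# Coprime polynomials give a nonzero resultant.
print(sp.resultant(x**2 + 1, x - 2, x))  # 5
```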
- Algorithmic Factorization of Polynomials over Number Fields
Rose-Hulman Institute of Technology, Rose-Hulman Scholar, Mathematical Sciences Technical Reports (MSTR), Mathematics, 5-18-2017. Christian Schulz, Rose-Hulman Institute of Technology.
Recommended Citation: Schulz, Christian, "Algorithmic Factorization of Polynomials over Number Fields" (2017). Mathematical Sciences Technical Reports (MSTR). 163. https://scholar.rose-hulman.edu/math_mstr/163
Abstract. The problem of exact polynomial factorization, in other words expressing a polynomial as a product of irreducible polynomials over some field, has applications in algebraic number theory. Although some algorithms for factorization over algebraic number fields are known, few are taught such general algorithms, as their use is mainly as part of the code of various computer algebra systems. This thesis provides a summary of one such algorithm, which the author has also fully implemented at https://github.com/Whirligig231/number-field-factorization, along with an analysis of the runtime of this algorithm. Let k be the product of the degrees of the adjoined elements used to form the algebraic number field in question, let s be the sum of the squares of these degrees, and let d be the degree of the polynomial to be factored; then the runtime of this algorithm is found to be O(d⁴sk² + 2^d d³).
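The thesis's own implementation lives at the linked GitHub repository. As a lightweight illustration of the same task (hedged: this uses SymPy's built-in factorization, not the algorithm analyzed in the report), one can factor a polynomial over a number field obtained by adjoining an algebraic element:

```python
import sympy as sp

x = sp.symbols('x')

# Over Q, x^4 + 1 is irreducible ...
print(sp.factor(x**4 + 1))                        # x**4 + 1

# ... but over Q(sqrt(2)) it splits into two quadratics.
print(sp.factor(x**4 + 1, extension=sp.sqrt(2)))  # (x**2 - sqrt(2)*x + 1)*(x**2 + sqrt(2)*x + 1)

# Adjoining i instead gives a different factorization.
print(sp.factor(x**4 + 1, extension=sp.I))        # (x**2 - I)*(x**2 + I)
```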
- arXiv:2004.03341v1 – Resultants over Principal Artinian Rings
Claus Fieker, Tommy Hofmann, and Carlo Sircana.
Abstract. The resultant of two univariate polynomials is an invariant of great importance in commutative algebra and vastly used in computer algebra systems. Here we present an algorithm to compute it over Artinian principal rings with a modified version of the Euclidean algorithm. Using the same strategy, we show how the reduced resultant and a pair of Bézout coefficients can be computed. Particular attention is devoted to the special case of Z/nZ, where we perform a detailed analysis of the asymptotic cost of the algorithm. Finally, we illustrate how the algorithms can be exploited to improve ideal arithmetic in number fields and polynomial arithmetic over p-adic fields.
1. Introduction. The computation of the resultant of two univariate polynomials is an important task in computer algebra and it is used for various purposes in algebraic number theory and commutative algebra. It is well-known that, over an effective field F, the resultant of two polynomials of degree at most d can be computed in O(M(d) log d) ([vzGG03, Section 11.2]), where M(d) is the number of operations required for the multiplication of polynomials of degree at most d. Whenever the coefficient ring is not a field (or an integral domain), the method to compute the resultant is given directly by the definition, via the determinant of the Sylvester matrix of the polynomials; thus the problem of determining the resultant reduces to a problem of linear algebra, which has a worse complexity.
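The introduction notes that over a coefficient ring that is not a field one can always fall back on the determinant of the Sylvester matrix. A minimal sketch of that definition-based fallback over Z/nZ (assuming SymPy; the function name resultant_mod_n is made up here, and this is the slower route the paper improves on):

```python
import sympy as sp

def resultant_mod_n(f, g, x, n):
    """Resultant of f and g in (Z/nZ)[x], computed as det(Sylvester matrix) mod n."""
    pf, pg = sp.Poly(f, x), sp.Poly(g, x)
    cf, cg = pf.all_coeffs(), pg.all_coeffs()
    df, dg = pf.degree(), pg.degree()
    size = df + dg
    rows = []
    for i in range(dg):                  # dg shifted copies of f's coefficients
        rows.append([0] * i + cf + [0] * (size - df - 1 - i))
    for i in range(df):                  # df shifted copies of g's coefficients
        rows.append([0] * i + cg + [0] * (size - dg - 1 - i))
    return int(sp.Matrix(rows).det()) % n

x = sp.symbols('x')
f = x**3 + 2*x + 5
g = 2*x**2 + 3*x + 1
print(resultant_mod_n(f, g, x, 12))   # resultant in Z/12Z
print(sp.resultant(f, g, x) % 12)     # cross-check via the integer resultant reduced mod n
```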
- 3.2 Sources, Sinks, Saddles, and Spirals
The pictures in this section show solutions to Ay″ + By′ + Cy = 0. These are linear equations with constant coefficients A, B, and C. The graphs show solutions y on the horizontal axis and their slopes y′ = dy/dt on the vertical axis. These pairs (y(t), y′(t)) depend on time, but time is not in the pictures. The paths show where the solution goes, but they don't show when. Each specific solution starts at a particular point (y(0), y′(0)) given by the initial conditions. The point moves along its path as the time t moves forward from t = 0.
We know that the solutions to Ay″ + By′ + Cy = 0 depend on the two solutions of As² + Bs + C = 0 (an ordinary quadratic equation for s). When we find the roots s₁ and s₂, we have found all possible solutions:
y = c₁e^(s₁t) + c₂e^(s₂t),  y′ = c₁s₁e^(s₁t) + c₂s₂e^(s₂t)   (1)
The numbers s₁ and s₂ tell us which picture we are in. Then the numbers c₁ and c₂ tell us which path we are on. Since s₁ and s₂ determine the picture for each equation, it is essential to see the six possibilities. We write all six here in one place, to compare them. Later they will appear in six different places, one with each figure. The first three have real solutions s₁ and s₂. The last three have complex pairs s = a ± iω.
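A small helper (an assumed Python sketch, not part of Strang's text) that reads off which picture a given A, B, C falls into from the roots of As² + Bs + C = 0:

```python
import cmath

def classify(A, B, C):
    """Classify the phase picture of A*y'' + B*y' + C*y = 0 from the roots of A*s**2 + B*s + C = 0."""
    s1 = (-B + cmath.sqrt(B*B - 4*A*C)) / (2*A)
    s2 = (-B - cmath.sqrt(B*B - 4*A*C)) / (2*A)
    if abs(s1.imag) > 1e-12:              # complex pair s = a +- i*omega
        a = s1.real
        return "spiral source" if a > 0 else "spiral sink" if a < 0 else "center"
    r1, r2 = s1.real, s2.real             # real roots
    if r1 > 0 and r2 > 0:
        return "source (unstable node)"
    if r1 < 0 and r2 < 0:
        return "sink (stable node)"
    return "saddle"

print(classify(1, 3, 2))    # roots -1, -2   -> sink
print(classify(1, 0, -1))   # roots +1, -1   -> saddle
print(classify(1, -2, 5))   # roots 1 +- 2i  -> spiral source
```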
- Bessel's Differential Equation (Mathematical Physics)
R. I. Badran, Mathematical Physics.
Bessel's Differential Equation: Recalling the DE y″ + n²y = 0, which has a sinusoidal solution (i.e. sin nx and cos nx), and knowing that these solutions can be treated as power series, we can find a solution to Bessel's DE, which is written as x²y″ + xy′ + (x² − p²)y = 0. The solution is represented by a series. This series looks very much like a damped sine or cosine. It is called a Bessel function.
To solve Bessel's DE, we apply the Frobenius method by assuming a series solution of the form y = Σ_{n≥0} a_n x^(m+n). Substitute this solution into the DE, and after some mathematical steps you may find that the indicial equation is m² − p² = 0, where m = ±p. Also you may get a₁ = 0, and hence all odd a's are zero as well. The recursion relation can be found as a_{n+2} = −a_n / ((n + m + 2)² − p²), with n = 0, 1, 2, 3, … etc.
Case (1): m = p (seeking the first solution of Bessel's DE). We get the solution y = a₀ x^p [1 − x²/(2(2p+2)) + x⁴/(2·4(2p+2)(2p+4)) − ⋯]. This can be rewritten as y = a₀ 2^p p! Σ_{n≥0} (−1)^n x^(p+2n) / (2^(p+2n) n! (p+n)!). Put a₀ = 1/(2^p p!). The Bessel function then has the factorial form J_p(x) = Σ_{n≥0} (−1)^n x^(p+2n) / (n! (p+n)! 2^(p+2n)).
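To check the factorial form of J_p(x) numerically, a short sketch (assuming Python with SciPy available only for comparison) sums the first few terms of the series:

```python
from math import factorial
from scipy.special import jv   # reference implementation, used only for comparison

def bessel_series(p, x, terms=25):
    """Partial sum of J_p(x) = sum_{n>=0} (-1)^n x^(p+2n) / (n! (p+n)! 2^(p+2n)), integer order p."""
    return sum((-1)**n * x**(p + 2*n) / (factorial(n) * factorial(p + n) * 2**(p + 2*n))
               for n in range(terms))

print(bessel_series(0, 2.5), jv(0, 2.5))   # the two values should agree closely
print(bessel_series(1, 2.5), jv(1, 2.5))
```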
- MAT 1175, Module 5: Factoring. Solve Quadratic Equations by Factoring
New York City College of Technology, MAT 1175 PLTL Workshops. [Module 5: Factoring. Solve Quadratic Equations by Factoring.] Name: ___ Points: ___
1. The Greatest Common Factor. The greatest common factor (GCF) for a polynomial is the largest monomial that divides each term of the polynomial. The GCF contains the GCF of the numerical coefficients and each variable raised to the smallest exponent that appears.
a. Factor 8x⁴ + 4x³ + 16x² + 4x + 24
b. Factor 9a⁴b³ + 3a³b² + 6ab²
2. Factoring by Grouping
a. Factor 56 + 21k + 8h + 3hk
b. Factor 5x² + 40x + xy + 8y
3. Factoring Trinomials with Lead Coefficients of 1. Steps to factoring trinomials with a lead coefficient of 1: 1. Write the trinomial in descending powers. 2. List the factorizations of the third term of the trinomial. 3. Pick the factorization where the sum of the factors is the coefficient of the middle term. 4. Check by multiplying the binomials.
a. Factor x² + 8x + 12
b. Factor y² − 9y − 36
c. Factor a² + 7ab − 8b²
d. Factor x² + 9xy + 18y²
e. Factor x³ + 14x² + 45x
f. Factor 2x² + 24x − 90
4. Factoring Trinomials with Lead Coefficients other than 1. Method: trial and error. 1. Multiply the resultant binomials. 2. Check that the sum of the inner and the outer terms is the original middle term bx.
a. Factor 2h² + 5h − 12
(Prepared by Profs. Ghezzi, Han, and Liou-Mark. Supported by BMI, NSF STEP #0622493, and Perkins.)
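For self-checking answers, a hedged SymPy snippet (not part of the workshop handout, and using the sign choices reconstructed above) confirms a few of the factorizations:

```python
import sympy as sp

h, k, x, y = sp.symbols('h k x y')

# 2. Factoring by grouping
print(sp.factor(56 + 21*k + 8*h + 3*h*k))    # (h + 7)*(3*k + 8)
print(sp.factor(5*x**2 + 40*x + x*y + 8*y))  # (x + 8)*(5*x + y)

# 3. Trinomials with lead coefficient 1
print(sp.factor(x**2 + 8*x + 12))            # (x + 2)*(x + 6)
```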
- Lecture 8: Maxima and Minima
Rafikul Alam, Department of Mathematics, IIT Guwahati (IITG: MA-102, 2013).
Local extremum of f : ℝⁿ → ℝ. Let f : U ⊂ ℝⁿ → ℝ be continuous, where U is open. Then
• f has a local maximum at p if there exists r > 0 such that f(x) ≤ f(p) for x ∈ B(p, r);
• f has a local minimum at p if there exists ε > 0 such that f(x) ≥ f(p) for x ∈ B(p, ε).
A local maximum or a local minimum is called a local extremum.
Figure: Local extremum of z = f(x, y).
Necessary condition for an extremum of f : ℝⁿ → ℝ. Critical point: A point p ∈ U is a critical point of f if ∇f(p) = 0. Thus, when f is differentiable, the tangent plane to z = f(x) at (p, f(p)) is horizontal.
Theorem: Suppose that f has a local extremum at p and that ∇f(p) exists. Then p is a critical point of f, i.e., ∇f(p) = 0.
Example: Consider f(x, y) = x² − y². Then fx = 2x = 0 and fy = −2y = 0 show that (0, 0) is the only critical point of f. But (0, 0) is not a local extremum of f.
Saddle point: A critical point of f that is not a local extremum is called a saddle point of f. Examples:
• The point (0, 0) is a saddle point of f(x, y) = x² − y².
• Consider f(x, y) = x²y + y²x. Then fx = 2xy + y² = 0 and fy = 2xy + x² = 0 show that (0, 0) is the only critical point of f. But (0, 0) is a saddle point.
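A quick check of the last example (an assumed SymPy sketch, not from the slides): the origin is the only critical point of f(x, y) = x²y + y²x, and f changes sign arbitrarily close to it, so it is a saddle point.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
f = x**2*y + y**2*x

# (0, 0) is the only critical point.
grad = [sp.diff(f, v) for v in (x, y)]
print(sp.solve(grad, (x, y), dict=True))   # [{x: 0, y: 0}]

# Along the line x = y = t, f = 2*t**3 takes both signs arbitrarily close to 0,
# so (0, 0) is neither a local max nor a local min: it is a saddle point.
print(f.subs({x: t, y: t}))                # 2*t**3
```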
- The Standard Form of a Differential Equation
Fall 06. Format required to solve a differential equation or a system of differential equations using one of the command-line differential equation solvers such as rkfixed, Rkadapt, Radau, Stiffb, Stiffr or Bulstoer. For a numerical routine to solve a differential equation (DE), we must somehow pass the differential equation as an argument to the solver routine. A standard form for all DEs will allow us to do this. Basic idea: get rid of any second, third, fourth, etc. derivatives that appear, leaving only first derivatives.
Example 1: (d²/dx²) y(x) + 3·(d/dx) y(x) − 5·y(x) = 4·x⁵. This DE contains a second derivative. How do we write a second derivative as a first derivative? A second derivative is a first derivative of a first derivative: (d²/dx²) y(x) = (d/dx)((d/dx) y(x)).
Step 1: STANDARDIZATION. Let's define two functions y0(x) and y1(x) as y0(x) = y(x) and y1(x) = (d/dx) y0(x). Then this differential equation can be written as (d/dx) y1(x) + 3·y1(x) − 5·y0(x) = 4·x⁵. This DE contains 2 functions instead of one, but there is a strong relationship between these two functions: y1(x) = (d/dx) y0(x). So, the original DE is now a system of two DEs, y1(x) = (d/dx) y0(x) and (d/dx) y1(x) + 3·y1(x) − 5·y0(x) = 4·x⁵. The convention is to write these equations with the derivatives alone on the left-hand side: (d/dx) y0(x) = y1(x). This is the first step in the standardization process.
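Once the equation is in standard first-order form, any first-order solver can integrate it. The document targets Mathcad's rkfixed-style solvers; as an assumed alternative sketch, SciPy's solve_ivp handles the same system, here with made-up initial conditions y(0) = 1, y'(0) = 0:

```python
from scipy.integrate import solve_ivp

def rhs(x, Y):
    """Standard form: Y[0] = y0 = y, Y[1] = y1 = y'.  Returns [y0', y1']."""
    y0, y1 = Y
    return [y1,                        # y0' = y1
            4*x**5 - 3*y1 + 5*y0]      # y1' = 4*x^5 - 3*y1 + 5*y0

Y0 = [1.0, 0.0]                        # assumed initial conditions y(0), y'(0)
sol = solve_ivp(rhs, (0.0, 1.0), Y0)
print(sol.y[0, -1])                    # numerical value of y(1)
```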
- Effective Noether Irreducibility Forms and Applications*
Appears in Journal of Computer and System Sciences, 50/2, pp. 274–295 (1995). Erich Kaltofen, Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York 12180-3590; Inter-Net: kaltofen@cs.rpi.edu.
Abstract. Using recent absolute irreducibility testing algorithms, we derive new irreducibility forms. These are integer polynomials in variables which are the generic coefficients of a multivariate polynomial of a given degree. A (multivariate) polynomial over a specific field is said to be absolutely irreducible if it is irreducible over the algebraic closure of its coefficient field. A specific polynomial of a certain degree is absolutely irreducible, if and only if all the corresponding irreducibility forms vanish when evaluated at the coefficients of the specific polynomial. Our forms have much smaller degrees and coefficients than the forms derived originally by Emmy Noether. We can also apply our estimates to derive more effective versions of irreducibility theorems by Ostrowski and Deuring, and of the Hilbert irreducibility theorem. We also give an effective estimate on the diameter of the neighborhood of an absolutely irreducible polynomial with respect to the coefficient space in which absolute irreducibility is preserved. Furthermore, we can apply the effective estimates to derive several factorization results in parallel computational complexity theory: we show how to compute arbitrary high precision approximations of the complex factors of a multivariate integral polynomial, and how to count the number of absolutely irreducible factors of a multivariate polynomial with coefficients in a rational function field, both in the complexity class NC. The factorization results also extend to the case where the coefficient field is a function field.
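The central notion, absolute irreducibility, is easy to see on a small example (a hedged SymPy illustration, unrelated to the Noether forms themselves): x² + y² is irreducible over Q but splits once i is adjoined, so it is not absolutely irreducible, whereas x² + y² − 1 stays irreducible.

```python
import sympy as sp

x, y = sp.symbols('x y')

# x**2 + y**2 is irreducible over Q ...
print(sp.factor(x**2 + y**2))                     # x**2 + y**2
# ... but not absolutely irreducible: it splits over Q(i).
print(sp.factor(x**2 + y**2, extension=sp.I))     # (x - I*y)*(x + I*y)

# x**2 + y**2 - 1 remains irreducible even over Q(i): it is absolutely irreducible.
print(sp.factor(x**2 + y**2 - 1, extension=sp.I))
```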
- Calculus and Differential Equations II
MATH 250 B. Linear systems of differential equations.
Second order autonomous linear systems. We are mostly interested in 2 × 2 first order autonomous systems of the form x′ = ax + by, y′ = cx + dy, where x and y are functions of t and a, b, c, and d are real constants. Such a system may be re-written in matrix form as d/dt [x, y]ᵀ = M [x, y]ᵀ, with M = [a b; c d]. The purpose of this section is to classify the dynamics of the solutions of the above system, in terms of the properties of the matrix M.
Existence and uniqueness (general statement). Consider a linear system of the form dY/dt = M(t)Y + F(t), where Y and F(t) are n × 1 column vectors, and M(t) is an n × n matrix whose entries may depend on t. Existence and uniqueness theorem: If the entries of the matrix M(t) and of the vector F(t) are continuous on some open interval I containing t₀, then the initial value problem dY/dt = M(t)Y + F(t), Y(t₀) = Y₀ has a unique solution on I. In particular, this means that trajectories in the phase space do not cross.
General solution. The general solution to Y′ = M(t)Y + F(t) reads Y(t) = C₁Y₁(t) + C₂Y₂(t) + ⋯ + CₙYₙ(t) + Yp(t) = U(t)C + Yp(t), where Yp(t) is a particular solution to Y′ = M(t)Y + F(t).
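Since the general solution is a combination C₁Y₁(t) + ⋯ + CₙYₙ(t) (plus a particular solution when F ≠ 0), a computer algebra system can produce it directly. A hedged sketch with SymPy's dsolve, using assumed constants a = 0, b = 1, c = −2, d = −3:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

# A concrete 2x2 autonomous system x' = a*x + b*y, y' = c*x + d*y
# with assumed constants a = 0, b = 1, c = -2, d = -3.
eqs = [sp.Eq(x(t).diff(t), y(t)),
       sp.Eq(y(t).diff(t), -2*x(t) - 3*y(t))]

# dsolve returns the general solution C1*Y1(t) + C2*Y2(t),
# built from the eigenvalues of M (here -1 and -2).
print(sp.dsolve(eqs))
```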