Differential Equations
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Chapter 11. Three Dimensional Analytic Geometry and Vectors
Chapter 11. Three dimensional analytic geometry and vectors. Section 11.5 Quadric surfaces.
Curves in R^2: ellipse x^2/a^2 + y^2/b^2 = 1; hyperbola x^2/a^2 - y^2/b^2 = 1; parabola y = ax^2 or x = by^2.
A quadric surface is the graph of a second-degree equation in three variables. The most general such equation is Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz + J = 0, where A, B, C, ..., J are constants. By translation and rotation the equation can be brought into one of two standard forms: Ax^2 + By^2 + Cz^2 + J = 0 or Ax^2 + By^2 + Iz = 0. In order to sketch the graph of a quadric surface, it is useful to determine the curves of intersection of the surface with planes parallel to the coordinate planes. These curves are called traces of the surface.
Ellipsoids. The quadric surface with equation x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 is called an ellipsoid because all of its traces are ellipses. [Figure: an ellipsoid sketched on the x, y, z axes.] The six intercepts of the ellipsoid are (±a, 0, 0), (0, ±b, 0), and (0, 0, ±c), and the ellipsoid lies in the box |x| ≤ a, |y| ≤ b, |z| ≤ c. Since the ellipsoid involves only even powers of x, y, and z, the ellipsoid is symmetric with respect to each coordinate plane.
Example 1. Find the traces of the surface 4x^2 + 9y^2 + 36z^2 = 36 in the planes x = k, y = k, and z = k. Identify the surface and sketch it.
Hyperboloids. Hyperboloid of one sheet. The quadric surface with equation x^2/a^2 + y^2/b^2 - z^2/c^2 = 1 …
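As an added worked check of Example 1 (not part of the original excerpt), dividing the equation by 36 puts it in the standard ellipsoid form, and setting z = k gives the horizontal traces:

```latex
\frac{4x^2}{36}+\frac{9y^2}{36}+\frac{36z^2}{36}=1
\;\Longrightarrow\;
\frac{x^2}{9}+\frac{y^2}{4}+z^2=1,
\qquad
z=k:\ \frac{x^2}{9}+\frac{y^2}{4}=1-k^2 .
```

For |k| < 1 this trace is an ellipse (and the traces in x = k with |k| < 3 and y = k with |k| < 2 are ellipses as well), so the surface is an ellipsoid with a = 3, b = 2, c = 1.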
- A Quick Algebra Review
A Quick Algebra Review. 1. Simplifying Expressions 2. Solving Equations 3. Problem Solving 4. Inequalities 5. Absolute Values 6. Linear Equations 7. Systems of Equations 8. Laws of Exponents 9. Quadratics 10. Rationals 11. Radicals
Simplifying Expressions. An expression is a mathematical “phrase.” Expressions contain numbers and variables, but not an equal sign. An equation has an “equal” sign. For example:
Expression: 5 + 3; Equation: 5 + 3 = 8
Expression: x + 3; Equation: x + 3 = 8
Expression: (x + 4)(x – 2); Equation: (x + 4)(x – 2) = 10
Expression: x² + 5x + 6; Equation: x² + 5x + 6 = 0
Expression: x – 8; Equation: x – 8 > 3
When we simplify an expression, we work until there are as few terms as possible. This process makes the expression easier to use (that’s why it’s called “simplify”). The first thing we want to do when simplifying an expression is to combine like terms. For example, simplify x² + 10x – 6 – 5x + 4 = x² + 5x – 6 + 4 = x² + 5x – 2. There are many terms to look at! Let’s start with x²: there are no other terms with x² in them, so we move on. 10x and -5x are like terms, so we add their coefficients together; 10 + (-5) = 5, so we write 5x. -6 and 4 are also like terms, so we can combine them to get -2. Isn’t the simplified expression much nicer?
Now you try: x² + 5x + 3x² + x³ - 5 + 3 [You should get x³ + 4x² + 5x – 2]
Order of Operations. PEMDAS – Please Excuse My Dear Aunt Sally, remember that from Algebra class? It tells the order in which we can complete operations when solving an equation.
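A quick way to check the “Now you try” answer (an added illustration, assuming SymPy is available) is to let a computer algebra system combine the like terms:

```python
import sympy as sp

x = sp.symbols('x')
expr = x**2 + 5*x + 3*x**2 + x**3 - 5 + 3   # the "Now you try" expression
simplified = sp.collect(sp.expand(expr), x)
print(simplified)                            # x**3 + 4*x**2 + 5*x - 2
```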
- Writing the Equation of a Line
Name______________________________ Writing the Equation of a Line
When you find the equation of a line it will be because you are trying to draw scientific information from it. In math, you write equations like y = 5x + 2. This equation is useless to us. You will never graph y vs. x. You will be graphing actual data like velocity vs. time. Your equation should therefore be written as v = (5 m/s²)·t + 2 m/s. See the difference? You need to use proper variables and units in order to compare it to theory or make actual conclusions about physical principles. The second equation tells me that when the data collection began (t = 0), the velocity of the object was 2 m/s. It also tells me that the velocity was changing at 5 m/s every second (m/s/s = m/s²). Let’s practice this a little, shall we?
[Graph: Force vs. mass, with F (N) on the vertical axis, m (kg) on the horizontal axis, and the fitted line y = 6.4x + 0.3.]
You’ve just done a lab to see how much force was necessary to get a mass moving along a rough surface. Excel spat out the graph above. You labeled each axis with the variable and units (well done!). You titled the graph starting with the variable on the y-axis (nice job!). Now we turn to the equation. First we replace x and y with the variables we actually graphed: F = 6.4m + 0.3. Then we add units to our slope and intercept. The slope is found by dividing the rise by the run, so the units will be the units from the y-axis divided by the units from the x-axis.
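As an added sketch (not from the worksheet), the same reasoning can be automated: fit force-vs-mass data, then report the line with physical variables and units rather than as y = mx + b. The data values below are made up to be consistent with the fit shown on the graph.

```python
import numpy as np

m = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # mass in kg (hypothetical data)
F = np.array([3.5, 6.7, 9.9, 13.2, 16.3])     # force in N  (hypothetical data)

slope, intercept = np.polyfit(m, F, 1)        # rise/run, so the slope carries units of N/kg
print(f"F = ({slope:.1f} N/kg) m + {intercept:.1f} N")
```

The slope's units are the y-axis units divided by the x-axis units (N/kg), exactly as the worksheet describes.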
- Automatic Linearity Detection
Report no. [13/04]. Automatic linearity detection. Asgeir Birkisson (a) and Tobin A. Driscoll (b).
(a) Mathematical Institute, University of Oxford, 24-29 St Giles, Oxford, OX1 3LB, UK. [email protected]. Supported for this work by UK EPSRC Grant EP/E045847 and a Sloane Robinson Foundation Graduate Award associated with Lincoln College, Oxford.
(b) Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA. [email protected]. Supported for this work by UK EPSRC Grant EP/E045847.
Given a function, or more generally an operator, the question “Is it linear?” seems simple to answer. In many applications of scientific computing it might be worth determining the answer to this question in an automated way; some functionality, such as operator exponentiation, is only defined for linear operators, and in other problems, time saving is available if it is known that the problem being solved is linear. Linearity detection is closely connected to sparsity detection of Hessians, so for large-scale applications, memory savings can be made if linearity information is known. However, implementing such an automated detection is not as straightforward as one might expect. This paper describes how automatic linearity detection can be implemented in combination with automatic differentiation, both for standard scientific computing software, and within the Chebfun software system. The key ingredients for the method are the observation that linear operators have constant derivatives, and the propagation of two logical vectors, ℓ and c, as computations are carried out.
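The sketch below is not the paper's implementation (which uses operator overloading and automatic differentiation); it is only a finite-difference illustration of the key observation quoted above, that a linear (more precisely, affine) map has a constant derivative. The function names and tolerance are invented for the example.

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian of f at the point x."""
    fx = np.atleast_1d(np.asarray(f(x), dtype=float))
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.atleast_1d(np.asarray(f(xp), dtype=float)) - fx) / h
    return J

def looks_linear(f, dim, trials=4, tol=1e-4, seed=0):
    """Flag f as linear/affine if its Jacobian is the same at several random points."""
    rng = np.random.default_rng(seed)
    jacs = [jacobian(f, rng.standard_normal(dim)) for _ in range(trials)]
    return all(np.allclose(jacs[0], J, atol=tol) for J in jacs[1:])

print(looks_linear(lambda x: 3*x[0] - 2*x[1] + 1, 2))   # True: affine map, constant derivative
print(looks_linear(lambda x: x[0]**2 + x[1], 2))        # False: derivative varies with x
```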
- The Geometry of the Phase Diffusion Equation
J. Nonlinear Sci. Vol. 10: pp. 223–274 (2000). DOI: 10.1007/s003329910010. © 2000 Springer-Verlag New York Inc.
The Geometry of the Phase Diffusion Equation. N. M. Ercolani (1), R. Indik (1), A. C. Newell (1,2), and T. Passot (3).
(1) Department of Mathematics, University of Arizona, Tucson, AZ 85719, USA
(2) Mathematical Institute, University of Warwick, Coventry CV4 7AL, UK
(3) CNRS UMR 6529, Observatoire de la Côte d’Azur, 06304 Nice Cedex 4, France
Received on October 30, 1998; final revision received July 6, 1999. Communicated by Robert Kohn.
Summary. The Cross-Newell phase diffusion equation, τ(|k|) Θ_T = ∇·(B(|k|) k), with k = ∇Θ, and its regularization describe natural patterns and defects far from onset in large aspect ratio systems with rotational symmetry. In this paper we construct explicit solutions of the unregularized equation and suggest candidates for its weak solutions. We confirm these ideas by examining a fourth-order regularized equation in the limit of infinite aspect ratio. The stationary solutions of this equation include the minimizers of a free energy, and we show these minimizers are remarkably well-approximated by a second-order “self-dual” equation. Moreover, the self-dual solutions give upper bounds for the free energy which imply the existence of weak limits for the asymptotic minimizers. In certain cases, some recent results of Jin and Kohn [28] combined with these upper bounds enable us to demonstrate that the energy of the asymptotic minimizers converges to that of the self-dual solutions in a viscosity limit.
1. Introduction. The mathematical models discussed in this paper are motivated by physical systems, far from equilibrium, which spontaneously form patterns.
- Transformations, Polynomial Fitting, and Interaction Terms
FEEG6017 lecture: Transformations, polynomial fitting, and interaction terms. [email protected]
The linearity assumption
• Regression models are both powerful and useful.
• But they assume that a predictor variable and an outcome variable are related linearly.
• This assumption can be wrong in a variety of ways.
• Some real data showing a non-linear connection between life expectancy and doctors per million population.
• The Y and X variables here are clearly related, but the correlation coefficient is close to zero.
• Linear regression would miss the relationship.
The independence assumption
• Simple regression models also assume that if a predictor variable affects the outcome variable, it does so in a way that is independent of all the other predictor variables.
• The assumed linear relationship between Y and X1 is supposed to hold no matter what the value of X2 may be.
• Suppose we're trying to predict happiness.
• For men, happiness increases with years of marriage. For women, happiness decreases with years of marriage.
• The relationship between happiness and time may be linear, but it would not be independent of sex.
Can regression models deal with these problems?
• Fortunately they can.
• We deal with non-linearity by transforming the predictor variables, or by fitting a polynomial relationship instead of a straight line.
• We deal with non-independence of predictors by including interaction terms in our models.
Dealing with non-linearity
• Transformation is the simplest method for dealing with this problem.
• We work not with the raw values of X, but with some arbitrary function that transforms the X values such that the relationship between Y and f(X) is now (closer to) linear.
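A minimal sketch of the transformation idea (added here; the synthetic data and the log transform are invented for illustration): regress the outcome on f(X) = log(X) instead of on X itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 100, 50)
y = 2.0 * np.log(x) + 1.0 + rng.normal(0.0, 0.1, x.size)   # outcome is non-linear in x

# Ordinary straight-line fit against the *transformed* predictor log(x).
slope, intercept = np.polyfit(np.log(x), y, 1)
print(slope, intercept)    # close to the true values 2.0 and 1.0
```

An interaction term works the same way mechanically: add a column such as x1*x2 (or sex times years-of-marriage in the happiness example) to the design matrix and fit as usual.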
- Numerical Algebraic Geometry and Algebraic Kinematics
Numerical Algebraic Geometry and Algebraic Kinematics. Charles W. Wampler (*) and Andrew J. Sommese (†). January 14, 2011.
Abstract. In this article, the basic constructs of algebraic kinematics (links, joints, and mechanism spaces) are introduced. This provides a common schema for many kinds of problems that are of interest in kinematic studies. Once the problems are cast in this algebraic framework, they can be attacked by tools from algebraic geometry. In particular, we review the techniques of numerical algebraic geometry, which are primarily based on homotopy methods. We include a review of the main developments of recent years and outline some of the frontiers where further research is occurring. While numerical algebraic geometry applies broadly to any system of polynomial equations, algebraic kinematics provides a body of interesting examples for testing algorithms and for inspiring new avenues of work.
Contents: 1 Introduction; 2 Notation; Part I Fundamentals of Algebraic Kinematics; 3 Some Motivating Examples; 3.1 Serial-Link Robots (3.1.1 Planar 3R robot, 3.1.2 Spatial 6R robot); 3.2 Four-Bar Linkages; 3.3 Platform Robots.
(*) General Motors Research and Development, Mail Code 480-106-359, 30500 Mound Road, Warren, MI 48090-9055, U.S.A. Email: [email protected] URL: www.nd.edu/~cwample1. This material is based upon work supported by the National Science Foundation under Grant DMS-0712910 and by General Motors Research and Development.
(†) Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556-4618, U.S.A. Email: [email protected] URL: www.nd.edu/~sommese. This material is based upon work supported by the National Science Foundation under Grant DMS-0712910 and the Duncan Chair of the University of Notre Dame.
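To make the homotopy idea concrete, here is a deliberately tiny sketch (added for illustration; it is not from the article, and real solvers such as Bertini or PHCpack are far more careful): track the known roots of a start polynomial to the roots of a target polynomial along H(x, t) = (1 - t)·γ·g(x) + t·f(x), using repeated Newton correction as t moves from 0 to 1. The random complex γ is the usual device for keeping the paths away from singularities.

```python
import numpy as np

f  = lambda x: x**2 - 5*x + 6      # target system, roots 2 and 3
df = lambda x: 2*x - 5
g  = lambda x: x**2 - 1            # start system, known roots +1 and -1
dg = lambda x: 2*x

gamma = np.exp(1.234j)             # random complex phase ("gamma trick")

def track(x, steps=200, corrections=3):
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(corrections):            # Newton-correct onto H(., t) = 0
            H  = (1 - t) * gamma * g(x) + t * f(x)
            dH = (1 - t) * gamma * dg(x) + t * df(x)
            x -= H / dH
    return x

for start in (1.0 + 0j, -1.0 + 0j):
    print(track(start))            # the two paths end near the target roots 2 and 3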
- Bessel's Differential Equation Mathematical Physics
R. I. Badran, Bessel's Differential Equation, Mathematical Physics
Bessel's Differential Equation: Recalling the DE y'' + n^2 y = 0, which has sinusoidal solutions (i.e. sin nx and cos nx), and knowing that these solutions can be treated as power series, we can find a solution to Bessel's DE, which is written as x^2 y'' + x y' + (x^2 - p^2) y = 0. The solution is represented by a series. This series looks very much like a damped sine or cosine. It is called a Bessel function.
To solve the Bessel DE, we apply the Frobenius method by assuming a series solution of the form y = Σ_{n=0}^∞ a_n x^{m+n}. Substitute this solution into the DE and after some mathematical steps you may find that the indicial equation is m^2 - p^2 = 0, where m = ±p. Also you may get a_1 = 0 and hence all odd a's are zero as well. The recursion relation can be found as a_{n+2} = -a_n / ((n + m + 2)^2 - p^2), with n = 0, 1, 2, 3, … etc.
Case (1): m = +p (seeking the first solution of the Bessel DE). We get the solution y = a_0 x^p [1 - x^2/(2(2p + 2)) + x^4/(2·4·(2p + 2)(2p + 4)) - ⋯]. This can be rewritten as y = a_0 2^p p! Σ_{n=0}^∞ (-1)^n x^{p+2n} / (2^{p+2n} n! (p + n)!). Put a_0 = 1/(2^p p!). The Bessel function then has the factorial form J_p(x) = Σ_{n=0}^∞ (-1)^n x^{p+2n} / (n! (p + n)! 2^{p+2n}).
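As an added numerical check (not part of Badran's notes), the factorial-form series can be summed directly and compared with SciPy's built-in Bessel function; the factorial form as written applies to integer p.

```python
import math
from scipy.special import jv

def bessel_series(p, x, terms=30):
    """Partial sum of J_p(x) = sum_n (-1)^n / (n! (p+n)!) * (x/2)**(p+2n), integer p."""
    return sum((-1)**n / (math.factorial(n) * math.factorial(p + n)) * (x / 2)**(p + 2*n)
               for n in range(terms))

print(bessel_series(1, 2.5))   # value from the series
print(jv(1, 2.5))              # SciPy reference; the two agree to many digits
```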
- Calculus Terminology
AP Calculus BC, Calculus Terminology: Absolute Convergence, Asymptote, Continued Sum, Absolute Maximum, Average Rate of Change, Continuous Function, Absolute Minimum, Average Value of a Function, Continuously Differentiable Function, Absolutely Convergent, Axis of Rotation, Converge, Acceleration, Boundary Value Problem, Converge Absolutely, Alternating Series, Bounded Function, Converge Conditionally, Alternating Series Remainder, Bounded Sequence, Convergence Tests, Alternating Series Test, Bounds of Integration, Convergent Sequence, Analytic Methods, Calculus, Convergent Series, Annulus, Cartesian Form, Critical Number, Antiderivative of a Function, Cavalieri’s Principle, Critical Point, Approximation by Differentials, Center of Mass Formula, Critical Value, Arc Length of a Curve, Centroid, Curly d, Area below a Curve, Chain Rule, Curve, Area between Curves, Comparison Test, Curve Sketching, Area of an Ellipse, Concave, Cusp, Area of a Parabolic Segment, Concave Down, Cylindrical Shell Method, Area under a Curve, Concave Up, Decreasing Function, Area Using Parametric Equations, Conditional Convergence, Definite Integral, Area Using Polar Coordinates, Constant Term, Definite Integral Rules, Degenerate, Divergent Series, Function Operations, Del Operator, e, Fundamental Theorem of Calculus, Deleted Neighborhood, Ellipsoid, GLB, Derivative, End Behavior, Global Maximum, Derivative of a Power Series, Essential Discontinuity, Global Minimum, Derivative Rules, Explicit Differentiation, Golden Spiral, Difference Quotient, Explicit Function, Graphic Methods, Differentiable, Exponential Decay, Greatest Lower Bound, Differential
- Chapter 1: Analytic Geometry
1 Analytic Geometry
Much of the mathematics in this chapter will be review for you. However, the examples will be oriented toward applications and so will take some thought.
In the (x, y) coordinate system we normally write the x-axis horizontally, with positive numbers to the right of the origin, and the y-axis vertically, with positive numbers above the origin. That is, unless stated otherwise, we take “rightward” to be the positive x-direction and “upward” to be the positive y-direction. In a purely mathematical situation, we normally choose the same scale for the x- and y-axes. For example, the line joining the origin to the point (a, a) makes an angle of 45° with the x-axis (and also with the y-axis).
In applications, often letters other than x and y are used, and often different scales are chosen in the horizontal and vertical directions. For example, suppose you drop something from a window, and you want to study how its height above the ground changes from second to second. It is natural to let the letter t denote the time (the number of seconds since the object was released) and to let the letter h denote the height. For each t (say, at one-second intervals) you have a corresponding height h. This information can be tabulated, and then plotted on the (t, h) coordinate plane, as shown in figure 1.0.1.
We use the word “quadrant” for each of the four regions into which the plane is divided by the axes: the first quadrant is where points have both coordinates positive, or the “northeast” portion of the plot, and the second, third, and fourth quadrants are counted off counterclockwise, so the second quadrant is the northwest, the third is the southwest, and the fourth is the southeast.
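A small added illustration of the (t, h) tabulation described above, with made-up numbers (an object released from 80 m, ignoring air resistance, g ≈ 9.8 m/s²):

```python
g, h0 = 9.8, 80.0                  # assumed values for the illustration
for t in range(5):                 # heights at one-second intervals
    h = h0 - 0.5 * g * t**2
    print(f"t = {t} s  ->  h = {h:.1f} m")
```

Each printed (t, h) pair is one point on the (t, h) coordinate plane.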
- The Standard Form of a Differential Equation
Fall 06
The Standard Form of a Differential Equation
Format required to solve a differential equation or a system of differential equations using one of the command-line differential equation solvers such as rkfixed, Rkadapt, Radau, Stiffb, Stiffr or Bulstoer.
For a numerical routine to solve a differential equation (DE), we must somehow pass the differential equation as an argument to the solver routine. A standard form for all DEs will allow us to do this. Basic idea: get rid of any second, third, fourth, etc. derivatives that appear, leaving only first derivatives.
Example 1: d^2y/dx^2 + 3·(dy/dx) - 5·y(x) = 4x^5. This DE contains a second derivative. How do we write a second derivative as a first derivative? A second derivative is a first derivative of a first derivative: d^2y/dx^2 = (d/dx)(dy/dx).
Step 1: STANDARDIZATION. Let's define two functions y0(x) and y1(x) as y0(x) = y(x) and y1(x) = (d/dx) y0(x). Then this differential equation can be written as (d/dx) y1(x) + 3·y1(x) - 5·y0(x) = 4x^5. This DE contains 2 functions instead of one, but there is a strong relationship between these two functions: y1(x) = (d/dx) y0(x). So, the original DE is now a system of two DEs, (d/dx) y0(x) = y1(x) and (d/dx) y1(x) + 3·y1(x) - 5·y0(x) = 4x^5. The convention is to write these equations with the derivatives alone on the left-hand side: (d/dx) y0(x) = y1(x). This is the first step in the standardization process.
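The excerpt is written for Mathcad's rkfixed-style solvers; as an added sketch, the same standard form is what SciPy's solve_ivp expects. The initial conditions below are made up for the example, and the 4x^5 right-hand side follows the reconstruction of Example 1 above.

```python
from scipy.integrate import solve_ivp

# First-order system for y'' + 3 y' - 5 y = 4 x^5, with y0 = y and y1 = y'.
def rhs(x, y):
    y0, y1 = y
    return [y1, 4 * x**5 - 3 * y1 + 5 * y0]

sol = solve_ivp(rhs, (0.0, 1.0), [2.0, 0.0])   # assumed y(0) = 2, y'(0) = 0
print(sol.y[0, -1], sol.y[1, -1])              # y(1) and y'(1)
```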
- Glossary: a Dictionary for Linear Algebra
GLOSSARY: A DICTIONARY FOR LINEAR ALGEBRA
Adjacency matrix of a graph. Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T for an undirected graph.
Affine transformation T(v) = Av + v_0 = linear transformation plus shift.
Associative Law (AB)C = A(BC). Parentheses can be removed to leave ABC.
Augmented matrix [A b]. Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
Back substitution. Upper triangular systems are solved in reverse order x_n to x_1.
Basis for V. Independent vectors v_1, ..., v_d whose linear combinations give every v in V. A vector space has many bases!
Big formula for n by n determinants. Det(A) is a sum of n! terms, one term for each permutation P of the columns. That term is the product a_{1α}···a_{nω} down the diagonal of the reordered matrix, times det(P) = ±1.
Block matrix. A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit (the columns of A and rows of B must be in matching blocks).
Cayley-Hamilton Theorem. p(λ) = det(A − λI) has p(A) = zero matrix.
Change of basis matrix M. The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ··· + c_n v_n = d_1 w_1 + ··· + d_n w_n are related by d = Mc.
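A quick numerical illustration of the Cayley-Hamilton entry (added here; the 2-by-2 matrix is an arbitrary example): build the characteristic polynomial of A and evaluate it at A itself.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                  # arbitrary example matrix
coeffs = np.poly(A)                         # characteristic polynomial coefficients, leading term first
n = A.shape[0]
p_of_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
print(np.round(p_of_A, 10))                 # the zero matrix, as the theorem promises
```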