A Numerical Method in Terms of the Third Derivative for a Delay Integral Equation from Biomathematics (Volume 6, Issue 2, Article 42, 2005)

Total Pages: 16

File Type: pdf, Size: 1020 KB

Journal of Inequalities in Pure and Applied Mathematics, volume 6, issue 2, article 42, 2005. Received 02 March, 2003; accepted 01 March, 2005. Communicated by: S.S. Dragomir. ISSN (electronic): 1443-5756. © 2000 Victoria University. 024-03.

Alexandru Bica, Department of Mathematics, University of Oradea, Str. Armatei Romane 5, Oradea, 3700, Romania. EMail: [email protected]

Crăciun Iancu, Faculty of Mathematics and Informatics, Babeş-Bolyai University, Str. N. Kogalniceanu no. 1, Cluj-Napoca, 3400, Romania. EMail: [email protected]

Abstract. This paper presents a numerical method for approximating the positive, bounded and smooth solution of a delay integral equation which occurs in the study of the spread of epidemics. We use cubic spline interpolation and obtain an algorithm based on a perturbed trapezoidal quadrature rule.

2000 Mathematics Subject Classification: Primary 45D05; Secondary 65R32, 92C60.
Key words: Delay integral equation, Successive approximations, Perturbed trapezoidal quadrature rule.

Contents
1 Introduction
2 Existence and Uniqueness of the Positive Smooth Solution
3 Approximation of the Solution on the Initial Interval
4 Main Results
References

1. Introduction

Consider the delay integral equation:

(1.1)    x(t) = ∫_{t−τ}^{t} f(s, x(s)) ds.
This equation is a mathematical model for the spread of certain infectious diseases with a contact rate that varies seasonally. Here x(t) is the proportion of infectives in the population at time t, τ > 0 is the length of time during which an individual remains infectious, and f(t, x(t)) is the proportion of new infectives per unit time.

There are known results about the existence of a positive bounded solution (see [3], [8]), which is periodic under certain conditions ([9]), and about the existence and uniqueness of the positive periodic solution (in [10], [11]). In [8] the author obtains the existence and uniqueness of the continuous positive bounded solution, and in [6] a numerical method for the approximation of this solution is provided using the trapezoidal quadrature rule.

Here, we obtain the existence and uniqueness of the positive, bounded and smooth solution and use the cubic spline of interpolation from [5] to approximate this solution on the initial interval [−τ, 0]. We suppose that on [−τ, 0] the solution Φ is known only at the discrete moments t_i, i = 0, …, n, and use the values Φ(t_i) = y_i, i = 0, …, n, for the spline interpolation. Afterward, we outline a numerical method and an algorithm to approximate the solution on [0, T], with T > 0 fixed, using the quadrature rule from [1] and [2].

2. Existence and Uniqueness of the Positive Smooth Solution

Impose the initial condition x(t) = Φ(t), t ∈ [−τ, 0] on equation (1.1) and consider T > 0 fixed. We obtain the initial value problem:

(2.1)    x(t) = ∫_{t−τ}^{t} f(s, x(s)) ds, ∀t ∈ [0, T],
         x(t) = Φ(t), ∀t ∈ [−τ, 0].
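The abstract's strategy, successive approximations combined with a trapezoidal quadrature, can be illustrated in a few lines. This is only a toy sketch, not the paper's perturbed-trapezoid algorithm; the contact rate f(s, x) = x/2, the constant history Φ ≡ 1, and the delay τ = 2 are invented for the demonstration (they are chosen so that x ≡ 1 is an exact solution of the initial value problem).

```python
def solve_delay_integral(f, phi, tau, T, h):
    """Picard iteration for x(t) = int_{t-tau}^t f(s, x(s)) ds with
    x = phi on [-tau, 0], using the composite trapezoidal rule."""
    n_hist = int(round(tau / h))           # nodes covering the delay window
    n_fwd = int(round(T / h))
    ts = [(i - n_hist) * h for i in range(n_hist + n_fwd + 1)]  # grid on [-tau, T]
    x = [phi(t) if t <= 0 else phi(0.0) for t in ts]  # initial guess x0(t) = b on (0, T]
    for _ in range(50):                    # successive approximations
        x_new = x[:]
        for j in range(n_hist, len(ts)):   # update the nodes in [0, T]
            vals = [f(ts[k], x[k]) for k in range(j - n_hist, j + 1)]
            x_new[j] = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
        if max(abs(u - v) for u, v in zip(x, x_new)) < 1e-12:
            x = x_new
            break
        x = x_new
    return ts, x

# invented example: f(s, x) = x/2, phi = 1, tau = 2, so x(t) = 1 for all t
ts, xs = solve_delay_integral(lambda s, u: 0.5 * u, lambda t: 1.0, 2.0, 1.0, 0.1)
```

For this constant solution the trapezoidal rule is exact, so the iteration stabilises immediately; for a genuine contact rate one would iterate until the successive approximations agree to the desired tolerance.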
Suppose that the following conditions are fulfilled:

(i) Φ ∈ C¹[−τ, 0] and we have

    b = Φ(0) = ∫_{−τ}^{0} f(s, x(s)) ds, with Φ′(0) = f(0, b) − f(−τ, Φ(−τ));

(ii) b > 0 and ∃ a, M, β ∈ R, M > 0, 0 < a ≤ β, such that a ≤ Φ(t) ≤ β, ∀t ∈ [−τ, 0];

(iii) f ∈ C([−τ, T] × [a, β]), f(t, x) ≥ 0, f(t, y) ≤ M, ∀t ∈ [−τ, T], ∀x ≥ 0, ∀y ∈ [a, β];

(iv) Mτ ≤ β and there is an integrable function g such that f(t, x) ≥ g(t), ∀t ∈ [−τ, T], ∀x ≥ a, and

    ∫_{t−τ}^{t} g(s) ds ≥ a, ∀t ∈ [0, T];

(v) ∃ L > 0 such that |f(t, x) − f(t, y)| ≤ L|x − y|, ∀t ∈ [−τ, T], ∀x, y ∈ [a, ∞).

Then we obtain the following result:

Theorem 2.1. Suppose that assumptions (i)-(v) are satisfied. Then equation (1.1) has a unique continuous solution x(t) on [−τ, T], with a ≤ x(t) ≤ β, ∀t ∈ [−τ, T], such that x(t) = Φ(t) for t ∈ [−τ, 0]. Also,

    max{|x_n(t) − x(t)| : t ∈ [0, T]} → 0

as n → ∞, where x_n(t) = Φ(t) for t ∈ [−τ, 0], n ∈ N, x_0(t) = b, and

    x_n(t) = ∫_{t−τ}^{t} f(s, x_{n−1}(s)) ds

for t ∈ [0, T], n ∈ N*. Moreover, the solution x belongs to C¹[−τ, T].

Proof. From [9], under the conditions (i), (ii), (iv), (v), it follows that there exists a unique positive continuous solution of (1.1) on [−τ, T] such that x(t) ≥ a, ∀t ∈ [−τ, T], and x(t) = Φ(t) for t ∈ [−τ, 0]. Using Theorem 2 from [7] we conclude that max{|x_n(t) − x(t)| : t ∈ [0, T]} → 0 as n → ∞. From (iv) we see that

    x(t) = ∫_{t−τ}^{t} f(s, x(s)) ds ≤ ∫_{t−τ}^{t} M ds = Mτ ≤ β, ∀t ∈ [0, T].
Because a ≤ Φ(t) ≤ β for t ∈ [−τ, 0] and x(t) = Φ(t) for t ∈ [−τ, 0], we deduce that a ≤ x(t) ≤ β, ∀t ∈ [−τ, T], and the solution is bounded. Since x is a solution of (1.1) we have x(t) = ∫_{t−τ}^{t} f(s, x(s)) ds for all t ∈ [0, T], and because f ∈ C([−τ, T] × [a, β]) we can state that x is differentiable on [0, T] and x′ is continuous on [0, T]. From condition (iii) it follows that x is differentiable with x′ continuous on [−τ, 0] (including continuity at the point t = 0). Then x ∈ C¹[−τ, T] and the proof is complete.

Corollary 2.2. Under the conditions of Theorem 2.1, if f ∈ C¹([−τ, T] × [a, β]), Φ ∈ C²[−τ, 0] and

    Φ″(0) = ∂f/∂t(0, b) + ∂f/∂x(0, b)[f(0, b) − f(−τ, Φ(−τ))]
            − ∂f/∂t(−τ, Φ(−τ)) − ∂f/∂x(−τ, Φ(−τ)) Φ′(−τ),

then x ∈ C²[−τ, T].

Proof. Follows directly from the above theorem.

3. Approximation of the Solution on the Initial Interval

Suppose that on the interval [−τ, 0] the function Φ is known only at the points t_i, i = 0, …, n, which form the uniform partition

(3.1)    Δ_n : −τ = t_0 < t_1 < ··· < t_{n−1} < t_n = 0,

where we have the values Φ(t_i) = y_i, i = 0, …, n, and t_{i+1} − t_i = h = τ/n, ∀i = 0, …, n − 1.

Let

    m_0 = (1/h)(Δy_0 − Δ²y_0/2 + Δ³y_0/3 − Δ⁴y_0/4),  and
    M_0 = (1/h²)(Δ²y_0 − Δ³y_0 + 11Δ⁴y_0/12),

where

    Δy_0 = y_1 − y_0,  Δ²y_0 = y_2 − 2y_1 + y_0,
    Δ³y_0 = y_3 − 3y_2 + 3y_1 − y_0,  Δ⁴y_0 = y_4 − 4y_3 + 6y_2 − 4y_1 + y_0.
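The quantities m_0 and M_0 above are one-sided finite-difference estimates of Φ′(t_0) and Φ″(t_0) built from the first five samples. A direct transcription (a sketch, with function and variable names of my own choosing) is:

```python
def endpoint_derivatives(y, h):
    """Estimate Phi'(t0) and Phi''(t0) from the first five samples
    via the forward differences m0, M0 given in the text."""
    d1 = y[1] - y[0]
    d2 = y[2] - 2 * y[1] + y[0]
    d3 = y[3] - 3 * y[2] + 3 * y[1] - y[0]
    d4 = y[4] - 4 * y[3] + 6 * y[2] - 4 * y[1] + y[0]
    m0 = (d1 - d2 / 2 + d3 / 3 - d4 / 4) / h
    M0 = (d2 - d3 + 11 * d4 / 12) / h ** 2
    return m0, M0

# demo: samples of t^2 at t = 0, h, ..., 4h; exact values are 0 and 2
h = 0.1
y = [(i * h) ** 2 for i in range(5)]
m0, M0 = endpoint_derivatives(y, h)
```

On this quadratic the formulas are exact up to rounding, which is the behaviour one wants at the left endpoint of the history interval.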
We build a cubic spline of interpolation which corresponds to the following conditions:

(3.2)    Φ(t_i) = y_i, i = 0, …, n,  Φ′(t_0) = m_0,  Φ″(t_0) = M_0.

This spline function is s ∈ C²[−τ, 0] which, according to [5], has on each subinterval [t_{i−1}, t_i], i = 1, …, n, the expression:

(3.3)    s(t) = (M_i − M_{i−1})/(6h) · (t − t_{i−1})³ + (M_{i−1}/2) · (t − t_{i−1})² + m_{i−1} · (t − t_{i−1}) + y_{i−1},

for t ∈ [t_{i−1}, t_i], where y_i = s(t_i), m_i = s′(t_i), M_i = s″(t_i), i = 0, …, n. For these values, according to [5], there exist the recurrence relations:

(3.4)    M_i = 6 · (y_i − y_{i−1})/h² − 6m_{i−1}/h − 2M_{i−1},
         m_i = 3 · (y_i − y_{i−1})/h − 2m_{i−1} − (1/2) M_{i−1} · h,    i = 1, …, n.

Then, from [5, Lemma 2.1], there exists a unique cubic spline function of interpolation, s, which satisfies the conditions:

(3.5)    s(t_i) = y_i, ∀i = 0, …, n,
         s′(t_0) = m_0,
         s″(t_0) = M_0,

and on each subinterval of Δ_n it is defined by relation (3.3).
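Relations (3.3)-(3.4) translate directly into code. The sketch below (my own variable names; it assumes a uniform step and at least five samples so the endpoint difference formulas apply) builds the spline coefficients by the recurrence and returns an evaluator:

```python
def cubic_spline(t0, h, y):
    """Cubic spline matching y_i at t_i = t0 + i*h, with s'(t0) = m0 and
    s''(t0) = M0 from the forward-difference formulas (needs >= 5 samples)."""
    n = len(y) - 1
    d1 = y[1] - y[0]
    d2 = y[2] - 2 * y[1] + y[0]
    d3 = y[3] - 3 * y[2] + 3 * y[1] - y[0]
    d4 = y[4] - 4 * y[3] + 6 * y[2] - 4 * y[1] + y[0]
    m = [(d1 - d2 / 2 + d3 / 3 - d4 / 4) / h]     # m_0
    M = [(d2 - d3 + 11 * d4 / 12) / h ** 2]       # M_0
    for i in range(1, n + 1):                      # recurrences (3.4)
        dy = y[i] - y[i - 1]
        M.append(6 * dy / h ** 2 - 6 * m[i - 1] / h - 2 * M[i - 1])
        m.append(3 * dy / h - 2 * m[i - 1] - M[i - 1] * h / 2)
    def s(t):                                      # piecewise formula (3.3)
        i = min(max(int((t - t0) // h) + 1, 1), n)  # locate [t_{i-1}, t_i]
        u = t - (t0 + (i - 1) * h)
        return ((M[i] - M[i - 1]) / (6 * h)) * u ** 3 + (M[i - 1] / 2) * u ** 2 \
               + m[i - 1] * u + y[i - 1]
    return s

# demo: a cubic polynomial should be reproduced exactly (up to rounding),
# since m0 and M0 are exact for it
p = lambda t: t ** 3 - 2 * t + 1
t0, h = -1.0, 0.25
s = cubic_spline(t0, h, [p(t0 + i * h) for i in range(5)])
```

Because the endpoint formulas are exact for cubics, the recurrence regenerates the exact nodal derivatives, so s coincides with p on the whole interval; that is a convenient sanity check for an implementation.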
Recommended publications
  • The Mean Value Theorem Math 120 Calculus I Fall 2015
The Mean Value Theorem, Math 120 Calculus I, Fall 2015. The central theorem to much of differential calculus is the Mean Value Theorem, which we'll abbreviate MVT. It is the theoretical tool used to study the first and second derivatives. There is a nice logical sequence of connections here. It starts with the Extreme Value Theorem (EVT) that we looked at earlier when we studied the concept of continuity. It says that any function that is continuous on a closed interval takes on a maximum and a minimum value.

A technical lemma. We begin our study with a technical lemma that allows us to relate the derivative of a function at a point to values of the function nearby. Specifically, if f′(x₀) is positive, then for x nearby but smaller than x₀ the values f(x) will be less than f(x₀), but for x nearby but larger than x₀, the values of f(x) will be larger than f(x₀). This says something like "f is an increasing function near x₀," but not quite. An analogous statement holds when f′(x₀) is negative.

Proof. The proof of this lemma involves the definition of derivative and the definition of limits, but none of the proofs for the rest of the theorems here require that depth. Suppose that f′(x₀) = p, some positive number. That means that

    lim_{x→x₀} (f(x) − f(x₀))/(x − x₀) = p.

So you can make (f(x) − f(x₀))/(x − x₀) arbitrarily close to p by taking x sufficiently close to x₀.
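A quick numerical illustration of the lemma (the example f(x) = x² at x₀ = 1, where f′(x₀) = 2 > 0, is my own choice, not from the notes):

```python
f = lambda x: x ** 2   # f'(1) = 2 > 0
x0 = 1.0

# nearby smaller x gives smaller values; nearby larger x gives larger values
checks = all(f(x0 - d) < f(x0) < f(x0 + d) for d in (0.1, 0.01, 0.001))

# and the difference quotient approaches p = 2 as x approaches x0
quotients = [(f(x0 + d) - f(x0)) / d for d in (0.1, 0.01, 0.001)]
```

The quotients 2.1, 2.01, 2.001 close in on p, which is exactly the mechanism the proof exploits.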
  • Newton and Halley Are One Step Apart
Newton and Halley are one step apart. Trond Steihaug, Department of Informatics, University of Bergen, Norway. 4th European Workshop on Automatic Differentiation, December 7-8, 2006, Institute for Scientific Computing, RWTH Aachen University, Aachen, Germany. (This is joint work with Geir Gundersen.)

Overview:
- Methods for Solving Nonlinear Equations: a method in the Halley class is two steps of Newton in disguise.
- Local Methods for Unconstrained Optimization.
- How to Utilize Structure in the Problem.
- Numerical Results.

Newton and Halley. A central problem in scientific computation is the solution of a system of n equations in n unknowns,

    F(x) = 0,

where F : R^n → R^n is sufficiently smooth. Sir Isaac Newton (1643-1727). Sir Edmond Halley (1656-1742).

A Nonlinear Newton method. Taylor expansion:

    T(s) = F(x) + F′(x)s + (1/2) F″(x)ss.

Nonlinear Newton: Given x.
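For a scalar equation the two iterations compare directly. The sketch below uses the standard Newton step and the standard scalar Halley step; this particular update formula and the test function x² − 2 are my additions, not taken from the slides:

```python
def newton(f, df, x, iters):
    """Newton's method: x <- x - f(x)/f'(x) (quadratic convergence)."""
    for _ in range(iters):
        x -= f(x) / df(x)
    return x

def halley(f, df, d2f, x, iters):
    """Halley's method: uses the second derivative, cubic convergence."""
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        x -= 2 * fx * dfx / (2 * dfx ** 2 - fx * d2f(x))
    return x

f, df, d2f = (lambda x: x * x - 2.0), (lambda x: 2.0 * x), (lambda x: 2.0)
r_newton = newton(f, df, 1.0, 5)     # approximates sqrt(2)
r_halley = halley(f, df, d2f, 1.0, 4)  # same root, fewer iterations needed
```

Both converge to √2 ≈ 1.41421356; the cubic method reaches machine precision in fewer steps, at the price of one extra derivative evaluation per step.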
  • Calculus Online Textbook Chapter 2
Contents. CHAPTER 1 Introduction to Calculus: 1.1 Velocity and Distance; 1.2 Calculus Without Limits; 1.3 The Velocity at an Instant; 1.4 Circular Motion; 1.5 A Review of Trigonometry; 1.6 A Thousand Points of Light; 1.7 Computing in Calculus. CHAPTER 2 Derivatives: The Derivative of a Function; Powers and Polynomials; The Slope and the Tangent Line; Derivative of the Sine and Cosine; The Product and Quotient and Power Rules; Limits; Continuous Functions. CHAPTER 3 Applications of the Derivative: 3.1 Linear Approximation; 3.2 Maximum and Minimum Problems; 3.3 Second Derivatives: Minimum vs. Maximum; 3.4 Graphs; 3.5 Ellipses, Parabolas, and Hyperbolas; 3.6 Iterations x_{n+1} = F(x_n); 3.7 Newton's Method and Chaos; 3.8 The Mean Value Theorem and l'Hôpital's Rule.

CHAPTER 2 Derivatives. 2.1 The Derivative of a Function. This chapter begins with the definition of the derivative. Two examples were in Chapter 1. When the distance is t², the velocity is 2t. When f(t) = sin t we found v(t) = cos t. The velocity is now called the derivative of f(t). As we move to a more formal definition and new examples, we use new symbols f′ and df/dt for the derivative.

2A. At time t, the derivative f′(t) or df/dt or v(t) is

(1)    f′(t) = lim_{Δt→0} (f(t + Δt) − f(t))/Δt.

The ratio on the right is the average velocity over a short time Δt. The derivative, on the left side, is its limit as the step Δt (delta t) approaches zero.
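The displayed limit can be checked numerically for the chapter's running example f(t) = t², whose velocity is 2t; the shrinking steps below are my own choice:

```python
f = lambda t: t * t            # distance t^2, so the velocity is 2t
t = 3.0                        # exact derivative f'(3) = 6
avg_velocities = [(f(t + dt) - f(t)) / dt for dt in (0.1, 0.01, 0.001)]
# each average velocity equals 2t + dt, so the sequence tends to 6
```

The averages 6.1, 6.01, 6.001 over-estimate by exactly the step size, which makes the role of the limit Δt → 0 concrete.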
  • Efficient Computation of Sparse Higher Derivative Tensors⋆
Efficient Computation of Sparse Higher Derivative Tensors. Jens Deussen and Uwe Naumann, RWTH Aachen University, Software and Tools for Computational Engineering, Aachen, Germany. EMail: [email protected]

Abstract. The computation of higher derivative tensors is expensive even for adjoint algorithmic differentiation methods. In this work we introduce methods to exploit the symmetry and the sparsity structure of higher derivatives to considerably improve the efficiency of their computation. The proposed methods apply coloring algorithms to two-dimensional compressed slices of the derivative tensors. The presented work is a step towards feasibility of higher-order methods which might benefit numerical simulations in numerous applications of computational science and engineering.

Keywords: Adjoints · Algorithmic Differentiation · Coloring Algorithm · Higher Derivative Tensors · Recursive Coloring · Sparsity.

1 Introduction. The preferred numerical method to compute derivatives of a computer program is Algorithmic Differentiation (AD) [1,2]. This technique produces exact derivatives with machine accuracy up to an arbitrary order by exploiting elemental symbolic differentiation rules and the chain rule. AD distinguishes between two basic modes: the forward mode and the reverse mode. The forward mode builds a directional derivative (also: tangent) code for the computation of Jacobians at a cost proportional to the number of input parameters. Similarly, the reverse mode builds an adjoint version of the program, but it can compute the Jacobian at a cost that is proportional to the number of outputs. For that reason, the adjoint method is advantageous for problems with a small set of output parameters, as occurs frequently in many fields of computational science and engineering.
  • 2.8 the Derivative As a Function
2 Derivatives. Copyright © Cengage Learning. All rights reserved. 2.8 The Derivative as a Function.

The Derivative as a Function. We have considered the derivative of a function f at a fixed number a:

(1)    f′(a) = lim_{h→0} (f(a + h) − f(a))/h.

Here we change our point of view and let the number a vary. If we replace a in Equation 1 by a variable x, we obtain

(2)    f′(x) = lim_{h→0} (f(x + h) − f(x))/h.

Given any number x for which this limit exists, we assign to x the number f′(x). So we can regard f′ as a new function, called the derivative of f and defined by Equation 2. We know that the value of f′ at x, f′(x), can be interpreted geometrically as the slope of the tangent line to the graph of f at the point (x, f(x)). The function f′ is called the derivative of f because it has been "derived" from f by the limiting operation in Equation 2. The domain of f′ is the set {x | f′(x) exists} and may be smaller than the domain of f.

Example 1 - Derivative of a Function Given by a Graph. The graph of a function f is given in Figure 1. Use it to sketch the graph of the derivative f′.

Solution. We can estimate the value of the derivative at any value of x by drawing the tangent at the point (x, f(x)) and estimating its slope. For instance, for x = 5 we draw the tangent at P in Figure 2(a) and estimate its slope to be about 3/2, so f′(5) ≈ 1.5.
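The passage's point, that f′ is itself a new function derived from f, is easy to mimic numerically. The central-difference helper below is my own sketch, not from the book:

```python
import math

def derivative(f, h=1e-6):
    """Return an approximation to f' as a new function (central difference)."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

fprime = derivative(math.sin)   # should track cos across sin's domain
```

Here `derivative` maps the function sin to an approximation of its derivative function cos, mirroring how the limiting operation in Equation 2 assigns a whole new function f′ to f.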
  • Evaluating Higher Derivative Tensors by Forward Propagation of Univariate Taylor Series
MATHEMATICS OF COMPUTATION, Volume 69, Number 231, Pages 1117-1130. S 0025-5718(00)01120-0. Article electronically published on February 17, 2000.

EVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION OF UNIVARIATE TAYLOR SERIES. Andreas Griewank, Jean Utke, and Andrea Walther.

Abstract. This article considers the problem of evaluating all pure and mixed partial derivatives of some vector function defined by an evaluation procedure. The natural approach to evaluating derivative tensors might appear to be their recursive calculation in the usual forward mode of computational differentiation. However, with the approach presented in this article, much simpler data access patterns and similar or lower computational counts can be achieved through propagating a family of univariate Taylor series of a suitable degree. It is applicable for arbitrary orders of derivatives. Also it is possible to calculate derivatives only in some directions instead of the full derivative tensor. Explicit formulas for all tensor entries as well as estimates for the corresponding computational complexities are given.

1. Introduction. Many applications in scientific computing require second- and higher-order derivatives. Therefore, this article deals with the problem of calculating derivative tensors of some vector function y = f(x) with f : D ⊂ R^n → R^m that is the composition of (at least locally) smooth elementary functions. Assume that f is given by an evaluation procedure in C or some other programming language. Then f can be differentiated automatically [6]. The Jacobian matrix of first partial derivatives can be computed by the forward or reverse mode of the chain-rule based technique known commonly as computational differentiation (CD).
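The core trick, propagating truncated univariate Taylor polynomials through the evaluation procedure, fits in a few lines for a single-variable toy case. This is my own minimal sketch; the paper's machinery handles families of directions and arbitrary input dimension:

```python
def t_mul(a, b):
    """Truncated product of two univariate Taylor polynomials (coeff lists)."""
    d = len(a)
    c = [0.0] * d
    for i in range(d):
        for j in range(d - i):
            c[i + j] += a[i] * b[j]
    return c

# propagate x(t) = x0 + t through f(x) = x*x*x, truncated at degree 3
x0 = 2.0
x = [x0, 1.0, 0.0, 0.0]          # Taylor coefficients of x0 + t
y = t_mul(t_mul(x, x), x)        # coefficients of (x0 + t)^3 = [8, 12, 6, 1]
# the k-th derivative of x^3 at x0 = 2 is k! * y[k]: 8, 12, 12, 6
```

Overloading each elementary operation this way yields all derivatives up to the truncation degree in one forward sweep, which is the data-access advantage the abstract refers to.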
  • Week 3 Quiz: Differential Calculus: the Derivative and Rules of Differentiation
Week 3 Quiz: Differential Calculus: The Derivative and Rules of Differentiation. SGPE Summer School 2016.

Limits

Question 1: Find lim_{x→3} f(x), where f(x) = (x² − 9)/(x − 3).
(A) +∞ (B) −6 (C) 6 (D) Does not exist! (E) None of the above

Answer: (C). Note that the function f(x) = (x² − 9)/(x − 3) = (x − 3)(x + 3)/(x − 3) = x + 3 is actually a line. However, it is important to note that this function is undefined at x = 3. Why? x = 3 requires dividing by zero (which is inadmissible). As x approaches 3 from below and from above, the value of the function f(x) approaches f(3) = 6. Thus the limit lim_{x→3} f(x) = 6.

Question 2: Find lim_{x→2} f(x), where f(x) = 1776.
(A) +∞ (B) 1770 (C) −∞ (D) Does not exist! (E) None of the above

Answer: (E). The limit of any constant function at any point, say f(x) = C, where C is an arbitrary constant, is simply C. Thus the correct answer is lim_{x→2} f(x) = 1776.

Question 3: Find lim_{x→4} f(x), where f(x) = ax² + bx + c.
(A) +∞ (B) 16a + 4b + c (C) −∞ (D) Does not exist! (E) None of the above

Answer: (B). Applying the rules of limits:

    lim_{x→4} (ax² + bx + c) = lim_{x→4} ax² + lim_{x→4} bx + lim_{x→4} c
                             = a[lim_{x→4} x]² + b lim_{x→4} x + c = 16a + 4b + c.

Question 4: Find the limits in each case:
(i) lim_{x→0} x²/|x|
(ii) lim_{x→3} (2x + 3)/(4x − 9)
(iii) lim_{x→6} (x² − 3x)/(x + 3)

Answer:
(i) lim_{x→0} x²/|x| = lim_{x→0} |x|²/|x| = lim_{x→0} |x| = 0
(ii) lim_{x→3} (2x + 3)/(4x − 9) = (2·3 + 3)/(4·3 − 9) = 3
(iii) lim_{x→6} (x² − 3x)/(x + 3) = (6² − 3·6)/(6 + 3) = 2

Question 5: Show that lim_{x→0} sin x = 0. (Hint: −x ≤ sin x ≤ x for all x ≥ 0.)

Answer: Given the hint and the squeeze theorem we have lim_{x→0} (−x) = 0 ≤ lim_{x→0} sin x ≤ 0 = lim_{x→0} x, hence lim_{x→0} sin x = 0.

Question 6: Show that lim_{x→0} x sin(1/x) = 0.

Answer: Note first that for any real number t we have −1 ≤ sin t ≤ 1, so −1 ≤ sin(1/x) ≤ 1.
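A numeric probe (not a proof, and the helper and step sizes are my own) agrees with the answers above for Questions 1 and 6:

```python
import math

g = lambda x: (x * x - 9) / (x - 3)   # Question 1: limit 6 at x = 3
q6 = lambda x: x * math.sin(1 / x)    # Question 6: limit 0 at x = 0

probe_g = [g(3 + s) for s in (1e-2, 1e-4, 1e-6)]
probe_q6 = [q6(s) for s in (1e-2, 1e-4, 1e-6)]
# probe_g approaches 6; probe_q6 is squeezed toward 0 since |q6(x)| <= |x|
```

Such probes are useful sanity checks, but only the algebra (or the squeeze theorem) establishes the limits.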
  • Higher-Order Derivatives (MCV4U: Calculus & Vectors)
MCV4U: Calculus & Vectors. Higher-Order Derivatives. J. Garvin. (Fundamental rules of derivatives.)

Implicit Differentiation Recap. Determine the derivative of y³ + 4x²y⁵ − 5x = 10. Remember that all terms involving y will have dy/dx in their derivations, and that the central term uses the product rule.

    d/dx (y³ + 4x²y⁵ − 5x) = d/dx (10)
    d/dx (y³) + 4 d/dx (x²y⁵) − 5 d/dx (x) = 0
    3y² dy/dx + 4[2xy⁵ + x²(5y⁴) dy/dx] − 5 = 0
    3y² dy/dx + 8xy⁵ + 20x²y⁴ dy/dx − 5 = 0
    dy/dx = (5 − 8xy⁵)/(3y² + 20x²y⁴)

Higher-Order Derivatives. Recall that the derivative of a function represents its rate of change. This is often called the first derivative. What if we are interested in the rate of change of the derivative itself, that is, how is the derivative changing with respect to the independent variable? This concept is known as the second derivative.

The Second Derivative. The second derivative is the derivative of the derivative function. In Lagrange notation, it is denoted f″(x).

Example. Determine the second derivative of f(x) = −5x³ + 4x². Using the power rule, f′(x) = −15x² + 8x. Differentiating the first derivative, f″(x) = −30x + 8.

Example. Determine the second derivative of f(x) = √x + 3. Using f(x) = x^{1/2} + 3, f′(x) = (1/2) x^{−1/2}.
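Both worked examples can be verified numerically. The check below is my own construction: it finds a point on the implicit curve with a scalar Newton iteration, compares a difference-quotient slope against the derived dy/dx formula, and then confirms f″ for the cubic with a central second difference:

```python
# implicit curve y^3 + 4x^2 y^5 - 5x = 10 and the derived dy/dx
F = lambda x, y: y ** 3 + 4 * x ** 2 * y ** 5 - 5 * x - 10
dydx = lambda x, y: (5 - 8 * x * y ** 5) / (3 * y ** 2 + 20 * x ** 2 * y ** 4)

def y_on_curve(x, y_guess=2.0):
    y = y_guess
    for _ in range(60):                       # Newton iteration in y
        y -= F(x, y) / (3 * y ** 2 + 20 * x ** 2 * y ** 4)
    return y

y0 = y_on_curve(0.0)                          # y(0) = 10 ** (1/3)
d = 1e-5
slope = (y_on_curve(d) - y_on_curve(-d)) / (2 * d)   # should match dydx(0, y0)

# second-derivative example: f(x) = -5x^3 + 4x^2, so f''(1) = -22
f = lambda x: -5 * x ** 3 + 4 * x ** 2
h = 1e-4
f2 = (f(1 + h) - 2 * f(1) + f(1 - h)) / h ** 2
```

Agreement between the difference-quotient slope and dydx(0, y0) confirms the implicit-differentiation algebra at one concrete point on the curve.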
  • Derivative Process Using Fractal Indices K Equals One-Half, One-Third, and One-Fourth
Journal of Applied Mathematics and Physics, 2017, 5, 453-461. ISSN Online: 2327-4379, ISSN Print: 2327-4352.

Derivative Process Using Fractal Indices k Equals One-Half, One-Third, and One-Fourth. Salvador A. Loria, College of Engineering & Graduate School, Nueva Ecija University of Science and Technology, Cabanatuan City, Philippines.

How to cite this paper: Loria, S.A. (2017) Derivative Process Using Fractal Indices k Equals One-Half, One-Third, and One-Fourth. Journal of Applied Mathematics and Physics, 5, 453-461. https://doi.org/10.4236/jamp.2017.52040. Received: January 12, 2017. Accepted: February 18, 2017. Published: February 21, 2017.

Abstract. In this paper, the researcher explored and analyzed the relation between fractional indices of the derivative. It is proved that taking the half-derivative twice is equivalent to the first derivative. Also, taking the one-third derivative three times is equal to the first derivative. Similarly, taking the one-fourth derivative four times is equal to the first derivative.

Keywords: Fractal Index, Derivative, Gamma Function, Factorial.

Copyright © 2017 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

1. Introduction. Fractal is a general term used to express both the geometry and the procedures which display self-similarity, scale invariance, and fractional dimension [1]. Geometry deals with shapes or objects described in integral dimension. A point has 0 dimensions, a line has 1 dimension, a surface has 2 dimensions, and a solid has 3 dimensions [2].
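The "half-derivative twice equals the first derivative" claim can be checked directly via the standard Gamma-function rule for power functions, D^k x^n = Γ(n+1)/Γ(n+1−k) · x^{n−k}; the rule is standard, but the example x² and the code are my own sketch:

```python
from math import gamma

def frac_deriv_coeff(n, k):
    """Coefficient of the k-th (possibly fractional) derivative of x^n:
    D^k x^n = Gamma(n+1)/Gamma(n+1-k) * x^(n-k)."""
    return gamma(n + 1) / gamma(n + 1 - k)

# half-derivative applied twice to x^2:
c1 = frac_deriv_coeff(2, 0.5)            # D^{1/2} x^2 = c1 * x^{3/2}
c2 = c1 * frac_deriv_coeff(1.5, 0.5)     # D^{1/2} (c1 x^{3/2}) = c2 * x

# one-third derivative applied three times to x^2:
c3 = (frac_deriv_coeff(2, 1 / 3)
      * frac_deriv_coeff(2 - 1 / 3, 1 / 3)
      * frac_deriv_coeff(2 - 2 / 3, 1 / 3))
```

The Gamma factors telescope, so both c2 and c3 reduce to Γ(3)/Γ(2) = 2, the coefficient of the ordinary first derivative D x² = 2x.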
  • Putnam Practice—Integration and Differentiation
PUTNAM PRACTICE: INTEGRATION AND DIFFERENTIATION

Problem 1 (1992). Define C(α) to be the coefficient of x^1992 in the power series expansion about x = 0 of (1 + x)^α. Evaluate

    ∫_0^1 C(−y − 1) (1/(y + 1) + 1/(y + 2) + ··· + 1/(y + 1992)) dy.

Problem 2 (1992). Let f be an infinitely differentiable real-valued function defined on the real numbers. If f(1/n) = n²/(n² + 1), n = 1, 2, 3, …, compute the values of the derivatives f^(k)(0), k = 1, 2, 3, ….

Problem 3 (1993). The function K(x, y) is positive and continuous for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and the functions f(x) and g(x) are positive and continuous for 0 ≤ x ≤ 1. Suppose that for all x, 0 ≤ x ≤ 1,

    ∫_0^1 f(y) K(x, y) dy = g(x)  and  ∫_0^1 g(y) K(x, y) dy = f(x).

Show that f(x) = g(x) for 0 ≤ x ≤ 1.

Problem 4 (1994). Find the set of all real numbers k with the following property: For any positive differentiable function f that satisfies f′(x) > f(x) for all x, there is some number N such that f(x) > e^{kx} for all x > N.

Problem 5 (1995). For what pairs (a, b) of positive real numbers does the improper integral

    ∫_b^∞ ( √(√(x + a) − √x) − √(√x − √(x − b)) ) dx

converge?

Problem 6 (1997). Let f be a twice-differentiable real-valued function satisfying

    f(x) + f″(x) = −x g(x) f′(x),

where g(x) ≥ 0 for all real x.
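For Problem 2, a quick consistency check (my observation, not part of the problem set): the smooth candidate f(x) = 1/(1 + x²) matches the prescribed values, since 1/(1 + 1/n²) = n²/(n² + 1); the actual problem still requires arguing why the derivatives of any qualifying f at 0 are forced.

```python
# candidate function for Problem 2; checking it against the given data
f = lambda x: 1.0 / (1.0 + x * x)
matches = all(abs(f(1.0 / n) - n * n / (n * n + 1.0)) < 1e-12
              for n in range(1, 100))
```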
  • MA 2231 Lecture 18 - Higher Order Derivatives, Implicit Differentiation
MA 2231 Lecture 18 - Higher Order Derivatives, Implicit Differentiation. Friday, March 15, 2019.

Objectives: Introduce dy/dx notation, higher order derivatives, and implicit differentiation.

Derivative Notation. For a function f, we have been using a prime, as in f′, to indicate the derivative function. According to Wikipedia, this notation is named after Joseph-Louis Lagrange, one of the pioneers in the development of calculus. We generally think of Isaac Newton and Gottfried Leibniz as being the inventors of calculus, but many of the basic ideas go back much further.

Leibniz's Notation. The derivative is a slope,

(1)    (change in y)/(change in x).

We find the derivative by taking the slope of a secant line through two points on the graph of f, and then take the limit as one point approaches the other, and both the change in y and the change in x go to zero. If you think of these limits as infinitesimals (infinitely small numbers), dy and dx, the derivative is the fraction

(2)    f′(x) = dy/dx.

It's useful sometimes to think of derivatives this way, although it's best to not take it too far. In any case, the notation is commonly used, and for a function

(3)    y = f(x),

all of the following represent the derivative of f:

(4)    dy/dx = f′(x) = df/dx = (d/dx) f(x) = y′.

Newton's and Euler's Notations. These might come up, but only in passing. Newton's notation indicates a derivative with a dot over the letter, and usually is associated with an input variable representing time, so the derivative might be written

(5)    ẏ = dy/dt.
  • What Are the Mean Value and Taylor Theorems Saying?
What Are the Mean Value and Taylor Theorems Saying?

We have studied two propositions about the derivative of a function that sound vaguely alike.

(1) On the one hand, the mean value theorem (Week 13, Stewart 3.2) says that

    f(x) = f(a) + f′(c)(x − a)   (exactly!)

for some c between a and x. For example, e^x = 1 + e^c x.

(2) On the other hand, the best linear approximation (Week 16, Stewart 2.9) says that

    f(x) = f(a) + f′(a)(x − a) + something small

if x is close to a. For example, e^x ≈ 1 + x (that is, e^x = 1 + x + a small error).

Quick exercise: What are the two crucial differences between (1) and (2)? Read closely.

An immediately interesting question about statement (2) is: How small? The word "small" by itself is too vague to be useful in technical work. We can cast some light on the relation between (1) and (2) by stepping back a bit to the definition of continuity, which can be paraphrased this way:

(0) If f is continuous, then f(x) = f(a) + something small if x is close to a. For example, e^x → e^0 = 1 as x → 0.

Again one should ask, "How small?" In this case the answer is given by the detailed definition of continuity (through the definition of a limit): For every ε there is a δ such that if |x − a| < δ, then |f(x) − f(a)| < ε. We will talk about statements like this in more depth in Week 26; for now the only thing to understand is that (0) merely says that as x − a goes to 0, f(x) − f(a) also goes to 0; it says nothing about how fast f(x) − f(a) approaches 0.
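For the example in (1) the mean-value point c can be computed explicitly: solving e^x = 1 + e^c x for c gives c = ln((e^x − 1)/x). A quick check on the interval [0, 1] (my own choice of interval):

```python
import math

x = 1.0
c = math.log((math.exp(x) - 1.0) / x)   # solve e^x = 1 + e^c * x for c
inside = 0.0 < c < x                    # MVT promises c strictly between 0 and x
residual = math.exp(x) - (1.0 + math.exp(c) * x)   # identity (1) holds exactly
```

Here c = ln(e − 1) ≈ 0.541, comfortably inside (0, 1), and the residual of the exact identity is zero up to rounding, in contrast to the approximation in (2), whose error grows with x.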