AMATH 731: Applied Functional Analysis, Fall 2008
The Dirac "delta function" distribution


(The material on pages 1-8 is taken from ERV's Fall 2005 lecture notes for AMATH 351, "Differential Equations II.")

Introduction

Suppose you have a chemical (or radioactive, for that matter) species "X" in a beaker that decays according to the rate law

    dx/dt = −kx,                                                        (1)

where x(t) is the concentration at time t. Suppose that at time t = 0 there is an amount x_0 of X present. Of course, if the beaker is left alone, then the amount of X at time t ≥ 0 will be given by

    x(t) = x_0 e^{−kt}.                                                 (2)

Now suppose that at time a > 0 you quickly (i.e., instantaneously) add an amount A > 0 of X to the beaker. Then what is x(t), the amount of X at time t ≥ 0?

Well, for 0 ≤ t < a, i.e., for all times before you add A units of X to the beaker, x(t) is given by Eq. (2) above. At t = a there would have been x_0 e^{−ka} in the beaker, but you added A, to give x(a) = x_0 e^{−ka} + A. Starting at t = a, the system again evolves according to the rate law; we can regard x(a) as the initial condition and measure time from t = a. The amount of X in the beaker will be

    x(t) = (x_0 e^{−ka} + A) e^{−k(t−a)},  for t ≥ a.                   (3)

Let's summarize our result compactly: for the above experiment, the amount of X in the beaker will be given by

    x(t) = { x_0 e^{−kt},                    0 ≤ t < a,
           { (x_0 e^{−ka} + A) e^{−k(t−a)},  t ≥ a.                     (4)

A qualitative sketch of the solution is given below. Clearly, the solution x(t) has a discontinuity at t = a, the time of instantaneous addition of the amount A. Otherwise, it is differentiable at all other points.

[Figure: x(t) vs. t, decaying from x_0 and jumping from x(a) to x(a) + A at t = a.]

Note that we can write the solution in Eq. (4) even more compactly as follows,

    x(t) = x_0 e^{−kt} + A e^{−k(t−a)} H(t−a),  t ≥ 0,                  (5)

where H(t) is the Heaviside function reviewed earlier. We shall return to this solution a little later.

We now ask whether the above operation – the instantaneous addition of an amount A of X to the beaker – can be represented mathematically, perhaps with a function f(t), so that the evolution of x(t) can be expressed as

    dx/dt = −kx + f(t),                                                 (6)

where f(t) models the instantaneous addition of an amount A of X at time t = a. The answer is "Yes," and f(t) will be the so-called "Dirac delta function" f(t) = A δ(t−a).

But in order to appreciate this result, let us now consider the case of a less brutal addition of X to the beaker. Suppose that we add an amount of A units of X, but over a time interval of length ∆, at a constant rate of A/∆ units per unit time. This means that our evolution equation for x(t) will take the form

    dx/dt = { −kx,          0 ≤ t < a,
            { −kx + A/∆,    a ≤ t ≤ a + ∆,                              (7)
            { −kx,          t ≥ a + ∆.

We can express this in compact form as

    dx/dt = −kx + f_∆(t),                                               (8)

where

    f_∆(t) = (A/∆) [H(t−a) − H(t−(a+∆))].                               (9)

A graph of the function f_∆(t) is sketched below. Note that the area under the curve and above the t-axis is A, as it should be: when you integrate a rate function over a time interval [a, b], you obtain the amount added over that time interval.

[Figure: f_∆(t) vs. t, a box of height A/∆ over the interval [a, a+∆].]

We can solve the DE in (8) in two ways: (1) as a linear first-order inhomogeneous DE, or (2) using Laplace transforms. Here we'll use Method (2). Taking LTs of both sides of (8) gives

    s X(s) − x_0 + k X(s) = F_∆(s).                                     (10)

Solving for X(s):

    X(s) = x_0/(s+k) + F_∆(s)/(s+k).                                    (11)

Noting that the inverse LT of 1/(s+k) is e^{−kt}, we have, after taking inverse LTs:

    x(t) = x_0 e^{−kt} + f_∆(t) ∗ e^{−kt}.                              (12)
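Before working out this convolution in closed form below, Eq. (12) can be sanity-checked numerically. The following is a minimal Python sketch (the parameter values k = 0.5, a = 3, A = 2 and the sample time t = 5 are illustrative assumptions, not values from the notes): it evaluates f_∆ ∗ e^{−kt} by quadrature and shows that, as ∆ shrinks, it approaches the "instantaneous addition" term A e^{−k(t−a)} H(t−a) of Eq. (5).

    import numpy as np
    from scipy.integrate import quad

    k, a, A = 0.5, 3.0, 2.0   # illustrative values (assumptions)

    def conv(t, delta):
        # (f_Delta * e^{-kt})(t) as in Eq. (13); f_Delta is nonzero only on
        # [a, a + delta] (Eq. (9)), so the integral reduces to that window.
        lo, hi = a, min(t, a + delta)
        if hi <= lo:
            return 0.0
        val, _ = quad(lambda tau: np.exp(-k * (t - tau)), lo, hi)
        return (A / delta) * val

    t = 5.0
    target = A * np.exp(-k * (t - a))   # the jump term of Eq. (5) for t > a
    for delta in [1.0, 0.1, 0.01, 0.001]:
        print(delta, conv(t, delta), target)   # conv -> target (about 0.7358)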
Note that the first term x_0 e^{−kt} is the solution to the homogeneous DE associated with (8). Now compute the convolution of f_∆ with e^{−kt} as follows:

    f_∆(t) ∗ e^{−kt} = ∫_0^t f_∆(τ) e^{−k(t−τ)} dτ                      (13)
                     = (A/∆) ∫_0^t e^{−k(t−τ)} H(τ−a) dτ − (A/∆) ∫_0^t e^{−k(t−τ)} H(τ−(a+∆)) dτ
                     = I_1(t) + I_2(t).

Because of the Heaviside function H(τ−a), the integrand in the first integral I_1(t) – hence the integral itself – will be zero for 0 ≤ t < a. For t ≥ a, we can compute I_1(t) as follows:

    I_1(t) = (A/∆) ∫_a^t e^{−k(t−τ)} dτ = (A/∆) e^{−kt} ∫_a^t e^{kτ} dτ
           = (A/(k∆)) [1 − e^{−k(t−a)}],  t ≥ a.                        (14)

Likewise, we can determine the second integral to be

    I_2(t) = −(A/(k∆)) [1 − e^{−k(t−(a+∆))}],  t ≥ a + ∆.               (15)

The final result for x(t) is

    x(t) = x_0 e^{−kt} + I_1(t) H(t−a) + I_2(t) H(t−(a+∆)).             (16)

The qualitative behaviour of the graph of x(t) is sketched below. It is possible that x(t) is decreasing over the interval [a, a+∆], during which time the amount A is added. But if ∆ is sufficiently small, i.e., the rate A/∆ is large enough, x(t) will be increasing over this interval.

[Figure: x(t) vs. t, rising from x(a) toward x(a) + A over the interval [a, a+∆].]

But these points are rather secondary. What is of prime importance is the difference in concentrations between time t = a, the time at which we began to add X to the beaker, and time t = a + ∆, the time at which we stopped adding X. Remember that regardless of ∆, we are always adding a total amount A to the beaker. This difference in concentrations is given by

    x(a+∆) − x(a) = x_0 e^{−k(a+∆)} + (A/(k∆)) [1 − e^{−k∆}] − x_0 e^{−ka}      (17)
                  = [1 − e^{−k∆}] [A/(k∆) − x_0 e^{−ka}].

In particular, we are interested in what happens to this difference as ∆ → 0, i.e., as the time interval over which we add the A units of X goes to zero. To find this limit, if it exists, we expand the exponential involving ∆ to give

    e^{−k∆} = 1 − k∆ + O(∆²)  as ∆ → 0,                                 (18)

so that

    1 − e^{−k∆} = k∆ + O(∆²)  as ∆ → 0.                                 (19)

Substituting this result into (17) gives the result

    x(a+∆) − x(a) → A  as ∆ → 0.                                        (20)

In other words, in the limit ∆ → 0, the graph of x(t) will exhibit a discontinuity at t = a. The magnitude of this "jump" is A, precisely what was found with the earlier method.

We now return to the inhomogeneous DE in (8) and examine the behaviour of the inhomogeneous term f_∆(t) as ∆ → 0. Recall that this is the "driving" term, the function that models the addition of a total of A units of X to the beaker over the time interval [a, a+∆]. The width of the box that makes up the graph of f_∆(t) is ∆. The height of the box is A/∆. In this way, the area of the box – the total amount of X delivered – is A. As ∆ decreases, the box gets thinner and higher. In the limit ∆ → 0, we have produced a "function" f_0(t) that is zero everywhere except at t = a, where it is undefined. This is the idea behind the "Dirac delta function," which we explore in more detail below.

The Dirac delta function

Let us define the following function I_ε(t) for an ε > 0:

    I_ε(t) = { 1/ε,  0 ≤ t ≤ ε,
             { 0,    t > ε.                                             (21)

[Figure: I_ε(t) vs. t, a box of height 1/ε over the interval [0, ε].]

Clearly

    ∫_{−∞}^{∞} I_ε(t) dt = 1,  for all ε > 0.                           (22)

Now let f(t) be a continuous function on [0, ∞) and consider the integrals

    ∫_{−∞}^{∞} f(t) I_ε(t) dt = (1/ε) ∫_0^ε f(t) dt,                    (23)

in particular for small ε and the limit ε → 0. For any ε > 0, because of the continuity of f(t), there exists, by the Mean Value Theorem for Integrals, a c_ε ∈ [0, ε] such that

    ∫_0^ε f(t) dt = f(c_ε) · ε.                                         (24)

Therefore,

    ∫_0^ε f(t) I_ε(t) dt = f(c_ε).                                      (25)

As ε → 0, c_ε → 0, since the interval [0, ε] → {0}. Therefore

    lim_{ε→0} ∫_0^ε f(t) I_ε(t) dt = f(0).                              (26)
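Eq. (26) is easy to check numerically. A minimal Python sketch (the test function f(t) = cos t, with f(0) = 1, is an arbitrary choice, not from the notes): by the Mean Value Theorem argument above, (1/ε) ∫_0^ε f(t) dt = f(c_ε) for some c_ε ∈ [0, ε], and this should approach f(0) as ε → 0.

    import numpy as np
    from scipy.integrate import quad

    f = np.cos                          # any continuous test function; f(0) = 1

    for eps in [1.0, 0.1, 0.01, 0.001]:
        val, _ = quad(f, 0.0, eps)      # int_0^eps f(t) dt
        print(eps, val / eps)           # (1/eps) * integral -> f(0) = 1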
We may, of course, translate the function I_ε(t) to produce the result

    lim_{ε→0} ∫_{−∞}^{∞} f(t) I_ε(t−a) dt = f(a).                       (27)

This is essentially a definition of the "Dirac delta function," which is not a function but rather a "generalized function" that is defined in terms of integrals over continuous functions. In proper mathematical parlance, the Dirac delta function is a distribution. The Dirac delta function δ(t) is defined by the integral

    ∫_{−∞}^{∞} f(t) δ(t) dt = f(0).                                     (28)

Moreover,

    ∫_c^d f(t) δ(t) dt = 0,  if c > 0.                                  (29)

As well, we have the translation result

    ∫_{−∞}^{∞} f(t) δ(t−a) dt = f(a).
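The sifting property (28), and its translated version, can also be verified symbolically with SymPy's built-in DiracDelta. A minimal sketch (the test function cos t and the shift a = 1 are arbitrary choices):

    import sympy as sp

    t = sp.symbols('t', real=True)
    a = 1

    # Eq. (28): integrating f(t) delta(t) over the real line picks out f(0)
    print(sp.integrate(sp.cos(t) * sp.DiracDelta(t), (t, -sp.oo, sp.oo)))       # 1

    # Translation result: integrating f(t) delta(t - a) picks out f(a)
    print(sp.integrate(sp.cos(t) * sp.DiracDelta(t - a), (t, -sp.oo, sp.oo)))   # cos(1)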