Titles of Books, Papers and Journals Abbreviated Here Are Given More Fully in the Bibliography

NOTES

Titles of books, papers and journals abbreviated here are given more fully in the bibliography. The following abbreviations for Laplace's works are used throughout these notes:

Essai - Essai Philosophique sur les Probabilités.

T.A.P. I. (or II.) - Théorie Analytique des Probabilités, Book I (or II), as reprinted in Volume 7 of the Œuvres Complètes de Laplace. When "Supplement" is added to "T.A.P.", the reference is to the appropriately numbered Supplement appearing in the third edition of the T.A.P. as printed in the Œuvres Complètes de Laplace.

O.C. - Œuvres Complètes de Laplace.

Leçon - Sur les probabilités, a lecture given in the tenth session of the Leçons de Mathématiques.

The 1932 German translation of the Essai, edited by R. von Mises, will be referenced throughout as "von Mises." A reference "see Note n" will always indicate Note n to the article currently under comment.

My indebtedness to the writer (Bernard Bru) of the notes in the Bru/Thom edition [1986] of the Essai will be evident to anyone who has studied that work. The notes by Hilda Pollaczek-Geiringer in von Mises [1932] have also been useful; these notes should be referred to for details of the "collective" approach to probability.

Foreword

1. This lecture, originally entitled "Sur les probabilités," was given in the tenth session of the Leçons de Mathématiques; it was published in 1812. An early version of the opening part of the Essai was published in 1810; it is reprinted in full in Gillispie [1979].

2. Writing of Laplace's work on the apparent anomalies of the mean motions of Jupiter and Saturn, Wilson explains that, in Laplace's terminology,

Analyse refers primarily to the set of operational symbols and symbolic procedures developed by Leibniz, the Bernoullis, Euler, and others, for the formulation and solution of differential equations.
Although the Analyse employed by Laplace consisted of algorithmic processes, their successful use was far from being merely a matter of calculation. The user needed to assess and choose among procedures with a clear idea of the goal at which he aimed, the kind of solution to be expected or hoped for. [1985, pp. 24-25]

However, not too much violence will be done to this notion if we think of Analyse as "mathematics."

3. The word "chance," in French as in English, is patient of a number of interpretations, the most common being "probability" and "fortuitous event." It is with the latter meaning that Laplace uses the word in the Essai. The use of "chances" in the sense of "things which may befall" was not uncommon in the 18th and 19th centuries. For example, Emerson writes

Chance is an event, or something that happens without the design or direction of any agent; and is directed or brought about by nothing but the laws of nature. [1776, p. 2]

This is perhaps why Bayes felt it necessary to state in his Essay of 1763 "By chance I mean the same as probability" [p. 376].

On probability

1. A similar sentiment had been expressed earlier by David Hume, who, in Section VIII of his Inquiry Concerning Human Understanding, wrote "It is universally allowed, that nothing exists without a cause of its existence" [Hume, 1894, p. 366]. Laplace is using "cause" in a fairly usual sense here. Keynes [1973] notes the "metaphysical difficulties which surround the true meaning of cause" [p. 306], and goes on to say

I have followed a practice not uncommon amongst writers on probability, who constantly use the term cause, where hypothesis might seem more appropriate. [loc. cit.]

De Morgan is even more forthright: he says

By a cause, is to be understood simply a state of things antecedent to the happening of an event, without the introduction of any notion of agency, physical or moral. [1838, p. 53]

Quetelet in fact distinguishes three principal classes of causes, viz.

Constant causes are those which act in a continuous manner, with the same intensity, and in the same direction. Variable causes act in a continuous manner, with energies and tendencies which change either according to determined laws or without any apparent law ... Accidental causes only manifest themselves fortuitously, and act indifferently in any direction. [1849, p. 107]

One is tempted to say of cause what Mr. Pickwick said of politics, viz. "The word ... comprises, in itself, a difficult study of no inconsiderable magnitude." (See Chap. XV of Dickens [1837].)

2. What appears in earlier editions as un effet ("an effect") appears mistakenly in the Bru/Thom edition as en effet ("indeed").

3. See Leibniz [1710]. The following extracts are pertinent:

(a) All we have just said likewise agrees perfectly with the maxims of philosophers, who teach that a cause could not operate, without having a disposition to the action; and it is this disposition that contains a predetermination, be it what the agent has received from without, or be it what he has had by virtue of his own previous constitution. [§46]

(b) Therefore I acknowledge indifference only in one sense, which it gives notice of in the same way as contingency, or non-necessity. But as I have carefully explained more than once, I do not acknowledge an indifference of equilibrium, and I do not believe that one ever chooses, when one is completely indifferent. Such a choice would be a kind of pure chance, without conclusive reasoning, and as much visible as hidden. But such a chance, such an absolute and real fortuitousness, is a chimera that is never found in nature. Wise men all agree that chance is only a specious thing, like fortune; it is ignorance of causes that leads to it.
But if there had been such a vague indifference, or else if one had chosen without there having been anything to lead one to choose, the chances would be something real, similar to what is found in that small change of direction in particles, occurring for no rhyme or reason, to the feeling of Epicurus, who introduced it to eschew necessity, of which Cicero has justifiably made so much fun. [§303]

4. Laplace had expressed this sentiment before: in his [1773] he wrote

The present state of the system of nature is clearly a sequel to what it was a moment before, and, if we conceive of an intelligence that, at a given instant, encompasses all the relationships of the beings of this universe, it would be able to determine at any time whatsoever, in the past or in the future, the respective position, the movements and generally the attachments of all these beings. [O.C. VIII, p. 144]

Laplace's position was by no means unreservedly accepted. Poincaré in fact wrote

A very slight cause, which escapes us, determines a considerable effect which we can not help seeing, and then we say this effect is due to chance. If we could know exactly the laws of nature and the situation of the universe at the initial instant, we should be able to predict exactly the situation of this same universe at a subsequent instant. But even when the natural laws should have no further secret for us, we could know the initial situation only approximately. If that permits us to foresee the subsequent situation with the same degree of approximation, this is all we require, we say the phenomenon has been predicted, that it is ruled by laws. But this is not always the case; it may happen that slight differences in the initial conditions produce very great differences in the final phenomena; a slight error in the former would make an enormous error in the latter. Prediction becomes impossible and we have the fortuitous phenomenon. [1946, pp. 397-398]

5. The word "world" may be interpreted throughout this translation in the broad sense of "universe."

6. An anonymous tract of 1716 was devoted to a discussion of the evils to be expected from the appearance of a particularly noteworthy aurora borealis. De Morgan comments on the work as follows:

The prodigy, as described, was what we should call a very decided and unusual aurora borealis. The inference was, that men's sins were bringing on the end of the world. [1915, vol. I, p. 134]

7. The evil effects foreshadowed by the appearance of comets had long been recognized. In his essay entitled An Excellent Discourse of the Names, Genus, Species, Efficient and Final Causes of all Comets, &c., Wharton wrote

It has been a received Opinion in all Ages, that Comets are certain Funebrious Appearances, secret Fires and Torches of Death rather than of Life, and were ever look'd upon as the threatening Eyes of Divine Vengeance, and the Tongue of an Ireful Deity, portending the Death of Princes, Plague of the People, Famine, and Earthquakes, with horrible and terrible Tempests. [1683, p. 159]

As holders, or at least reporters, of these opinions Wharton cited Aristotle, Cicero, Pliny, the Holy Fathers (Tertullian, St Augustine), meteorologers (Fromandus) and astronomers (Tycho Brahe, Kepler). He also noted the role played by the fixed stars near to which the comet appears - for example,

The Dominion of Mercury portends great Calamity unto all those that Live by their own Industry, and such as love and favour the Muses, with the Death of some great Personage, Wars, Famine, and Pestilence; of Diseases, the Phrenzy, Lethargy, Epilepsie, and griefs of the Head.
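The point Poincaré makes in the passage quoted in Note 4 to "On probability" - that slight differences in initial conditions may produce very great differences in the final phenomena - is easy to illustrate numerically. The sketch below uses the logistic map, a standard modern example of such behaviour (it is not, of course, an example Poincaré himself used): two trajectories starting a distance of 10^-10 apart become entirely uncorrelated after a few dozen iterations.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1 - x), which is chaotic at r = 4.

def logistic(x, r=4.0):
    """One step of the logistic map; maps [0, 1] into [0, 1] for r = 4."""
    return r * x * (1.0 - x)

# Two nearly identical initial conditions, 1e-10 apart.
x, y = 0.3, 0.3 + 1e-10

max_sep = 0.0
for n in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# The tiny initial error roughly doubles at each step, so within a few
# dozen iterations the two trajectories are as far apart as the state
# space allows: "prediction becomes impossible", in Poincare's phrase.
print(max_sep)
```

Running the sketch shows the separation growing from 10^-10 to order one, which is precisely the situation Poincaré contrasts with the well-behaved case in which approximate knowledge of the present yields approximate knowledge of the future.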