Fractional Poisson Process in Terms of Alpha-Stable Densities


FRACTIONAL POISSON PROCESS IN TERMS OF ALPHA-STABLE DENSITIES

by

DEXTER ODCHIGUE CAHOY

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Dissertation Advisor: Dr. Wojbor A. Woyczynski

Department of Statistics
CASE WESTERN RESERVE UNIVERSITY
August 2007


CASE WESTERN RESERVE UNIVERSITY
SCHOOL OF GRADUATE STUDIES

We hereby approve the dissertation of DEXTER ODCHIGUE CAHOY, candidate for the Doctor of Philosophy degree.*

Committee Chair: Dr. Wojbor A. Woyczynski, Dissertation Advisor, Professor, Department of Statistics
Committee: Dr. Joe M. Sedransk, Professor, Department of Statistics
Committee: Dr. David Gurarie, Professor, Department of Mathematics
Committee: Dr. Matthew J. Sobel, Professor, Department of Operations

August 2007

*We also certify that written approval has been obtained for any proprietary material contained therein.


Table of Contents

Table of Contents
List of Tables
List of Figures
Acknowledgments
Abstract

1 Motivation and Introduction
  1.1 Motivation
  1.2 Poisson Distribution
  1.3 Poisson Process
  1.4 α-stable Distribution
    1.4.1 Parameter Estimation
  1.5 Outline of the Remaining Chapters

2 Generalizations of the Standard Poisson Process
  2.1 Standard Poisson Process
  2.2 Standard Fractional Generalization I
  2.3 Standard Fractional Generalization II
  2.4 Non-Standard Fractional Generalization
  2.5 Fractional Compound Poisson Process
  2.6 Alternative Fractional Generalization

3 Fractional Poisson Process
  3.1 Some Known Properties of fPp
  3.2 Asymptotic Behavior of the Waiting Time Density
  3.3 Simulation of Waiting Time
  3.4 The Limiting Scaled nth Arrival Time Distribution
  3.5 Intermittency
  3.6 Stationarity and Dependence of Increments
  3.7 Covariance Structure and Self-Similarity
  3.8 Limiting Scaled Fractional Poisson Distribution
  3.9 Alternative fPp

4 Estimation
  4.1 Method of Moments
  4.2 Asymptotic Normality of Estimators
  4.3 Numerical Experiment
    4.3.1 Simulated fPp Data

5 Summary, Conclusions, and Future Research Directions
  5.1 Summary
  5.2 Conclusions
  5.3 Future Research Directions

Appendix
  Appendix A. Some Properties of α+-Stable Densities
  Appendix B. Scaled Fractional Poisson Quantiles (3.12)

Bibliography


List of Tables

3.1 Properties of fPp compared with those of the ordinary Poisson process.
3.2 χ² goodness-of-fit test statistics with µ = 1.
3.3 Parameter estimates of the fitted model at^b, µ = 1.
4.1 Test statistics for comparing parameter (ν, µ) = (0.9, 10) estimators using simulated fPp data.
4.2 Test statistics for comparing parameter (ν, µ) = (0.3, 1) estimators using simulated fPp data.
4.3 Test statistics for comparing parameter (ν, µ) = (0.2, 100) estimators using simulated fPp data.
4.4 Test statistics for comparing parameter (ν, µ) = (0.6, 1000) estimators using simulated fPp data.
5.1 Probability density (3.12) values for ν = 0.05(0.05)0.50 and z = 0.0(0.1)3.0.
5.2 Probability density (3.12) values for ν = 0.05(0.05)0.50 and z = 3.1(0.1)5.0.
5.3 Probability density (3.12) values for ν = 0.55(0.05)0.95 and z = 0.0(0.1)3.0.
5.4 Probability density (3.12) values for ν = 0.55(0.05)0.95 and z = 3.1(0.1)5.0.
List of Figures

3.1 The mean of fPp as a function of time t and fractional order ν.
3.2 The variance of fPp as a function of time t and fractional order ν.
3.3 Waiting time densities of fPp (3.1) using µ = 1 and ν = 0.1(0.1)1 (log-log scale).
3.4 Scaled nth arrival time distributions for the standard Poisson process (3.8) with n = 1, 2, 3, 5, 10, 30, and µ = 1.
3.5 Scaled nth fPp arrival time distributions (3.11) corresponding to ν = 0.5, n = 1, 3, 10, 30, and µ = 1 (log-log scale).
3.6 Histograms for the standard Poisson process (leftmost panel) and fractional Poisson processes of orders ν = 0.9 and 0.5 (middle and rightmost panels).
3.7 The limit proportion of empty bins R(ν) using a total of B = 50 bins.
3.8 Dependence structure of the fPp increments for fractional orders ν = 0.4, 0.6, 0.7, 0.8, 0.95, and 0.99999, with µ = 1.
3.9 Distribution of the fPp increments on the sampling intervals (a) [0, 600], (b) (600, 1200], (c) (1200, 1800], and (d) (1800, 2400], corresponding to fractional order ν = 0.99999 and µ = 1.
3.10 Distribution of the fPp increments on the sampling intervals (a) [0, 600], (b) (600, 1200], (c) (1200, 1800], and (d) (1800, 2400], corresponding to fractional order ν = 0.8 and µ = 1.
3.11 Distribution of the fPp increments on the sampling intervals (a) [0, 600], (b) (600, 1200], (c) (1200, 1800], and (d) (1800, 2400], corresponding to fractional order ν = 0.6 and µ = 1.
3.12 The parameter estimate b as a function of ν, with µ = 1.
3.13 The function at^b fitted to the simulated covariance of fPp for different fractional orders ν, with µ = 1.
3.14 Two-dimensional covariance structure of fPp for fractional orders (a) ν = 0.25, (b) ν = 0.5, (c) ν = 0.75, and (d) ν = 1, with µ = 1.
3.15 Limiting distribution (3.12) for ν = 0.1(0.1)0.9 and 0.95, with µ = 1.
3.16 Sample trajectories of (a) the standard Poisson process, (b) fPp, and (c) the alternative fPp generated by the stochastic fractional differential equation (3.17), with ν = 0.5.
A α+-Stable Densities


ACKNOWLEDGMENTS

With special thanks to the following individuals who have helped me in the development of my paper and who have supported me throughout my graduate studies:

Dr. Wojbor A. Woyczynski, for your guidance and support as my graduate advisor, and for your confidence in my capabilities that has inspired me to become a better individual and statistician;

Dr. Vladimir V. Uchaikin, for the opportunity to work with you, and Dr. Enrico Scalas, for your help in clarifying some details in renewal theory;

My panelists and professors in the department - Dr. Joe M. Sedransk, Dr. David Gurarie, and Dr. Matthew J. Sobel: I am grateful for having you on my committee and for helping me improve my thesis with your superb suggestions and advice; Dr. Jiayang Sun, for your encouragement in developing my leadership and research skills; Dr. Joe M. Sedransk - the training I received in your class has significantly enhanced my research experience; and Dr. James C. Alexander and Dr. Jill E. Korbin, for the graduate assistantship grant;
Ms. Sharon Dingess, for your untiring assistance with my office needs throughout my tenure as a student in the department, and my friends and classmates in the Department of Statistics;

My parents, brother, and sisters, for your support of my goals and aspirations;

The First Assembly of God Church, particularly the Multimedia Ministry, for providing spiritual shelter and moral back-up;

Edwin and Ferly Santos, Malou Dham, Lowell Lorenzo, Pastor Clint and Sally Bryan, and Marcelo and Lynn Gonzalez, for your friendship;

My wife, Armi, and my daughter, Ysa, for your love and prayers and for the inspiration you have given me to be a better husband and father;

Most of all, my Heavenly Father, who is the source of all good and perfect gifts, and my Lord and Savior Jesus Christ, from whom all blessings flow: my love and praises.


Fractional Poisson Process in Terms of Alpha-Stable Densities

Abstract

by Dexter Odchigue Cahoy

The link between the fractional Poisson process (fPp) and the α-stable density is established by solving an integral equation. The result is then used to study properties of fPp such as the asymptotic nth arrival time and number-of-events distributions, the covariance structure, stationarity and dependence of increments, self-similarity, and the intermittency property. Asymptotically normal parameter estimators and their variants are derived; their properties are studied and compared using synthetic data. An alternative fPp model is also proposed. Finally, the asymptotic distribution of a scaled fPp random variable is shown to be free of some parameters, and formulae for integer-order, non-central moments are also derived.

Keywords: fractional Poisson process, α-stable, intermittency, scaled fPp, self-similarity


Chapter 1

Motivation and Introduction

1.1 Motivation

For almost two centuries, the Poisson process has served as the simplest, and yet one of the most important, stochastic models. Its main properties, namely the absence of memory and jump-shaped increments, model a large number of processes in several scientific fields such as epidemiology, industry, biology, queueing theory, traffic flow, and commerce (see Haight (1967, chap. 7)). On the other hand, there are many processes that exhibit long memory as well (e.g., network traffic and other complex systems). It would be useful if one could generalize the standard Poisson process to include systems or processes that do not lose their memory rapidly in the long run. It is largely this appealing feature that drives this thesis to investigate further the statistical properties of a particular generalization of the Poisson process called the fractional Poisson process (fPp).

Moreover, the generalization has some parameters that need to be estimated in order for the model to be applicable to a wide variety of interesting counting phenomena.
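To make this contrast concrete: the waiting times of the standard Poisson process are exponential and hence memoryless, whereas the waiting times of the fPp follow a heavy-tailed Mittag-Leffler law, which is what produces the slow memory loss. The sketch below simulates both kinds of waiting times; it assumes a standard mixture-type Mittag-Leffler generator (which reduces to the exponential case at ν = 1) rather than the simulation scheme developed in Chapter 3, and the function names are illustrative only.

```python
import numpy as np

def poisson_waiting_times(mu, n, rng=None):
    """Exponential(mu) waiting times of the standard Poisson process."""
    rng = np.random.default_rng(rng)
    return rng.exponential(scale=1.0 / mu, size=n)

def fpp_waiting_times(nu, mu, n, rng=None):
    """Mittag-Leffler waiting times with survival function E_nu(-mu * t**nu).

    Assumes the mixture-type generator
        T = -ln(U) * W**(1/nu) / mu**(1/nu),
        W = sin(nu*pi) / tan(nu*pi*V) - cos(nu*pi),
    with U, V independent Uniform(0, 1); at nu = 1, W = 1 and T is Exponential(mu).
    (The zero-probability edge case V = 0 is ignored in this sketch.)
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    v = rng.uniform(size=n)
    w = np.sin(nu * np.pi) / np.tan(nu * np.pi * v) - np.cos(nu * np.pi)
    return (-np.log(1.0 - u) / mu ** (1.0 / nu)) * w ** (1.0 / nu)

# For nu < 1 the Mittag-Leffler mean is infinite, so the sample mean keeps
# growing with the sample size, unlike the exponential case (mean near 1/mu).
rng = np.random.default_rng(2007)
print(poisson_waiting_times(mu=1.0, n=100_000, rng=rng).mean())
print(fpp_waiting_times(nu=0.5, mu=1.0, n=100_000, rng=rng).mean())
```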