Time Series Analysis Probability Distribution
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Von Mises' Frequentist Approach to Probability
Section on Statistical Education – JSM 2008. Von Mises' Frequentist Approach to Probability. Milo Schield (Augsburg College, Minneapolis, MN) and Thomas V. V. Burnham (Cognitive Systems, San Antonio, TX).

Abstract: Richard von Mises (1883-1953), the originator of the birthday problem, viewed statistics and probability theory as a science that deals with and is based on real objects rather than as a branch of mathematics that simply postulates relationships from which conclusions are derived. In this context, he formulated a strict frequentist approach to probability. This approach is limited to observations where there are sufficient reasons to project future stability – to believe that the relative frequency of the observed attributes would tend to a fixed limit if the observations were continued indefinitely by sampling from a collective. This approach is reviewed. Some suggestions are made for statistical education.

1. Richard von Mises
Richard von Mises (1883-1953) may not be well known by statistical educators even though in 1939 he first proposed a problem that is today widely known as the birthday problem. There are several reasons for this lack of recognition. First, he was educated in engineering and worked in that area for his first 30 years. Second, he published primarily in German. Third, his works in statistics focused more on theoretical foundations. Finally, his inductive approach to deriving the basic operations of probability is very different from the postulate-and-derive approach normally taught in introductory statistics. See Frank (1954) and Studies in Mathematics and Mechanics (1954) for a review of von Mises' works. This paper is based on his two English-language books on probability: Probability, Statistics and Truth (1936) and Mathematical Theory of Probability and Statistics (1964).
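The frequentist idea in this abstract — relative frequency settling toward a fixed limit as observations continue — is easy to demonstrate in a few lines. A minimal Python sketch (not from the paper), using the birthday problem that von Mises posed:

```python
import random

def shared_birthday(n_people: int) -> bool:
    """Draw n_people birthdays uniformly from 365 days; report any collision."""
    days = [random.randrange(365) for _ in range(n_people)]
    return len(set(days)) < len(days)

# Relative frequency of at least one shared birthday in rooms of 23 people.
# In von Mises' terms: the frequency stabilizes as the "collective" grows.
random.seed(1)
hits = 0
for trial in range(1, 100_001):
    hits += shared_birthday(23)
    if trial in (100, 1_000, 10_000, 100_000):
        print(f"{trial:>7} trials: relative frequency = {hits / trial:.4f}")
# The printed frequencies approach ~0.507, the known limit for n = 23.
```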
Lesson 7: Measuring Variability for Skewed Distributions (Interquartile Range)
NYS COMMON CORE MATHEMATICS CURRICULUM, Module 2, Algebra I.

Lesson 7: Measuring Variability for Skewed Distributions (Interquartile Range)

Student Outcomes
• Students explain why a median is a better description of a typical value for a skewed distribution.
• Students calculate the 5-number summary of a data set.
• Students construct a box plot based on the 5-number summary and calculate the interquartile range (IQR).
• Students interpret the IQR as a description of variability in the data.
• Students identify outliers in a data distribution.

Lesson Notes
Distributions that are not symmetrical pose some challenges in students' thinking about center and variability. The observation that the distribution is not symmetrical is straightforward. The difficult part is to select a measure of center and a measure of variability around that center. In Lesson 3, students learned that, because the mean can be affected by unusual values in the data set, the median is a better description of a typical data value for a skewed distribution. This lesson addresses what measure of variability is appropriate for a skewed data distribution. Students construct a box plot of the data using the 5-number summary and describe variability using the interquartile range; a worked example follows below.

Classwork
Exploratory Challenge 1/Exercises 1–3 (10 minutes): Skewed Data and Their Measure of Center
Verbally introduce the data set as described in the introductory paragraph and dot plot shown below.

Exploratory Challenge 1/Exercises 1–3: Skewed Data and Their Measure of Center
Consider the following scenario. A television game show, Fact or Fiction, was cancelled after nine shows. Many people watched the nine shows and were rather upset when it was taken off the air.
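The lesson's computations check out in a few lines of code. A minimal sketch of the 5-number summary, the IQR, and the common 1.5 × IQR outlier rule; the data values here are made up for illustration, not the lesson's dot-plot data:

```python
import numpy as np

data = np.array([50, 55, 60, 60, 62, 65, 70, 75, 80, 120])  # hypothetical, skewed

q1, median, q3 = np.percentile(data, [25, 50, 75])
five_number = (data.min(), q1, median, q3, data.max())
iqr = q3 - q1

# A common rule flags points beyond 1.5 * IQR from the quartiles as outliers.
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lo) | (data > hi)]

print("5-number summary:", five_number)
print("IQR:", iqr)
print("outliers:", outliers)  # 120 is flagged for this skewed sample
```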
Moving Average Filters
CHAPTER 15: Moving Average Filters

The moving average is the most common filter in DSP, mainly because it is the easiest digital filter to understand and use. In spite of its simplicity, the moving average filter is optimal for a common task: reducing random noise while retaining a sharp step response. This makes it the premier filter for time domain encoded signals. However, the moving average is the worst filter for frequency domain encoded signals, with little ability to separate one band of frequencies from another. Relatives of the moving average filter include the Gaussian, Blackman, and multiple-pass moving average. These have slightly better performance in the frequency domain, at the expense of increased computation time.

Implementation by Convolution
As the name implies, the moving average filter operates by averaging a number of points from the input signal to produce each point in the output signal. In equation form, this is written:

    Equation 15-1 (the moving average filter):
    y[i] = (1/M) · Σ_{j=0}^{M−1} x[i+j]

where x[ ] is the input signal, y[ ] is the output signal, and M is the number of points used in the moving average. This equation only uses points on one side of the output sample being calculated. For example, in a 5 point moving average filter, point 80 in the output signal is given by:

    y[80] = (x[80] + x[81] + x[82] + x[83] + x[84]) / 5

As an alternative, the group of points from the input signal can be chosen symmetrically around the output point:

    y[80] = (x[78] + x[79] + x[80] + x[81] + x[82]) / 5

This corresponds to changing the summation in Eq. 15-1 to run over symmetric limits, j = −(M−1)/2 to (M−1)/2.
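A minimal Python sketch of Equation 15-1: a uniform (boxcar) kernel convolved with the signal performs exactly this averaging, and the one-sided and symmetric variants differ only in how the output is aligned with the input. The noisy-step signal below is an assumption for illustration, not the chapter's figure data:

```python
import numpy as np

def moving_average(x: np.ndarray, M: int) -> np.ndarray:
    """One-sided moving average: y[i] = (1/M) * sum_{j=0}^{M-1} x[i+j]."""
    kernel = np.ones(M) / M
    # mode="valid" keeps only the points where all M input samples exist.
    return np.convolve(x, kernel, mode="valid")

# Noisy step: the filter reduces random noise while keeping the step edge sharp.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.standard_normal(100)
y = moving_average(x, M=5)

# Matches the chapter's example: y[80] = (x[80] + x[81] + ... + x[84]) / 5
print(y[80], np.mean(x[80:85]))
```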
Power Grid Topology Classification Based on Time Domain Analysis
Power Grid Topology Classification Based on Time Domain Analysis. Jia He, Maggie X. Cheng and Mariesa L. Crow, Fellow, IEEE.

Abstract—Power system monitoring is significantly improved with the use of Phasor Measurement Units (PMUs). The availability of PMU data and recent advances in machine learning together enable advanced data analysis that is critical for real-time fault detection and diagnosis. This paper focuses on the problem of power line outage. We use a machine learning framework and consider the problem of line outage identification as a classification problem in machine learning. The method is data-driven and does not rely on the results of other analysis such as power flow analysis and solving power system equations. Since it does not involve using power system parameters to solve equations, it is applicable even when such parameters are unavailable. The proposed method uses only voltage phasor angles obtained from PMUs.

[…] all depend on the correct network topology information. Topology change, no matter what caused it, must be immediately updated. An unattended topology error may lead to cascading power outages and cause large scale blackout ([2], [3]). In this paper, we demonstrate that using a machine learning approach and real-time measurement data we can timely and accurately identify the outage location. We leverage PMU data and consider the task of identifying the location of line outage as a classification problem. The methods can be applied in real-time. Although training a classifier can be time-consuming, the prediction of power line status can be done in real-time.
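The abstract gives enough to sketch the setup: feature vectors of PMU voltage phasor angles, class labels naming the outaged line, train offline, predict in real time. A minimal illustration on synthetic data with an off-the-shelf classifier — the paper's actual features, model, and data are not reproduced here, and every number below is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_buses, n_lines, n_samples = 30, 5, 600  # hypothetical grid and sample sizes

# Synthetic stand-in for PMU data: each outaged line shifts the bus voltage
# angles in its own pattern; real data would come from actual PMU streams.
patterns = rng.normal(0.0, 1.0, size=(n_lines, n_buses))
labels = rng.integers(0, n_lines, size=n_samples)          # which line is out
angles = patterns[labels] + 0.3 * rng.normal(size=(n_samples, n_buses))

X_train, X_test, y_train, y_test = train_test_split(
    angles, labels, test_size=0.25, random_state=0)

# Training happens offline; predicting a new snapshot is fast (real-time use).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("outage-location accuracy:", clf.score(X_test, y_test))
```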
Logistic-Weighted Regression Improves Decoding of Finger Flexion from Electrocorticographic Signals
Logistic-weighted Regression Improves Decoding of Finger Flexion from Electrocorticographic Signals. Weixuan Chen*, IEEE Student Member, Xilin Liu, IEEE Student Member, and Brian Litt, IEEE Senior Member.

Abstract—One of the most interesting applications of brain computer interfaces (BCIs) is movement prediction. With the development of invasive recording techniques and decoding algorithms in the past ten years, many single neuron-based and electrocorticography (ECoG)-based studies have been able to decode trajectories of limb movements. As the output variables are continuous in these studies, a regression model is commonly used. However, the decoding of limb movements is not a pure regression problem, because the trajectories can be apparently classified into a motion state and a resting state, which result in a binary property overlooked by previous studies. In this paper, we propose an algorithm called logistic-weighted regression to make use of the property, and apply the algorithm to a BCI system decoding flexion of human fingers from ECoG signals.

[…] linear Wiener filter [7–10]. To take into account the physiological, physical, and mechanical constraints that affect the flexion of limbs, some studies applied switching models [11] or Bayesian models [12,13] to the results of linear regressions above. Other studies have explored the utility of non-linear methods, including neural networks [14–17], multilinear perceptrons [18], and support vector machines [18], but they tend to have difficulty with high dimensional features and limited training data [13]. Nevertheless, the studies of limb movement translation are in fact not pure regression problems, because the limbs are not always under the motion state. Whether it is during an experiment or in the daily life, the resting state of the limbs is […]
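The name "logistic-weighted regression" suggests one natural construction, sketched below as an assumption rather than the paper's exact algorithm: a logistic model estimates the probability that the finger is in the motion state, and that probability weights (gates) a linear regression's flexion estimate, so predictions near the resting state are pulled toward zero. All data and features here are synthetic stand-ins for ECoG recordings:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)
n, d = 2000, 16                       # samples x hypothetical ECoG features

X = rng.normal(size=(n, d))
moving = (X[:, 0] + 0.5 * X[:, 1] > 0.5).astype(int)       # latent motion state
flexion = moving * np.maximum(X @ rng.normal(size=d), 0)   # zero while resting

state_model = LogisticRegression(max_iter=1000).fit(X, moving)
flex_model = LinearRegression().fit(X[moving == 1], flexion[moving == 1])

# Gate the regression output by the estimated motion probability.
p_move = state_model.predict_proba(X)[:, 1]
prediction = p_move * flex_model.predict(X)
print("corr with true flexion:", np.corrcoef(prediction, flexion)[0, 1])
```

The gating step is what exploits the binary motion/rest property the abstract describes: a plain regression would predict nonzero flexion even during rest.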
Seasonality in Portfolio Risk Calculations
UMEÅ UNIVERSITET, Department of Physics. Master's Thesis in Engineering Physics, HT 2019, 30 credits.

Seasonality in Portfolio Risk Calculations: Empirical study on Value at Risk of agricultural commodities using conditional volatility models (v1.0)

Author: Emil Söderberg. Supervisors: Anders Stäring (Nasdaq) and Joakim Ekspong (Umeå University). Examiner: Markus Ådahl.

Abstract
In this thesis, we examine whether the conditional volatility models GARCH(1,1) and eGARCH(1,1) can give an improved Value-at-Risk (VaR) forecast by considering seasonal behaviour. It has been shown that some markets have a shifting volatility that is dependent on the seasons. By including sinusoidal seasonal components in the original conditional models, we try to capture this seasonal dependency. The examined data sets are from the agricultural commodity market, corn and soybean, which are known to have seasonal dependencies. This study validates the volatility performance by in-sample fit and out-of-sample fit. Value-at-Risk is estimated using two methods: quantile estimation (QE) of the return series and filtered historical simulation (FHS) from the standardized return series. The methods are validated by a backtesting process which analyses the number of outliers from each model using Kupiec's POF test (proportion of failures) and Christoffersen's IF test (interval forecast). The results indicate that the seasonal models have an overall better volatility validation than the non-seasonal models, but the effect of the distributional assumption on standardized returns varies depending on the model. Both VaR methods, for non-seasonal and seasonal models, seem however to benefit from assuming that the standardized return series exhibits kurtosis.
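A minimal sketch of the VaR step under a plain (non-seasonal) GARCH(1,1), with assumed, already-fitted parameters; the thesis's seasonal variants add sinusoidal terms to this variance recursion, and maximum-likelihood fitting is omitted here. The returns are simulated stand-ins for corn or soybean data:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
returns = 0.01 * rng.standard_normal(500)   # stand-in for commodity returns

# GARCH(1,1) recursion: sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]
omega, alpha, beta = 1e-6, 0.08, 0.90       # assumed fitted parameters
sigma2 = np.empty(len(returns) + 1)
sigma2[0] = returns.var()
for t in range(len(returns)):
    sigma2[t + 1] = omega + alpha * returns[t] ** 2 + beta * sigma2[t]

# One-day-ahead 99% VaR by quantile estimation under normal innovations.
var_qe = -norm.ppf(0.01) * np.sqrt(sigma2[-1])
print(f"1-day 99% VaR (QE):  {var_qe:.4%} of portfolio value")

# Filtered historical simulation instead takes the empirical quantile of the
# standardized returns r[t] / sigma[t], rescaled by the forecast volatility.
z = returns / np.sqrt(sigma2[:-1])
var_fhs = -np.quantile(z, 0.01) * np.sqrt(sigma2[-1])
print(f"1-day 99% VaR (FHS): {var_fhs:.4%}")
```

Backtesting then counts how often realized losses exceed these VaR numbers, which is what the Kupiec and Christoffersen tests formalize.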
Autocorrelation and Frequency-Resolved Optical Gating Measurements Based on the Third Harmonic Generation in a Gaseous Medium
Appl. Sci. 2015, 5, 136-144; doi:10.3390/app5020136. Open access — applied sciences, ISSN 2076-3417, www.mdpi.com/journal/applsci.

Article: Autocorrelation and Frequency-Resolved Optical Gating Measurements Based on the Third Harmonic Generation in a Gaseous Medium

Yoshinari Takao 1,†, Tomoko Imasaka 2,†, Yuichiro Kida 1,† and Totaro Imasaka 1,3,*

1 Department of Applied Chemistry, Graduate School of Engineering, Kyushu University, 744, Motooka, Nishi-ku, Fukuoka 819-0395, Japan; E-Mails: [email protected] (Y.T.); [email protected] (Y.K.)
2 Laboratory of Chemistry, Graduate School of Design, Kyushu University, 4-9-1, Shiobaru, Minami-ku, Fukuoka 815-8540, Japan; E-Mail: [email protected]
3 Division of Optoelectronics and Photonics, Center for Future Chemistry, Kyushu University, 744, Motooka, Nishi-ku, Fukuoka 819-0395, Japan
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +81-92-802-2883; Fax: +81-92-802-2888.

Academic Editor: Takayoshi Kobayashi. Received: 7 April 2015 / Accepted: 3 June 2015 / Published: 9 June 2015.

Abstract: A gas was utilized in producing the third harmonic emission as a nonlinear optical medium for autocorrelation and frequency-resolved optical gating measurements to evaluate the pulse width and chirp of a Ti:sapphire laser. Due to a wide frequency domain available for a gas, this approach has potential for use in measuring the pulse width in the optical (ultraviolet/visible) region beyond one octave and thus for measuring an optical pulse width less than 1 fs.
FORMULAS from EPIDEMIOLOGY KEPT SIMPLE (3E) Chapter 3: Epidemiologic Measures
FORMULAS FROM EPIDEMIOLOGY KEPT SIMPLE (3e). Chapter 3: Epidemiologic Measures.

Basic epidemiologic measures are used to quantify:
• frequency of occurrence
• the effect of an exposure
• the potential impact of an intervention.

Overview of epidemiologic measures (reconstructed from the chapter's diagram):
• Measures of disease frequency: incidence — incidence proportion (cumulative incidence, incidence risk), incidence rate (incidence density, hazard rate, person-time rate), incidence odds — and prevalence.
• Measures of association ("measures of effect"): absolute measures of effect — risk difference (incidence proportion difference), rate difference (incidence density difference), prevalence difference — and relative measures of effect — risk ratio (incidence proportion ratio), rate ratio (incidence density ratio), odds ratio.
• Measures of potential impact: attributable fraction in exposed cases and attributable fraction in the population.

3.1 Measures of Disease Frequency

    Incidence proportion = (no. of onsets) / (no. at risk at beginning of follow-up)

• Also called risk, average risk, and cumulative incidence.
• Can be measured in cohorts (closed populations) only.
• Requires follow-up of individuals.

    Incidence rate = (no. of onsets) / (Σ person-time)

• Also called incidence density and average hazard.
• When disease is rare (incidence proportion < 5%), incidence rate ≈ incidence proportion.
• In cohorts (closed populations), it is best to sum individual person-time longitudinally. It can also be estimated as Σ person-time ≈ (average population size) × (duration of follow-up). Actuarial adjustments may be needed when the disease outcome is not rare.
• In open populations, Σ person-time ≈ (average population size) × (duration of follow-up). Examples of incidence rates in open populations include:

    Crude birth rate (per m) = (births / mid-year population size) × m
    Crude mortality rate (per m) = (deaths / mid-year population size) × m
    Infant mortality rate (per m) = (deaths < 1 year of age / live births) × m
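The frequency formulas translate directly into code. A minimal sketch with made-up cohort numbers; the person-time approximation follows the "average population × duration" shortcut quoted above:

```python
def incidence_proportion(onsets: int, at_risk: int) -> float:
    """Onsets divided by the number at risk at the beginning of follow-up."""
    return onsets / at_risk

def incidence_rate(onsets: int, person_time: float) -> float:
    """Onsets divided by the total person-time at risk."""
    return onsets / person_time

# Hypothetical closed cohort: 1,000 people followed for 5 years, 25 onsets.
onsets, n0, years = 25, 1000, 5
# Crude average population: cases contribute roughly half the follow-up period.
person_years = (n0 - onsets / 2) * years

print(f"incidence proportion (risk): {incidence_proportion(onsets, n0):.3f}")
print(f"incidence rate: {incidence_rate(onsets, person_years):.5f} per person-year")
# Disease is rare here (2.5% < 5%), so rate x duration ~ proportion, as the text notes.
```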
Power Comparisons of Five Most Commonly Used Autocorrelation Tests
Pak. J. Stat. Oper. Res., Vol. 16, No. 1, 2020, pp. 119-130. DOI: http://dx.doi.org/10.18187/pjsor.v16i1.2691. Pakistan Journal of Statistics and Operation Research.

Power Comparisons of Five Most Commonly Used Autocorrelation Tests

Stanislaus S. Uyanto, School of Economics and Business, Atma Jaya Catholic University of Indonesia, [email protected]

Abstract
In regression analysis, autocorrelation of the error terms violates the ordinary least squares assumption that the error terms are uncorrelated. The consequence is that the estimates of coefficients and their standard errors will be wrong if the autocorrelation is ignored. There are many tests for autocorrelation; we want to know which test is most powerful. We use Monte Carlo methods to compare the power of the five most commonly used tests for autocorrelation, namely the Durbin-Watson, Breusch-Godfrey, Box-Pierce, Ljung-Box, and runs tests, in two different linear regression models. The results indicate that the Durbin-Watson test performs better in the regression model without a lagged dependent variable, although its advantage over the other tests shrinks with increasing autocorrelation and sample size. For the model with a lagged dependent variable, the Breusch-Godfrey test is generally superior to the other tests.

Key Words: Correlated error terms; Ordinary least squares assumption; Residuals; Regression diagnostic; Lagged dependent variable.
Mathematical Subject Classification: 62J05; 62J20.

1. Introduction
An important assumption of the classical linear regression model states that there is no autocorrelation in the error terms. Autocorrelation of the error terms violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss-Markov theorem (Plackett, 1949, 1950; Greene, 2018) does not apply, and that ordinary least squares estimators are no longer the best linear unbiased estimators.
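All five tests compared in this paper are available off the shelf. A minimal sketch, assuming a recent statsmodels installation, that applies each test to the residuals of a simple regression with AR(1) errors; the Monte Carlo power study itself is not reproduced:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, acorr_ljungbox
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)

# AR(1) errors with rho = 0.4, so the tests should reject "no autocorrelation".
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.4 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

res = sm.OLS(y, sm.add_constant(x)).fit()

print("Durbin-Watson:", durbin_watson(res.resid))         # ~2 means no autocorr.
print("Breusch-Godfrey (LM, p):", acorr_breusch_godfrey(res, nlags=1)[:2])
lb = acorr_ljungbox(res.resid, lags=[10], boxpierce=True)  # Ljung-Box + Box-Pierce
print(lb)
print("Runs test (z, p):", runstest_1samp(res.resid, cutoff=0))
```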
LECTURES 2 - 3 : Stochastic Processes, Autocorrelation Function
LECTURES 2-3: Stochastic Processes, Autocorrelation Function. Stationarity.

Important points of Lecture 1: A time series {X_t} is a series of observations taken sequentially over time: x_t is an observation recorded at a specific time t. Characteristics of time series data: observations are dependent, become available at equally spaced time points, and are time-ordered. This is a discrete time series. The purposes of time series analysis are to model and to predict or forecast future values of a series based on the history of that series.

2.2 Some descriptive techniques. (Based on [BD] §1.3 and §1.4)

Take a step backwards: how do we describe a r.v. or a random vector?
• For a r.v. X: d.f. F_X(x) := P(X ≤ x), mean μ = EX, and variance σ² = Var(X).
• For a random vector (X₁, X₂): joint d.f. F_{X₁,X₂}(x₁, x₂) := P(X₁ ≤ x₁, X₂ ≤ x₂); marginal d.f. F_{X₁}(x₁) := P(X₁ ≤ x₁) ≡ F_{X₁,X₂}(x₁, ∞); mean vector (μ₁, μ₂) = (EX₁, EX₂); variances σ₁² = Var(X₁), σ₂² = Var(X₂); and covariance Cov(X₁, X₂) = E(X₁ − μ₁)(X₂ − μ₂) ≡ E(X₁X₂) − μ₁μ₂. Often we use correlation = normalized covariance: Cor(X₁, X₂) = Cov(X₁, X₂)/(σ₁σ₂).

To describe a process X₁, X₂, … we define:
(i) Distribution function: the finite-dimensional (fi-di) d.f. F_{t₁,…,t_n}(x₁, …, x_n) = P(X_{t₁} ≤ x₁, …, X_{t_n} ≤ x_n), i.e., the joint d.f. of the vector (X_{t₁}, …, X_{t_n}).
(ii) First- and second-order moments:
• Mean: μ_X(t) = EX_t
• Variance: σ²_X(t) = E(X_t − μ_X(t))² ≡ EX_t² − μ_X(t)²
• Autocovariance function: γ_X(t, s) = Cov(X_t, X_s) = E[(X_t − μ_X(t))(X_s − μ_X(s))] ≡ E(X_tX_s) − μ_X(t)μ_X(s). (Note: this is an infinite matrix.)
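For a stationary series, these population quantities are estimated by their sample versions. A minimal sketch of the sample mean, sample autocovariance γ̂(h), and sample autocorrelation ρ̂(h) = γ̂(h)/γ̂(0), checked against an AR(1) process whose theoretical lag-h autocorrelation is φ^h:

```python
import numpy as np

def sample_acov(x: np.ndarray, h: int) -> float:
    """Sample autocovariance at lag h: mean of (x_t - xbar)(x_{t+h} - xbar)."""
    n, xbar = len(x), x.mean()
    return np.sum((x[: n - h] - xbar) * (x[h:] - xbar)) / n  # 1/n convention

def sample_acf(x: np.ndarray, h: int) -> float:
    """Sample autocorrelation: gamma_hat(h) / gamma_hat(0)."""
    return sample_acov(x, h) / sample_acov(x, 0)

# AR(1) example: X_t = phi * X_{t-1} + noise, so rho(h) = phi ** h.
rng = np.random.default_rng(1)
phi, n = 0.6, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

print([round(sample_acf(x, h), 3) for h in range(4)])  # ~ [1.0, 0.6, 0.36, 0.216]
```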
Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients
Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients. Richard M. Levich and Rosario C. Rizzo*. Current Draft: December 1998.

Abstract: When autocorrelation is small, existing statistical techniques may not be powerful enough to reject the hypothesis that a series is free of autocorrelation. We propose two new and simple statistical tests (RHO and PHI) based on the unweighted sum of autocorrelation and partial autocorrelation coefficients. We analyze a set of simulated data to show the higher power of RHO and PHI in comparison to conventional tests for autocorrelation, especially in the presence of small but persistent autocorrelation. We show an application of our tests to data on currency futures to demonstrate their practical use. Finally, we indicate how our methodology could be used for a new class of time series models (the Generalized Autoregressive, or GAR models) that take into account the presence of small but persistent autocorrelation.

An earlier version of this paper was presented at the Symposium on Global Integration and Competition, sponsored by the Center for Japan-U.S. Business and Economic Studies, Stern School of Business, New York University, March 27-28, 1997. We thank Paul Samuelson and the other participants at that conference for useful comments.
* Stern School of Business, New York University and Research Department, Bank of Italy, respectively.

1. Introduction
Economic time series are often characterized by positive […]
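Taking the abstract at face value — the paper's exact scaling and critical values are not shown in this excerpt, so the construction below is an illustrative reading, not the authors' code — the statistics can be sketched as plain sums of the first K estimated autocorrelation and partial autocorrelation coefficients:

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def rho_phi(x: np.ndarray, K: int = 10):
    """Unweighted sums of the first K sample ACF / PACF coefficients.
    An illustrative reading of Levich & Rizzo's RHO and PHI statistics."""
    r = acf(x, nlags=K)[1:]    # drop lag 0, which is always 1
    p = pacf(x, nlags=K)[1:]
    return r.sum(), p.sum()

rng = np.random.default_rng(2)
white = rng.standard_normal(1000)
# Small but persistent autocorrelation: a faint random-walk component plus noise.
persistent = 0.02 * np.cumsum(rng.standard_normal(1000)) + rng.standard_normal(1000)

print("white noise RHO, PHI:", rho_phi(white))       # both near zero
print("persistent  RHO, PHI:", rho_phi(persistent))  # noticeably positive
```

Summing many small same-signed coefficients is what gives such tests power against weak but persistent dependence, where any single lag's coefficient looks insignificant.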
3 Autocorrelation
3 Autocorrelation

Autocorrelation refers to the correlation of a time series with its own past and future values. Autocorrelation is also sometimes called "lagged correlation" or "serial correlation", which refers to the correlation between members of a series of numbers arranged in time. Positive autocorrelation might be considered a specific form of "persistence", a tendency for a system to remain in the same state from one observation to the next. For example, the likelihood of tomorrow being rainy is greater if today is rainy than if today is dry.

Geophysical time series are frequently autocorrelated because of inertia or carryover processes in the physical system. For example, the slowly evolving and moving low pressure systems in the atmosphere might impart persistence to daily rainfall. Or the slow drainage of groundwater reserves might impart correlation to successive annual flows of a river. Or stored photosynthates might impart correlation to successive annual values of tree-ring indices.

Autocorrelation complicates the application of statistical tests by reducing the effective sample size. Autocorrelation can also complicate the identification of significant covariance or correlation between time series (e.g., precipitation with a tree-ring series). Autocorrelation implies that a time series is predictable, probabilistically, as future values are correlated with current and past values.

Three tools for assessing the autocorrelation of a time series are (1) the time series plot, (2) the lagged scatterplot, and (3) the autocorrelation function; a sketch of all three follows below.

3.1 Time series plot
Positively autocorrelated series are sometimes referred to as persistent because positive departures from the mean tend to be followed by positive departures from the mean, and negative departures from the mean tend to be followed by negative departures (Figure 3.1).
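The three tools listed above take only a few lines each with numpy and matplotlib. A minimal sketch, using a synthetic persistent (AR(1)-like) series in place of a geophysical one:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 300
x = np.zeros(n)
for t in range(1, n):                      # persistent series: phi = 0.7
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].plot(x)                            # (1) time series plot
axes[0].set_title("time series plot")
axes[1].scatter(x[:-1], x[1:], s=8)        # (2) lagged scatterplot at lag 1
axes[1].set_title("lagged scatterplot")
axes[2].acorr(x - x.mean(), maxlags=20)    # (3) autocorrelation function
axes[2].set_title("autocorrelation function")
plt.tight_layout()
plt.show()
```

For a persistent series like this one, the lagged scatterplot tilts along the diagonal and the ACF decays slowly from lag 0, the visual signatures of autocorrelation described in the text.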