Wavelet Analysis of Variability, Teleconnectivity, and Predictability of the September–November East African Rainfall


JOURNAL OF APPLIED METEOROLOGY, VOLUME 44, FEBRUARY 2005

DAVISON MWALE AND THIAN YEW GAN

Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada

(Manuscript received 2 October 2003, in final form 10 July 2004)

ABSTRACT

By applying wavelet analysis and wavelet principal component analysis (WPCA) to individual wavelet-scale power and scale-averaged wavelet power, homogeneous zones of rainfall variability and predictability were objectively identified for September–November (SON) rainfall in East Africa (EA). Teleconnections between the SON rainfall and the Indian Ocean and South Atlantic Ocean sea surface temperatures (SST) were also established for the period 1950–97. Excluding the Great Rift Valley, located along the western boundaries of Tanzania and Uganda, and Mount Kilimanjaro in northeastern Tanzania, EA was found to exhibit a single leading mode of spatial and temporal variability. WPCA revealed that EA suffered a consistent decrease in the SON rainfall from 1962 to 1997, resulting in 12 droughts between 1965 and 1997. Using SST predictors identified in the April–June season from the Indian and South Atlantic Oceans, the prediction skill achieved for the SON season (one-season lead time) by the nonlinear model known as artificial neural network calibrated by a genetic algorithm (ANN-GA) was high [Pearson correlation ρ ranged between 0.65 and 0.9, Hanssen–Kuipers (HK) scores ranged between 0.2 and 0.8, and root-mean-square errors (rmse) ranged between 0.4 and 0.75 of the standardized precipitation], but that achieved by the linear canonical correlation analysis model was relatively modest (ρ between 0.25 and 0.55, HK score between −0.05 and 0.3, and rmse between 0.4 and 1.2 of the standardized precipitation).

1. Introduction

East Africa (EA), which consists of Kenya, Uganda, and Tanzania (Fig. 1), has two main rainy seasons: September–November (SON) and March–May (MAM). Several investigators have studied the spatial and temporal characteristics of EA seasonal rainfall and its associations with sea surface temperature (SST) variations in the surrounding oceans (e.g., Ogallo 1989; Basalirwa 1995; Mutai et al. 1998; Basalirwa et al. 1999; Ntale et al. 2003). There is a general consensus that the spatial and temporal variabilities of EA rainfall are complex, and, as such, EA has previously been divided into 6–26 homogeneous zones of rainfall variability (e.g., Ogallo 1989; Ntale 2001). The spatial inhomogeneities of rainfall variability are attributed to the complex topography, the effects of inland lakes, the seasonal migration of the intertropical convergence zone (ITCZ), and the influence of Indian Ocean SST variations (Ntale 2001). In the last four decades, it has been found that the SON rainfall exhibited greater temporal variability than the MAM rainfall, with one flood in 1961 and another in 1997 (Philippon et al. 2002) and 12 droughts between 1965 and 1997 (Ntale 2001), whereas there was only one recorded failure in the MAM rainfall, which occurred in 1984 (Ntale 2001).

Even though the spatial variability and temporal variability of the SON rainfall have been documented and linked to external forcings such as SST and sea level pressure, it is still a challenge to try to predict the SON rainfall accurately (e.g., Ntale et al. 2003). The deficiency in prediction skill has been observed in other parts of the world, especially in the last two decades (Hastenrath 1995; Webster et al. 2002; Zebiak 2003), and has been attributed to nonstationarity in climate (Landman et al. 2001), nonlinearity in the interaction of climate elements (Barnston et al. 1994), failure to identify robust predictor data (Singh et al. 1995), and the inadequacy of prediction models (Webster et al. 2002).

The frequent drought episodes in the latter half of the twentieth century and two floods (in 1961 and 1997) are good examples of the nonstationary character of the SON rainfall of EA. In addition, previous work by Potts (1971), Rodhe and Virji (1976), and Ropelewski and Halpert (1987) found oscillatory peaks in EA rainfall of 2–8 yr. Because of nonstationarity, the temporal resolution of these oscillations is unknown. Therefore, to investigate the SON rainfall variability and its relationship to other climatic elements such as SST, it has become imperative to analyze the events irregularly distributed in time that depict nonstationary power. This paper attempts to solve the problem of nonstationarity by using wavelet analysis and wavelet-based empirical orthogonal function analysis (WEOF), also known in this paper as wavelet principal component analysis (WPCA). Furthermore, because the relationship between rainfall and SST is nonlinear (e.g., Landman and Mason 1999), this study also proposes to teleconnect the SON rainfall to SST using a nonlinear prediction model, the artificial neural network (ANN). The results from the nonlinear prediction model are compared with those from the linear canonical correlation analysis (CCA) model.

Efforts to improve climate prediction will have economic significance, because agriculture (both rain fed and irrigated) and water resources are inextricably linked to the timing and amount of rainfall. Because there is a clear need to explore longer lead times for predicting seasonal rainfall in EA and to continuously monitor prediction relationships, the objectives of this study are as follows:

1) to identify and analyze the spatial, temporal, and frequency variability of the dominant oscillations of the rainfall fields of EA using wavelet analysis and WEOF,

2) to use the information obtained from objective 1 to explore teleconnectivity between rainfall fields in EA and SST in the Indian and South Atlantic Oceans, and

3) on the basis of SST predictor fields identified from objective 2, to use a linear statistical model (CCA) and a nonlinear model [artificial neural network calibrated by a genetic algorithm (ANN-GA)] to predict SON rainfall of EA at one-season lead time.

The rest of the paper is organized as follows: Section 2 presents the data and methods used in the analysis. The dominant modes and variability of the SON rainfall are identified and analyzed in section 3. Relationships between EA rainfall variability and the Indian and South Atlantic Ocean SST variability are identified and discussed in section 4. The models used to predict the SON rainfall are presented in section 5. The SON rainfall season is predicted in section 6 using SST data selected from the Indian and Atlantic Oceans. The summary, conclusions, and future work are presented in section 7.

Corresponding author address: Dr. Thian Yew Gan, Dept. of Civil and Environmental Engineering, 220 Civil Electrical Engineering Bldg., University of Alberta, Edmonton, AB T6B 2G7, Canada. E-mail: [email protected]

© 2005 American Meteorological Society

2. Data and methods

a. Data

Monthly rainfall data (1950–97) from 21 grid locations at a resolution of 2.5° × 3.75° latitude–longitude were extracted for EA over 2°N–12°S and 30°–43°E (Fig. 1). The rainfall data are part of a monthly precipitation dataset for land areas worldwide from 1900 to 1998 provided by the Met Office (UKMO). These data were constructed from station datasets averaged onto 2.5° × 3.75° grids using Thiessen polygon weights. The quality control of the data is described in Hulme (1994).

FIG. 1. Map showing the study location, East Africa (Uganda, Kenya, and Tanzania), and the Indian and Atlantic Oceans.

Monthly SST anomaly grid data at 5° × 5° latitude–longitude resolution were extracted from the Indian Ocean (20°N–40°S, 40°–105°E; Fig. 1). The SST datasets were 48 yr long (1950–97) and were transformed into seasonal and annual data by computing 3-month averages [January–March (JFM), April–June (AMJ), July–September (JAS), and October–December (OND)] and 12-month averages, respectively. The SST dataset is part of the UKMO historical SST6 data (MOHSST6), a historical global dataset of mean monthly global SST anomalies with respect to the 1961–90 normals.

b. Methods

1) WAVELET ANALYSIS

The wavelet transform of a real signal X(t) with respect to a mother wavelet Ψ is a convolution integral given as

    W(b, a) = (1/√a) ∫₀ᵀ X(t) Ψ*[(t − b)/a] dt,   (1)

where Ψ* is the complex conjugate of Ψ, t is time, T is the total length of the time series, and W(b, a) is a wavelet spectrum, a matrix of energy coefficients of the decomposed time series at each scale a and time b. The magnitude of the wavelet spectrum coefficients shows how well the wavelet matches the time series. At each scale, the wavelet spectrum coefficients also depict the amplitude of a time series. In a rainfall or SST time series, power at each scale will therefore be a good measure of the magnitude of the rainfall or SST anomalies. In addition to power at individual scales, power over a range of scales [the scale-averaged wavelet power (SAWP), which represents the mean variance of […]

To distinguish EOF of raw data from EOF performed on the wavelet spectra (SAWP), the latter is called wavelet empirical orthogonal function analysis or wavelet principal component analysis, and the corresponding PCs are referred to as wavelet principal components (WPCs). The WPCs obtained from SAWP are interpreted as frequency-compacted energy variability, and they also indicate the magnitude of events of a time series averaged over various scales.

An important decision in EOF is to select an appropriate number of PCs that most strongly capture the joint variability of the original data, without discarding important information carried in the original data.
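The continuous wavelet transform of Eq. (1) and the scale-averaged wavelet power can be sketched numerically. Below is a minimal, illustrative numpy implementation in the style of Torrence and Compo (1998), whose conventions this kind of analysis typically follows; the Morlet mother wavelet with ω₀ = 6, the FFT-based evaluation, and the reconstruction constant Cδ = 0.776 are standard choices assumed here, and the function names are mine, not the authors'.

```python
import numpy as np

def morlet_cwt(x, scales, dt=1.0, omega0=6.0):
    """Continuous wavelet transform of a real signal with a Morlet
    mother wavelet, evaluated in Fourier space (a discretized Eq. 1)."""
    n = len(x)
    x = x - x.mean()
    freqs = 2 * np.pi * np.fft.fftfreq(n, d=dt)      # angular frequencies
    xhat = np.fft.fft(x)
    W = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        # Fourier transform of the scaled Morlet wavelet (positive freqs only)
        psi_hat = (np.pi ** -0.25) * np.sqrt(2 * np.pi * a / dt) \
                  * np.exp(-0.5 * (a * freqs - omega0) ** 2) * (freqs > 0)
        W[i] = np.fft.ifft(xhat * psi_hat)
    return W

def sawp(W, scales, dt=1.0, dj=0.25, cdelta=0.776):
    """Scale-averaged wavelet power over the supplied scales:
    the mean variance per time step across a band of scales."""
    power = np.abs(W) ** 2
    return (dj * dt / cdelta) * (power / scales[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
t = np.arange(512)
x = np.sin(2 * np.pi * t / 32) + 0.3 * rng.standard_normal(512)
scales = 2.0 ** np.arange(2, 7, 0.25)                # dyadic scales, dj = 0.25
W = morlet_cwt(x, scales)
E = sawp(W, scales)                                  # one energy value per time step
print(E.shape)
```

Averaging the power over a scale band, as in `sawp`, is what turns the two-dimensional wavelet spectrum into the single SAWP time series that the EOF/WPCA step then operates on.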
Recommended publications
  • Wavelet Entropy Energy Measure (WEEM): a Multiscale Measure to Grade a Geophysical System's Predictability
    EGU21-703, updated on 28 Sep 2021, https://doi.org/10.5194/egusphere-egu21-703. EGU General Assembly 2021. © Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License. Wavelet Entropy Energy Measure (WEEM): A multiscale measure to grade a geophysical system's predictability. Ravi Kumar Guntu and Ankit Agarwal, Indian Institute of Technology Roorkee, Hydrology, Roorkee, India ([email protected]). Model-free gradation of the predictability of a geophysical system is essential to quantify how much inherent information is contained within the system and to evaluate different forecasting methods' performance to get the best possible prediction. We conjecture that the multiscale information enclosed in a given geophysical time series is the only input source for any forecast model. In the literature, established entropic measures for grading the predictability of a time series at multiple time scales are limited. Therefore, we need an additional measure to quantify the information at multiple time scales, thereby grading the predictability level. This study introduces a novel measure, the Wavelet Entropy Energy Measure (WEEM), based on wavelet entropy, to investigate a time series's energy distribution. From the WEEM analysis, predictability can be graded from low to high. The difference between the entropy of the wavelet energy distribution of a time series and the entropy of the wavelet energy of white noise is the basis for the gradation. The metric quantifies the proportion of the deterministic component of a time series in terms of energy concentration, and its range varies from zero to one: one corresponds to a highly predictable process, due to its high energy concentration, while zero represents a process similar to white noise, with a scattered energy distribution.
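The abstract does not give WEEM in closed form, so the sketch below only illustrates its spirit: compare the entropy of a multiscale wavelet energy distribution with the (maximal) entropy of white noise and map the result to [0, 1]. The Haar decomposition, the per-level mean-energy normalization, and the `1 - H/log(levels)` scaling are my assumptions, not the authors' formula.

```python
import numpy as np

def haar_level_energies(x, levels):
    """Mean detail energy per Haar-DWT level, normalized to a
    probability distribution over scales."""
    approx = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        n = len(approx) // 2 * 2
        a = (approx[0:n:2] + approx[1:n:2]) / np.sqrt(2)   # approximation
        d = (approx[0:n:2] - approx[1:n:2]) / np.sqrt(2)   # detail
        energies.append(np.mean(d ** 2))   # mean energy per coefficient
        approx = a
    p = np.array(energies)
    return p / p.sum()

def wavelet_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def weem(x, levels=6):
    """~0 for a white-noise-like series (energy spread evenly over
    scales, low predictability); ~1 when energy concentrates at a few
    scales (high predictability)."""
    H = wavelet_entropy(haar_level_energies(x, levels))
    return 1.0 - H / np.log(levels)   # log(levels): entropy of white noise

rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)
season = np.sin(2 * np.pi * np.arange(4096) / 64)
print(round(weem(noise), 2), round(weem(season), 2))
```

With these choices the white-noise series scores near zero and the periodic series scores well above it, which is the gradation the abstract describes.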
  • Biostatistics for Oral Healthcare
    Biostatistics for Oral Healthcare. Jay S. Kim, Ph.D., and Ronald J. Dailey, Ph.D., Loma Linda University School of Dentistry, Loma Linda, California 92350. Jay S. Kim, PhD, is Professor of Biostatistics at Loma Linda University, CA. A specialist in this area, he has been teaching biostatistics since 1997 to students in public health, medical school, and dental school. Currently his primary responsibility is teaching biostatistics courses to hygiene students, predoctoral dental students, and dental residents. He also collaborates with the faculty members on a variety of research projects. Ronald J. Dailey is the Associate Dean for Academic Affairs at Loma Linda and an active member of the American Dental Educational Association. © 2008 by Blackwell Munksgaard, a Blackwell Publishing Company. Editorial Offices: Blackwell Publishing Professional, 2121 State Avenue, Ames, Iowa 50014-8300, USA, Tel: +1 515 292 0140; 9600 Garsington Road, Oxford OX4 2DQ, Tel: 01865 776868; Blackwell Publishing Asia Pty Ltd, 550 Swanston Street, Carlton South, Victoria 3053, Australia, Tel: +61 (0)3 9347 0300; Blackwell Wissenschafts Verlag, Kurfürstendamm 57, 10707 Berlin, Germany, Tel: +49 (0)30 32 79 060. The right of the Author to be identified as the Author of this Work has been asserted in accordance with the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
  • Big Data for Reliability Engineering: Threat and Opportunity
    Reliability, February 2016. Big Data for Reliability Engineering: Threat and Opportunity. Vitali Volovoi, Independent Consultant, [email protected]. Abstract - The confluence of several technologies promises stormy waters ahead for reliability engineering. News reports are full of buzzwords relevant to the future of the field: Big Data, the Internet of Things, predictive and prescriptive analytics; the sexier sisters of reliability engineering, both exciting and threatening. Can we reliability engineers join the party and suddenly become popular (and better paid), or are we at risk of being superseded and driven into obsolescence? This article argues that "big-picture" thinking, which is at the core of the concept of the System of Systems, is key for a bright future for reliability engineering. Keywords - System of Systems, complex systems, Big Data, Internet of Things, industrial internet, predictive analytics, prescriptive analytics. […] (more recently, analytics). It shares with the rest of the fields under this umbrella the need to abstract away most domain-specific information, and to use tools that are mainly domain-independent. As a result, it increasingly shares the lingua franca of modern systems engineering: probability and statistics, which are required to balance the otherwise orderly and deterministic engineering world. And yet, reliability engineering does not wear the fancy clothes of its sisters. There is nothing privileged about it. It is rarely studied in engineering schools, and it is definitely not studied in business schools! Instead, it is perceived as a necessary evil (especially if the reliability issues in question are safety-related). The community of reliability engineers consists of engineers from other fields who were mainly trained on the job (instead of receiving formal degrees in the field).
  • Volatility Estimation of Stock Prices Using Garch Method
    European Journal of Business and Management, www.iiste.org, ISSN 2222-1905 (Paper), ISSN 2222-2839 (Online), Vol. 7, No. 19, 2015. Volatility Estimation of Stock Prices using GARCH Method. Koima, J.K., Mwita, P.N., Nassiuma, D.K., Kabarak University. Abstract: Economic decisions are modeled based on the perceived distribution of the random variables in the future, and on assessment and measurement of the variance, which has a significant impact on the future profit or losses of a particular portfolio. The ability to accurately measure and predict stock market volatility has widespread implications. Volatility plays a very significant role in many financial decisions. The main purpose of this study is to examine the nature and the characteristics of stock market volatility of Kenyan stock markets and its stylized facts using GARCH models. A symmetric volatility model, namely the GARCH model, was used to estimate the volatility of stock returns. GARCH(1, 1) explains the volatility of Kenyan stock markets and its stylized facts, including volatility clustering, fat tails, and mean reversion, satisfactorily. The results indicate evidence of time-varying stock return volatility over the sampled period of time. In conclusion, it follows that in a financial crisis, negative return shocks have higher volatility than positive return shocks. Keywords: GARCH, Stylized facts, Volatility clustering. INTRODUCTION: Volatility forecasting in financial markets is very significant, particularly in investment, financial risk management, and monetary policy making (Poon and Granger 2003). Because of the link between volatility and risk, volatility can form a basis for efficient price discovery. Volatility implying predictability is a very important phenomenon for traders and medium-term investors.
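The GARCH(1, 1) recursion behind this abstract, σ²ₜ = ω + α r²ₜ₋₁ + β σ²ₜ₋₁, and two of the stylized facts it mentions (volatility clustering and fat tails) can be sketched with a simulation. The parameter values below are illustrative, not estimates from Kenyan market data.

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate returns r_t = sigma_t * z_t, z_t ~ N(0, 1), with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    r = np.empty(n)
    s2 = np.empty(n)
    s2[0] = omega / (1 - alpha - beta)            # unconditional variance
    r[0] = np.sqrt(s2[0]) * z[0]
    for t in range(1, n):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
        r[t] = np.sqrt(s2[t]) * z[t]
    return r, s2

def garch11_filter(r, omega, alpha, beta):
    """Conditional-variance recursion used inside a Gaussian
    (quasi-)likelihood when fitting the parameters to return data."""
    s2 = np.empty(len(r))
    s2[0] = r.var()                               # initialize at sample variance
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2

r, s2 = simulate_garch11(20000, omega=0.05, alpha=0.10, beta=0.85)
r2 = r ** 2
acf1 = np.corrcoef(r2[:-1], r2[1:])[0, 1]         # volatility clustering
kurt = np.mean(r ** 4) / np.mean(r ** 2) ** 2     # fat tails if > 3
print(f"lag-1 ACF of squared returns: {acf1:.2f}, kurtosis: {kurt:.2f}")
```

Estimation then amounts to choosing (ω, α, β) to maximize the Gaussian likelihood built from the `garch11_filter` variances; the simulation shows positively autocorrelated squared returns and excess kurtosis, the stylized facts the paper reports.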
  • A Python Library for Neuroimaging Based Machine Learning
    Brain Predictability toolbox: a Python library for neuroimaging based machine learning. Hahn, S.(1), Yuan, D.K.(1), Thompson, W.K.(2), Owens, M.(1), Allgaier, N.(1) and Garavan, H.(1). 1. Departments of Psychiatry and Complex Systems, University of Vermont, Burlington, VT 05401, USA. 2. Division of Biostatistics, Department of Family Medicine and Public Health, University of California, San Diego, La Jolla, CA 92093, USA. Abstract. Summary: Brain Predictability toolbox (BPt) represents a unified framework of machine learning (ML) tools designed to work with both tabulated data (in particular brain, psychiatric, behavioral, and physiological variables) and neuroimaging specific derived data (e.g., brain volumes and surfaces). This package is suitable for investigating a wide range of different neuroimaging based ML questions, in particular, those queried from large human datasets. Availability and Implementation: BPt has been developed as an open-source Python 3.6+ package hosted at https://github.com/sahahn/BPt under MIT License, with documentation provided at https://bpt.readthedocs.io/en/latest/, and continues to be actively developed. The project can be downloaded through the github link provided. A web GUI interface based on the same code is currently under development and can be set up through docker with instructions at https://github.com/sahahn/BPt_app. Contact: Please contact Sage Hahn at [email protected]. Main Text. 1 Introduction: Large datasets in all domains are becoming increasingly prevalent as data from smaller existing studies are pooled and larger studies are funded. This increase in available data offers an unprecedented opportunity for researchers interested in applying machine learning (ML) based methodologies, especially those working in domains such as neuroimaging where data collection is quite expensive.
  • BIOSTATS Documentation
    BIOSTATS Documentation. BIOSTATS is a collection of R functions written to aid in the statistical analysis of ecological data sets using both univariate and multivariate procedures. All of these functions make use of existing R functions and many are simply convenience wrappers for functions contained in other popular libraries, such as VEGAN and LABDSV for multivariate statistics; however, others are custom functions written using functions in the base or stats libraries. Author: Kevin McGarigal, Professor, Department of Environmental Conservation, University of Massachusetts, Amherst. Date Last Updated: 9 November 2016. Functions: all.subsets.gam, all.subsets.glm, box.plots, by.names, ci.lines, class.monte, clus.composite, clus.stats, cohen.kappa, col.summary, commission.mc, concordance, contrast.matrix, cov.test, data.dist, data.stand, data.trans, dist.plots, distributions, drop.var, edf.plots, ecdf.plots, foa.plots, hclus.cophenetic, hclus.scree, hclus.table, hist.plots, intrasetcor, kappa.mc, lda.structure, mantel2, mantel.part, mrpp2, mv.outliers, nhclus.scree, nmds.monte, nmds.scree, norm.test, ordi.monte, ordi.overlay, ordi.part, ordi.scree, pca.communality, pca.eigenval, pca.eigenvec, pca.structure, plot.anosim, plot.mantel, plot.mrpp, plot.ordi.part, qqnorm.plots, ran.split, …
  • Public Health & Intelligence
    Public Health & Intelligence. PHI Trend Analysis Guidance. Document Control: Version 1.5, issued March 2017. Author: Róisín Farrell, David Walker / HI Team. Comments to [email protected]. Version history: 1.0 (July 2016), first version of paper (David Walker, Róisín Farrell); 1.1 (August 2016), amending format and adding sections (Róisín Farrell); 1.2 (November 2016), adding content to sections 1 and 2 (Róisín Farrell, Lee Barnsdale); 1.3 (November 2016), amending as per suggestions from SAG (Róisín Farrell); 1.4 (December 2016), structural changes (Róisín Farrell, Lee Barnsdale); 1.5 (March 2017), amending as per suggestions from SAG (Róisín Farrell, Lee Barnsdale). Contents: 1. Introduction; 2. Understanding trend data for descriptive analysis; 3. Descriptive analysis of time trends; 3.1 Smoothing; 3.1.1 Simple smoothing; 3.1.2 Median smoothing; …
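The two smoothing approaches named in the contents, simple (moving-average) smoothing and median smoothing, can be sketched as follows. The window length and the toy count series are illustrative choices, not taken from the guidance document.

```python
import numpy as np

def moving_average(y, window=3):
    """Simple (mean) smoothing: centered moving average; the output is
    shorter than the input by window - 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(y, float), kernel, mode="valid")

def running_median(y, window=3):
    """Median smoothing: robust to one-off spikes, e.g. a data-entry
    error in a weekly count series."""
    y = np.asarray(y, float)
    half = window // 2
    return np.array([np.median(y[max(0, i - half):i + half + 1])
                     for i in range(len(y))])

counts = np.array([12, 14, 13, 40, 15, 16, 18, 17, 19, 21])  # 40 is a spike
print(moving_average(counts, 3))   # the spike leaks into three averages
print(running_median(counts, 3))   # the spike is replaced by a local median
```

The comparison illustrates the usual trade-off: the mean smoother spreads an outlier across its window, while the median smoother suppresses it, which is why median smoothing is often preferred for noisy surveillance counts.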
  • Measuring Predictability: Theory and Macroeconomic Applications," Journal of Applied Econometrics, 16, 657-669
    Diebold, F.X. and Kilian, L. (2001), "Measuring Predictability: Theory and Macroeconomic Applications," Journal of Applied Econometrics, 16, 657-669. Measuring Predictability: Theory and Macroeconomic Applications. Francis X. Diebold, Department of Finance, Stern School of Business, New York University, New York, NY 10012-1126, USA, and NBER. Lutz Kilian, Department of Economics, University of Michigan, Ann Arbor, MI 48109-1220, USA, and CEPR. SUMMARY: We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several U.S. macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several nonparametric extensions of our approach. Correspondence to: F.X. Diebold, Department of Finance, Suite 9-190, Stern School of Business, New York University, New York, NY 10012-1126, USA. E-mail: [email protected]. 1. INTRODUCTION: It is natural and informative to judge forecasts by their accuracy. However, actual and forecasted values will differ, even for very good forecasts.
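For a Gaussian AR(1) under squared-error loss, the loss-ratio measure described in this summary has a simple closed form, since the h-step forecast-error variance is σ²(1 − φ²ʰ)/(1 − φ²). The sketch below normalizes σ² to 1; the function name is mine.

```python
import numpy as np

def ar1_predictability(phi, j, k):
    """Diebold-Kilian-style P(j, k) = 1 - E[loss at horizon j] /
    E[loss at horizon k] for a Gaussian AR(1) under squared-error loss,
    with h-step forecast-error variance (1 - phi**(2h)) / (1 - phi**2)
    (innovation variance normalized to 1)."""
    evar = lambda h: (1 - phi ** (2 * h)) / (1 - phi ** 2)
    return 1 - evar(j) / evar(k)

# A persistent series is highly predictable one step ahead, and
# predictability decays toward zero as the short horizon lengthens:
print(round(ar1_predictability(0.9, 1, 100), 3))   # ~ phi**2  = 0.81
print(round(ar1_predictability(0.9, 5, 100), 3))   # ~ phi**10 = 0.349
```

As the long horizon k grows, P(j, k) for the AR(1) approaches φ²ʲ, which makes concrete the paper's point that predictability depends on the horizon pair chosen.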
  • Central Limit Theorems When Data Are Dependent: Addressing the Pedagogical Gaps
    Institute for Empirical Research in Economics, University of Zurich, Working Paper Series, ISSN 1424-0459, Working Paper No. 480. Central Limit Theorems When Data Are Dependent: Addressing the Pedagogical Gaps. Timothy Falcon Crack and Olivier Ledoit, February 2010. Timothy Falcon Crack(1), University of Otago; Olivier Ledoit(2), University of Zurich. Version: August 18, 2009. 1: Corresponding author, Professor of Finance, University of Otago, Department of Finance and Quantitative Analysis, PO Box 56, Dunedin, New Zealand, [email protected]. 2: Research Associate, Institute for Empirical Research in Economics, University of Zurich, [email protected]. ABSTRACT: Although dependence in financial data is pervasive, standard doctoral-level econometrics texts do not make clear that the common central limit theorems (CLTs) contained therein fail when applied to dependent data. More advanced books that are clear in their CLT assumptions do not contain any worked examples of CLTs that apply to dependent data. We address these pedagogical gaps by discussing dependence in financial data and dependence assumptions in CLTs and by giving a worked example of the application of a CLT for dependent data to the case of the derivation of the asymptotic distribution of the sample variance of a Gaussian AR(1). We also provide code and the results for a Monte-Carlo simulation used to check the results of the derivation. INTRODUCTION: Financial data exhibit dependence. This dependence invalidates the assumptions of common central limit theorems (CLTs). Although dependence in financial data has been a high-profile research area for over 70 years, standard doctoral-level econometrics texts are not always clear about the dependence assumptions needed for common CLTs.
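The worked example this abstract describes, the asymptotic distribution of the sample variance of a Gaussian AR(1), can be checked by simulation along the same lines. The long-run variance formula below follows from Isserlis' theorem for Gaussian processes; the Monte Carlo settings are illustrative and not taken from the paper.

```python
import numpy as np

def asy_var_sample_variance(phi, gamma0):
    """Asymptotic variance V of sqrt(n)*(sigma_hat^2 - sigma^2) for a
    zero-mean Gaussian AR(1) with lag-h autocovariance gamma0 * phi^h:
    V = sum_h 2*gamma_h^2 = 2*gamma0^2*(1 + phi^2)/(1 - phi^2),
    using Cov(y_t^2, y_{t+h}^2) = 2*gamma_h^2 (Isserlis' theorem)."""
    return 2 * gamma0 ** 2 * (1 + phi ** 2) / (1 - phi ** 2)

def simulate_ar1(n, phi, rng):
    """Stationary Gaussian AR(1) with unit innovation variance."""
    eps = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = eps[0] / np.sqrt(1 - phi ** 2)     # draw from the stationary dist
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eps[t]
    return y

phi, n, reps = 0.5, 1000, 2000
rng = np.random.default_rng(42)
gamma0 = 1 / (1 - phi ** 2)                   # stationary variance
est = np.array([simulate_ar1(n, phi, rng).var() for _ in range(reps)])
mc = n * est.var()                            # Monte-Carlo estimate of V
theory = asy_var_sample_variance(phi, gamma0)
print(round(mc / theory, 2))                  # should be near 1
```

Note that V is larger than the iid value 2γ₀² whenever φ ≠ 0, which is exactly the point of the pedagogical gap: applying an iid CLT to dependent data understates the sampling variability.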
  • Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks
    Article. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks. Renjie Zhou 1,2, Chen Yang 1,2, Jian Wan 1,2,*, Wei Zhang 1,2, Bo Guan 3,*, Naixue Xiong 4,*. 1 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; [email protected] (R.Z.); [email protected] (C.Y.); [email protected] (W.Z.). 2 Key Laboratory of Complex Systems Modeling and Simulation of Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China. 3 School of Electronic and Information Engineer, Ningbo University of Technology, Ningbo 315211, China. 4 Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK 74464, USA. * Correspondence: [email protected] (J.W.); [email protected] (B.G.); [email protected] (N.X.); Tel.: +1-404-645-4067 (N.X.). Academic Editor: Mohamed F. Younis. Received: 15 November 2016; Accepted: 24 March 2017; Published: 6 April 2017. Abstract: Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, but without in-between values, which causes sudden changes of entropy values even if the time series undergoes only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity.
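Classical sample entropy and the graded-similarity idea this abstract describes can be sketched as follows. The hard 0/1 match is the standard SampEn definition; the sigmoid `smooth` function is my stand-in for a full-range similarity, not the paper's actual FMSE function.

```python
import numpy as np

def _avg_similarity(x, m, r, sim):
    """Average similarity over all pairs of length-m templates,
    using the Chebyshev distance between templates."""
    n = len(x)
    templates = np.array([x[i:i + m] for i in range(n - m + 1)])
    total, pairs = 0.0, 0
    for i in range(len(templates) - 1):
        d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
        total += sim(d, r).sum()
        pairs += len(d)
    return total / pairs

def sample_entropy(x, m=2, r=0.2, sim=None):
    """SampEn = -ln(A/B). With the hard match (d <= r -> 1, else 0)
    this is classical sample entropy; passing a graded `sim` mimics the
    FMSE idea of similarity values anywhere in [0, 1]."""
    if sim is None:
        sim = lambda d, r: (d <= r).astype(float)   # all-or-nothing match
    r = r * np.std(x)
    B = _avg_similarity(x, m, r, sim)
    A = _avg_similarity(x, m + 1, r, sim)
    return -np.log(A / B)

# One possible graded similarity: a sigmoid centered at the tolerance r.
smooth = lambda d, r: 1.0 / (1.0 + np.exp(np.clip((d - r) / (0.1 * r), -60, 60)))

rng = np.random.default_rng(7)
noise = rng.standard_normal(1000)
regular = np.sin(2 * np.pi * np.arange(1000) / 25)
print(round(sample_entropy(regular), 2), round(sample_entropy(noise), 2))
print(round(sample_entropy(noise, sim=smooth), 2))
```

A regular signal scores much lower than white noise, as expected; the graded similarity changes entropy values continuously as the data change, which is the stability property FMSE is after.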
  • Incubation Periods Impact the Spatial Predictability of Cholera and Ebola Outbreaks in Sierra Leone
    Incubation periods impact the spatial predictability of cholera and Ebola outbreaks in Sierra Leone. Rebecca Kahn (a,1), Corey M. Peak (a,1), Juan Fernández-Gracia (a,b), Alexandra Hill (c), Amara Jambai (d), Louisa Ganda (e), Marcia C. Castro (f), and Caroline O. Buckee (a,2). (a) Center for Communicable Disease Dynamics, Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA 02115; (b) Institute for Cross-Disciplinary Physics and Complex Systems, Universitat de les Illes Balears - Consell Superior d'Investigacions Científiques, E-07122 Palma de Mallorca, Spain; (c) Disease Control in Humanitarian Emergencies, World Health Organization, CH-1211 Geneva 27, Switzerland; (d) Disease Control and Prevention, Sierra Leone Ministry of Health and Sanitation, Freetown, Sierra Leone FPGG+89; (e) Country Office, World Health Organization, Freetown, Sierra Leone FPGG+89; (f) Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA 02115. Edited by Burton H. Singer, University of Florida, Gainesville, FL, and approved January 22, 2020 (received for review July 29, 2019). Forecasting the spatiotemporal spread of infectious diseases during an outbreak is an important component of epidemic response. However, it remains challenging both methodologically and with respect to data requirements, as disease spread is influenced by numerous factors, including the pathogen's underlying transmission parameters and epidemiological dynamics, social networks and population connectivity, and environmental conditions. […] total number of secondary infections by an infectious individual in a completely susceptible population (i.e., R0) (13, 14). Indeed, the basis of contact tracing protocols during an outbreak reflects the need to identify and contain individuals during the incubation period, and the relative effectiveness of interventions such as symptom monitoring or quarantine significantly depends on the […]
  • Understanding Predictability
    Understanding Predictability. Lior Menzly, University of Southern California; Tano Santos, Columbia University and NBER; Pietro Veronesi, University of Chicago, CEPR and NBER. Abstract: We propose a general equilibrium model with multiple securities in which investors' risk preferences and expectations of dividend growth are time varying. While time varying risk preferences induce the standard positive relation between the dividend yield and expected returns, time varying expected dividend growth induces a negative relation between them. These offsetting effects reduce the ability of the dividend yield to forecast returns and eliminate its ability to forecast dividend growth, as observed in the data. The model links the predictability of returns to that of dividend growth, suggesting specific changes to standard linear predictive regressions for both. The model's predictions are confirmed empirically. Acknowledgements: Different versions of the framework advanced in this paper have been presented at workshops at the University of Texas at Austin, NYU, University of Illinois at Urbana-Champaign, McGill, MIT, Columbia University, Northwestern University, University of California at Berkeley, Dartmouth College, University of Iowa, and the University of Chicago. We thank Andrew Ang, John Cochrane, George Constantinides, Kent Daniel, Harry DeAngelo, Lars Hansen, John Heaton, Bob Hodrick, Martin Lettau, Toby Moskowitz, Ruy Ribeiro, and Jessica Wachter for their comments. I. INTRODUCTION: The predictability of stock market returns at long horizons has been at the center of much empirical and theoretical research in finance in the last decade. For the market as a whole, high prices, normalized by dividends for example, forecast low future returns, whereas they do not predict high future dividend growth.
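The "standard linear predictive regression" the abstract refers to regresses the next-period return (or dividend growth) on the current dividend yield. A minimal simulated version, with made-up persistence and slope parameters chosen purely for illustration:

```python
import numpy as np

# Toy setup: a persistent (AR(1)) log dividend yield dp_t forecasts the
# next-period return with slope b_r. All parameter values are invented.
rng = np.random.default_rng(3)
n, phi, b_r = 5000, 0.94, 0.12
dp = np.zeros(n)
shocks = rng.standard_normal(n)
for t in range(1, n):
    dp[t] = phi * dp[t - 1] + shocks[t]
ret = b_r * dp[:-1] + 0.5 * rng.standard_normal(n - 1)   # r_{t+1} = b_r*dp_t + eps

# OLS estimate of the predictive regression r_{t+1} = a + b*dp_t + e_{t+1}
X = np.column_stack([np.ones(n - 1), dp[:-1]])
b_hat = np.linalg.lstsq(X, ret, rcond=None)[0][1]
print(round(b_hat, 2))
```

In the paper's model, time-varying expected dividend growth would add an offsetting term to this regression, shrinking the estimated slope; the sketch only shows the baseline regression that their proposed modifications start from.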