Deep Rao-Blackwellised Particle Filters for Time Series Forecasting


Richard Kurle∗†2, Syama Sundar Rangapuram1, Emmanuel de Bezenac†3, Stephan Günnemann2, Jan Gasthaus1

1AWS AI Labs  2Technical University of Munich  3Sorbonne Université

∗Correspondence to [email protected]. †Work done while at AWS AI Labs.

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.

Abstract

This work addresses efficient inference and learning in switching Gaussian linear dynamical systems using a Rao-Blackwellised particle filter and a corresponding Monte Carlo objective. To improve the forecasting capabilities, we extend this classical model by conditionally linear state-to-switch dynamics, while leaving the partial tractability of the conditional Gaussian linear part intact. Furthermore, we use an auxiliary variable approach with a decoder-type neural network that allows for more complex non-linear emission models and multivariate observations. We propose a Monte Carlo objective that leverages the conditional linearity by computing the corresponding conditional expectations in closed form, and a suitable proposal distribution that is factorised similarly to the optimal proposal distribution. We evaluate our approach on several popular time series forecasting datasets as well as image streams of simulated physical systems. Our results show improved forecasting performance compared to other deep state-space model approaches.

1 Introduction

The Gaussian linear dynamical system (GLS) [4, 38, 31] is one of the most well-studied dynamical models, with wide-ranging applications in many domains, including control, navigation, and time series forecasting. This state-space model (SSM) is described by a (typically Markovian) latent linear process that generates a sequence of observations. The assumption of Gaussian noise and linear state transitions and measurements allows for exact inference of the latent variables—such as filtering and smoothing—and computation of the marginal likelihood for system identification (learning). However, most systems of practical interest are non-linear, requiring more complex models.

Many approximate inference methods have been developed for non-linear dynamical systems: Deterministic methods approximate the filtering and smoothing distributions, e.g. by using a Taylor series expansion of the non-linear functions (known as the extended Kalman filter (EKF) and second-order EKF) or by directly approximating these marginal distributions by a Gaussian using moment matching techniques [26, 27, 2, 3, 36]. Stochastic methods such as particle filters or smoothers approximate the filtering and smoothing distributions by a set of weighted samples (particles) using sequential Monte Carlo (SMC) [13, 16]. Furthermore, system identification with deep neural networks has been proposed using stochastic variational inference [17, 15] and variational SMC [23, 29, 20].

A common challenge with both types of approximations is that predictions/forecasts over long forecast horizons suffer from accumulated errors resulting from insufficient approximations at every time step. Switching Gaussian linear systems (SGLS) [1, 12]—which use additional latent variables to switch between different linear dynamics—provide a way to alleviate this problem: the conditional linearity can be exploited by approximate inference algorithms to reduce approximation errors. Unfortunately, this comes at the cost of reduced modelling flexibility compared to more general non-linear dynamical systems.
Specifically, we identify the following two weaknesses of the SGLS: i) the switch transition model is assumed independent of the GLS state and observation; ii) conditionally-linear emissions are insufficient for modelling complex multivariate data streams such as video data, while more suitable emission models (e.g. using convolutional neural networks) exist. The first problem has been addressed in [21] through augmentation with a Polya-gamma-distributed variable and a stick-breaking process. However, this approach uses Gibbs sampling to infer model parameters and thus does not scale well to large datasets. The second problem is addressed in [11] using an auxiliary variable between GLS states and emissions and stochastic variational inference to obtain a tractable objective function for learning. Yet, this model predicts the GLS parameters deterministically using an LSTM with an auto-regressive component, resulting in poor long-term forecasts.

We propose auxiliary variable recurrent switching Gaussian linear systems (ARSGLS), an extension of the SGLS that addresses both weaknesses by building on ideas from [21] and [11]. ARSGLS improves upon the SGLS by incorporating a conditionally linear state-to-switch dependence, which is crucial for accurate long-term out-of-sample predictions (forecasting), and a decoder-type neural network that allows modelling multivariate time-series data with a non-linear emission/measurement process. As in the SGLS, a crucial feature of ARSGLS is that approximate inference and likelihood estimation can be Rao-Blackwellised, that is, expectations involving the conditionally linear part of the model can be computed in closed form, and only the expectations wrt. the switch variables need to be approximated. We leverage this feature and propose a Rao-Blackwellised filtering objective function and a suitable proposal distribution for this model class.

We evaluate our model with two different instantiations of the GLS: in the first scenario, the GLS is implemented with a constrained structure that models time-series patterns such as level, trend and seasonality [14, 34]; the second scenario considers a general, unconstrained conditional GLS. We compare our approach to closely related methods such as Deep State Space Models (DeepState) [30] and Kalman Variational Auto-Encoders (KVAE) [11] on 5 popular forecasting datasets and multivariate (image) data streams generated from a physics engine as used in [11]. Our results show that the proposed model achieves improved performance for univariate and multivariate forecasting.

2 Background

2.1 Gaussian Linear Dynamical Systems

The GLS models sequence data using a first-order Markov linear latent and emission process:

$x_t = A x_{t-1} + B u_t + w_t, \qquad w_t \sim \mathcal{N}(0, R),$  (1a)
$y_t = C x_t + D u_t + v_t, \qquad v_t \sim \mathcal{N}(0, Q),$  (1b)

where the vectors u, x and y denote the inputs (also referred to as covariates or controls), latent states, and targets (observations). A and C are the transition and emission matrices, B and D are optional control matrices, and R and Q are the state and emission noise covariances. In the following, we omit the optional inputs u to ease the presentation; however, we do use inputs, e.g. for our experiments in Sec. 5.2.
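For concreteness, a minimal NumPy sketch of sampling from the GLS in Eq. (1), with the optional control terms $B u_t$ and $D u_t$ omitted as in the text (the function name and array conventions are our own):

```python
import numpy as np

def simulate_gls(A, C, R, Q, x0, T, rng=None):
    """Sample a trajectory from the GLS of Eq. (1), controls omitted."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dx, dy = A.shape[0], C.shape[0]
    xs, ys = np.empty((T, dx)), np.empty((T, dy))
    x = x0
    for t in range(T):
        x = A @ x + rng.multivariate_normal(np.zeros(dx), R)  # Eq. (1a)
        y = C @ x + rng.multivariate_normal(np.zeros(dy), Q)  # Eq. (1b)
        xs[t], ys[t] = x, y
    return xs, ys
```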
The appealing property of the GLS is that inference and likelihood computation are analytically tractable using the well-known Kalman filter algorithm, which alternates between a prediction step $p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1}$ and an update step $p(x_t \mid y_{1:t}) \propto p(x_t \mid y_{1:t-1})\, p(y_t \mid x_t)$, where $p(x_t \mid x_{t-1})$ is the state transition and $p(y_t \mid x_t)$ is the emission/measurement process corresponding to Eqs. (1a) and (1b), respectively.

2.2 Particle Filters

In non-linear dynamical systems, the filter distribution $p(x_t \mid y_{1:t})$ is intractable and needs to be approximated. Particle filters are SMC algorithms that approximate the filter distribution at every time step t by a set of P weighted particles $\{x_t^{(p)}\}_{p=1}^{P}$, combining importance sampling and re-sampling. Denoting the Dirac measure centred at $x_t$ by $\delta(x_t)$, the filter distribution is approximated as

$p(x_t \mid y_{1:t}) \approx \sum_{p=1}^{P} w_t^{(p)}\, \delta(x_t^{(p)}).$  (2)

In first-order Markovian dynamical systems, the importance weights are computed recursively as

$\tilde{w}_t^{(p)} = w_{t-1}^{(p)}\, \gamma(x_t^{(p)}, x_{t-1}^{(p)}), \qquad w_t^{(p)} = \frac{\tilde{w}_t^{(p)}}{\sum_{p'=1}^{P} \tilde{w}_t^{(p')}}, \qquad \gamma(x_t^{(p)}, x_{t-1}^{(p)}) = \frac{p(y_t \mid x_t^{(p)})\, p(x_t^{(p)} \mid x_{t-1}^{(p)})}{\pi(x_t^{(p)} \mid x_{1:t-1}^{(p)})},$  (3)

where $\tilde{w}_t^{(p)}$ and $w_t^{(p)}$ denote the unnormalised and normalised importance weights, respectively, $\gamma(x_t^{(p)}, x_{t-1}^{(p)})$ is the incremental importance weight, and $\pi(x_t \mid x_{1:t-1})$ is the proposal distribution. To alleviate weight degeneracy issues, a re-sampling step is performed if the importance weights satisfy a chosen degeneracy criterion. A common criterion is to re-sample when the effective sample size (ESS) drops below half the number of particles, i.e. $P_{\mathrm{ESS}} = \left(\sum_{p=1}^{P} (w_t^{(p)})^2\right)^{-1} \le P/2$.

2.3 Switching Gaussian Linear Systems with Rao-Blackwellised Particle Filters

Switching Gaussian linear systems (SGLS), also referred to as conditional GLS or mixture GLS, are a class of non-linear SSMs that use additional (non-linear) latent variables $s_{1:t}$ that index the parameters (transition, emission, control, and noise covariance matrices) of a GLS, allowing them to "switch" between different (linear) regimes:

$x_t = A(s_t) x_{t-1} + B(s_t) u_t + w_t(s_t), \qquad w_t(s_t) \sim \mathcal{N}(0, R(s_t)),$  (4)
$y_t = C(s_t) x_t + D(s_t) u_t + v_t(s_t), \qquad v_t(s_t) \sim \mathcal{N}(0, Q(s_t)).$

The switch variables $s_t$ are typically categorical variables, indexing one of K base matrices; however, other choices are possible (see Sec. 3.1). Omitting the inputs u again, the graphical model factorises as $p(y_{1:T}, x_{0:T}, s_{1:T}) = p(x_0) \prod_{t=1}^{T} p(y_t \mid x_t, s_t)\, p(x_t \mid x_{t-1}, s_t)\, p(s_t \mid s_{t-1})$, where $p(s_1 \mid s_0) = p(s_1)$ is the switch prior (conditioned on $u_1$ if inputs are given). Note that we use an initial state $x_0$ without a corresponding observation for convenience.

A crucial property of this model is that, while the complete model exhibits non-linear dynamics, given a sample of the trajectory $s_{1:T}^{(p)}$, the rest of the model is (conditionally) linear. This allows for efficient approximate inference and likelihood estimation. The so-called Rao-Blackwellised particle filter (RBPF) leverages the conditional linearity by approximating expectations—that occur in filtering and likelihood computation—wrt. the switch variables only.
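As a concrete reference for Eqs. (2) and (3), here is a minimal bootstrap particle filter in NumPy, using the transition as the proposal $\pi$ so that the incremental weight collapses to the likelihood, together with the ESS-based re-sampling rule from Sec. 2.2. It is a generic sketch, not the Rao-Blackwellised variant developed in the paper; `transition_sample` and `likelihood` are user-supplied callables.

```python
import numpy as np

def bootstrap_pf(ys, transition_sample, likelihood, x0_sample, P=1000, rng=None):
    """Bootstrap particle filter: proposal = transition, so the incremental
    weight in Eq. (3) reduces to the likelihood p(y_t | x_t)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = x0_sample(P, rng)                      # (P, dx) initial particles
    w = np.full(P, 1.0 / P)                    # normalised weights
    filt_means = []
    for y in ys:
        x = transition_sample(x, rng)          # propose from p(x_t | x_{t-1})
        w_tilde = w * likelihood(y, x)         # unnormalised weights, Eq. (3)
        w = w_tilde / w_tilde.sum()
        if 1.0 / np.sum(w ** 2) <= P / 2:      # ESS degeneracy criterion
            idx = rng.choice(P, size=P, p=w)   # multinomial re-sampling
            x, w = x[idx], np.full(P, 1.0 / P)
        filt_means.append(w @ x)               # estimate of E[x_t | y_{1:t}], Eq. (2)
    return np.array(filt_means)
```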
Recommended publications
  • Kalman and Particle Filtering
Abstract: The Kalman and Particle filters are algorithms that recursively update an estimate of the state and find the innovations driving a stochastic process given a sequence of observations. The Kalman filter accomplishes this goal by linear projections, while the Particle filter does so by a sequential Monte Carlo method. With the state estimates, we can forecast and smooth the stochastic process. With the innovations, we can estimate the parameters of the model. The article discusses how to set a dynamic model in a state-space form, derives the Kalman and Particle filters, and explains how to use them for estimation. Since both filters start with a state-space representation of the stochastic processes of interest, section 1 presents the state-space form of a dynamic model. Then, section 2 introduces the Kalman filter and section 3 develops the Particle filter. For extended expositions of this material, see Doucet, de Freitas, and Gordon (2001), Durbin and Koopman (2001), and Ljungqvist and Sargent (2004).

1. The state-space representation of a dynamic model

A large class of dynamic models can be represented by a state-space form:

$X_{t+1} = \varphi(X_t, W_{t+1}; \gamma)$  (1)
$Y_t = g(X_t, V_t; \gamma).$  (2)

This representation handles a stochastic process by finding three objects: a vector that describes the position of the system (a state, $X_t \in X \subset \mathbb{R}^l$) and two functions, one mapping the state today into the state tomorrow (the transition equation, (1)) and one mapping the state into observables, $Y_t$ (the measurement equation, (2)).
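For the linear Gaussian special case of Eqs. (1) and (2), a minimal NumPy sketch of the Kalman filter's predict/update cycle built around linear projections; the matrix names (A, C, R, Q) are our notation, not the article's:

```python
import numpy as np

def kalman_step(m, P, y, A, C, R, Q):
    """One predict/update cycle for the linear model x' = A x + w, y = C x + v."""
    # Predict: propagate mean and covariance through the transition.
    m_pred = A @ m
    P_pred = A @ P @ A.T + R
    # Update: condition on the new observation via the Kalman gain.
    S = C @ P_pred @ C.T + Q                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    innov = y - C @ m_pred                   # the "innovation" driving the process
    m_new = m_pred + K @ innov
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```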
  • Lecture 19: Wavelet Compression of Time Series and Images
Lecture 19: Wavelet compression of time series and images
© Christopher S. Bretherton, Winter 2014
Ref: Matlab Wavelet Toolbox help.

19.1 Wavelet compression of a time series

The last section of wavelet leleccum notoolbox.m demonstrates the use of wavelet compression on a time series. The idea is to keep the wavelet coefficients of largest amplitude and zero out the small ones.

19.2 Wavelet analysis/compression of an image

Wavelet analysis is easily extended to two-dimensional images or datasets (data matrices), by first doing a wavelet transform of each column of the matrix, then transforming each row of the result (see wavelet image). The wavelet coefficient matrix has the highest level (largest-scale) averages in the first rows/columns, then successively smaller detail scales further down the rows/columns. The example also shows fine results with 50-fold data compression.

19.3 Continuous Wavelet Transform (CWT)

Given a continuous signal u(t) and an analyzing wavelet $\psi(x)$, the CWT has the form

$W(\lambda, t) = \lambda^{-1/2} \int_{-\infty}^{\infty} \psi\!\left(\frac{s - t}{\lambda}\right) u(s)\, \mathrm{d}s.$  (19.3.1)

Here λ, the scale, is a continuous variable. We insist that ψ have mean zero and that its square integrates to 1. The continuous Haar wavelet is defined:

$\psi(t) = \begin{cases} 1 & 0 < t < 1/2 \\ -1 & 1/2 < t < 1 \\ 0 & \text{otherwise.} \end{cases}$  (19.3.2)

$W(\lambda, t)$ is proportional to the difference of running means of u over successive intervals of length λ/2. In practice, for a discrete time series, the integral is evaluated as a Riemann sum using the Matlab wavelet toolbox function cwt.
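In the same spirit as the Matlab demo the lecture refers to, a small Python sketch of amplitude-threshold wavelet compression, assuming the PyWavelets package as a stand-in for the Matlab Wavelet Toolbox:

```python
import numpy as np
import pywt  # PyWavelets, assumed here in place of the Matlab Wavelet Toolbox

def wavelet_compress(u, wavelet="haar", keep=0.02, level=4):
    """Keep only the largest-amplitude wavelet coefficients and zero the rest."""
    coeffs = pywt.wavedec(u, wavelet, level=level)
    flat = np.concatenate([c.ravel() for c in coeffs])
    thresh = np.quantile(np.abs(flat), 1.0 - keep)  # keep the top `keep` fraction
    coeffs = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    return pywt.waverec(coeffs, wavelet)

u = np.cumsum(np.random.default_rng(0).standard_normal(1024))  # toy signal
u_hat = wavelet_compress(u, keep=0.02)  # roughly 50-fold coefficient compression
```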
  • Applying Particle Filtering in Both Aggregated and Age-Structured Population Compartmental Models of Pre-Vaccination Measles
bioRxiv preprint doi: https://doi.org/10.1101/340661; posted June 6, 2018, under a CC-BY 4.0 International license.

Applying particle filtering in both aggregated and age-structured population compartmental models of pre-vaccination measles

Xiaoyan Li1*, Alexander Doroshenko2, Nathaniel D. Osgood1
1 Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
2 Department of Medicine, Division of Preventive Medicine, University of Alberta, Edmonton, Alberta, Canada
* [email protected]

Abstract: Measles is a highly transmissible disease and is one of the leading causes of death among young children under 5 globally. While the use of ongoing surveillance data and, recently, dynamic models offer insight on measles dynamics, both suffer notable shortcomings when applied to measles outbreak prediction. In this paper, we apply the Sequential Monte Carlo approach of particle filtering, incorporating reported measles incidence for Saskatchewan during the pre-vaccination era, using an adaptation of a previously contributed measles compartmental model. To secure further insight, we also perform particle filtering on an age-structured adaptation of the model in which the population is divided into two interacting age groups, children and adults. The results indicate that, when used with a suitable dynamic model, particle filtering can offer high predictive capacity for measles dynamics and outbreak occurrence in a low vaccination context. We have investigated five particle filtering models in this project. Based on the most competitive model as evaluated by predictive accuracy, we have performed prediction and outbreak classification analysis.
  • Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients
Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients

Richard M. Levich and Rosario C. Rizzo*
Current Draft: December 1998

Abstract: When autocorrelation is small, existing statistical techniques may not be powerful enough to reject the hypothesis that a series is free of autocorrelation. We propose two new and simple statistical tests (RHO and PHI) based on the unweighted sum of autocorrelation and partial autocorrelation coefficients. We analyze a set of simulated data to show the higher power of RHO and PHI in comparison to conventional tests for autocorrelation, especially in the presence of small but persistent autocorrelation. We show an application of our tests to data on currency futures to demonstrate their practical use. Finally, we indicate how our methodology could be used for a new class of time series models (the Generalized Autoregressive, or GAR models) that take into account the presence of small but persistent autocorrelation.

An earlier version of this paper was presented at the Symposium on Global Integration and Competition, sponsored by the Center for Japan-U.S. Business and Economic Studies, Stern School of Business, New York University, March 27-28, 1997. We thank Paul Samuelson and the other participants at that conference for useful comments.
* Stern School of Business, New York University and Research Department, Bank of Italy, respectively.

1. Introduction

Economic time series are often characterized by positive
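For intuition only, a rough sketch of the quantity underlying the RHO test, the unweighted sum of the first few sample autocorrelations; the standardisation below is a generic large-sample heuristic, not the paper's exact construction:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation coefficients r_1, ..., r_max_lag."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[k:] * xc[:-k]) / denom
                     for k in range(1, max_lag + 1)])

def rho_statistic(x, max_lag=10):
    """Unweighted sum of autocorrelations, the idea behind the RHO test.
    Under the null each r_k is roughly N(0, 1/n), so the sum is roughly
    N(0, max_lag / n); this standardisation is a heuristic assumption."""
    r = sample_acf(x, max_lag)
    return r.sum() / np.sqrt(max_lag / len(x))
```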
  • Time Series: Co-Integration
Time Series: Co-integration

See also: Economic Forecasting; Time Series: General; Time Series: Nonstationary Distributions and Unit Roots; Time Series: Seasonal Adjustment

N. H. Chan

Bibliography
Banerjee A, Dolado J J, Galbraith J W, Hendry D F 1993 Co-Integration, Error Correction, and the Econometric Analysis of Non-stationary Data. Oxford University Press, Oxford, UK
Box G E P, Tiao G C 1977 A canonical analysis of multiple time series. Biometrika 64: 355–65
Chan N H, Tsay R S 1996 On the use of canonical correlation analysis in testing common trends. In: Lee J C, Johnson W O, Zellner A (eds.) Modelling and Prediction: Honoring S. Geisser. Springer-Verlag, New York, pp. 364–77
Chan N H, Wei C Z 1988 Limiting distributions of least squares estimates of unstable autoregressive processes. Annals of Statistics 16: 367–401
Engle R F, Granger C W J 1987 Cointegration and error correction: Representation, estimation, and testing. Econometrica 55: 251–76
Watson M W 1994 Vector autoregressions and co-integration. In: Engle R F, McFadden D L (eds.) Handbook of Econometrics Vol. IV. Elsevier, The Netherlands

Time Series: Cycles

Time series data in economics and other fields of social science often exhibit cyclical behavior. For example, aggregate retail sales are high in November and December and follow a seasonal cycle; voter registrations are high before each presidential election and follow an election cycle; and aggregate macroeconomic activity falls into recession every six to eight years and follows a business cycle. In spite of this cyclicality, these series are not perfectly predictable, and the cycles are not strictly periodic.
  • Generating Time Series with Diverse and Controllable Characteristics
GRATIS: GeneRAting TIme Series with diverse and controllable characteristics

Yanfei Kang,∗ Rob J Hyndman,† and Feng Li‡

Abstract: The explosion of time series data in recent years has brought a flourish of new time series analysis methods, for forecasting, clustering, classification and other tasks. The evaluation of these new methods requires either collecting or simulating a diverse set of time series benchmarking data to enable reliable comparisons against alternative approaches. We propose GeneRAting TIme Series with diverse and controllable characteristics, named GRATIS, with the use of mixture autoregressive (MAR) models. We simulate sets of time series using MAR models and investigate the diversity and coverage of the generated time series in a time series feature space. By tuning the parameters of the MAR models, GRATIS is also able to efficiently generate new time series with controllable features. In general, as a costless surrogate to the traditional data collection approach, GRATIS can be used as an evaluation tool for tasks such as time series forecasting and classification. We illustrate the usefulness of our time series generation process through a time series forecasting application.

Keywords: Time series features; Time series generation; Mixture autoregressive models; Time series forecasting; Simulation.

1 Introduction

With the widespread collection of time series data via scanners, monitors and other automated data collection devices, there has been an explosion of time series analysis methods developed in the past decade or two. Paradoxically, the large datasets are often also relatively homogeneous in the industry domain, which limits their use for evaluation of general time series analysis methods (Keogh and Kasetty, 2003; Muñoz et al., 2018; Kang et al., 2017).
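For intuition, a minimal simulator for a mixture autoregressive process in NumPy; this is a generic MAR sketch with made-up parameters, not the GRATIS implementation:

```python
import numpy as np

def simulate_mar(weights, ar_coefs, sigmas, T=500, burn=100, rng=None):
    """Simulate a mixture autoregressive (MAR) process: at each step one AR
    component k is drawn with probability weights[k], and x_t follows that
    component's AR recursion with noise scale sigmas[k]."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p = max(len(c) for c in ar_coefs)          # largest AR order
    x = np.zeros(T + burn + p)
    for t in range(p, T + burn + p):
        k = rng.choice(len(weights), p=weights)
        c = ar_coefs[k]
        # dot the coefficients with the most recent values, newest first
        x[t] = np.dot(c, x[t - len(c):t][::-1]) + sigmas[k] * rng.standard_normal()
    return x[burn + p:]

# Two components with different dynamics and noise levels (illustrative values).
x = simulate_mar(weights=[0.6, 0.4], ar_coefs=[[0.9], [0.5, -0.3]], sigmas=[1.0, 2.0])
```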
  • Dynamic Detection of Change Points in Long Time Series
Dynamic detection of change points in long time series

Nicolas Chopin
AISM (2007) 59: 349–366. DOI 10.1007/s10463-006-0053-9
Received: 23 March 2005 / Revised: 8 September 2005 / Published online: 17 June 2006
© The Institute of Statistical Mathematics, Tokyo 2006

Abstract: We consider the problem of detecting change points (structural changes) in long sequences of data, whether in a sequential fashion or not, and without assuming prior knowledge of the number of these change points. We reformulate this problem as the Bayesian filtering and smoothing of a non-standard state space model. Towards this goal, we build a hybrid algorithm that relies on particle filtering and Markov chain Monte Carlo ideas. The approach is illustrated by a GARCH change point model.

Keywords: Change point models · GARCH models · Markov chain Monte Carlo · Particle filter · Sequential Monte Carlo · State space models

1 Introduction

The assumption that an observed time series follows the same fixed stationary model over a very long period is rarely realistic. In economic applications for instance, common sense suggests that the behaviour of economic agents may change abruptly under the effect of economic policy, political events, etc. For example, Mikosch and Stărică (2003, 2004) point out that GARCH models fit very poorly to long sequences of financial data, say 20 years of daily log-returns of some speculative asset. Despite this, these models remain highly popular, thanks to their forecast ability (at least on short to medium-sized time series) and their elegant simplicity (which facilitates economic interpretation). Against the common trend of building more and more sophisticated stationary models that may spuriously provide a better fit for such long sequences, the aforementioned authors argue that GARCH models remain a good 'local' approximation of the behaviour of financial data.
  • Bayesian Filtering: from Kalman Filters to Particle Filters, and Beyond ZHE CHEN
Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond

ZHE CHEN

Abstract: In this self-contained survey/review paper, we systematically investigate the roots of Bayesian filtering as well as its rich leaves in the literature. Stochastic filtering theory is briefly reviewed with emphasis on nonlinear and non-Gaussian filtering. Following the Bayesian statistics, different Bayesian filtering techniques are developed given different scenarios. Under the linear quadratic Gaussian circumstance, the celebrated Kalman filter can be derived within the Bayesian framework. Optimal/suboptimal nonlinear filtering techniques are extensively investigated. In particular, we focus our attention on the Bayesian filtering approach based on sequential Monte Carlo sampling, the so-called particle filters. Many variants of the particle filter as well as their features (strengths and weaknesses) are discussed. Related theoretical and practical issues are addressed in detail. In addition, some other (new) directions on Bayesian filtering ...

Contents (excerpt): IV Bayesian Optimal Filtering (Optimal Filtering; Kalman Filtering; Optimum Nonlinear Filtering; Finite-dimensional Filters); V Numerical Approximation Methods (Gaussian/Laplace Approximation; Iterative Quadrature; Multigrid Method and Point-Mass Approximation; Moment Approximation; Gaussian Sum Approximation; Deterministic ...).
  • Monte Carlo Smoothing for Nonlinear Time Series
Monte Carlo Smoothing for Nonlinear Time Series

Simon J. GODSILL, Arnaud DOUCET, and Mike WEST

We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forward-filtering backward-smoothing procedure that can be viewed as the nonlinear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterized in terms of time-varying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based on the filtered trajectories.

KEY WORDS: Bayesian inference; Non-Gaussian time series; Nonlinear time series; Particle filter; Sequential Monte Carlo; State-space model.

1. INTRODUCTION

In this article we develop Monte Carlo methods for smoothing in general state-space models. To fix notation, consider the standard Markovian state-space model (West and Harrison 1997):

$x_{t+1} \sim f(x_{t+1} \mid x_t)$ (state evolution density),

with observation density $g(y_{t+1} \mid x_{t+1})$. Filtering proceeds recursively via

$p(x_{t+1} \mid y_{1:t}) = \int f(x_{t+1} \mid x_t)\, p(x_t \mid y_{1:t})\, \mathrm{d}x_t \quad \text{and} \quad p(x_{t+1} \mid y_{1:t+1}) = \frac{g(y_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid y_{1:t})}{p(y_{t+1} \mid y_{1:t})}.$

Similarly, smoothing can be performed recursively backward in time using the smoothing formula

$p(x_t \mid y_{1:T}) = p(x_t \mid y_{1:t}) \int \frac{f(x_{t+1} \mid x_t)}{p(x_{t+1} \mid y_{1:t})}\, p(x_{t+1} \mid y_{1:T})\, \mathrm{d}x_{t+1}.$
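A compact sketch of the backward-simulation pass described here, drawing one smoothed trajectory from stored forward-filter particles; the array layout and the user-supplied transition log-density `f_logpdf` are our own assumptions:

```python
import numpy as np

def backward_simulate(particles, weights, f_logpdf, rng=None):
    """Backward simulation in the Godsill-Doucet-West style: sample one
    smoothed trajectory from stored forward-filter output.
    particles: (T, P, dx) array; weights: (T, P) normalised weights;
    f_logpdf(x_next, x): log f(x_{t+1} | x_t), vectorised over the P particles."""
    rng = rng if rng is not None else np.random.default_rng(0)
    T, P, dx = particles.shape
    traj = np.empty((T, dx))
    idx = rng.choice(P, p=weights[-1])        # sample x_T from the final weights
    traj[-1] = particles[-1, idx]
    for t in range(T - 2, -1, -1):
        # reweight time-t particles by how well they explain the sampled x_{t+1}
        logw = np.log(weights[t]) + f_logpdf(traj[t + 1], particles[t])
        w = np.exp(logw - logw.max())
        idx = rng.choice(P, p=w / w.sum())
        traj[t] = particles[t, idx]
    return traj
```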
  • Time Series Analysis 5
Time Series Analysis
5. State space models and Kalman filtering

Andrew Lesniewski
Baruch College, New York
Fall 2019

Outline:
1 Warm-up: Recursive Least Squares
2 Kalman Filter
3 Nonlinear State Space Models
4 Particle Filtering

OLS regression. As a motivation for the remainder of this lecture, we consider the standard linear model

$Y = X^{\mathsf{T}} \beta + \varepsilon,$  (1)

where $Y \in \mathbb{R}$, $X \in \mathbb{R}^k$, and $\varepsilon \in \mathbb{R}$ is noise (this includes the model with an intercept as a special case, in which the first component of X is assumed to be 1). Given n observations $x_1, \dots, x_n$ and $y_1, \dots, y_n$ of X and Y, respectively, ordinary least squares (OLS) regression leads to the following estimated value of the coefficient β:

$\widehat{\beta}_n = (X_n^{\mathsf{T}} X_n)^{-1} X_n^{\mathsf{T}} Y_n.$  (2)

The matrices $X_n$ and $Y_n$ above are defined as

$X_n = \begin{pmatrix} x_1^{\mathsf{T}} \\ \vdots \\ x_n^{\mathsf{T}} \end{pmatrix} \in \mathrm{Mat}_{n,k}(\mathbb{R}) \quad \text{and} \quad Y_n = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \in \mathbb{R}^n,$  (3)

respectively.

Recursive least squares. Suppose now that X and Y consist of a streaming set of data, and each new observation leads to an updated value of the estimated β.
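Anticipating the recursive least squares slides, a minimal sketch of the one-observation update of $\widehat{\beta}_n$ in Eq. (2) that avoids re-inverting $X_n^{\mathsf{T}} X_n$, via the Sherman–Morrison identity (names are our own):

```python
import numpy as np

def rls_update(beta, P, x, y):
    """One recursive least-squares step: refresh the OLS estimate beta of
    Eq. (2) with a new observation (x, y).
    P is the running inverse Gram matrix (X_n^T X_n)^{-1}."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)           # gain vector (Sherman-Morrison update)
    beta = beta + k * (y - x @ beta)  # correct by the prediction error
    P = P - np.outer(k, Px)           # update the inverse Gram matrix
    return beta, P
```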
  • Adaptive Motion Model for Human Tracking Using Particle Filter
2010 International Conference on Pattern Recognition

Adaptive Motion Model for Human Tracking Using Particle Filter

Mohammad Hossein Ghaeminia1, Amir Hossein Shabani2, and Shahryar Baradaran Shokouhi1
1Iran Univ. of Science & Technology, Tehran, Iran
2University of Waterloo, ON, Canada
[email protected] [email protected] [email protected]

Abstract: This paper presents a novel approach to model the complex motion of human using a probabilistic autoregressive moving average model. The parameters of the model are adaptively tuned during the course of tracking by utilizing the main varying components of the pdf of the target's acceleration and velocity. This motion model, along with the color histogram as the measurement model, has been incorporated in the particle filtering framework for human tracking. The proposed method is evaluated by PETS benchmark in which the targets have non-smooth motion and suddenly change their motion direction. Our method competes with the state-of-the-art techniques for human tracking in the real world.

…is periodically adapted by an efficient learning procedure (Figure 1). The core of learning the motion model is the parameter estimation, for which the sequence of velocity and acceleration are innovatively analyzed to be modeled by a Gaussian Mixture Model (GMM). The non-negative matrix factorization is then used for dimensionality reduction to take care of high variations during abrupt changes [3]. Utilizing this adaptive motion model along with a color histogram as measurement model in the PF framework provided us an appropriate approach for human tracking in the real-world scenario of the PETS benchmark [4]. The rest of this paper is organized as follows. Section 2 overviews the related works. Section 3 explains the particle filter and the probabilistic ARMA model.
  • Sequential Monte Carlo Filtering Estimation of Ebola
Sequential Monte Carlo Filtering Estimation of Ebola Progression in West Africa

Narges Montazeri Shahtori1,∗, Caterina Scoglio1, Arash Pourhabib2, and Faryad Darabi Sahneh1

Abstract: We use a multivariate formulation of the sequential Monte Carlo filter that utilizes mechanistic models for Ebola virus propagation and available incidence data to simultaneously estimate the disease progression states and the model parameters. This method has the advantage of performing the inference online as the new data becomes available, and estimates the evolution of the basic reproductive ratio R0(t) of the Ebola outbreak through time. Our analysis identifies a peak in the basic reproductive ratio close to the time when Ebola cases were reported in Europe and the USA.

I. INTRODUCTION

Since December 2013, West Africa has experienced the largest Ebola outbreak, with more than 20,000 infected cases reported [1]. Secondary infections have also been reported in Spain and the United States [2].

…approaches is that they provide an offline inference of an outbreak that is inherently dynamic, and parameters of the model change during disease evolution, so we need to keep tracking parameters when new data become available. Furthermore, since many factors such as intervention strategies could affect the parameters, we expect that the basic reproductive ratio changes during the disease evolution. Therefore we need techniques that are able to trace new data as they become available. Towers et al. [6] estimated the basic reproduction ratio, R0(t), by fitting exponential regression models to small successive time intervals of the Ebola outbreak. Thereby, they obtained an estimate of temporal variations of the growth rate. Their application of regression models ignores the systemic epidemiological information