Lecture 3: Particle Filtering and Introduction to Bayesian Inference


EE378A Statistical Signal Processing                                Lecture 3 - 04/18/2017

Lecture 3: Particle Filtering and Introduction to Bayesian Inference

Lecturer: Tsachy Weissman                                           Scribe: Charles Hale

In this lecture, we introduce new methods for inference on hidden Markov processes in cases beyond discrete alphabets. This first requires us to learn how to estimate a pdf based on samples drawn from a different distribution. At the end, we briefly introduce Bayesian inference.

1 Recap: Inference on Hidden Markov Processes (HMPs)

1.1 Setting

A quick recap of the setting of an HMP:

1. $\{X_n\}_{n \ge 1}$ is a Markov process, called "the state process".
2. $\{Y_n\}_{n \ge 1}$ is "the observation process", where $Y_i$ is the output of $X_i$ sent through a "memoryless" channel characterized by the distribution $P_{Y|X}$.

Alternatively, we showed in HW1 that the above is totally equivalent to the following:

1. $\{X_n\}_{n \ge 1}$ is "the state process", defined by $X_t = f_t(X_{t-1}, W_t)$, where $\{W_n\}_{n \ge 1}$ is an independent process, i.e., each $W_t$ is independent of all the others.
2. $\{Y_n\}_{n \ge 1}$ is "the observation process", related to $\{X_n\}_{n \ge 1}$ by $Y_t = g_t(X_t, N_t)$, where $\{N_n\}_{n \ge 1}$ is an independent process.
3. The processes $\{N_n\}_{n \ge 1}$ and $\{W_n\}_{n \ge 1}$ are independent of each other.

1.2 Goal

Since the state process $\{X_n\}_{n \ge 1}$ is "hidden" from us, we only get to observe $\{Y_n\}_{n \ge 1}$ and wish to find a way of estimating $\{X_n\}_{n \ge 1}$ based on $\{Y_n\}_{n \ge 1}$. To do this, we define the forward recursions for causal estimation:

    \alpha_t(x_t) = F(\beta_t, P_{Y_t|X_t}, y_t)                                    (1)
    \beta_{t+1}(x_{t+1}) = G(\alpha_t, P_{X_{t+1}|X_t})                             (2)

where $F$ and $G$ are the operators defined in lecture 4.

1.3 Challenges

For general alphabets $\mathcal{X}, \mathcal{Y}$, computing the forward recursion is a very difficult problem. However, we know efficient algorithms that can be applied in the following situations:

1. All random variables are discrete with a finite alphabet. We can then solve these equations by brute force in time proportional to the alphabet sizes.
2. For $t \ge 1$, $f_t$ and $g_t$ are linear and $N_t, W_t, X_t$ are Gaussian. The solution is the "Kalman filter" algorithm that we derive in Homework 2.

Beyond these special situations, there are many heuristic algorithms for this computation:

1. We can compute the forward recursions approximately by quantizing the alphabets. This leads to a tradeoff between model accuracy and computation cost.
2. This lecture will focus on particle filtering, which is a way of estimating the forward recursions with an adaptive quantization.

2 Importance Sampling

Before we can learn about particle filtering, we first need to discuss importance sampling (see Section 2.12.4 of the Spring 2013 lecture notes for further reference).

2.1 Simple Setting

Let $X_1, \ldots, X_N$ be i.i.d. random variables with density $f$. We wish to approximate $f$ by a probability mass function, $\hat f_N$, using our data. The idea: we can approximate $f$ by taking

    \hat f_N(x) = \frac{1}{N} \sum_{i=1}^{N} \delta(x - X_i)                        (3)

where $\delta(x)$ is a Dirac delta distribution.

Claim: $\hat f_N$ is a good estimate of $f$. Here we define the "goodness" of our estimate by looking at

    E_{X \sim \hat f_N}[g(X)]                                                       (4)

and seeing how close it is to

    E_{X \sim f}[g(X)]                                                              (5)

for any function $g$ with $E|g(X)| < \infty$. We show that by this definition, (3) is a good estimate of $f$.

Proof:

    E_{X \sim \hat f_N}[g(X)] = \int_{-\infty}^{\infty} \hat f_N(x) g(x) \, dx      (6)
      = \int_{-\infty}^{\infty} \frac{1}{N} \sum_{i=1}^{N} \delta(x - X_i) g(x) \, dx   (7)
      = \frac{1}{N} \sum_{i=1}^{N} \int_{-\infty}^{\infty} \delta(x - X_i) g(x) \, dx   (8)
      = \frac{1}{N} \sum_{i=1}^{N} g(X_i).                                          (9)

By the law of large numbers,

    \frac{1}{N} \sum_{i=1}^{N} g(X_i) \overset{a.s.}{\longrightarrow} E_{X \sim f}[g(X)]   (10)

as $N \to \infty$. Hence,

    E_{X \sim \hat f_N}[g(X)] \to E_{X \sim f}[g(X)],                               (11)

so $\hat f_N$ is indeed a "good" estimate of $f$.

2.2 General Setting

Suppose now that $X_1, \ldots, X_N$ are i.i.d. with common density $q$, but we wish to estimate a different pdf $f$.

Theorem: The estimator

    \hat f_N(x) = \sum_{i=1}^{N} w_i \delta(x - X_i)                                (12)

with weights

    w_i = \frac{f(X_i)/q(X_i)}{\sum_{j=1}^{N} f(X_j)/q(X_j)}                        (13)

is a good estimator of $f$.
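As a quick numerical sanity check (not from the lecture), the sketch below estimates $E_{X \sim f}[g(X)]$ with $g(x) = x^2$ for a standard normal target $f$, using samples from a wider normal proposal $q$ together with the self-normalized weights of (13). The choice of densities, sample size, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target density f: standard normal N(0, 1).
def f(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Proposal density q: wider normal N(0, 2^2), from which we actually sample.
def q(x):
    return np.exp(-x**2 / 8) / np.sqrt(8 * np.pi)

N = 200_000
X = rng.normal(0.0, 2.0, size=N)   # i.i.d. samples from q, not from f

# Self-normalized importance weights, as in (13)
w = f(X) / q(X)
w /= w.sum()

# E_{X~f}[g(X)] for g(x) = x^2 is Var(X) = 1 for the standard normal
estimate = np.sum(w * X**2)
print(estimate)   # close to 1
```

Note that the weighted average automatically corrects for the mismatch between $q$ and $f$; with an unweighted average over these samples we would instead get roughly $4$, the variance of the proposal.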
Proof: It is easy to see that

    E_{X \sim \hat f_N}[g(X)] = \sum_{i=1}^{N} w_i g(X_i)                           (14)
      = \sum_{i=1}^{N} \frac{f(X_i)/q(X_i)}{\sum_{j=1}^{N} f(X_j)/q(X_j)} g(X_i)    (15)
      = \frac{\sum_{i=1}^{N} \frac{f(X_i)}{q(X_i)} g(X_i)}{\sum_{j=1}^{N} \frac{f(X_j)}{q(X_j)}}   (16)
      = \frac{\frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{q(X_i)} g(X_i)}{\frac{1}{N} \sum_{j=1}^{N} \frac{f(X_j)}{q(X_j)}}.   (17)

As $N \to \infty$, by the law of large numbers the above expression converges to

    \frac{E\left[\frac{f(X)}{q(X)} g(X)\right]}{E\left[\frac{f(X)}{q(X)}\right]}
      = \frac{\int_{-\infty}^{\infty} \frac{f(x)}{q(x)} g(x) q(x) \, dx}{\int_{-\infty}^{\infty} \frac{f(x)}{q(x)} q(x) \, dx}   (18)
      = \frac{\int_{-\infty}^{\infty} f(x) g(x) \, dx}{\int_{-\infty}^{\infty} f(x) \, dx}   (19)
      = \int_{-\infty}^{\infty} f(x) g(x) \, dx                                     (20)
      = E_{X \sim f}[g(X)].                                                         (21)

Hence,

    E_{X \sim \hat f_N}[g(X)] \to E_{X \sim f}[g(X)]                                (22)

as $N \to \infty$.

2.3 Implications

The above theorem is useful in situations where we cannot draw samples from $f$ directly, but we know that $f(x) = c \cdot h(x)$ for some function $h$ and constant $c$ (in many practical cases it is computationally prohibitive to obtain the normalization constant $c = (\int h(x) \, dx)^{-1}$). Then

    w_i = \frac{f(X_i)/q(X_i)}{\sum_{j=1}^{N} f(X_j)/q(X_j)}                        (23)
        = \frac{c \, h(X_i)/q(X_i)}{\sum_{j=1}^{N} c \, h(X_j)/q(X_j)}              (24)
        = \frac{h(X_i)/q(X_i)}{\sum_{j=1}^{N} h(X_j)/q(X_j)}.                       (25)

The constant $c$ cancels, so using these $w_i$'s we can approximate $f$ using only $h$ and samples drawn from a different distribution.

2.4 Application

Let $X \sim f_X$ and let $Y$ be the observation of $X$ through a memoryless channel characterized by $f_{Y|X}$. For MAP reconstruction of $X$, we must compute

    f_{X|Y}(x|y) = \frac{f_X(x) f_{Y|X}(y|x)}{\int_{-\infty}^{\infty} f_X(\tilde x) f_{Y|X}(y|\tilde x) \, d\tilde x}.   (26)

In general, this computation may be hard due to the integral in the denominator. But the denominator is just a constant. Hence, suppose we have $X_i$ i.i.d. $\sim f_X$ for $1 \le i \le N$. By the previous theorem, we can take

    \hat f_{X|y}(x) = \sum_{i=1}^{N} w_i(y) \delta(x - X_i)                         (27)

as an approximation, where

    w_i(y) = \frac{f_{X|y}(X_i)/f_X(X_i)}{\sum_{j=1}^{N} f_{X|y}(X_j)/f_X(X_j)}     (28)
      = \frac{\frac{f_X(X_i) f_{Y|X}(y|X_i)}{f_X(X_i)}}{\sum_{j=1}^{N} \frac{f_X(X_j) f_{Y|X}(y|X_j)}{f_X(X_j)}}   (29)
      = \frac{f_{Y|X}(y|X_i)}{\sum_{j=1}^{N} f_{Y|X}(y|X_j)}.                       (30)

Hence, using the samples $X_1, X_2, \ldots, X_N$, we can build a good approximation of $f_{X|Y}(\cdot|y)$ for fixed $y$ using the above procedure.
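A minimal sketch of the procedure in Section 2.4, under an assumed linear-Gaussian toy model (prior $X \sim N(0,1)$, observation $Y = X + V$ with $V \sim N(0, 0.5^2)$; these choices are illustrative, not from the lecture). For this model the exact posterior mean is available in closed form, giving a check on the approximation; note that, as (30) promises, only the likelihood $f_{Y|X}$ is ever evaluated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: X ~ N(0, 1), Y = X + V, V ~ N(0, sigma_v^2)
sigma_v = 0.5
N = 200_000
X = rng.normal(0.0, 1.0, size=N)   # i.i.d. samples from the prior f_X

y = 0.8                            # a fixed observation

# Weights as in (30): w_i(y) proportional to f_{Y|X}(y | X_i);
# the Gaussian normalizing constant cancels in the normalization.
w = np.exp(-(y - X)**2 / (2 * sigma_v**2))
w /= w.sum()

# Posterior mean E[X | Y = y] under the approximation (27)
post_mean = np.sum(w * X)

# Exact posterior mean for this linear-Gaussian model: y / (1 + sigma_v^2)
print(post_mean, y / (1 + sigma_v**2))
```

The two printed values agree to a couple of decimal places; the leftover discrepancy is Monte Carlo error, which shrinks as $N$ grows.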
Let us denote this approximation by

    \hat f_N(X_1, X_2, \ldots, X_N; f_X, f_{Y|X}, y) = \sum_{i=1}^{N} w_i \delta(x - X_i).   (31)

3 Particle Filtering

Particle filtering is a way to approximate the forward recursions of an HMP, using importance sampling to approximate the generated distributions rather than computing the true solutions.

    particles: generate $\{X_1^{(i)}\}_{i=1}^N$ i.i.d. $\sim f_{X_1}$;
    for $t \ge 1$ do
        $\hat\alpha_t = \hat F_N(\{X_t^{(i)}\}_{i=1}^N; \hat\beta_t, P_{Y_t|X_t}, y_t)$;
            // estimate $\alpha_t$ using importance sampling as in (31)
        nextParticles: generate $\{\tilde X_t^{(i)}\}_{i=1}^N$ i.i.d. $\sim \hat\alpha_t$;
        reset particles;
        for $1 \le i \le N$ do
            particles: generate $X_{t+1}^{(i)} \sim f_{X_{t+1}|X_t}(\cdot|\tilde X_t^{(i)})$;
        end
        $\hat\beta_{t+1} = \frac{1}{N} \sum_{i=1}^N \delta(x - X_{t+1}^{(i)})$;
    end
                            Algorithm 1: Particle Filter

This algorithm requires some art to use. It is not well understood how errors propagate through these successive approximations. In practice, the particles need to be periodically reset, and choosing these hyperparameters is an application-specific task.

4 Inference under Logarithmic Loss

Recall Bayesian decision theory, with the following setup:

• $X \in \mathcal{X}$ is something we want to infer;
• $X \sim p_X$ (assume that $X$ is discrete);
• $\hat x \in \hat{\mathcal{X}}$ is something we wish to reconstruct;
• $\Lambda: \mathcal{X} \times \hat{\mathcal{X}} \to \mathbb{R}$ is our loss function.

First, assume that we have no observation, and define the Bayesian response

    U(p_X) = \min_{\hat x} E[\Lambda(X, \hat x)] = \min_{\hat x} \sum_x p_X(x) \Lambda(x, \hat x)   (32)

and

    \hat X_{Bayes}(p_X) = \arg\min_{\hat x} E[\Lambda(X, \hat x)].                  (33)

Example: Let $\mathcal{X} = \hat{\mathcal{X}} = \mathbb{R}$ and $\Lambda(x, \hat x) = (x - \hat x)^2$. Then

    U(p_X) = \min_{\hat x} E_{X \sim p_X}[(X - \hat x)^2] = Var(X)                  (34)
    \hat X_{Bayes}(p_X) = E[X].                                                     (35)

Consider now the case where we have some observation (side information) $Y$, so that our reconstruction $\hat X$ can be a function of $Y$. Then

    E[\Lambda(X, \hat X(Y))] = E[E[\Lambda(X, \hat X(y)) \,|\, Y = y]]              (36)
      = \sum_y E[\Lambda(X, \hat X(y)) \,|\, Y = y] \, p_Y(y)                       (37)
      = \sum_y p_Y(y) \sum_x p_{X|Y=y}(x) \Lambda(x, \hat X(y)).                    (38)

This implies

    \min_{\hat X(\cdot)} E[\Lambda(X, \hat X(Y))]
      = \min_{\hat X(\cdot)} \sum_y p_Y(y) \sum_x p_{X|Y=y}(x) \Lambda(x, \hat X(y))   (39)
      = \sum_y p_Y(y) \min_{\hat x} \sum_x p_{X|Y=y}(x) \Lambda(x, \hat x)          (40)
      = \sum_y p_Y(y) U(p_{X|Y=y}),                                                 (41)

where the minimum moves inside the sum because $\hat X(y)$ can be chosen separately for each $y$. Thus

    \hat X_{opt}(y) = \hat X_{Bayes}(p_{X|Y=y}).                                    (42)
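To make Algorithm 1 from Section 3 concrete, here is a minimal bootstrap-style sketch on an assumed scalar linear-Gaussian HMP (all model parameters are illustrative, not from the lecture). Because the model is linear-Gaussian, the Kalman filter mentioned in Section 1.3 computes the exact filtering distribution, giving a ground truth against which the particle approximation can be compared.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy HMP: X_{t+1} = a X_t + W_t,  Y_t = X_t + N_t, all Gaussian
a, sw, sn = 0.9, 1.0, 1.0
T, Np = 50, 5000

# Simulate one state/observation trajectory
x = np.zeros(T); y = np.zeros(T)
x[0] = rng.normal()
for t in range(T):
    if t > 0:
        x[t] = a * x[t - 1] + sw * rng.normal()
    y[t] = x[t] + sn * rng.normal()

# --- Particle filter, following Algorithm 1 ---
particles = rng.normal(0.0, 1.0, size=Np)   # i.i.d. ~ f_{X_1}
pf_mean = np.zeros(T)
for t in range(T):
    # weight by the likelihood f_{Y_t|X_t}(y_t | particle), as in (30)
    w = np.exp(-(y[t] - particles)**2 / (2 * sn**2))
    w /= w.sum()
    pf_mean[t] = np.sum(w * particles)       # mean of alpha-hat_t
    # nextParticles: resample i.i.d. from alpha-hat_t ...
    idx = rng.choice(Np, size=Np, p=w)
    # ... then propagate through the state equation to form beta-hat_{t+1}
    particles = a * particles[idx] + sw * rng.normal(size=Np)

# --- Kalman filter (exact for this linear-Gaussian model) ---
m, P = 0.0, 1.0
kf_mean = np.zeros(T)
for t in range(T):
    K = P / (P + sn**2)                      # update with y_t
    m, P = m + K * (y[t] - m), (1 - K) * P
    kf_mean[t] = m
    m, P = a * m, a**2 * P + sw**2           # predict

print(np.max(np.abs(pf_mean - kf_mean)))     # small: PF tracks the exact filter
```

The resampling step here is the simplest choice (multinomial); as the notes caution, real applications involve art in choosing $N$, the resampling scheme, and when to reset the particles.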