The CUSUM Algorithm - a Small Review (Pierre Granjon)


The CUSUM algorithm - a small review
Pierre Granjon
GIPSA-lab
March 13, 2014

To cite this version: Pierre Granjon. The CUSUM algorithm - a small review. 2013. hal-00914697. HAL Id: hal-00914697, https://hal.archives-ouvertes.fr/hal-00914697, submitted on 13 Mar 2014.

Contents

1 The CUSUM algorithm
  1.1 Algorithm
    1.1.1 The problem
    1.1.2 The different steps
    1.1.3 The whole algorithm
    1.1.4 Example
  1.2 Performance
    1.2.1 Criteria
    1.2.2 Optimality
  1.3 Practical considerations
    1.3.1 Dealing with unknowns
    1.3.2 Setting the parameters
    1.3.3 Two-sided algorithm
  1.4 Variations
    1.4.1 Fast initial response CUSUM
    1.4.2 Combined Shewhart-CUSUM
    1.4.3 Multivariate CUSUM
Bibliography

Chapter 1
The CUSUM algorithm

Lai [Lai 1995] and Basseville [Basseville 1993] give a full treatment of the field, from the simplest to the more complicated sequential detection procedures (from Shewhart control charts [Shewhart 1931] to the GLR algorithm first given in [Lorden 1971]).

1.1 Algorithm

It is Page who first proposed the different forms of the cumulative sum (CUSUM) algorithm in [Page 1954a]:
• direct or recursive forms,
• one- or two-sided forms.

1.1.1 The problem

Let X[n] be a discrete random signal with independent and identically distributed samples. Each of them follows a probability density function (PDF) p(x[n], θ) depending on a deterministic parameter θ (for example the mean µ_X or the variance σ²_X of X[n]). This signal may contain one abrupt change occurring at the change time n_c. This abrupt change is modeled by an instantaneous modification of the value of θ at the change time n_c. Therefore θ = θ0 before n_c, and θ = θ1 from n_c to the current sample.

Under these assumptions, the whole PDF of the signal p_X observed between the first sample x[0] and the current one x[k] can take two different forms.

• Under the no change hypothesis (H0), the PDF of X[n] is given by:

    p_{X|H_0} = \prod_{n=0}^{k} p(x[n], \theta_0).    (1.1)

• Under the one change hypothesis (H1), this PDF becomes:

    p_{X|H_1} = \prod_{n=0}^{n_c - 1} p(x[n], \theta_0) \prod_{n=n_c}^{k} p(x[n], \theta_1).    (1.2)

In this problem, the PDF of each sample p(x[n], θ) and the values of the parameter before (θ0) and after (θ1) the abrupt change are supposed to be known. Finally, the only unknowns to be determined are:
• the lack or occurrence of an abrupt change between n = 0 and n = k,
• the value of the possible change time n_c.
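To make the change model concrete, here is a minimal simulation sketch (not part of the original paper) for the common case where θ is the mean of a Gaussian signal with known variance. The parameter values, the change time and the use of NumPy/SciPy are illustrative assumptions, not prescriptions from the review:

```python
import numpy as np
from scipy.stats import norm

# Illustrative setting: Gaussian samples, theta is the mean
# (theta0 before the change, theta1 after it), variance known.
theta0, theta1, sigma = 0.0, 1.0, 1.0
n_c, k = 60, 99                      # assumed true change time and current index
rng = np.random.default_rng(0)

x = np.concatenate([rng.normal(theta0, sigma, n_c),
                    rng.normal(theta1, sigma, k + 1 - n_c)])

# Log of the whole-signal PDFs (1.1) and (1.2); working in the log domain
# avoids numerical underflow of the long products.
log_p_H0 = norm.logpdf(x, theta0, sigma).sum()
log_p_H1 = (norm.logpdf(x[:n_c], theta0, sigma).sum()
            + norm.logpdf(x[n_c:], theta1, sigma).sum())
print(log_p_H0, log_p_H1)            # for this simulated signal, H1 is far more likely
```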
In order to develop the desired algorithm, we follow an on-line approach. This means that the abrupt change possibly contained in the past of the signal has to be sequentially detected, sample after sample. Therefore, at each new sample x[k], one of the two previous hypotheses H0 and H1 has first to be decided. Then, in the case a change has been detected (H1 decided), the change time has to be estimated thanks to an estimator n̂_c. The general form of this sequential algorithm is given in algorithm 1.

    initialization
        if necessary
    end
    while the algorithm is not stopped do
        measure the current sample x[k]
        decide between H0 (no change) and H1 (one change)
        if H1 decided then
            store the detection time n_d ← k
            estimate the change time n̂_c
            stop or reset the algorithm
        end
    end
    Algorithm 1: general form of a sequential change detection algorithm.

Two important steps appear in this algorithm:
• detection step: how to decide between H0 and H1?
• estimation step: how to efficiently estimate the change time n_c?
These steps are further detailed in the two following sections.

1.1.2 The different steps

Detection step

The problem here is to decide between two possible hypotheses H0 and H1 from the measured samples x[0], ..., x[k], which is also termed a binary hypothesis testing problem. The solution, given by detection theory, is to use the so-called likelihood ratio test [Kay 1998, chap. 3]:

Test 1. Let the log-likelihood ratio L_X be defined by:

    L_X = \ln \frac{p_{X|H_1}}{p_{X|H_0}}.    (1.3)

Then, decide H1 if L_X > h (else H0), where h is a threshold set by the user.

In our case, the theoretical expression of L_X at the current sample x[k] is obtained by reporting (1.1) and (1.2) in (1.3):

    L_X[k, n_c] = \ln \frac{p_{X|H_1}[k, n_c]}{p_{X|H_0}[k]} = \sum_{n=n_c}^{k} \ln \frac{p(x[n], \theta_1)}{p(x[n], \theta_0)}.    (1.4)

Unfortunately, this quantity cannot be calculated since it depends on the unknown change time n_c. Once again, detection theory leads to an efficient solution, which consists of replacing all the unknowns in L_X by their maximum likelihood estimates. Such a test is termed the generalized likelihood ratio test, and is given by [Kay 1998, chap. 6]:

Test 2. Let the generalized log-likelihood ratio G_X be defined by:

    G_X[k] = \max_{1 \le n_c \le k} L_X[k, n_c] = \max_{1 \le n_c \le k} \sum_{n=n_c}^{k} \ln \frac{p(x[n], \theta_1)}{p(x[n], \theta_0)}.    (1.5)

Then, decide H1 if G_X[k] > h (else H0), where h is a threshold set by the user.

The quantity defined in (1.5) can now be calculated at each new sample and is thus used to decide between H0 and H1. It is also termed a decision function.

Estimation step

Once H1 has been decided and an abrupt change has been detected, the problem is to efficiently estimate the change time n_c from the measured samples x[0], ..., x[k]. One way to solve this problem is to use its maximum likelihood estimate, which is the value of n_c maximizing the likelihood p_{X|H_1}[k, n_c] [Kay 1993, chap. 7]:

    \hat{n}_c = \arg\max_{1 \le n_c \le k} p_{X|H_1}[k, n_c] = \arg\max_{1 \le n_c \le k} L_X[k, n_c] = \arg\max_{1 \le n_c \le k} \sum_{n=n_c}^{k} \ln \frac{p(x[n], \theta_1)}{p(x[n], \theta_0)}.    (1.6)
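As a concrete illustration of tests 1 and 2 (not taken from the paper), the following sketch evaluates the decision function (1.5) and the change-time estimate (1.6) by brute force for the Gaussian mean-change setting used above; the function name and its parameters are illustrative:

```python
import numpy as np
from scipy.stats import norm

def glr_decision(x, theta0, theta1, sigma):
    """Brute-force evaluation of the generalized log-likelihood ratio (1.5)
    and of the maximum likelihood change-time estimate (1.6)."""
    # Instantaneous log-likelihood ratios ln p(x[n], theta1) / p(x[n], theta0)
    s = norm.logpdf(x, theta1, sigma) - norm.logpdf(x, theta0, sigma)
    k = len(x) - 1
    # Sum over n = n_c .. k for every candidate change time n_c = 1 .. k
    tail_sums = np.array([s[nc:].sum() for nc in range(1, k + 1)])
    G = tail_sums.max()                    # decision function G_X[k]
    nc_hat = int(tail_sums.argmax()) + 1   # arg max over n_c
    return G, nc_hat

# Example usage (reusing the simulated signal x from the previous sketch):
# G, nc_hat = glr_decision(x, theta0=0.0, theta1=1.0, sigma=1.0)
# Decide H1 whenever G exceeds the user-chosen threshold h.
```

This direct evaluation costs O(k) per new sample; the direct and recursive forms derived next reduce the bookkeeping to a single running sum.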
1.1.3 The whole algorithm

Direct form

In order to obtain a simplified form of the whole algorithm, let's define from (1.4) the instantaneous log-likelihood ratio at time n:

    s[n] = L_X[n, n] = \ln \frac{p(x[n], \theta_1)}{p(x[n], \theta_0)},    (1.7)

and its cumulative sum from 0 to k:

    S[k] = \sum_{n=0}^{k} s[n].    (1.8)

From (1.7) and (1.8), the log-likelihood ratio L_X[k, n_c] defined in (1.4) can be rewritten as:

    L_X[k, n_c] = S[k] - S[n_c - 1].    (1.9)

Reporting this expression in (1.5) and (1.6), we obtain new expressions for the decision function G_X[k] and for the change time estimate n̂_c[k]:

    G_X[k] = S[k] - \min_{1 \le n_c \le k} S[n_c - 1],    (1.10)
    \hat{n}_c = \arg\min_{1 \le n_c \le k} S[n_c - 1].    (1.11)

Equation (1.10) shows that the decision function G_X[k] is the current value of the cumulative sum S[k] minus its current minimum value. Equation (1.11) shows that the change time estimate is the time following the current minimum of the cumulative sum. Therefore, each step composing the whole algorithm relies on the same quantity: the cumulative sum S[k] defined through (1.7) and (1.8). This explains the name of cumulative sum or CUSUM algorithm. By using equations (1.7), (1.8), (1.10) and (1.11) in algorithm 1, the direct form of the CUSUM algorithm (algorithm 2) is obtained.

    initialization
        set the detection threshold h > 0
        k = 0
    end
    while the algorithm is not stopped do
        measure the current sample x[k]
        s[k] = ln( p(x[k], θ1) / p(x[k], θ0) )
        S[k] = Σ_{n=0}^{k} s[n]
        G_X[k] = S[k] − min_{1≤n_c≤k} S[n_c − 1]
        if G_X[k] > h then
            n_d ← k
            n̂_c = arg min_{1≤n_c≤k} S[n_c − 1]
            stop or reset the algorithm
        end
        k = k + 1
    end
    Algorithm 2: CUSUM algorithm, direct form.

Recursive form

The previous algorithm can easily be rewritten in a recursive manner. Indeed, the cumulative sum (1.8) can be calculated through the simple recursive formula:

    S[k] = S[k-1] + s[k].    (1.12)

Moreover, as shown in [Basseville 1993, chap. 2], since the decision function is compared to a positive threshold h, it can be rewritten as:

    G_X[k] = \{ G_X[k-1] + s[k] \}^{+},    (1.13)

where {z}^+ = sup(z, 0). Finally, by using (1.12) and (1.13) in the direct form (algorithm 2), the recursive form of the CUSUM algorithm (algorithm 3) is obtained. Clearly, the obtained algorithm is very simple and very interesting to use in real-time or on-line applications.

    initialization
        set the detection threshold h > 0
        S[−1] = G_X[−1] = 0
        k = 0
    end
    while the algorithm is not stopped do
        measure the current sample x[k]
        s[k] = ln( p(x[k], θ1) / p(x[k], θ0) )
        S[k] = S[k−1] + s[k]
        G_X[k] = { G_X[k−1] + s[k] }^+
        if G_X[k] > h then
            n_d ← k
            n̂_c = arg min_{1≤n_c≤k} S[n_c − 1]
            stop or reset the algorithm
        end
        k = k + 1
    end
    Algorithm 3: CUSUM algorithm, recursive form.
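For completeness, here is a minimal, runnable sketch of the recursive form (algorithm 3), again assuming the Gaussian mean-change setting of the earlier snippets. The function name, its arguments and the threshold value are illustrative choices, not part of the original paper:

```python
import numpy as np
from scipy.stats import norm

def cusum_recursive(x, theta0, theta1, sigma, h):
    """Recursive CUSUM (algorithm 3) for a known Gaussian mean change.
    Returns (n_d, nc_hat): detection time and estimated change time,
    or (None, None) if no alarm is raised over the samples in x."""
    S, G = 0.0, 0.0              # cumulative sum S[k] and decision function G_X[k]
    S_min, nc_hat = 0.0, 0       # running minimum of S (S[-1] = 0 by convention)
    for k, xk in enumerate(x):
        s = norm.logpdf(xk, theta1, sigma) - norm.logpdf(xk, theta0, sigma)
        S += s                   # (1.12)
        G = max(G + s, 0.0)      # (1.13)
        if G > h:
            return k, nc_hat     # n_d = k, change time estimate from (1.11)
        if S < S_min:            # the estimate is the time following the minimum of S
            S_min, nc_hat = S, k + 1
    return None, None

if __name__ == "__main__":
    # Illustrative data: mean change from 0 to 1 at sample 60, unit variance.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.0, 1.0, 40)])
    # With h = 5, an alarm is typically raised shortly after the change.
    print(cusum_recursive(x, theta0=0.0, theta1=1.0, sigma=1.0, h=5.0))
```

Because each update only needs the previous values of S and G_X, the per-sample cost is constant, which is what makes the recursive form attractive for on-line use.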
Recommended publications
  • Quickest Change Detection
    Quickest Change Detection. Venugopal V. Veeravalli and Taposh Banerjee, ECE Department and Coordinated Science Laboratory, Urbana, IL (arXiv:1210.5552 [math.ST], 19 Oct 2012). Abstract: The problem of detecting changes in the statistical properties of a stochastic system and time series arises in various branches of science and engineering. It has a wide spectrum of important applications ranging from machine monitoring to biomedical signal processing. In all of these applications the observations being monitored undergo a change in distribution in response to a change or anomaly in the environment, and the goal is to detect the change as quickly as possible, subject to false alarm constraints. In this chapter, two formulations of the quickest change detection problem, Bayesian and minimax, are introduced, and optimal or asymptotically optimal solutions to these formulations are discussed. Then some generalizations and extensions of the quickest change detection problem are described. The chapter is concluded with a discussion of applications and open issues. I. Introduction: The problem of quickest change detection comprises three entities: a stochastic process under observation, a change point at which the statistical properties of the process undergo a change, and a decision maker that observes the stochastic process and aims to detect this change in the statistical properties of the process. A false alarm event happens when the change is declared by the decision maker before the change actually occurs. The general objective of the theory of quickest change detection is to design algorithms that can be used to detect the change as soon as possible, subject to false alarm constraints.
  • Unsupervised Network Anomaly Detection Johan Mazel
    Unsupervised network anomaly detection. Johan Mazel. Networking and Internet Architecture [cs.NI]. INSA de Toulouse, 2011. English. tel-00667654. HAL Id: tel-00667654, https://tel.archives-ouvertes.fr/tel-00667654, submitted on 8 Feb 2012. PhD thesis (Doctorat de l'Université de Toulouse) delivered by the Institut National des Sciences Appliquées de Toulouse (INSA de Toulouse), defended on Monday 19 December 2011. Title: Unsupervised network anomaly detection. Doctoral school: ED MITT, domain STIC: Réseaux, Télécoms, Systèmes et Architecture. Research unit: LAAS-CNRS. Thesis advisors: Philippe Owezarski, Yann Labit. Reviewers: Olivier Festor, Guy Leduc, Kensuke Fukuda. Other jury members: Christophe Chassot, Sandrine Vaton. Acknowledgements: I would like to first thank my PhD advisors for their help, support and for letting me lead my research toward my own direction. Their inputs and comments along the stages of this thesis have been highly valuable. I want to especially thank them for having been able to dive into my work without drowning, and then provide me useful remarks.
  • Statistical Methods for Change Detection - Michèle Basseville
    CONTROL SYSTEMS, ROBOTICS AND AUTOMATION, Vol. XVI: Statistical Methods for Change Detection. Michèle Basseville, Institut de Recherche en Informatique et Systèmes Aléatoires, Rennes, France. Keywords: change detection, statistical methods, early detection, small deviations, fault detection, fault isolation, component faults, statistical local approach, vibration monitoring. Contents: 1. Introduction (1.1 Motivations for Change Detection; 1.2 Motivations for Statistical Methods; 1.3 Three Types of Change Detection Problems); 2. Foundations - Detection (2.1 Likelihood Ratio and CUSUM Tests; 2.1.1 Hypotheses Testing; 2.1.2 On-line Change Detection; 2.2 Efficient Score for Small Deviations; 2.3 Other Estimating Functions for Small Deviations); 3. Foundations - Isolation (3.1 Isolation as Nuisance Elimination; 3.2 Isolation as Multiple Hypotheses Testing; 3.3 On-line Isolation); 4. Case Studies - Vibrations. Glossary. Bibliography. Biographical Sketch. Summary: Handling parameterized (or parametric) models for monitoring industrial processes is a natural approach to fault detection and isolation. A key feature of the statistical approach is its ability to handle noises and uncertainties. Modeling faults such as deviations in the parameter vector with respect to (w.r.t.) its nominal value calls for the use of statistical change detection and isolation methods. The purpose of this article is to introduce key concepts for the design of such algorithms. 1. Introduction. 1.1 Motivations for Change Detection: Many monitoring problems can be stated as the problem of detecting and isolating a change in the parameters of a static or dynamic stochastic system. The use of a model of the monitored system is reasonable, since many industrial processes rely on physical principles, which can be written in terms of (differential) equations, providing us with (dynamical) models.
  • A Survey of Change Detection Methods Based on Remote Sensing Images for Multi-Source and Multi-Objective Scenarios
    A Survey of Change Detection Methods Based on Remote Sensing Images for Multi-Source and Multi-Objective Scenarios. Yanan You, Jingyi Cao and Wenli Zhou, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China. Received: 11 June 2020; Accepted: 27 July 2020; Published: 31 July 2020. Abstract: Quantities of multi-temporal remote sensing (RS) images create favorable conditions for exploring urban change in the long term. However, diverse multi-source features and change patterns bring challenges to change detection in urban cases. In order to sort out the development of urban change detection, we survey the literature on change detection in the last five years, focusing on the disparate multi-source RS images and the multi-objective scenarios determined according to scene category. Based on the survey, a general change detection framework, including change information extraction, data fusion, and analysis of multi-objective scenarios modules, is summarized. Since the attributes of the input RS images affect the technical selection of each module, data characteristics and application domains across different categories of RS images are discussed first. On this basis, not only are the evolution process and relationship of the representative solutions elaborated in the module description, but, by emphasizing the feasibility of fusing diverse data and the manifold application scenarios, we also advocate a complete change detection pipeline. At the end of the paper, we conclude the current state of development and put forward possible research directions for urban change detection, in the hope of providing insights to the following research.
  • Change Detection Algorithms
    Change Detection Algorithms (chapter 2). In this chapter, we describe the simplest change detection algorithms. We consider a sequence of independent random variables (y_k) with a probability density p_θ(y) depending upon only one scalar parameter θ. Before the unknown change time t_0, the parameter θ is equal to θ_0, and after the change it is equal to θ_1. The problems are then to detect and estimate this change in the parameter. The main goal of this chapter is to introduce the reader to the design of on-line change detection algorithms, basically assuming that the parameter θ_0 before change is known. We start from elementary algorithms originally derived using an intuitive point of view, and continue with conceptually more involved but practically not more complex algorithms. In some cases, we give several possible derivations of the same algorithm. But the key point is that we introduce these algorithms within a general statistical framework, based upon likelihood techniques, which will be used throughout the book. Our conviction is that the early introduction of such a general approach in a simple case will help the reader to draw up a unified mental picture of change detection algorithms in more complex cases. In the present chapter, using this general approach and for this simplest case, we describe several on-line algorithms of increasing complexity. We also discuss the off-line point of view more briefly. The main example, which is carried through this chapter, is concerned with the detection of a change in the mean of an independent Gaussian sequence.
  • The Null Steady-State Distribution of the CUSUM Statistic
    The Null Steady-State Distribution of the CUSUM Statistic. O. A. Grigg and D. J. Spiegelhalter. June 4, 2007. Abstract: We develop an empirical approximation to the null steady-state distribution of the cumulative sum (CUSUM) statistic, defined as the distribution of values obtained by running a CUSUM with no upper boundary under the null state for an indefinite period of time. The derivation is part theoretical and part empirical and the approximation is valid for CUSUMs applied to Normal data with known variance (although the theoretical result is true in general for exponential family data, which we show in the Appendix). The result leads to an easily-applied formula for steady-state p-values corresponding to CUSUM values, where the steady-state p-value is obtained as the tail area of the null steady-state distribution and represents the expected proportion of time, under repeated application of the CUSUM to null data, that the CUSUM statistic is greater than some particular value. When designing individual charts with fixed boundaries, this measure could be used alongside the average run-length (ARL) value, which we show by way of examples may be approximately related to the steady-state p-value. For multiple CUSUM schemes, use of a p-value enables application of a signalling procedure that adopts a false discovery rate (FDR) approach to multiplicity control. Under this signalling procedure, boundaries on each individual chart change at every observation according to the ordering of CUSUM values across the group of charts. We demonstrate practical application of the steady-state p-value to a single chart where the example data are the number of earthquakes per year registering > 7 on the Richter scale, measured from 1900 to 1998.
  • Maximum Likelihood Estimator (MLE) for the Process Change Point Using the Step Change Likelihood Function for a Binomial Random Variable
    A robust change point estimator for binomial CUSUM control charts. Advisor: Takeshi Emura (江村剛志), Ph.D. Presenter: Yi Ting Ho. 2014/6/24. Outline: Introduction; Background (1. Binomial CUSUM control chart; 2. Maximum likelihood estimator; 3. Page's last zero estimator); Method (combining the CUSUM estimator and the MLE); Simulations (design closely follows those of Perry and Pignatiello (2005)); Data analysis (jewelry manufacturing data by Burr (1979)); Conclusion. Introduction: What is the change point? We consider observations that come from a binomial distribution with the same fraction nonconforming p. [Figure: CUSUM chart with decision threshold h = 12.043, showing the out-of-control signal and the change point.] In SPC, the np-chart is the most famous chart used to monitor the number of nonconforming items for industrial manufacturers. When the sample size is large and the defective rate is not too small, the np-chart works well. [Figure: np-chart with upper control limit, center line and lower control limit.] Page (1954, 1955) first suggested the CUSUM chart to estimate the change point. The binomial CUSUM control chart is a good alternative when small changes are important. Samuel and Pignatiello (2001) proposed a maximum likelihood estimator (MLE) for the process change point using the step change likelihood function for a binomial random variable.
  • Sequential (Quickest) Change Detection: Classical Results and New Directions, Liyan Xie, Shaofeng Zou, Yao Xie, and Venugopal V. Veeravalli
    Abstract: Online detection of changes in stochastic systems, referred to as sequential change detection or quickest change detection, is an important research topic in statistics, signal processing, and information theory, and has a wide range of applications. This survey starts with the basics of sequential change detection, and then moves on to generalizations and extensions of sequential change detection theory and methods. We also discuss some new dimensions that emerge at the intersection of sequential change detection with other areas, along with a selection of modern applications and remarks on open questions. I. Introduction: The efficient detection of abrupt changes in the statistical behavior of streaming data is a classical and fundamental problem in signal processing and statistics. [...] For this reason, the modern sequential change detection problem's scope has been extended far beyond its traditional setting, often challenging the assumptions made by classical methods. These challenges include complex spatial and temporal dependence of the data streams, transient and dynamic changes, high-dimensionality, and structured changes, as explained below. These challenges have fostered new advances in sequential change detection theory and methods in recent years. (1) Complex data distributions. In modern applications, sequential data could have a complex spatial and temporal dependency, for instance, induced by the network structure [16], [68], [167]. In social networks, dependencies are usually due to interaction and information diffusion [116]: users in the social network have behavior patterns influenced by their ...
  • Monitoring Active Portfolios: the CUSUM Approach, Thomas K. Philips
    Monitoring Active Portfolios: The CUSUM Approach. Thomas K. Philips, Chief Investment Officer, Paradigm Asset Management. Agenda: portfolio monitoring (problem formulation and description); sequential testing and process control; the CUSUM scheme (economic intuition, simplified theory and implementation); issues that arise in practice (optimality and robustness, causality). The investor's / CIO's problem: the investor is invested in, or responsible for, many active products, the fallout of the asset allocation / manager selection / sales / research / product development process. Lots of data come in from portfolio managers: returns, portfolio holdings, sector allocations, risk profiles, transactions, etc. It is not clear which portfolios merit extra attention; the investor / CIO will ideally focus on products that might be in trouble. Observations on the investment environment: to a first-order approximation the market is efficient, so performance measurement plays a vital role in evaluation. It is hard to differentiate luck from skill: with alpha (signal) = 1% and tracking error (noise) = 3%, a t-test for alpha > 0 requires N > 36 years of data to reach t > 2. This rests on subtle, and wrong, assumptions: that alpha and tracking error are stationary for 36 years, and that t > 2 is necessary to make a decision about the portfolio. Performance measurement in practice: performance measurement is rooted in classical statistics; benchmark-relative performance is measured over fixed and rolling intervals (3 to 5 years is common), trends in rolling benchmark or peer-group relative performance are extrapolated, and attention is focused on the underperforming products. This cannot identify regime changes or shifts in performance: t-stats are too low to be meaningful, and it is not clear whether attention is focused on the right products at the right time. Monitoring performance: performance monitoring is rooted in decision theory and hypothesis testing; the first step is to define what constitutes good and bad performance.
  • Robust Change Detection and Change Point Estimation for Poisson Count Processes, Marcus B. Perry
    Florida State University Libraries, Electronic Theses, Treatises and Dissertations, The Graduate School, 2004. Robust Change Detection and Change Point Estimation for Poisson Count Processes, by Marcus B. Perry. A dissertation submitted to the Department of Industrial Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The Florida State University, College of Engineering. Degree awarded: Summer Semester, 2004. Copyright 2004 Marcus B. Perry. All rights reserved. The members of the committee approved the dissertation, defended on May 28, 2004: Joseph J. Pignatiello Jr. (Professor Directing Dissertation), Anuj Srivastava (Outside Committee Member), James R. Simpson (Committee Member), Chuck Zhang (Committee Member); approved by Ben Wang, Chair, Department of Industrial and Manufacturing Engineering, and Ching-Jen Chen, Dean, College of Engineering. Acknowledgements: I would like to express my appreciation to Dr. Joseph Pignatiello and Dr. James Simpson for their excellent mentoring and contributions to this manuscript.
  • A Markovian Approach to Unsupervised Change Detection with Multiresolution and Multimodality SAR Data
    A Markovian Approach to Unsupervised Change Detection with Multiresolution and Multimodality SAR Data. David Solarna, Gabriele Moser and Sebastiano B. Serpico, Department of Electrical, Electronic, Telecommunication Engineering and Naval Architecture, University of Genoa, Via All'Opera Pia 11A, 16145 Genoa, Italy. These authors contributed equally to this work. Received: 4 October 2018; Accepted: 19 October 2018; Published: 23 October 2018. Abstract: In the framework of synthetic aperture radar (SAR) systems, current satellite missions make it possible to acquire images at very high and multiple spatial resolutions with short revisit times. This scenario conveys a remarkable potential in applications to, for instance, environmental monitoring and natural disaster recovery. In this context, data fusion and change detection methodologies play major roles. This paper proposes an unsupervised change detection algorithm for the challenging case of multimodal SAR data collected by sensors operating at multiple spatial resolutions. The method is based on Markovian probabilistic graphical models, graph cuts, linear mixtures, generalized Gaussian distributions, Gram–Charlier approximations, maximum likelihood and minimum mean squared error estimation. It benefits from the SAR images acquired at multiple spatial resolutions and with possibly different modalities on the considered acquisition times to generate an output change map at the finest observed resolution. This is accomplished by modeling the statistics of the data at the various spatial scales through appropriate generalized Gaussian distributions and by iteratively estimating a set of virtual images that are defined on the pixel grid at the finest resolution and would be collected if all the sensors could work at that resolution.
  • The Max-CUSUM Chart
    The Max-CUSUM Chart. Smiley W. Cheng, Department of Statistics, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2; Keoagile Thaga, Department of Statistics, University of Botswana, Private Bag UB00705, Gaborone, Botswana. (Int. Statistical Inst.: Proc. 58th World Statistical Congress, 2011, Dublin, Session STS041.) Abstract: Control charts have been widely used in industries to monitor process quality. We usually use two control charts to monitor the process: one chart is used for monitoring the process mean and another for monitoring process variability, when dealing with variables data. A single Cumulative Sum (CUSUM) control chart capable of detecting changes in both mean and standard deviation, referred to as the Max-CUSUM chart, is proposed. This chart is based on standardizing the sample means and standard deviations. This chart uses only one plotting variable to monitor both parameters. The proposed chart is compared with other recently developed single charts. Comparisons are based on the average run lengths. The Max-CUSUM chart detects small shifts in the mean and standard deviation quicker than the Max-EWMA chart and the Max chart. This makes the Max-CUSUM chart more applicable in modern production processes where high quality goods are produced with a very low fraction of nonconforming products and there is high demand for good quality. 1. Introduction: Control charts are basic and most powerful tools in statistical process control (SPC) and are widely used for monitoring quality characteristics of production processes.