Structural Break Detection in Time Series

Richard A. Davis, Thomas Lee, Gabriel Rodriguez-Yam
Colorado State University (http://www.stat.colostate.edu/~rdavis/lectures)

This research was supported in part by an IBM faculty award.

Outline

• Introduction
  - Background
  - Setup for AR models
• Model selection using minimum description length (MDL)
  - General principles
  - Application to AR models with breaks
• Optimization using genetic algorithms
  - Basics
  - Implementation for structural break estimation
• Simulation examples
  - Piecewise autoregressions
  - Slowly varying autoregressions
• Applications
  - Multivariate time series
  - EEG example
• Application to nonlinear models (parameter-driven state-space models)
  - Poisson model
  - Stochastic volatility model

Introduction

Structural breaks:

Kitagawa and Akaike (1978)
• fitting locally stationary autoregressive models using AIC
• computations facilitated by use of the Householder transformation

Davis, Huang, and Yao (1995)
• likelihood ratio test for a change in the parameters and/or order of an AR process

Kitagawa, Takanami, and Matsumoto (2001)
• signal extraction in seismology: estimating the arrival time of a seismic signal

Ombao, Raz, von Sachs, and Malow (2001)
• orthogonal complex-valued transforms that are localized in time and frequency: the smooth localized complex exponential (SLEX) transform
• applications to EEG time series and speech data

Introduction (cont)

Locally stationary:

Dahlhaus (1997, 2000, ...)
• locally stationary processes
• estimation

Adak (1998)
• piecewise stationary processes
• applications to seismology and biomedical signal processing

MDL and coding theory:

Lee (2001, 2002)
• estimation of discontinuous regression functions

Hansen and Yu (2001)
• model selection

Introduction (cont)

Time series: $y_1, \ldots, y_n$

Piecewise AR model:
$$Y_t = \gamma_j + \phi_{j1} Y_{t-1} + \cdots + \phi_{jp_j} Y_{t-p_j} + \sigma_j \varepsilon_t, \qquad \tau_{j-1} \le t < \tau_j,$$
where $\tau_0 = 1 < \tau_1 < \cdots < \tau_{m-1} < \tau_m = n + 1$, and $\{\varepsilon_t\}$ is IID(0,1).

Goal: estimate
  $m$ = number of segments
  $\tau_j$ = location of the $j$th break point
  $\gamma_j$ = level in the $j$th epoch
  $p_j$ = order of the AR process in the $j$th epoch
  $(\phi_{j1}, \ldots, \phi_{jp_j})$ = AR coefficients in the $j$th epoch
  $\sigma_j$ = scale in the $j$th epoch

Introduction (cont)

Motivation for using piecewise AR models: piecewise AR is a special case of a piecewise stationary process (see Adak 1998),
$$\tilde{Y}_{t,n} = \sum_{j=1}^{m} Y_t^{(j)}\, I_{[\tau_{j-1}, \tau_j)}(t/n),$$
where $\{Y_t^{(j)}\}$, $j = 1, \ldots, m$, is a sequence of stationary processes. It is argued in Ombao et al. (2001) that if $\{Y_{t,n}\}$ is a locally stationary process (in the sense of Dahlhaus), then there exists a piecewise stationary process $\{\tilde{Y}_{t,n}\}$, with $m_n \to \infty$ and $m_n / n \to 0$ as $n \to \infty$, that approximates $\{Y_{t,n}\}$ (in average mean square).

Roughly speaking, $\{Y_{t,n}\}$ is locally stationary if it has a time-varying spectrum that is approximately $|A(t/n, \omega)|^2$, where $A(u, \omega)$ is a continuous function in $u$.
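To make this setup concrete, here is a minimal simulation sketch of the piecewise AR model (Python; the function name and the two-segment parameters in the usage lines are illustrative choices, not from the slides):

```python
import numpy as np

def simulate_piecewise_ar(n, taus, gammas, phis, sigmas, seed=0):
    """Simulate Y_t = gamma_j + phi_{j1} Y_{t-1} + ... + phi_{jp_j} Y_{t-p_j}
    + sigma_j * eps_t for tau_{j-1} <= t < tau_j, with 0-based break points
    taus = [0, tau_1, ..., n] and IID standard normal innovations."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for j in range(len(taus) - 1):          # loop over segments
        for t in range(taus[j], taus[j + 1]):
            past = sum(phis[j][k] * y[t - 1 - k]
                       for k in range(len(phis[j])) if t - 1 - k >= 0)
            y[t] = gammas[j] + past + sigmas[j] * rng.standard_normal()
    return y

# Illustrative example: AR(1) with phi = 0.9, then AR(2) with (1.2, -0.5)
y = simulate_piecewise_ar(n=300, taus=[0, 150, 300], gammas=[0.0, 0.0],
                          phis=[[0.9], [1.2, -0.5]], sigmas=[1.0, 1.0])
```

Note that the first observations of each new segment are generated conditionally on values from the previous segment, matching the recursive definition above.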
Example: Monthly Deaths & Serious Injuries, UK

Data: $y_t$ = number of monthly deaths and serious injuries in the UK, Jan '75 to Dec '84 ($t = 1, \ldots, 120$).
Remark: seat belt legislation was introduced in Feb '83 ($t = 99$).

[Figure: monthly counts plotted against year, 1976-1984; y-axis "Counts", approximately 1200 to 2200.]

Example: Monthly Deaths & Serious Injuries, UK (cont)

Data: $x_t$ = the same series differenced at lag 12, Jan '75 to Dec '84 ($t = 13, \ldots, 120$).

Traditional regression analysis:
$$Y_t = a + b f(t) + W_t, \qquad f(t) = \begin{cases} 0, & \text{if } 1 \le t \le 98, \\ 1, & \text{if } 98 < t \le 120. \end{cases}$$

For the differenced series,
$$X_t = Y_t - Y_{t-12} = b\, g(t) + N_t, \qquad g(t) = \begin{cases} 1, & \text{if } 99 \le t \le 110, \\ 0, & \text{otherwise}. \end{cases}$$

[Figure: differenced counts plotted against year, 1976-1984; y-axis "Differenced Counts", approximately -600 to 200.]

Model: $b = -373.4$, $\{N_t\} \sim \mathrm{AR}(13)$.

Model Selection Using Minimum Description Length

Basics of MDL: choose the model that maximizes the compression of the data or, equivalently, select the model that minimizes the code length of the data (i.e., the amount of memory required to encode the data). Let

  $M$ = class of operating models for $y = (y_1, \ldots, y_n)$
  $L_F(y)$ = code length of $y$ relative to $F \in M$

Typically, this code length can be decomposed into two pieces (a two-part code),
$$L_F(y) = L(\hat{F} \mid y) + L(\hat{e} \mid \hat{F}),$$
where
  $L(\hat{F} \mid y)$ = code length of the fitted model for $F$
  $L(\hat{e} \mid \hat{F})$ = code length of the residuals based on the fitted model

Model Selection Using Minimum Description Length (cont)

Applied to the segmented AR model
$$Y_t = \gamma_j + \phi_{j1} Y_{t-1} + \cdots + \phi_{jp_j} Y_{t-p_j} + \sigma_j \varepsilon_t, \qquad \tau_{j-1} \le t < \tau_j.$$

First term $L(\hat{F} \mid y)$: let $n_j = \tau_j - \tau_{j-1}$ and $\psi_j = (\gamma_j, \phi_{j1}, \ldots, \phi_{jp_j}, \sigma_j)$ denote the length of the $j$th segment and the parameter vector of the $j$th AR process, respectively. Then
$$L(\hat{F} \mid y) = L(m) + L(\tau_1, \ldots, \tau_m) + L(p_1, \ldots, p_m) + L(\hat{\psi}_1 \mid y) + \cdots + L(\hat{\psi}_m \mid y)$$
$$= L(m) + L(n_1, \ldots, n_m) + L(p_1, \ldots, p_m) + L(\hat{\psi}_1 \mid y) + \cdots + L(\hat{\psi}_m \mid y).$$

Encoding:
• an integer $I$: $\log_2 I$ bits if $I$ is unbounded; $\log_2 I_U$ bits if $I$ is bounded by $I_U$
• an MLE $\hat{\theta}$: $\tfrac{1}{2} \log_2 N$ bits, where $N$ is the number of observations used to compute $\hat{\theta}$ (Rissanen 1989)

Model Selection Using Minimum Description Length (cont)

So,
$$L(\hat{F} \mid y) = \log_2 m + m \log_2 n + \sum_{j=1}^{m} \log_2 p_j + \sum_{j=1}^{m} \frac{p_j + 2}{2} \log_2 n_j.$$

Second term $L(\hat{e} \mid \hat{F})$: using Shannon's classical results in information theory, Rissanen demonstrates that the code length of $\hat{e}$ can be approximated by the negative of the log-likelihood of the fitted model, i.e., by
$$L(\hat{e} \mid \hat{F}) \approx \sum_{j=1}^{m} \frac{n_j}{2} \left( \log_2 (2\pi \hat{\sigma}_j^2) + 1 \right).$$

For fixed values of $m, (\tau_1, p_1), \ldots, (\tau_m, p_m)$, we therefore define
$$\mathrm{MDL}(m, (\tau_1, p_1), \ldots, (\tau_m, p_m)) = \log_2 m + m \log_2 n + \sum_{j=1}^{m} \log_2 p_j + \sum_{j=1}^{m} \frac{p_j + 2}{2} \log_2 n_j + \sum_{j=1}^{m} \frac{n_j}{2} \log_2 (2\pi \hat{\sigma}_j^2) + \frac{n}{2}.$$

The strategy is to find the segmentation that minimizes $\mathrm{MDL}(m, \tau_1, p_1, \ldots, \tau_m, p_m)$. To speed things up, we use Yule-Walker estimates of the AR parameters.

Optimization Using Genetic Algorithms

Basics of GA: a class of optimization algorithms that mimic natural evolution.
• Start with an initial set of chromosomes, or population, of possible solutions to the optimization problem.
• Parent chromosomes are randomly selected (with probabilities proportional to the ranks of their objective function values) and produce offspring using crossover or mutation operations.
• After a sufficient number of offspring are produced to form a second generation, the process restarts to produce a third generation, and so on.
• Based on Darwin's theory of natural selection, the process should produce future generations with smaller (or larger) objective function values.

Application to Structural Breaks

Genetic algorithm: a chromosome consists of $n$ genes, each taking the value $-1$ (no break) or $p$ (the order of an AR process). Use natural selection to find a near-optimal solution.

Map the break points to a chromosome $c$ via
$$(m, (\tau_1, p_1), \ldots, (\tau_m, p_m)) \longleftrightarrow c = (\delta_1, \ldots, \delta_n),$$
where
$$\delta_t = \begin{cases} -1, & \text{if there is no break point at } t, \\ p_j, & \text{if there is a break point at time } t = \tau_{j-1} \text{ and the AR order is } p_j. \end{cases}$$
(A code sketch that decodes a chromosome and evaluates its MDL follows below.)
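Putting the pieces together, a chromosome decodes into a segmentation whose MDL can then be evaluated; this is the fitness that the genetic algorithm minimizes. The sketch below is a minimal Python version under a few stated assumptions: segment means are removed rather than estimating $\gamma_j$ jointly, the innovation variance $\hat{\sigma}_j^2$ comes from Yule-Walker fits as the slides suggest, and the $\log_2 p_j$ term is taken as 0 when $p_j = 0$ (a convention the slides do not spell out):

```python
import numpy as np

def yule_walker(x, p):
    """Yule-Walker estimates of AR(p) coefficients and innovation variance
    for a mean-corrected segment x."""
    x = x - x.mean()
    n = len(x)
    # biased sample autocovariances r_0, ..., r_p
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    if p == 0:
        return np.array([]), r[0]
    R = np.array([[r[abs(i - k)] for k in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])
    return phi, r[0] - phi @ r[1:]

def decode(c):
    """Decode a chromosome (delta_1, ..., delta_n) into 0-based break points
    [0, tau_1, ..., n] and the list of AR orders (assumes c[0] != -1)."""
    breaks = [t for t, d in enumerate(c) if d != -1]
    return breaks + [len(c)], [c[t] for t in breaks]

def mdl(y, taus, orders):
    """MDL criterion for the segmentation (taus, orders), as defined above."""
    n, m = len(y), len(orders)
    total = np.log2(m) + m * np.log2(n)
    for j, p in enumerate(orders):
        seg = np.asarray(y[taus[j]:taus[j + 1]], dtype=float)
        nj = len(seg)
        _, sigma2 = yule_walker(seg, p)
        total += np.log2(p) if p > 0 else 0.0          # log2 p_j
        total += (p + 2) / 2 * np.log2(nj)             # parameter code length
        total += nj / 2 * np.log2(2 * np.pi * sigma2)  # residual code length
    return total + n / 2
```

For a chromosome c, `mdl(y, *decode(c))` then gives its fitness value.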
For example, with $n = 20$, the chromosome
  c = (2, -1, -1, -1, -1, 0, -1, -1, -1, -1, 0, -1, -1, -1, 3, -1, -1, -1, -1, -1),
with break points at t = 1, 6, 11, 15, corresponds to the following process: AR(2) for t = 1:5; AR(0) for t = 6:10; AR(0) for t = 11:14; AR(3) for t = 15:20.

Implementation of Genetic Algorithm

Generation 0: start with $L$ (= 200) randomly generated chromosomes $c_1, \ldots, c_L$ with associated MDL values $\mathrm{MDL}(c_1), \ldots, \mathrm{MDL}(c_L)$.

Generation 1: a new child in the next generation is formed from the chromosomes $c_1, \ldots, c_L$ of the previous generation as follows (a code sketch of this step appears at the end of the section):
• With probability $\pi_C$, crossover occurs: two parent chromosomes $c_i$ and $c_j$ are selected at random with probabilities proportional to the ranks of their MDL values; the $k$th gene of the child is $\delta_k = \delta_{i,k}$ w.p. 1/2 and $\delta_{j,k}$ w.p. 1/2.
• With probability $1 - \pi_C$, mutation occurs: a parent chromosome $c_i$ is selected, and the $k$th gene of the child is $\delta_k = \delta_{i,k}$ w.p. $\pi_1$; $-1$ w.p. $\pi_2$; and $p$ w.p. $1 - \pi_1 - \pi_2$.

Implementation of Genetic Algorithm (cont)

Execution of the GA: run the GA until convergence or until a maximum number of generations has been reached.

Various strategies:
• include the top ten chromosomes from the last generation in the next generation;
• use multiple islands, in which populations run independently, and then allow migration after a fixed number of generations. This implementation is amenable to parallel computing.

Simulation Examples (based on the Ombao et al. (2001) test cases)

1. Piecewise stationary. Consider a time series following the model
$$Y_t = \begin{cases} 0.9\, Y_{t-1} + \varepsilon_t, & \text{if } 1 \le t < 513, \\ 1.69\, Y_{t-1} - 0.81\, Y_{t-2} + \varepsilon_t, & \text{if } 513 \le t < 769, \\ 1.32\, Y_{t-1} - 0.81\, Y_{t-2} + \varepsilon_t, & \text{if } 769 \le t \le 1024, \end{cases}$$
where $\{\varepsilon_t\} \sim$ IID N(0,1).

[Figure: one realization of the series, t = 1, ..., 1024; values roughly between -10 and 10.]

1. Piecewise stationary (cont)

Implementation: start with NI = 50 islands, each with population size L = 200. After every $M_i = 5$ generations, allow migration: the worst 2 chromosomes in each island are replaced with the best 2 from the neighboring island (islands arranged in a ring, so island 2 receives from island 1, island 3 from island 2, and so on).

Stopping rule: stop when the best MDL value does not change for 10 consecutive migrations, or after 100 migrations.
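Finally, here is a minimal sketch of the child-generation step described above, assuming rank-based selection probabilities (the smallest MDL receives the top rank) and illustrative values for $\pi_C$, $\pi_1$, $\pi_2$, and the maximum AR order; apart from L = 200, none of these values are given on the slides:

```python
import numpy as np

def make_child(pop, mdls, pi_c=0.9, pi_1=0.4, pi_2=0.3, p_max=5, rng=None):
    """Produce one child from population `pop`, an (L, n) array of genes in
    {-1, 0, ..., p_max}, with corresponding MDL values `mdls` (lower is better)."""
    if rng is None:
        rng = np.random.default_rng()
    L, n = pop.shape
    # rank-based selection: best chromosome gets rank L, worst gets rank 1
    ranks = np.empty(L)
    ranks[np.argsort(mdls)] = np.arange(L, 0, -1)
    probs = ranks / ranks.sum()
    if rng.random() < pi_c:
        # crossover: each gene copied from either parent with probability 1/2
        i, j = rng.choice(L, size=2, replace=False, p=probs)
        mask = rng.random(n) < 0.5
        return np.where(mask, pop[i], pop[j])
    # mutation: keep the parent's gene w.p. pi_1, set -1 (no break) w.p. pi_2,
    # otherwise draw a fresh AR order p in {0, ..., p_max}
    child = pop[rng.choice(L, p=probs)].copy()
    u = rng.random(n)
    child[(u >= pi_1) & (u < pi_1 + pi_2)] = -1
    fresh = u >= pi_1 + pi_2
    child[fresh] = rng.integers(0, p_max + 1, size=int(fresh.sum()))
    return child
```

A full implementation would also enforce feasibility of each chromosome (the first gene must be a break, and every segment must be long enough to fit its AR order); those constraints are omitted here.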
