Uniform Noise Autoregressive Model Estimation
L. P. de Lima, J. C. S. de Miranda

Abstract

In this work we present a methodology for the estimation of the coefficients of an autoregressive model whose random component is assumed to be uniform white noise. The uniform noise hypothesis makes maximum likelihood estimation equivalent to a simple consistency requirement on the possible values of the coefficients given the data. In the case of the autoregressive model of order one, X(t+1) = aX(t) + ε(t), with i.i.d. ε(t) ∼ U[α, β], the estimator â of the coefficient a can be written down analytically. The performance of the estimator is assessed via simulation.

Keywords: Autoregressive model, Estimation of parameters, Simulation, Uniform noise.

1 Introduction

Autoregressive models have been extensively studied and one can find a vast literature on the subject. However, the majority of these works deal with autoregressive models where the innovations are assumed to be normal random variables or vectors. Comparatively few works are dedicated to these models in the case of uniform innovations.

2 Estimator construction

Let us consider the real valued autoregressive model of order one,

$$X(t+1) = aX(t) + \varepsilon(t+1),$$

where ε(t), t ∈ ℕ, are i.i.d. real random variables. Our aim is to estimate the real valued coefficient a given an observation of the process, i.e., a sequence (X(t))_{t∈[1:N]⊂ℕ}. We will suppose a ∈ A ⊂ ℝ. A standard way of accomplishing this is maximum likelihood estimation. Let us assume that the noise probability law admits a probability density.
Denote it f. The likelihood of a sequence is easily found to be

$$L\big((X(t))_{t\in[1:N]\subset\mathbb{N}} \mid a\big) = \prod_{t=1}^{N-1} f\big(X(t+1) - aX(t)\big),$$

and the log likelihood is thus given by

$$\ell\big((X(t))_{t\in[1:N]\subset\mathbb{N}} \mid a\big) = \sum_{t=1}^{N-1} \ln f\big(X(t+1) - aX(t)\big).$$

Taking derivatives with respect to a we obtain

$$\frac{\partial \ell\big((X(t))_{t\in[1:N]\subset\mathbb{N}} \mid a\big)}{\partial a} = \sum_{t=1}^{N-1} \frac{-X(t)\, f'\big(X(t+1) - aX(t)\big)}{f\big(X(t+1) - aX(t)\big)}.$$

Thus, in general, the MLE of a, â, is either among the solutions of

$$\sum_{t=1}^{N-1} \frac{X(t)\, f'\big(X(t+1) - aX(t)\big)}{f\big(X(t+1) - aX(t)\big)} = 0$$

or at the boundary of A, and these conditions may be fulfilled by more than one real value of a. Thus, in general, numerical methods are required to determine the value or values of a that maximize the likelihood.

We will apply MLE to determine the estimate of a in the case of uniform noise. In this case we have f = (1/(β−α)) χ_{[α,β]}. The maximum likelihood estimator can be directly determined by looking at the expression for the likelihood of the observation, which now reads

$$L\big((X(t))_{t\in[1:N]\subset\mathbb{N}} \mid a\big) = \frac{1}{(\beta-\alpha)^{N-1}} \prod_{t=1}^{N-1} \chi_{[\alpha,\beta]}\big(X(t+1) - aX(t)\big) \le \frac{1}{(\beta-\alpha)^{N-1}}.$$

Clearly this leads to

$$L\big((X(t))_{t\in[1:N]\subset\mathbb{N}} \mid a\big) = \frac{1}{(\beta-\alpha)^{N-1}}\, \chi_{[\Gamma,\Delta]}(a),$$

where

$$\Gamma = \left(\bigvee_{\{t\,:\,1\le t<N,\ X(t)<0\}} \frac{X(t+1)-\alpha}{X(t)}\right) \vee \left(\bigvee_{\{t\,:\,1\le t<N,\ X(t)>0\}} \frac{X(t+1)-\beta}{X(t)}\right)$$

and

$$\Delta = \left(\bigwedge_{\{t\,:\,1\le t<N,\ X(t)<0\}} \frac{X(t+1)-\beta}{X(t)}\right) \wedge \left(\bigwedge_{\{t\,:\,1\le t<N,\ X(t)>0\}} \frac{X(t+1)-\alpha}{X(t)}\right).$$

Thus, every number in the interval [Γ, Δ] is a maximum likelihood estimate of a. The likelihood that corresponds to any of these estimates is exactly 1/(β−α)^{N−1}, and the likelihood for every other possible estimate is zero. Now, we observe that the compatibility relations

$$\forall t,\ 1 \le t < N,\quad X(t+1) \in aX(t) + [\alpha, \beta]$$

are equivalent to a ∈ [Γ, Δ]. Thus, in the case of uniform noise, these simple compatibility conditions on the possible values of a imply maximum likelihood. Moreover, these relations guarantee that the event {a ∈ [Γ, Δ]} is the sure event; that is, not only is its probability one but also its complement is the empty set.
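The formulas for Γ and Δ translate directly into code: each observed step contributes one lower and one upper bound on a, with the direction of the inequalities flipped when X(t) is negative. A minimal sketch in Python (NumPy assumed; the function name `mle_interval` is ours, not from the paper):

```python
import numpy as np

def mle_interval(x, alpha, beta):
    """Compute [Gamma, Delta], the interval of maximum likelihood estimates
    of a, from an observed series x and uniform noise bounds [alpha, beta].

    Each step with X(t) > 0 forces (X(t+1)-beta)/X(t) <= a <= (X(t+1)-alpha)/X(t);
    steps with X(t) < 0 flip the two inequalities."""
    x = np.asarray(x, dtype=float)
    lo, hi = -np.inf, np.inf
    for xt, xt1 in zip(x[:-1], x[1:]):
        if xt > 0:
            lo = max(lo, (xt1 - beta) / xt)
            hi = min(hi, (xt1 - alpha) / xt)
        elif xt < 0:
            lo = max(lo, (xt1 - alpha) / xt)
            hi = min(hi, (xt1 - beta) / xt)
        # xt == 0 yields no constraint on a
    return lo, hi
```

By construction the true coefficient always lies in the returned interval, which is the sure-event property used below.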
This leads us to the choice of our estimator,

$$\hat{a} = \frac{\Gamma + \Delta}{2},$$

since this estimator satisfies the optimality requirement of minimum maximum error, and it does so with certainty.

3 Main results

All that has been said about estimation for uniform noise autoregressive processes is contained in the following theorem.

Theorem 3.1. Let (X(t))_{t∈[1:N]⊂ℕ}, N > 1, be an observation of a real autoregressive process X(t+1) = aX(t) + ε(t+1) with uniform i.i.d. noise in [α, β]. Then â = (Γ+Δ)/2 is a maximum likelihood estimator of a, and {a ∈ [Γ, Δ]} = {a ∈ â + [−δ, δ]}, where δ = (Δ−Γ)/2, is the sure event.

Proof: Direct consequence of the arguments in Section 2.

4 Estimator Performance

In this section we present some results of a simulation study of the estimator â. We are mainly interested in the behaviour of our estimator as a function of the real value of a and of the length of the time series at our disposal to calculate â. We have chosen to present here the simulation results for a = 0.7, a = −0.5 and a = 1.000001. These correspond to the typical damped behaviors, with and without oscillation, and to the case of steady divergence to infinity. Similar analyses can, of course, be performed for the other classes of trajectories of this stochastic process, depending on a. We have also chosen the series lengths N = 100, N = 350 and N = 1400. We summarize the results of the simulations in Figures 1, 2 and 3.

5 Final Remarks

We observe that some of the features of the estimator presented in this work are due to the boundedness of the support of the noise. Generalizations for AR(m) processes are possible.

References

[1] Pavelková, L., Kárný, M. Estimation of ARX model with uniform noise: algorithms and examples. Institute of Information Theory and Automation, Prague, Czech Republic.

[2] Pavelková, L. Examples of state and parameter estimation for linear model with uniform innovations.
Institute of Information Theory and Automation, Prague, Czech Republic.

Figure 1. Simulation results for a = 0.7 and different sample sizes (N = 100, 350 and 1400).

Figure 2. Simulation results for a = −0.5 and different sample sizes (N = 100, 350 and 1400).

Figure 3. Simulation results for a = 1.000001 and different sample sizes (N = 100, 350 and 1400).
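The simulation study of Section 4 is straightforward to reproduce. The following self-contained Python sketch (NumPy assumed; all function names are ours) generates a uniform-noise AR(1) path, computes â = (Γ+Δ)/2 together with the certain error bound δ = (Δ−Γ)/2, and reports both for the sample sizes used above, here for the true value a = 0.7 and noise U[−1, 1]:

```python
import numpy as np

def simulate_ar1(a, n, alpha, beta, rng):
    """Generate X(1), ..., X(n) from X(t+1) = a X(t) + eps(t+1), eps ~ U[alpha, beta]."""
    x = np.empty(n)
    x[0] = rng.uniform(alpha, beta)
    for t in range(n - 1):
        x[t + 1] = a * x[t] + rng.uniform(alpha, beta)
    return x

def estimate_a(x, alpha, beta):
    """Midpoint estimator a_hat = (Gamma + Delta)/2 and certain error bound delta."""
    lo, hi = -np.inf, np.inf
    for xt, xt1 in zip(x[:-1], x[1:]):
        if xt > 0:
            lo = max(lo, (xt1 - beta) / xt)
            hi = min(hi, (xt1 - alpha) / xt)
        elif xt < 0:
            lo = max(lo, (xt1 - alpha) / xt)
            hi = min(hi, (xt1 - beta) / xt)
    return (lo + hi) / 2, (hi - lo) / 2

rng = np.random.default_rng(0)
for n in (100, 350, 1400):
    x = simulate_ar1(0.7, n, -1.0, 1.0, rng)
    a_hat, delta = estimate_a(x, -1.0, 1.0)
    print(n, a_hat, delta)  # |a_hat - 0.7| <= delta holds with certainty
```

Note that δ typically shrinks as N grows, since every additional observed step can only tighten the interval [Γ, Δ]; this is the behaviour the figures illustrate.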