

Article

Comparing Parameter Estimation of Random Coefficient Autoregressive Model by Frequentist Method

Autcha Araveeporn

Department of Statistics, Faculty of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand; [email protected]

Received: 20 November 2019; Accepted: 20 December 2019; Published: 2 January 2020

Abstract: This paper compares frequentist methods, namely the least-squares method and the maximum likelihood method, for estimating the unknown parameters of the Random Coefficient Autoregressive (RCA) model. The frequentist methods depend on the likelihood, drawing conclusions from the observed data by emphasizing the frequency or proportion of the data. The method of least squares is often used to estimate the parameters under the frequentist approach; the minimum of the sum of squared residuals is found by setting the derivatives to zero. The maximum likelihood method uses the observed data to estimate the parameters of a distribution by maximizing the likelihood function under the normal distribution, the estimator being obtained by differentiating the likelihood function with respect to the parameters. The efficiency of the two methods is assessed by the average mean square error for simulated data and by the mean square error for actual data. For the simulation, data are generated only from the first-order RCA model. The results show that the least-squares method performs better than the maximum likelihood method: the average mean square error of the least-squares method attains the minimum values in all cases, which indicates its performance. Finally, these methods are applied to actual data. The series of monthly averages of the Stock Exchange of Thailand (SET) index and the daily exchange rate of Baht/Dollar are considered for estimation and forecasting based on the RCA model. The result again shows that the least-squares method outperforms the maximum likelihood method.

Keywords: least-squares method; maximum likelihood method; random coefficient autoregressive model

1. Introduction

The modeling of time series data has been applied in the fields of finance, business, and economics. Normally, time series data exhibit changing behavior such as trends, volatility, stationarity, nonstationarity, and random walks, especially when the data form a sequence taken at successive, equally spaced points in time. Modeling time series data can help estimate parameters and forecast future values. Models widely used to fit stationary data are the Autoregressive (AR) and Moving Average (MA) models. The Autoregressive Moving Average (ARMA) model provides a parsimonious description of a weakly stationary stochastic process. When the time series data show evidence of non-stationarity, the Autoregressive Integrated Moving Average (ARIMA) model can be applied to eliminate the non-stationarity. These models have some problems with overspecifying the model and estimating the integration parameter. The Conditional Heteroscedastic Autoregressive Moving Average (CHARMA) model [1] is an alternative way to model the data when volatility arises. Another model related to the CHARMA model is the Random Coefficient Autoregressive (RCA) model studied by Nicholls

Mathematics 2020, 8, 62; doi:10.3390/math8010062; www.mdpi.com/journal/mathematics

and Quinn [2]. Normally, the RCA model concentrates on the past of the time series data to determine the order and obtain estimates of the unknown parameters as a volatility model. For parameter estimation, there has been increasing interest in the unknown parameters of the RCA model. The estimators of Nicholls and Quinn [3] were shown to be strongly consistent and to satisfy a central limit theorem; by means of a two-stage regression procedure, estimates of the unknown parameters of this model are obtained. Hwang and Basawa [4] studied the generalized random coefficient autoregressive process in which the error process is introduced; conditional least-squares and weighted least-squares estimators are derived to estimate the unknown parameters. Aue, Horvath, and Steinebach [5] proposed the quasi-maximum likelihood method to estimate the parameters of an RCA model of order 1; strong consistency and asymptotic normality of the estimator are derived. Under the frequentist approach, parameters and hypotheses are viewed as unknown but fixed quantities and, consequently, there is no possibility of making probability statements about the unknowns [6]. The most popular frequentist methods for estimating the parameters of many models are Least Squares (LS) and Maximum Likelihood (ML). The least-squares method minimizes the sum of squared residuals, and the estimate of the unknown parameter is obtained by differentiation. The maximum likelihood method is a common method in statistics; the likelihood function of the observed data is constructed and then differentiated, as in the least-squares method.
The frequentist approach was applied to the random coefficient model [7], for which two estimation methods for the coefficients of the explanatory variables were proposed, namely generalized least squares and the maximum likelihood estimator for the covariance matrix. The random coefficient model uses dependent and explanatory variables, whereas the random coefficient autoregressive model involves just one variable. This study considers the frequentist methods for estimating the parameters of the RCA model based on the least-squares and maximum likelihood methods. The performance of these methods is evaluated by the Average Mean Square Error (AMSE) on simulated data and by the Mean Square Error (MSE) on real data.

2. The Random Coefficient Autoregressive (RCA) Model

The Random Coefficient Autoregressive (RCA) model of order p [2] is written as

x_t = \alpha + \sum_{i=1}^{p} \beta_{ti} x_{t-i} + \varepsilon_t, \quad t = 2, 3, \ldots, n. \qquad (1)

For Equation (1), the following assumptions are made.

(i) \{\varepsilon_t; t = 2, 3, \ldots, n\} is an independent sequence of random variables with mean zero and covariance matrix G.

(ii) The constant \alpha is fixed.

(iii) \beta_t = (\beta_{t1}, \beta_{t2}, \ldots, \beta_{tp})' is an independent sequence with mean zero and covariance matrix C.

(iv) \beta_t = (\beta_{t1}, \beta_{t2}, \ldots, \beta_{tp})' is also independent of \{\varepsilon_t; t = 2, 3, \ldots, n\}.

Wang and Ghosh [8] suggested \beta_t = \mu_\beta + \Omega_\beta u_t, where \alpha is a constant value and \beta_t = (\beta_{t1}, \beta_{t2}, \ldots, \beta_{tp})' is a sequence of independent random vectors with mean \mu_\beta = (\mu_{t1}, \mu_{t2}, \ldots, \mu_{tp})' and covariance matrix \Omega_\beta.

In this paper, we consider the simplest case, the first-order model RCA1, as follows:

x_t = \alpha + \beta_{t1} x_{t-1} + \varepsilon_t, \quad t = 2, 3, \ldots, n,
\qquad \beta_{t1} = \mu_\beta + \sigma_\beta v_t, \qquad (2)

where the \beta_{t1}'s are iid (independent and identically distributed) random variables with mean \mu_\beta and variance \sigma_\beta^2, and the \varepsilon_t's are iid random variables with mean zero and variance \sigma_\varepsilon^2. The RCA1 model can be rewritten as

x_t = \alpha + \beta_{t1} x_{t-1} + \varepsilon_t = \alpha + \mu_\beta x_{t-1} + u_t, \qquad (3)

where u_t = \sigma_\beta v_t x_{t-1} + \varepsilon_t, and v_t is a random variable with mean zero and unit variance, independent of \varepsilon_t.
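As a concrete illustration, the recursion in Equations (2) and (3) is straightforward to simulate: a fresh coefficient \beta_{t1} = \mu_\beta + \sigma_\beta v_t is drawn at every step. The sketch below uses Python with NumPy; the function name and parameter values are illustrative, not part of the original study.

```python
import numpy as np

def simulate_rca1(n, alpha, mu_beta, sigma_beta, sigma_eps, x0=0.0, seed=0):
    """Simulate x_t = alpha + (mu_beta + sigma_beta * v_t) * x_{t-1} + eps_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        beta_t = mu_beta + sigma_beta * rng.standard_normal()  # random coefficient beta_t1
        eps_t = sigma_eps * rng.standard_normal()               # innovation eps_t
        x[t] = alpha + beta_t * x[t - 1] + eps_t
    return x

# A path that is second-order stationary since mu_beta^2 + sigma_beta^2 < 1.
path = simulate_rca1(500, alpha=0.5, mu_beta=0.5,
                     sigma_beta=np.sqrt(0.1), sigma_eps=np.sqrt(0.1))
```

Note that, unlike a fixed-coefficient AR(1), the conditional variance of each step grows with x_{t-1}^2, which is what makes the RCA model useful for volatility.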

3. Parameter Estimations

For parameter estimation of the RCA1 model, we present the least-squares method and the maximum likelihood method.

3.1. Least-Squares Method

The classical method for estimating the parameters of a fitted model is the least-squares method. Araveeporn [9] proposed the least-squares criterion to estimate the parameters of the random coefficient dynamic regression model; the proposed estimator is asymptotically unbiased. Hsiao [7] proposed the random coefficient model that consists of the dependent variable y_{it} for the cross-section unit and the explanatory variable x_{ikt}. The model is

y_{it} = \sum_{k=1}^{K} (\beta_k + \delta_{ik} + \gamma_{kt}) x_{ikt} + \varepsilon_{it}, \quad i = 1, \ldots, N; \; t = 1, \ldots, T. \qquad (4)

Equation (4) can be written in matrix form as

Y = X\beta + \underset{\sim}{X}\delta + \tilde{X}\gamma + \varepsilon = X\beta + e, \qquad (5)

where e = \underset{\sim}{X}\delta + \tilde{X}\gamma + \varepsilon and E(ee') = \Omega. If the covariance matrix \Omega is known, the best linear unbiased estimator of \beta is computed from the least-squares criterion as

b = (X' \Omega^{-1} X)^{-1} X' \Omega^{-1} Y. \qquad (6)

For the random coefficient autoregressive (RCA) model, we suggest the least-squares criterion to approximate the parameter \theta = (\alpha, \mu_\beta)' by minimizing the sum of squared residuals. The RCA1 model in Equation (2) can be written in matrix form as the regression model

Y = X\theta + \varepsilon, \qquad (7)

where

Y = \begin{pmatrix} x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}, \quad
X = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_{n-1} \end{pmatrix}, \quad
\theta = \begin{pmatrix} \alpha \\ \mu_\beta \end{pmatrix}, \quad
\varepsilon = \begin{pmatrix} \varepsilon_2 \\ \varepsilon_3 \\ \vdots \\ \varepsilon_n \end{pmatrix}.

The RCA1 estimator \hat\theta' = (\hat\alpha, \hat\mu_\beta) is obtained by \hat\theta = (X'X)^{-1} X'Y [10]. Then

\hat\theta = \begin{pmatrix} \hat\alpha \\ \hat\mu_\beta \end{pmatrix}
= \begin{pmatrix} n & \sum_{t=2}^{n} x_{t-1} \\ \sum_{t=2}^{n} x_{t-1} & \sum_{t=2}^{n} x_{t-1}^{2} \end{pmatrix}^{-1}
\begin{pmatrix} \sum_{t=2}^{n} x_t \\ \sum_{t=2}^{n} x_{t-1} x_t \end{pmatrix}
= \frac{1}{n \sum_{t=2}^{n} x_{t-1}^{2} - \left( \sum_{t=2}^{n} x_{t-1} \right)^{2}}
\begin{pmatrix} \sum_{t=2}^{n} x_{t-1}^{2} \sum_{t=2}^{n} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=2}^{n} x_{t-1} x_t \\ n \sum_{t=2}^{n} x_{t-1} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=2}^{n} x_t \end{pmatrix}.

The least-squares estimates \hat\alpha_{LS} and \hat\mu_{\beta,LS} are obtained by

\hat\alpha_{LS} = \frac{\sum_{t=2}^{n} x_{t-1}^{2} \sum_{t=2}^{n} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=2}^{n} x_{t-1} x_t}{n \sum_{t=2}^{n} x_{t-1}^{2} - \left( \sum_{t=2}^{n} x_{t-1} \right)^{2}},

and

\hat\mu_{\beta,LS} = \frac{n \sum_{t=2}^{n} x_{t-1} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=2}^{n} x_t}{n \sum_{t=2}^{n} x_{t-1}^{2} - \left( \sum_{t=2}^{n} x_{t-1} \right)^{2}}.

For the RCA1 model, the fitted model is

\hat{x}_t = \hat\alpha_{LS} + \hat\mu_{\beta,LS} x_{t-1}, \quad t = 2, 3, \ldots, n.

3.2. Maximum Likelihood Method

The maximum likelihood method has been widely used in statistics for estimating the parameters of a distribution function. Araveeporn [11] developed the maximum likelihood method to estimate the parameters of the random coefficient dynamic regression model. For the random coefficient model [7] in Equation (4), the maximum likelihood estimator is obtained by assuming that \delta_{ik} \sim N(0, \Delta), \gamma_{kt} \sim N(0, \Gamma), and \varepsilon_{it} \sim N(0, \sigma^2). Then e = \underset{\sim}{X}\delta + \tilde{X}\gamma + \varepsilon is normally distributed, and the density of y_{it} is

\frac{1}{(2\pi)^{NT/2} |\Omega|^{1/2}} \exp\left\{ -\frac{1}{2} (Y - X\beta)' \Omega^{-1} (Y - X\beta) \right\}.

Maximum likelihood estimation of \beta, \Delta, \Gamma, and \sigma^2 requires the solution of highly nonlinear equations. For any set of observations \{x_1, \ldots, x_n\} of the RCA model, let L(\theta) be the likelihood function based on the joint probability density of the observed data, in which the unknown parameters are held fixed.

Let \gamma_{t-1} denote the information given by the past of the series up to time t-1, and let u_t = x_t - \mu_\beta x_{t-1}. It can be shown that

E(u_t \mid \gamma_{t-1}) = 0, \quad E(u_t^2 \mid \gamma_{t-1}) = \sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2, \quad \text{and} \quad \mathrm{Var}(u_t \mid \gamma_{t-1}) = \sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2.

The estimator of Nicholls and Quinn [3] is strongly consistent and satisfies a central limit theorem, so the likelihood function is considered in terms of the normal distribution. The maximum likelihood method considers the likelihood function

L(\theta) = L(\theta \mid x_t, x_{t-1}) = \prod_{t=2}^{n} f(x_t \mid x_{t-1})
= \left( \frac{1}{2\pi} \right)^{n/2} \prod_{t=2}^{n} \left( \sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2 \right)^{-1/2} \exp\left\{ -\frac{1}{2} \sum_{t=2}^{n} \frac{(x_t - \alpha - \mu_\beta x_{t-1})^2}{\sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2} \right\}. \qquad (8)

From (8), construct a new likelihood function by setting \sigma^2 = \sigma_\varepsilon^2 = \sigma_\beta^2, so that it can be written as

L(\theta) = \left( \frac{1}{2\pi} \right)^{n/2} \prod_{t=2}^{n} \left( \sigma^2 (1 + x_{t-1}^2) \right)^{-1/2} \exp\left\{ -\frac{1}{2\sigma^2} \sum_{t=2}^{n} \frac{(x_t - \alpha - \mu_\beta x_{t-1})^2}{1 + x_{t-1}^2} \right\}. \qquad (9)

From (9), taking the logarithm of the likelihood function gives

\ln L(\theta) = -\frac{n}{2} \ln(2\pi) - \frac{n}{2} \ln \sigma^2 - \frac{1}{2} \sum_{t=2}^{n} \ln(1 + x_{t-1}^2) - \frac{1}{2\sigma^2} \sum_{t=2}^{n} \frac{(x_t - \alpha - \mu_\beta x_{t-1})^2}{1 + x_{t-1}^2}. \qquad (10)

From (10), differentiating with respect to the parameters \alpha and \mu_\beta gives

\frac{\partial}{\partial \alpha} \ln L(\theta) = \frac{1}{\sigma^2} \sum_{t=2}^{n} \frac{x_t - \alpha - \mu_\beta x_{t-1}}{1 + x_{t-1}^2},
\qquad
\frac{\partial}{\partial \mu_\beta} \ln L(\theta) = \frac{1}{\sigma^2} \sum_{t=2}^{n} \frac{(x_t - \alpha - \mu_\beta x_{t-1}) x_{t-1}}{1 + x_{t-1}^2}.

Now we set

\frac{\partial \ln L(\theta)}{\partial (\alpha, \mu_\beta)} = 0.

We obtain \hat\alpha from

\frac{\partial}{\partial \alpha} \ln L(\theta) = \sum_{t=2}^{n} \frac{x_t}{1 + x_{t-1}^2} - \alpha \sum_{t=2}^{n} \frac{1}{1 + x_{t-1}^2} - \mu_\beta \sum_{t=2}^{n} \frac{x_{t-1}}{1 + x_{t-1}^2} = 0.

Then

\hat\alpha = \frac{\sum_{t=2}^{n} \frac{x_t}{1 + x_{t-1}^2} - \mu_\beta \sum_{t=2}^{n} \frac{x_{t-1}}{1 + x_{t-1}^2}}{\sum_{t=2}^{n} \frac{1}{1 + x_{t-1}^2}}. \qquad (11)

We obtain \hat\mu_\beta from

\frac{\partial}{\partial \mu_\beta} \ln L(\theta) = \sum_{t=2}^{n} \frac{x_t x_{t-1}}{1 + x_{t-1}^2} - \alpha \sum_{t=2}^{n} \frac{x_{t-1}}{1 + x_{t-1}^2} - \mu_\beta \sum_{t=2}^{n} \frac{x_{t-1}^2}{1 + x_{t-1}^2} = 0.

Then

\hat\mu_\beta = \frac{\sum_{t=2}^{n} \frac{x_t x_{t-1}}{1 + x_{t-1}^2} - \alpha \sum_{t=2}^{n} \frac{x_{t-1}}{1 + x_{t-1}^2}}{\sum_{t=2}^{n} \frac{x_{t-1}^2}{1 + x_{t-1}^2}}. \qquad (12)

From (11) and (12), these can be rewritten as

\hat\alpha = \frac{c_1 - \hat\mu_\beta c_2}{c_3} \quad \text{and} \quad \hat\mu_\beta = \frac{c_5 - \hat\alpha c_2}{c_4},

where

c_1 = \sum_{t=2}^{n} \frac{x_t}{1 + x_{t-1}^2}, \quad
c_2 = \sum_{t=2}^{n} \frac{x_{t-1}}{1 + x_{t-1}^2}, \quad
c_3 = \sum_{t=2}^{n} \frac{1}{1 + x_{t-1}^2}, \quad
c_4 = \sum_{t=2}^{n} \frac{x_{t-1}^2}{1 + x_{t-1}^2}, \quad
c_5 = \sum_{t=2}^{n} \frac{x_t x_{t-1}}{1 + x_{t-1}^2}.

Lastly, solving these two equations simultaneously, we obtain the two estimators

\hat\alpha_{ML} = \frac{c_1 c_4 - c_2 c_5}{c_3 c_4 - c_2^2} \quad \text{and} \quad \hat\mu_{\beta,ML} = \frac{c_3 c_5 - c_1 c_2}{c_3 c_4 - c_2^2}.

For the fitted values of the RCA1 model, we can write

\hat{x}_t = \hat\alpha_{ML} + \hat\mu_{\beta,ML} x_{t-1}, \quad t = 2, 3, \ldots, n.

4. Simulation Study

The objective of this study is to estimate the parameter \theta = (\alpha, \mu_\beta) of the RCA1 model by using the least-squares and maximum likelihood methods. Results are reported comparing these estimators for sample sizes 100, 300, and 500. The Mean Square Error (MSE) is evaluated as the mean square of the difference between the estimated values and the simulated values. We compute the MSE as the criterion defined as follows:

MSE_j = \frac{\sum_{t=2}^{n} (x_t - \hat{x}_t)^2}{n - 1},

where x_t denotes the simulated values, \hat{x}_t denotes the estimated values, and j indexes the replications. The simulation study is divided into two parts. In the first part, we generated data x_t, t = 1, 2, \ldots, n, from the RCA1 model as

x_t = \alpha + \mu_\beta x_{t-1} + u_t, \quad u_t \sim N(0, \sigma_u^2), \quad \sigma_u^2 = \sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2.

The parameters of the RCA1 model are specified in four cases:

1. \alpha = 0.5, \mu_\beta = 0.5, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.5
2. \alpha = 0.5, \mu_\beta = 0.5, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.1
3. \alpha = 0.5, \mu_\beta = 1, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01
4. \alpha = 0, \mu_\beta = 0, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01
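Under these settings, one replication of the study can be sketched as follows. The ML score equations (11) and (12) are linear in (\alpha, \mu_\beta), so rather than iterating between them, one can solve the equivalent 2 x 2 system built from the weights 1/(1 + x_{t-1}^2). The function names below are illustrative, not from the original study.

```python
import numpy as np

def estimate_rca1(x):
    """Estimate (alpha, mu_beta) by least squares and by the ML equations (11)-(12)."""
    x = np.asarray(x, dtype=float)
    y, xlag = x[1:], x[:-1]
    # Least squares: regress x_t on (1, x_{t-1}).
    X = np.column_stack([np.ones_like(xlag), xlag])
    theta_ls = np.linalg.solve(X.T @ X, X.T @ y)
    # ML: equations (11)-(12) form the linear system
    # [c3 c2; c2 c4] (alpha, mu_beta)' = (c1, c5)' with weights 1/(1 + x_{t-1}^2).
    w = 1.0 / (1.0 + xlag**2)
    c1, c2, c3 = np.sum(w * y), np.sum(w * xlag), np.sum(w)
    c4, c5 = np.sum(w * xlag**2), np.sum(w * xlag * y)
    theta_ml = np.linalg.solve(np.array([[c3, c2], [c2, c4]]),
                               np.array([c1, c5]))
    return theta_ls, theta_ml

# One replication of Case 2: alpha = 0.5, mu_beta = 0.5, sigma_eps^2 = sigma_beta^2 = 0.1.
rng = np.random.default_rng(42)
n = 500
x = np.zeros(n)
for t in range(1, n):
    sigma_u = np.sqrt(0.1 + 0.1 * x[t - 1]**2)  # Var(u_t) = sigma_eps^2 + sigma_beta^2 x_{t-1}^2
    x[t] = 0.5 + 0.5 * x[t - 1] + sigma_u * rng.standard_normal()
theta_ls, theta_ml = estimate_rca1(x)
```

Repeating this over 500 replications and averaging the per-replication MSEs reproduces the design of the study.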

Figures 1–3 show the generated data for 100, 300, and 500 sample sizes. It should be noted that Cases 1 and 2 tend to oscillate around their means of zero and one; Case 3 is displayed as a random walk.

Figure 1. The time series plot of Cases 1–4 for generated data (100 sample sizes).

Figure 2. The time series plot of Cases 1–4 for generated data (300 sample sizes).

Figure 3. The time series plot of Cases 1–4 for generated data (500 sample sizes).

In the second part, we obtain the estimator \hat\theta_{LS} = (\hat\alpha_{LS}, \hat\mu_{\beta,LS}) from the least-squares method and \hat\theta_{ML} = (\hat\alpha_{ML}, \hat\mu_{\beta,ML}) from the maximum likelihood method. From the least-squares method, we estimate \hat{x}_t from

\hat{x}_t = \hat\alpha_{LS} + \hat\mu_{\beta,LS} x_{t-1}, \quad t = 2, 3, \ldots, n,

and \hat{x}_t from the maximum likelihood method is approximated by

\hat{x}_t = \hat\alpha_{ML} + \hat\mu_{\beta,ML} x_{t-1}, \quad t = 2, 3, \ldots, n.

Finally, we simulated data with 500 replications from the RCA1 model. The estimates \hat\alpha_j = (\hat\alpha_{LS,j}, \hat\alpha_{ML,j}) and \hat\mu_{\beta,j} = (\hat\mu_{\beta,LS,j}, \hat\mu_{\beta,ML,j}), j = 1, 2, \ldots, m (m = 500), are obtained. We also compute the Monte Carlo mean and standard deviation (sd) for each parameter and sample size. The bias of each parameter is approximated by

\alpha_{bias} = \frac{1}{m} \sum_{j=1}^{m} (\hat\alpha_j - \alpha), \qquad \mu_{\beta,bias} = \frac{1}{m} \sum_{j=1}^{m} (\hat\mu_{\beta,j} - \mu_\beta).

A t-test is employed to determine whether the mean of the bias differs from zero. In this case, the hypotheses are H_0: \mu_{\hat\theta} = 0 and H_1: \mu_{\hat\theta} \neq 0, where \hat\theta = (\hat\alpha, \hat\mu_\beta).

To compare the effectiveness of the least-squares and maximum likelihood methods, we compute the average of the MSE (AMSE) as

AMSE = \frac{\sum_{j=1}^{m} MSE_j}{m}.

The results for the mean, standard deviation (sd), bias, and AMSE are shown in Tables 1 and 2; the percentage of the difference between the least-squares and maximum likelihood methods is presented in Table 2.
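The Monte Carlo summaries reported in Tables 1 and 2 (mean, sd, mean bias, the t-test on the mean bias, and AMSE) can be sketched as below; the function name is illustrative.

```python
import numpy as np

def monte_carlo_summary(estimates, true_value, mses):
    """Summarize m replications: mean, sd, mean bias, one-sample t statistic
    for H0: mean bias = 0, and the average MSE (AMSE)."""
    est = np.asarray(estimates, dtype=float)
    m = len(est)
    bias = est - true_value
    t_stat = bias.mean() / (bias.std(ddof=1) / np.sqrt(m))  # one-sample t-test
    return {"mean": est.mean(), "sd": est.std(ddof=1),
            "bias": bias.mean(), "t": t_stat,
            "AMSE": float(np.mean(mses))}
```

To decide significance at the 5% level, |t| would be compared against the Student-t critical value with m - 1 degrees of freedom.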


Table 1. The mean (sd), bias, and the Average Mean Square Error (AMSE) of parameter estimation in the least-squares method.

Parameter | n = 100 | n = 300 | n = 500

Case 1 (\alpha = 0.5, \mu_\beta = 0.5, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.5)
  mean of \hat\alpha (sd) | 0.5617 (0.1788) | 0.5634 (0.1208) | 0.5265 (0.1308)
  \alpha_{bias} | 0.0617 * | 0.0036 * | 0.0265 *
  mean of \hat\mu_\beta (sd) | 0.4176 (0.1563) | 0.4490 (0.1236) | 0.4648 (0.1060)
  \mu_{\beta,bias} | -0.0823 * | -0.0509 * | -0.0351 *
  AMSE | 3.0500 | 2.8682 | 2.8415

Case 2 (\alpha = 0.5, \mu_\beta = 0.5, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.1)
  mean of \hat\alpha (sd) | 0.5118 (0.0881) | 0.5036 (0.0559) | 0.5033 (0.0440)
  \alpha_{bias} | 0.0118 * | 0.0036 | 0.0351
  mean of \hat\mu_\beta (sd) | 0.4812 (0.0998) | 0.4923 (0.0631) | 0.4947 (0.0497)
  \mu_{\beta,bias} | -0.0187 * | -0.0076 * | -0.0052 *
  AMSE | 0.2270 | 0.2278 | 0.2283

Case 3 (\alpha = 0.5, \mu_\beta = 1, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01)
  mean of \hat\alpha (sd) | 0.8988 (0.4075) | 1.4964 (0.8428) | 1.8823 (1.4514)
  \alpha_{bias} | 0.3988 * | 0.9964 * | 1.3823 *
  mean of \hat\mu_\beta (sd) | 0.9814 (0.0227) | 0.9823 (0.0130) | 0.9846 (0.0108)
  \mu_{\beta,bias} | -0.1852 * | -0.0176 * | -0.0153 *
  AMSE | 11.1952 | 130.6887 | 492.6607

Case 4 (\alpha = 0, \mu_\beta = 1, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01)
  mean of \hat\alpha (sd) | -0.0012 (0.0408) | -0.0011 (0.0292) | 0.0003 (0.0335)
  \alpha_{bias} | -0.0012 | -0.0011 | 0.0003
  mean of \hat\mu_\beta (sd) | 0.9424 (0.0464) | 0.9766 (0.0172) | 0.9833 (0.0116)
  \mu_{\beta,bias} | -0.0575 * | -0.0233 * | -0.0166 *
  AMSE | 0.0174 | 0.0389 | 0.0959

* indicates significance at the 5% level.

From Table 1, it appears that the means of the bias in Cases 1 and 3 are significantly different from zero for both parameters. However, the means of the bias in Cases 2 and 4 are not significantly different for \alpha, providing asymptotically unbiased estimates of \alpha. The histograms of the bias are presented in Figures 4–7. From Figures 4–7, it is apparent that the relative biases are reduced with increasing sample size, and the distribution of the biases appears closer to a normal distribution for large sample sizes.

Figure 4. Histogram of the estimated parameters \alpha and \mu_\beta with the least-squares method in Case 1.

Figure 5. Histogram of the estimated parameters \alpha and \mu_\beta with the least-squares method in Case 2.

Figure 6. Histogram of the estimated parameters \alpha and \mu_\beta with the least-squares method in Case 3.

Figure 7. Histogram of the estimated parameters \alpha and \mu_\beta with the least-squares method in Case 4.


Table 2. The mean (sd), bias, AMSE, and percentage of differences of parameter estimation in the maximum likelihood method.

Parameter | n = 100 | n = 300 | n = 500

Case 1 (\alpha = 0.5, \mu_\beta = 0.5, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.5)
  mean of \hat\alpha (sd) | 0.5178 (0.1124) | 0.5047 (0.0615) | 0.5043 (0.0476)
  \alpha_{bias} | 0.0178 * | 0.0509 * | 0.0043 *
  mean of \hat\mu_\beta (sd) | 0.4867 (0.1194) | 0.4925 (0.0641) | 0.4957 (0.0508)
  \mu_{\beta,bias} | -0.0132 * | -0.0047 | -0.00425
  AMSE | 3.2655 | 2.9876 | 2.9310
  Percentage of difference | 7.0650 | 4.1628 | 3.1497

Case 2 (\alpha = 0.5, \mu_\beta = 0.5, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.1)
  mean of \hat\alpha (sd) | 0.5180 (0.0838) | 0.5059 (0.0473) | 0.5038 (0.0367)
  \alpha_{bias} | 0.0180 * | 0.0059 * | 0.0038 *
  mean of \hat\mu_\beta (sd) | 0.4825 (0.0925) | 0.4925 (0.0516) | 0.4958 (0.0404)
  \mu_{\beta,bias} | -0.0174 * | -0.0074 * | -0.0041 *
  AMSE | 0.2281 | 0.2283 | 0.2286
  Percentage of difference | 0.4845 | 0.2194 | 0.1314

Case 3 (\alpha = 0.5, \mu_\beta = 1, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01)
  mean of \hat\alpha (sd) | 0.5087 (0.0571) | 0.5112 (0.0548) | 0.5085 (0.0558)
  \alpha_{bias} | 0.0087 * | 0.1123 * | 0.0085 *
  mean of \hat\mu_\beta (sd) | 1.0000 (0.0115) | 0.9994 (0.0059) | 0.9998 (0.0048)
  \mu_{\beta,bias} | 0.0000 | -0.0005 | -0.0001
  AMSE | 11.5103 | 132.899 | 501.1316
  Percentage of difference | 2.8140 | 1.6912 | 1.7180

Case 4 (\alpha = 0, \mu_\beta = 1, \sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01)
  mean of \hat\alpha (sd) | -0.0008 (0.0362) | 0.0003 (0.0197) | 0.0002 (0.0155)
  \alpha_{bias} | -0.0008 | 0.0003 | 0.0002
  mean of \hat\mu_\beta (sd) | 0.9473 (0.0481) | 0.9825 (0.0177) | 0.9891 (0.0120)
  \mu_{\beta,bias} | -0.0526 * | -0.0174 * | -0.0108 *
  AMSE | 0.0175 | 0.0392 | 0.0968
  Percentage of difference | 0.5474 | 0.7712 | 0.9384

* indicates significance at the 5% level.
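The "percentage of difference" rows in Table 2 are consistent with the relative excess of the ML AMSE over the LS AMSE, i.e. 100 (AMSE_ML - AMSE_LS) / AMSE_LS. This reading is an inference from the tabulated values (e.g. Case 1, n = 100: 100 x (3.2655 - 3.0500)/3.0500 is about 7.065), not a formula stated in the text.

```python
def percentage_of_difference(amse_ls, amse_ml):
    """Relative excess of the ML AMSE over the LS AMSE, in percent."""
    return 100.0 * (amse_ml - amse_ls) / amse_ls

# Case 1, n = 100, from Tables 1 and 2: AMSE_LS = 3.0500, AMSE_ML = 3.2655.
pct = percentage_of_difference(3.0500, 3.2655)  # close to the tabulated 7.0650
```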

From Table 2, it appears that the number of mean biases that are significantly different from zero is smaller than in Table 1. In particular, Cases 1, 3, and 4 are not significantly different for \alpha and \mu_\beta, providing asymptotically unbiased estimates of \alpha and \mu_\beta. The histograms of the bias are presented in Figures 8–11. The AMSEs of the least-squares method in Table 1 are less than the AMSEs of the maximum likelihood method in Table 2, which means that the least-squares method outperforms the maximum likelihood method. The AMSEs of the two methods show only slightly different values, hence the percentage of the difference between the two methods appears in the last row of each case. From Figures 8–11, it is apparent that the relative biases are reduced with increasing sample size, and the distribution of the biases appears to be more normally distributed for large sample sizes, except for \mu_\beta in Case 4.

Figure 8. Histogram of the estimated parameters \alpha and \mu_\beta with the maximum likelihood method in Case 1.

Figure 9. Histogram of the estimated parameters \alpha and \mu_\beta with the maximum likelihood method in Case 2.


Figure 10. Histogram of the estimated parameters \alpha and \mu_\beta with the maximum likelihood method in Case 3.

Figure 11. Histogram of the estimated parameters \alpha and \mu_\beta with the maximum likelihood method in Case 4.


5. Application of Actual Data

In this part, we examine the RCA1 model by using the least-squares method and the maximum likelihood method developed in the previous sections. First, the monthly average of the Stock Exchange of Thailand index, called the SET index, an important index in Thailand, is discussed in terms of actual data. Trading of the SET index started on April 30, 1975; we estimated the parameters of the RCA1 model using data from 1975 to 2016, and then forecast future data from 2017 to 2018. These data are shown in Figure 12a, which is obtained from [12].

Figure 12. (a) The time series plot for the Stock Exchange of Thailand (SET) index; (b) the plot of the SET index forecasts of the least squares (LS) vs. maximum likelihood (ML) methods.

Based on estimating the RCA1 model to fit the future SET index, we approximate the forecasting values in Figure 12b. The plots of the SET index are presented as the actual data, and the dashed line of the least-squares method and the dotted line of the maximum likelihood method are presented as the forecasting data. As seen in Figure 12b, the forecasting values of the least-squares and maximum likelihood methods are both fairly close to the actual observed series, making them difficult to distinguish visually. The Mean Square Error (MSE) is the criterion used to decide the performance of the two methods, computed from the squared differences between the forecast data $\hat{x}_t$ and the actual data $x_t$:

$$ \mathrm{MSE} = \frac{\sum_{t=2}^{n} (x_t - \hat{x}_t)^2}{n - 1}. $$

Therefore, we are more convinced by the least-squares method, whose MSE is 2649.224, while the MSE of the maximum likelihood method is 2676.102.

For the second real data set, we analyzed the daily exchange rate of Thai Baht to one U.S. Dollar, comprising 1426 records from 12 January 2013 to 12 December 2018. We carried on the least-squares and maximum likelihood models to forecast from 12 December 2018 to 12 January 2019. These data are shown in Figure 13a, which were obtained from [13].
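The MSE criterion described above is straightforward to compute; the following minimal sketch averages the squared forecast errors over the hold-out period (the function name `mse` and the example values are illustrative, not taken from the paper's data):

```python
import numpy as np

def mse(actual, predicted):
    """Mean square error over the forecast horizon: the average of the
    squared differences between actual and forecast values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean((actual - predicted) ** 2))
```

Whichever method yields the smaller MSE on the hold-out data is preferred; in the paper's results this is the least-squares method.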


Figure 13. (a) The time series plot for the exchange rate of Baht/Dollar; (b) the plot of the exchange rate of Baht/Dollar forecasts of the least squares (LS) vs. maximum likelihood (ML) methods.

Comparing with Figure 13b, it can be seen that the forecasting values of the least-squares (LS) method are closer to the actual observed data than those of the maximum likelihood (ML) method. This is confirmed by the MSE of the LS method, 0.0076, which is smaller than that of the maximum likelihood method, 0.0573.

6. Conclusions

The least-squares and maximum likelihood methods were studied to estimate the first-order RCA, or RCA1, model. Through a Monte Carlo simulation, we evaluated the performance of these methods, reported the mean and standard deviation of the parameter estimates, and compared the AMSEs across various data settings and sample sizes. It appears that in all cases the least-squares method performs well in picking up the correct model, as its AMSE is the minimum. This indicates that the RCA1 model is affected by past observed data more than by the informative prior of the maximum likelihood method.

For the actual data, we measured estimation performance by the minimum value of the mean square error. We can see that the least-squares method outperforms the maximum likelihood method, similar to the results of the simulation study. For the RCA1 model, we suggest the least-squares method to estimate parameters where stationary and non-stationary data are expected. These results are supported by Araveeporn [14], who showed the limiting distribution and consistency of the estimator of the RCA1 model based on the least-squares method.

As part of further work, the Bayesian approach will be used to study parameter estimation in the RCA1 model by considering the prior and posterior distributions.

Funding: This research received no external funding.

Acknowledgments: This research was supported by King Mongkut's Institute of Technology Ladkrabang.

Conflicts of Interest: The author declares no conflict of interest.

References

1. Tsay, R. Conditional Heteroscedastic Time Series Models. J. Am. Stat. Assoc. 1987, 82, 590–604.
2. Nicholls, D.F.; Quinn, B.G. Random Coefficient Autoregressive Models: An Introduction; Springer: New York, NY, USA, 1982; pp. 2–6.

3. Nicholls, D.F.; Quinn, B.G. The Estimation of Random Coefficient Autoregressive Model. J. Time Ser. Anal. 1980, 1, 37–47.
4. Hwang, S.Y.; Basawa, I.V. Parameter Estimation for Generalized Random Coefficient Autoregressive Processes. J. Stat. Plan. Inference 1998, 68, 323–337.
5. Aue, A.; Horvath, L.; Steinebach, J. Estimation in Random Coefficient Autoregressive Models. J. Time Ser. Anal. 2006, 27, 61–67.
6. Wakefield, J. Bayesian and Frequentist Regression Methods; Springer: New York, NY, USA, 2013; pp. 27–28.
7. Hsiao, C. Some Estimation Methods for a Random Coefficient Model. Econom. J. Econom. Soc. 1975, 43, 305–325.
8. Wang, D.; Ghosh, S.K. Bayesian Estimation and Unit Root Tests for Random Coefficient Autoregressive Models. Model Assist. Stat. Appl. 2008, 3, 281–295.
9. Araveeporn, A. The Least-Squares Criteria of the Random Coefficient Dynamic Regression Model. J. Stat. Theory Pract. 2012, 6, 315–333.
10. Freund, R.J.; Wilson, W.J. Statistical Modeling of a Response Variable; Academic Press: London, UK, 1998; pp. 78–81.
11. Araveeporn, A. The Maximum Likelihood of Random Coefficient Dynamic Regression Model. World Acad. Sci. Eng. Technol. 2013, 78, 1185–1190.
12. The Stock Exchange of Thailand. Available online: http://www.set.or.th/th/market/market/statistics.html (accessed on 30 January 2019).
13. The Exchange Rate of Baht/Dollar. Available online: http://www.bot.or.th/thai/exchangerate (accessed on 30 January 2019).
14. Araveeporn, A. A Comparison of Two Least-Squared Random Coefficient Autoregressive Models: With and without Autoregressive Errors. Int. J. Adv. Stat. Probab. 2013, 1, 151–162.

© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).