ISSN: 2332-2071 Volume 8 Number 5 2020

Mathematics and Statistics

http://www.hrpub.org

Horizon Research Publishing, USA


Mathematics and Statistics

Mathematics and Statistics is an international peer-reviewed journal that publishes original and high-quality research papers in all areas of mathematics and statistics. As an important academic exchange platform, scientists and researchers can know the most up-to-date academic trends and seek valuable primary sources for reference. The subject areas include, but are not limited to, the following fields: Algebra, Analysis, Applied mathematics, Approximation theory, Combinatorics, Computational statistics, Computing in Mathematics, Design of Experiments, Discrete mathematics, Dynamical systems, Geometry and Topology, Logic and Foundations of mathematics, Number theory, Numerical analysis, Probability theory, Quantity, Recreational mathematics, Sample Survey, Statistical modelling.

General Inquiries
Publish with HRPUB, learn about our policies, submission guidelines etc. Email: [email protected] Tel: +1-626-626-7940

Subscriptions
Journal Title: Mathematics and Statistics
Journal's Homepage: http://www.hrpub.org/journals/jour_info.php?id=34
Publisher: Horizon Research Publishing Co., Ltd
Address: 2880 ZANKER RD STE 203, SAN JOSE, CA 95134, USA
Publication Frequency: Bimonthly
Electronic Version: freely available online at http://www.hrpub.org/journals/jour_info.php?id=34

Online Submission
Manuscripts should be submitted through the Online Manuscript Tracking System (http://www.hrpub.org/submission.php). If you are experiencing difficulties during the submission process, please feel free to contact the editor at [email protected].

Copyright
Authors retain all copyright interest, or it is retained by another copyright holder, as appropriate, and agree that the manuscript remains permanently open access on HRPUB's site under the terms of the Creative Commons Attribution International License (CC BY). HRPUB shall have the right to use and archive the content for the purpose of creating a record and may reformat or paraphrase to benefit the display of the record.

Creative Commons Attribution License (CC-BY)
All articles published by HRPUB will be distributed under the terms and conditions of the Creative Commons Attribution License (CC-BY). Anyone is allowed to copy, distribute, and transmit the article on condition that the original article and source are correctly cited.

Open Access
Open access is the practice of providing unrestricted access to peer-reviewed academic journal articles via the internet. It is also increasingly being provided to scholarly monographs and book chapters. All original research papers published by HRPUB are freely and permanently accessible online immediately after publication. Readers are free to copy and distribute the contribution under the Creative Commons Attribution Non-Commercial licence. Authors can benefit from the open access publication model in the following aspects:
• High Availability and High Visibility: free and unlimited accessibility of the publication over the internet without any restrictions;
• Rigorous peer review of research papers: fast, high-quality double blind peer review;
• Faster publication with less cost: papers published on the internet without any subscription charge;
• Higher Citation: open access publications are more frequently cited.

Mathematics and Statistics

Editor-in-Chief

Prof. Dshalalow Jewgeni, Florida Inst. of Technology, USA

Members of Editorial Board

Jiafeng Lu Zhejiang Normal University, China

Nadeem-ur Rehman Aligarh Muslim University, India

Debaraj Sen Concordia University, Canada

Mauro Spreafico University of São Paulo, Brazil

Veli Shakhmurov Okan University, Turkey

Antonio Maria Scarfone Institute of Complex Systems - National Research Council, Italy

Liang-yun Zhang Nanjing Agricultural University, China

Ilgar Jabbarov Ganja state university, Azerbaijan

Mohammad Syed Pukhta Sher-e-Kashmir University of Agricultural Sciences and Technology, India

Vadim Kryakvin Southern Federal University, Russia

Rakhshanda Dzhabarzadeh National Academy of Science of Azerbaijan, Azerbaijan

Sergey Sudoplatov Sobolev Institute of Mathematics, Russia

Birol Altın Gazi University, Turkey

Araz Aliev Baku State University, Azerbaijan

Francisco Gallego Lupianez Universidad Complutense de Madrid, Spain

Hui Zhang St. Jude Children's Research Hospital, USA

Yusif Abilov Odlar Yurdu University, Azerbaijan

Evgeny Maleko Magnitogorsk State Technical University, Russia

İmdat İşcan Giresun University, Turkey

Emanuele Galligani University of Modena and Reggio Emillia, Italy

Mahammad Nurmammadov Baku State University, Azerbaijan

Horizon Research Publishing http://www.hrpub.org ISSN: 2332-2071

Table of Contents

Mathematics and Statistics

Volume 8 Number 5 2020

Efficiency of Parameter Estimator of Various Resampling Methods on WarpPLS Analysis (https://www.doi.org/10.13189/ms.2020.080501) Luthfatul Amaliana, Solimun, Adji Achmad Rinaldo Fernandes, Nurjannah ...... 481

A Modified Robust Support Vector Regression Approach for Containing High Leverage Points and Outliers in the Y-direction (https://www.doi.org/10.13189/ms.2020.080502) Habshah Midi, Jama Mohamed ...... 493

Test Analysis of Parametric, Nonparametric, Semiparametric Regression in Spatial Data (https://www.doi.org/10.13189/ms.2020.080503) Diah Ayu Widyastuti, Adji Achmad Rinaldo Fernandes, Henny Pramoedyo, Nurjannah, Solimun ...... 506

Construction of Bivariate Copulas on a Multivariate Exponentially Weighted (https://www.doi.org/10.13189/ms.2020.080504) Sirasak Sasiwannapong, Saowanit Sukparungsee, Piyapatr Busababodhin, Yupaporn Areepong...... 520

Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions (https://www.doi.org/10.13189/ms.2020.080505) Ali F Jameel, Akram H. Shather, N.R. Anakira, A. K. Alomari, Azizan Saaban ...... 527

-action Induced by Shift Map on 1-Step Shift of Finite Type over Two Symbols and k-type Transitive (https://www.doi.org/10.13189/ms.2020.080506) Nor Syahmina Kamarudin, Syahida Che Dzul-Kifli ...... 535

Modified Average Sample Number for Improved Double Plan Based on Truncated Life Test Using Exponentiated Distributions (https://www.doi.org/10.13189/ms.2020.080507) O. S. Deepa ...... 542

Homotopy Perturbation Method for Solving Linear Fuzzy Delay Differential Equations Using Double Parametric Approach (https://www.doi.org/10.13189/ms.2020.080508) Ali F Jameel, Sardar G Amen, Azizan Saaban, Noraziah H Man, Fathilah M Alipiah...... 551

Integration of Cluster Centers and Gaussian Distributions in Fuzzy C- for the Construction of Trapezoidal Membership Function (https://www.doi.org/10.13189/ms.2020.080509) Siti Hajar Khairuddin, Mohd Hilmi Hasan, Manzoor Ahmed Hashmani ...... 559

Hankel Determinant H2(3) for Certain Subclasses of Univalent Functions (https://www.doi.org/10.13189/ms.2020.080510) Andy Liew Pik Hern, Aini Janteng, Rashidah Omar ...... 566

Fuzzy Sumudu Decomposition Method for Fuzzy Delay Differential Equations with Strongly Generalized Differentiability (https://www.doi.org/10.13189/ms.2020.080511) N. A. Abdul Rahman ...... 570

Construction a Diagnostic Test in the Form of Two-tier Multiple Choice on Calculus Material (https://www.doi.org/10.13189/ms.2020.080512) Edy Nurfalah, Irvana Arofah, Ika Yuniwati, Andi Haslinah, Dwi Retno Lestari ...... 577

Stochastic Latent Residual Approach for Consistency Model Assessment (https://www.doi.org/10.13189/ms.2020.080513) Hani Syahida Zulkafli, George Streftaris, Gavin J. Gibson ...... 583

Determining Day of Given Date Mathematically (https://www.doi.org/10.13189/ms.2020.080514) R. Sivaraman ...... 590

Probabilistic Inventory Model under Flexible Trade Credit Plan Depending upon Ordering Amount (https://www.doi.org/10.13189/ms.2020.080515) Piyali Mallick, Lakshmi Narayan De ...... 596

Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion (https://www.doi.org/10.13189/ms.2020.080516) Chatarina Enny Murwaningtyas, Sri Haryatmi Kartiko, Gunardi, Herry Pribawanto Suryawan ...... 610

Mathematics and Statistics 8(5): 481-492, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080501

Efficiency of Parameter Estimator of Various Resampling Methods on WarpPLS Analysis

Luthfatul Amaliana, Solimun*, Adji Achmad Rinaldo Fernandes, Nurjannah

Department of Statistics, Faculty of Mathematics and Natural Sciences, Brawijaya University, Indonesia

Received May 9, 2020; Revised July 16, 2020; Accepted July 29, 2020

Cite This Paper in the following Citation Styles (a): [1] Luthfatul Amaliana, Solimun, Adji Achmad Rinaldo Fernandes, Nurjannah , "Efficiency of Parameter Estimator of Various Resampling Methods on WarpPLS Analysis," Mathematics and Statistics, Vol. 8, No. 5, pp. 481 - 492, 2020. DOI: 10.13189/ms.2020.080501. (b): Luthfatul Amaliana, Solimun, Adji Achmad Rinaldo Fernandes, Nurjannah (2020). Efficiency of Parameter Estimator of Various Resampling Methods on WarpPLS Analysis. Mathematics and Statistics, 8(5), 481 - 492. DOI: 10.13189/ms.2020.080501. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  WarpPLS analysis has three algorithms, namely the outer model parameter estimation algorithm, the inner model algorithm, and the hypothesis testing algorithm, which consists of several choices of resampling methods, namely Stable1, Stable2, Stable3, Bootstrap, Jackknife, and Blindfolding. The purpose of this study is to apply the WarpPLS analysis by comparing the six resampling methods based on the relative efficiency of the parameter estimates in the six methods. This study uses secondary data with 1 variable being formative and 2 variables being reflective. Secondary data for the Infrastructure Service Satisfaction Index (IKLI) were obtained from the Study Report on the Regional Development Planning for Economic Growth and the Malang City Gini Index in 2018, while secondary data for the Social Capital Index (IMS) and Community Development Index (IPMas) were obtained from the Research Report on Performance Indicators of the Regional Human Development Index and Poverty Rate of Malang City in 2018. The results of this study indicate that, based on the two criteria used, namely the calculation of relative efficiency and the measure of model fit, the Jackknife resampling method is the most efficient, followed by the Stable1, Bootstrap, Stable3, Stable2, and Blindfolding methods.

Keywords  Partial Least Square, Resampling, WarpPLS

1. Introduction

Structural Equation Modeling (SEM) is an analysis to obtain data and relationships between latent variables that are carried out simultaneously [1]. PLS is a method that is more complicated than SEM because it can be applied to the reflective indicator model and the formative indicator model. The WarpPLS method is a development of Partial Least Square (PLS) analysis which can identify and predict relationships between linear and non-linear latent variables. WarpPLS analysis has three algorithms: the outer model parameter estimation algorithm, the inner model algorithm, and the hypothesis testing algorithm [2].

The hypothesis testing algorithm in WarpPLS uses a resampling algorithm that consists of various resampling methods: Stable1, Stable2, Stable3, Bootstrap, Jackknife, and Blindfolding. Stable1, Stable2, and Stable3 are the latest resampling methods in WarpPLS analysis. This family of methods uses a quasi-parametric approach, in which the p-value is approximated by the average value. In the Bootstrap resampling method, the resampling is done with a certain sample size and repeated 100 times to achieve convergence. In the Jackknife method, resampling is done by removing one row and repeating until the last sample. The Blindfolding method is similar to the Jackknife method, but the first row of data is replaced by the average of each column (variable), and this is then continued until the last row. Of the six resampling methods, the most commonly used is the Bootstrap resampling method.


Based on previous research, the Blindfolding resampling method is more efficient than the Bootstrap resampling method [3]. Other research states that the Stable3 resampling method, which is the default setting in the WarpPLS program package, is also more efficient than the Bootstrap and Stable2 resampling methods [4]. Therefore, this study aims to find out which resampling method is the most efficient of the six resampling methods found in the WarpPLS analysis (Stable1, Stable2, Stable3, Bootstrap, Jackknife, and Blindfolding), especially on the data used, namely the Infrastructure Service Satisfaction Index, Social Capital Index, and Malang City Community Development Index.

2. Materials and Methods

Structural equation modeling or SEM is a technique used to describe the simultaneous relationship of linear relations between observational variables, which also involves latent variables that cannot be measured directly [2]. SEM analysis initially combines a system of simultaneous equations, path analysis, and factor analysis. Factor analysis is used as a method for obtaining latent variable data. The process of estimating and testing parameters is based on the concept of a variance-covariance matrix, so it is often referred to as covariance-based SEM [1].

According to [1], the WarpPLS analysis is a development of the PLS analysis. The PLS model was developed as an alternative when the model design has a weak or undiscovered theory, or when some indicators could not be measured by reflective measurements so that they were formative [5]. PLS is a powerful method because it does not require a lot of assumptions, and the sample size can be small or large. Besides being used as a confirmation of theory (hypothesis testing), PLS can also be used to build relationships that do not have a theoretical basis or to test propositions.

If the structural model to be analyzed is not recursive, and the latent variables have formative, reflective, or mixed indicators, one of the appropriate methods to be applied is PLS [6]. PLS can avoid factor indeterminacy, namely the presence of more than one factor contained in a set of indicators of a variable. Because formative indicators do not require common factors, composite latent variables will always be obtained [1].

WarpPLS is a method and program package application developed by Ned Kock [4] to analyze variance- or PLS-based SEM models. It is used not only for non-recursive models but also for non-linear analysis (Warp2 and Warp3).

According to [1], the structural model in WarpPLS consists of two things:
1) The outer model is the collection of latent variable data sourced from its indicators, consisting of reflective or formative indicator models.
2) The inner model is the relationship model between recursive and non-recursive latent variables.

2.1. WarpPLS Method

WarpPLS analysis has three parameter estimation algorithms: the outer model estimation algorithm, the inner model estimation algorithm, and the hypothesis testing algorithm [2]. The outer model parameter estimation algorithm is the calculation process that produces latent variable data sourced from data items, indicators, or dimensions, while the inner model estimation algorithm is the method and process of path coefficient calculation, that is, the coefficient of the influence of explanatory/predictor variables on the response/dependent variable. In the hypothesis testing algorithm, WarpPLS analysis uses a resampling algorithm [7]:
1) Stable1 is a quasi-parametric approach, in which the p-value is approximated by the average value.
2) Stable2 produces consistent estimates through the Bootstrap resampling algorithm.
3) Stable3 produces consistent estimates through the Bootstrap resampling algorithm.
4) Bootstrap is resampling with a certain sample size (equal to or smaller than the original sample), repeated 100 times (Bootstrap samples) to achieve convergence, as can be seen in Figure 1.


[Figure 1 diagram: Population (N = 135) → Original Sample (n = 108) → Bootstrap samples B1, B2, ..., B100, each of size n = 108.]

Figure 1. Bootstrap Resampling Illustration

5) Jackknife is a method in which one row (one sample) is removed at a time, repeated until the last sample. In this method, each resample is reduced by one observation, as illustrated in Figure 2.

[Figure 2 diagram: Population (N = 135) → Original Sample (n = 108) → Jackknife samples J1, J2, ..., J108, each of size n = 107.]

Figure 2. Jackknife Resampling Illustration

6) Blindfolding is similar to the Jackknife, but the first row of data is replaced by the average of each column (variable), and this is continued with the second row until the last row [8]. As with the Bootstrap method, it converges at 100 repetitions, following Figure 3.


[Figure 3 diagram: Population (N = 135) → Original Sample (n = 108) → Blindfolding samples F1, F2, ..., each of size n = 108.]

Figure 3. Blindfolding Resampling Illustration
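To make the three resampling schemes above concrete, the sketch below generates Bootstrap, Jackknife, and Blindfolding resamples as described in this section. It is only an illustration (the study itself was run in WarpPLS); the synthetic data matrix and the sizes N = 135 and n = 108 from Figures 1–3 are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(108, 3))   # stand-in for the original sample (n = 108, 3 variables)

# Bootstrap: 100 resamples of size n drawn with replacement (Figure 1).
bootstrap_samples = [X[rng.integers(0, len(X), size=len(X))] for _ in range(100)]

# Jackknife: n resamples, each leaving out one row (Figure 2).
jackknife_samples = [np.delete(X, i, axis=0) for i in range(len(X))]

# Blindfolding: like the Jackknife, but the removed row is replaced by the
# column (variable) means, proceeding from the first to the last row (Figure 3).
col_means = X.mean(axis=0)
blindfolding_samples = []
for i in range(len(X)):
    Xb = X.copy()
    Xb[i, :] = col_means
    blindfolding_samples.append(Xb)
```

In each scheme, the model parameters would be re-estimated on every resample and the spread of those estimates used for the standard errors discussed next.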

2.2. Hypothesis Testing (Resampling)

Hypothesis testing in the WarpPLS analysis is done by the resampling method, which guarantees that the data are distribution-free. This study uses six types of resampling: Stable1, Stable2, Stable3, Bootstrap, Jackknife, and Blindfolding.

Calculation of the standard error of the resampling parameters:

$SE_{\text{resampling}} = \sqrt{\dfrac{\sum_{r=1}^{R}(\hat{\theta}_r - \bar{\hat{\theta}})^2}{R-1}}$   (1)

where:
$\hat{\theta}_r$ : resampling parameter estimator
$\bar{\hat{\theta}}$ : the average of the resampling parameter estimators
$R$ : the number of resampling estimators

Hypothesis testing can be done using the t-test [9]:

$t = \dfrac{\bar{\hat{\theta}}}{s/\sqrt{n}}$   (2)

1) Statistical hypothesis for the outer model:
$H_0: \lambda_i = 0$ vs. $H_1: \lambda_i \neq 0$
2) Statistical hypothesis for the inner model:
The effect of exogenous latent variables on the endogenous variable:
$H_0: \gamma_i = 0$ vs. $H_1: \gamma_i \neq 0$
The effect of endogenous latent variables on the endogenous variable:
$H_0: \beta_i = 0$ vs. $H_1: \beta_i \neq 0$

Description:
$\beta$ : path coefficient of the influence of an endogenous variable on an endogenous variable
$\gamma$ : path coefficient of the influence of an exogenous variable on an endogenous variable
$\lambda$ : factor loading or component weight

Testing was done using a t-test with the criterion that if the p-value ≤ 0.1 (alpha 10%), the result is significant.

2.3. Assumptions in WarpPLS

Data distribution assumptions are not needed in the WarpPLS analysis, meaning the data do not have to meet normality assumptions. This is because WarpPLS, as a development of PLS, is a powerful method and the required sample size can be large or small. The data assumptions in WarpPLS are fulfilled in the hypothesis testing process, which involves a resampling approach. By taking at least 100 samples, the central limit theorem states that if a population has median $\mu$ and variance $\sigma^2$, the sampling distribution of the median will be closer to the normal distribution with median $\mu$ and variance $\sigma^2/n$, and the greater the value of n, the faster this distribution is approached [10].

The important WarpPLS assumption is the linearity assumption. It determines whether the linear or the non-linear algorithm is used in WarpPLS modeling. WarpPLS can be used whether or not the linearity assumption is met. The linearity test is done using the Regression Specification Error Test (RESET). In its approach, the Ramsey RESET uses OLS (Ordinary Least Squares) to minimize the sum of squared errors over the observations [11].


2.4. The Sample Size for WarpPLS

The size of the sample in a study should be determined using a formula. The formula is adjusted to the sampling technique used and the availability of information. The accuracy of the formula and of the circumstances will minimize the total error (the combination of sampling error and non-sampling error). If the information needed to determine the sample size is not available, a table or rule of thumb can be used [1]. Some examples of the rule of thumb are:
1. Ten times the number of variables (remember that WarpPLS is part of multivariate analysis);
2. Ten times the number of formative indicators (ignoring reflective indicators);
3. Ten times the number of structural paths in the inner model.

2.5. Reflective Indicator Model

Formative or reflective models can be determined from the "operational definition." Based on the definition of the operational variable, it can be precisely determined whether a formative or a reflective model is formed. In general, there is an opinion that latent variables with formative indicator models are sourced from indicators whose data are quantitative, for example, community welfare with the indicators per capita income, length of education, and life expectancy, where all indicator data are quantitative [13].

2.6. Relative Efficiency

The efficiency of two estimators can be compared by using relative efficiency. The efficiency of the estimator $\hat{\theta}_i^*$ relative to $\hat{\theta}_i$ can be defined as follows [13]:

$V(\hat{\theta}_i) = \dfrac{\sum_{r=1}^{R}(\hat{\theta}_r - \bar{\hat{\theta}})^2}{R-1}$   (3)

where:
$\hat{\theta}_r$ : resampling parameter estimator
$\bar{\hat{\theta}}$ : the average of the resampling parameter estimators
$R$ : the number of resampling estimators

$ER(\hat{\theta}_i^*, \hat{\theta}_i) = \dfrac{V(\hat{\theta}_i^*)}{V(\hat{\theta}_i)}$   (4)

where:
$V(\hat{\theta}_i^*)$ : parameter estimation variance with the 1st resampling method
$V(\hat{\theta}_i)$ : parameter estimation variance with the 2nd resampling method

The efficiency of the two estimators can be seen in Table 1.

Table 1. The Efficiency of Two Estimators

No | Explanation | Results
1 | The calculation result > 1 | The $\hat{\theta}_i$ estimator is better than the $\hat{\theta}_i^*$ estimator.
2 | The calculation result < 1 | The $\hat{\theta}_i^*$ estimator is better than the $\hat{\theta}_i$ estimator.
3 | The calculation result = 1 | The $\hat{\theta}_i^*$ estimator is as good as the $\hat{\theta}_i$ estimator.
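The variance in equation (3) and the ratio in equation (4) can be computed directly from the resampled estimates of a path coefficient. The sketch below is a hypothetical illustration (not output from this study); the simulated estimate vectors are assumptions, and the interpretation follows Table 1.

```python
import numpy as np

def resampling_variance(estimates):
    # Equation (3): V = sum((theta_r - theta_bar)^2) / (R - 1)
    estimates = np.asarray(estimates, dtype=float)
    return np.sum((estimates - estimates.mean()) ** 2) / (len(estimates) - 1)

def relative_efficiency(est_method1, est_method2):
    # Equation (4): ER = V(method 1) / V(method 2);
    # ER > 1 favours method 2, ER < 1 favours method 1 (Table 1).
    return resampling_variance(est_method1) / resampling_variance(est_method2)

rng = np.random.default_rng(1)
theta_bootstrap = 0.43 + rng.normal(scale=0.034, size=100)  # hypothetical resampled estimates
theta_jackknife = 0.43 + rng.normal(scale=0.032, size=108)

er = relative_efficiency(theta_bootstrap, theta_jackknife)
print("ER =", round(er, 3))   # > 1 would indicate the second (Jackknife) estimator is more efficient
```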


2.7. The Measure of Fit and Significance in Several Resampling Methods

The measure of fit can be performed on measurement models, structural models, and the overall model. A measure of fit in the measurement model is intended to check (test) whether the research instrument is valid and reliable, and to find out how much information can be explained by the structural model (the relationships between latent variables) from the results of the WarpPLS analysis, as well as to give a measure of combined fit between the measurement and structural models. In this study, the indicators are formative and reflective.

The validity and reliability tests in the WarpPLS analysis can be assessed using convergent validity: if the loading value is 0.5 to 0.6, the indicator can be said to be valid. For validity measurement with discriminant validity, if the average variance extracted (AVE) is greater than the correlation with all other latent variables, the indicator can be said to be valid. The research instrument is said to be reliable if the composite reliability value is greater than or equal to 0.7.

A measure of fit for the structural model in the WarpPLS analysis can be assessed using the Model Fit and Quality Indices [14]. The criteria used are rules of thumb, so they should not be applied rigidly and absolutely. If only one or two indicators of the Model Fit and Quality Indices are met, the model can still be used.

A measure of fit for the overall model can be seen through the results of hypothesis testing (equation (6)), which shows the significance of the results of the resampling method that has been used.

2.8. Research Variables

1. Infrastructure Service Satisfaction Index
The Infrastructure Service Satisfaction Index (ISSI) is a measure used to know the level of community satisfaction with the infrastructure development conducted by the Central Government and Regional Governments. The ISSI is expected to be a tool that produces an illustration of the community's perspective objectively, comprehensively, and credibly, both in the aspects of physical development and benefits.

2. Social Capital Index (CSI)
[15] summarized the definitions of several figures by explaining that the true identity of social capital is the values and norms held as a reference for behaving and dealing with other parties that bind the process of change and community efforts to achieve a goal. These values and elements are manifested in participatory attitudes, mutual attention, mutual giving and receiving, mutual trust, the willingness of the community to be proactive in maintaining values, forming collaborative networks, and creating new ideas, all of which is reinforced by the values and norms that support it.

3. Community Development Index (CDI)
The Community Development Index is a composite index that measures the nature of cooperation, tolerance, and the community's feeling of security. According to Katz in [16], development is a major social change from a particular situation to a situation that is considered more valuable. According to [17], community development is an effort to increase all resources, carried out in a planned and sustainable manner.

2.9. Research Methodology

The data used were secondary data obtained from a Likert scale questionnaire containing 3 latent variables: the Infrastructure Service Satisfaction Index, Community Development Index, and Social Capital Index of Malang City in 2018. The steps for assessing the parameter estimator efficiency of the various resampling methods in WarpPLS analysis are as follows:

1) Model Design
The path analysis model in SEM WarpPLS has two relationships: the inner model and the outer model [1]. The inner model is the design of the relationships between latent variables based on theory, empirical research, intuition, and rational research, while the outer model is the specification of the reflective and formative relationships between each latent variable and its indicators.

2) Constructing Path Diagrams
The structural model (inner model) and the measurement model (outer model) will be more easily understood if they are constructed or expressed in a path diagram. The WarpPLS notation in the path diagram is similar to the PLS notation.

3) Model Identification
• Inner model
The inner model is the specification of the relationship between latent variables (structural model). The inner model (inner relation) describes the relationship between latent variables. Its equation model can be written as in equation (5).

$Y = Y^{*}\beta + X\gamma + \zeta$   (5)

where:
$Y$ : endogenous latent variable vector $(m \times 1)$
$Y^{*}$ : endogenous latent variable matrix $(m \times m)$
$\beta$ : path coefficient vector between the endogenous latent variables $(m \times 1)$
$X$ : exogenous latent variable matrix $(m \times s)$
$\gamma$ : coefficient vector of the paths from exogenous to endogenous latent variables $(s \times 1)$
$\zeta$ : error vector of the inner model $(m \times 1)$


$m$ : the number of endogenous latent variables
$s$ : the number of paths from exogenous to endogenous latent variables

• Outer Model
The outer model is the specification of the relationship between latent variables and their indicators. There are two types of models, namely reflective and formative models. This study used both a reflective and a formative model. The reflective indicator model can be written as in equation (6).

$y = \lambda_y Y + \varepsilon$   (6)

where:
$y$ : indicator matrix for endogenous latent variables $(p \times 1)$
$\lambda_y$ : loading matrix of endogenous latent variables $(p \times m)$
$Y$ : endogenous latent variable matrix $(m \times 1)$
$\varepsilon$ : error vector for endogenous latent variables $(p \times 1)$
$p$ : the number of endogenous latent variable indicators
$m$ : the number of endogenous variables

Whereas the formative indicator model can be written as in equation (7):

$X = \lambda_x x + \delta$   (7)

where:
$X$ : exogenous latent variable $(q \times 1)$
$\lambda_x$ : loading matrix of the exogenous latent variable $(q \times r)$
$x$ : indicator matrix for the exogenous latent variable $(r \times 1)$
$\delta$ : error vector for the exogenous latent variable $(q \times 1)$
$q$ : the number of exogenous latent variables
$r$ : the number of exogenous variable indicators.

4) Parameter Estimation
Parameter estimation in WarpPLS is similar to that of PLS, using the least squares method [4]. Parameter estimation is done by an iterative calculation process that stops when a convergence condition has been reached. The calculation is a three-stage iteration. The first stage produces a stable weight estimator by calculating the outside and inside approximations of the latent variables; the estimator for the outside approximation is the inner model estimator, and the inside approximation gives the outer model estimator. The second stage estimates the path relationships by Ordinary Least Squares (OLS). The third stage computes the estimates for each indicator using the original data and the weights from the first stage.

5) Goodness of Fit Evaluation
There are 2 evaluation models:

(a) Inner Model
Evaluation of the inner model is done by using the Goodness of Fit criteria, which are indices and measures of the goodness of the relationships in the inner model. The Goodness of Fit in the WarpPLS analysis is the Model Fit and Quality Indices [1]. The criteria used are rules of thumb, meaning that if one or two indicators are not fulfilled, the model can still be used.

(b) Outer Model
Evaluation of the outer model consists of the validity test and the instrument reliability check. According to [12], validity testing in WarpPLS is evaluated by a convergent validity test and a discriminant validity test based on cross-loadings and AVE values, while the reliability check can be evaluated by the composite reliability index.

3. Results and Discussion

3.1. Assumption Testing of Inner Model Linearity

SEM analysis using the WarpPLS approach does not have strict assumptions. The assumptions relate only to the inner model, to select the inner model algorithm. Hence, a linearity test should be conducted using the Ramsey RESET test with the help of the R software. The results of the test can be seen in Appendix 3 and are summarized in Table 2.

Table 2. Linearity Test Results
Variable | p-value | Information
X1 to Y1 | 0.0003 | Non Linear
X1 to Y2 | 0.0878 | Linear
Y1 to Y2 | 0.0235 | Non Linear

where:
X1 : Infrastructure Service Satisfaction Index variable
Y1 : Social Capital Index variable
Y2 : Community Development Index variable

Table 2 shows that the relationships between X1 and Y1 and between Y1 and Y2 have p-values < 0.05. It can be concluded that the relationships between these variables do not meet the linearity assumption, so the warp algorithm was used.

3.2. Structural Model Evaluation (Inner Model)

The inner model is evaluated by looking at the model Goodness of Fit values using the rule-of-thumb criteria. The model Goodness of Fit values can be seen in Table 3 below:


Table 3. Fit and Quality Indices Model

No. | Model fit and quality indices | Fit Criteria | Test Results | Information
1. | Average path coefficient (APC) | Accepted if p < 0.05 | p < 0.001 | Good
2. | Average R-squared (ARS) | Accepted if p < 0.05 | p < 0.001 | Good
3. | Average adjusted R-squared (AARS) | Accepted if p < 0.05 | p < 0.001 | Good
4. | Average block VIF (AVIF) | Accepted if ≤ 5; ideal if ≤ 3.3 | AVIF = 1.204 | Ideal
5. | Average full collinearity VIF (AFVIF) | Accepted if ≤ 5; ideal if ≤ 3.3 | AFVIF = 1.922 | Ideal
6. | Tenenhaus GoF (GoF) | Small ≥ 0.1; medium ≥ 0.25; large ≥ 0.36 | GoF = 0.519 | Large
7. | Sympson's paradox ratio (SPR) | Accepted if ≥ 0.7; ideal if = 1 | 1 | Ideal
8. | R-squared contribution ratio (RSCR) | Accepted if ≥ 0.9; ideal if = 1 | 1 | Ideal
9. | Statistical suppression ratio (SSR) | Accepted if ≥ 0.7 | 1 | Good
10. | Nonlinear bivariate causality direction ratio (NLBCDR) | Accepted if ≥ 0.7 | 1 | Good

Based on Table 3, it can be seen that all Goodness of Fit values have met the acceptance criteria. Hence, it can be concluded that the indices and measures of the relationships between the latent variables are acceptable.

3.3. Measurement Model Evaluation

The results of the outer model hypothesis tests for each variable with reflective indicators can be seen in Table 4.

Table 4. Loading Factor Value
Variable | Indicator | Loading Value | p-value
Social Capital Index (Y1) | Y1.1 | 0.760 | <0.001
 | Y1.2 | 0.879 | <0.001
 | Y1.3 | 0.787 | <0.001
Community Development Index (Y2) | Y2.1 | 0.894 | <0.001
 | Y2.2 | 0.865 | <0.001
 | Y2.3 | 0.729 | <0.001

Based on Table 4, all indicators have a p-value < 0.05, so it can be said that the research indicators are significant as measures of CSI and CDI, with the equations as follows.

$y_{11} = 0.760\,Y_1 + \varepsilon_1$
$y_{12} = 0.879\,Y_1 + \varepsilon_2$
$y_{13} = 0.787\,Y_1 + \varepsilon_3$
$y_{21} = 0.894\,Y_2 + \varepsilon_4$
$y_{22} = 0.865\,Y_2 + \varepsilon_5$
$y_{23} = 0.729\,Y_2 + \varepsilon_6$

As for the formative variable weighting, its values can be seen in Table 5.

Table 5. Indicator Weight Value
Variable | Indicator | Indicator Weight | p-value
Infrastructure Service Satisfaction Index (X1) | X1.1 | 0.213 | <0.001
 | X1.2 | 0.222 | <0.001
 | X1.3 | 0.232 | <0.001
 | X1.4 | 0.230 | <0.001
 | X1.5 | 0.214 | <0.001

Based on Table 5, all indicators have a p-value < 0.001, so it can be said that all indicators are significant as measures of the IKLI variable, with the equation as follows.

$X_1 = 0.213\,x_{11} + 0.222\,x_{12} + 0.232\,x_{13} + 0.230\,x_{14} + 0.214\,x_{15}$

3.4. Hypothesis Testing of the Inner Model and Outer Model

The results of hypothesis testing on the inner model can be seen in Table 6.

Table 6. Path Coefficient Values Between Variables
Effect Testing | Direct Effect | p-value | Indirect Effect | Information
X1 → Y1 | 0.429 | <0.001 | - | Sig.
X1 → Y2 | 0.147 | <0.001 | 0.292 | Sig.
Y1 → Y2 | 0.680 | <0.001 | - | Sig.

Table 6 shows that the direct influence test between ISSI and SCI gives a path coefficient of 0.429 with a p-value < 0.001, while the path coefficient between ISSI and CDI is 0.147 with a p-value < 0.001. Also, the path coefficient between SCI and CDI is 0.680 with a p-value < 0.001. Because all p-values are < 0.05, there are significant direct effects of ISSI on SCI, ISSI on CDI, and SCI on CDI.


The results of the indirect effect between ISSI and CDI through the SCI show an indirect effect coefficient of 0.292. Because the effects of ISSI on SCI and of SCI on CDI are significant, it can be said that there is a significant indirect influence. The result of the inner model hypothesis testing can be seen in Figure 4.
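For clarity, the reported indirect coefficient is consistent with the usual product-of-paths rule for a single mediator, using the direct effects from Table 6:

$\text{Indirect effect (ISSI} \rightarrow \text{SCI} \rightarrow \text{CDI)} = 0.429 \times 0.680 \approx 0.292$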

[Figure 4 diagram: path model with latent variables X1 (formative, 5 indicators), Y1 (reflective, 3 indicators), Y2 (reflective, 3 indicators); paths X1 → Y1 (β = 0.43, p < 0.01), X1 → Y2 (β = 0.15, p < 0.01), Y1 → Y2 (β = 0.68, p < 0.01); R² = 0.18.]

Figure 4. Path Diagram Hypothesized Test Results

The model from the calculation results of the inner model is as follows:
$Y_1 = 0.43\,X_1$
$Y_2 = 0.68\,Y_1 + 0.15\,X_1$

3.5. Efficiency Test

The results of the standard error calculation for each resampling method, used as the efficiency test criteria, are as follows.

Table 7. Standard Error of Each Resampling

Parameter | Stable1 | Stable2 | Stable3 | Bootstrap | Jackknife | Blindfolding
γ1 | 0.034 | 0.036 | 0.036 | 0.030 | 0.032 | 0.090
γ2 | 0.034 | 0.037 | 0.038 | 0.034 | 0.033 | 0.075
β1 | 0.034 | 0.035 | 0.036 | 0.038 | 0.032 | 0.081
Mean | 0.034 | 0.036 | 0.037 | 0.034 | 0.032 | 0.082

Based on the results of the relative efficiency testing, the variance values are obtained by squaring the standard errors. The Jackknife resampling method produces the smallest variance values among the six methods. Therefore, the Jackknife resampling method is the most efficient in this study.

Table 8. Efficiency between Stable1 and Bootstrap

Parameter | SE Stable1 | SE Bootstrap | ER | Information
γ1 | 0.034 | 0.030 | 1.284 | Bootstrap is more efficient
γ2 | 0.034 | 0.034 | 1 | Stable1 is as efficient as Bootstrap
β1 | 0.034 | 0.038 | 0.801 | Stable1 is more efficient
Mean | 0.034 | 0.034 | 1 | Stable1 is as efficient as Bootstrap
Variance | 0 | 0.000 | 0 | The variance of Stable1 is smaller than that of Bootstrap

Based on Table 8, it can be seen from the 1st to 3rd ER values and from the averages that the two resampling methods are roughly equally efficient. However, when viewed based on the variance values, the Stable1 method has a smaller variance than the Bootstrap resampling method. Therefore, it can be concluded that the Stable1 resampling method is more efficient than the Bootstrap resampling method.


Table 9. Efficiency between Stable1 and Jackknife

Parameter | SE Stable1 | SE Jackknife | ER | Information
γ1 | 0.034 | 0.032 | 1.129 | Jackknife is more efficient
γ2 | 0.034 | 0.033 | 1.062 | Jackknife is more efficient
β1 | 0.034 | 0.032 | 1.129 | Jackknife is more efficient
Mean | 0.034 | 0.032 | 1.129 | Jackknife is more efficient

Based on Table 9, it can be seen that the results of the efficiency test from the 1st to the 3rd ER values are quite consistent between the two resampling methods. So it can be concluded that Jackknife resampling is better than Stable1 resampling.

Table 10. Efficiency Test between Stable1 and Blindfolding

Parameter | SE Stable1 | SE Blindfolding | ER | Information
γ1 | 0.034 | 0.090 | 0.143 | Stable1 is more efficient
γ2 | 0.034 | 0.075 | 0.206 | Stable1 is more efficient
β1 | 0.034 | 0.081 | 0.176 | Stable1 is more efficient
Mean | 0.034 | 0.082 | 0.172 | Stable1 is more efficient

Based on Table 10 it can be seen that the results of the efficiency tests from the 1st to 3rd ER values have been quite consistent between the two resampling methods. So it can be concluded that the Stable1 resampling is better than the Blindfolding resampling.

Table 11. Efficiency Test between Bootstrap and Jackknife

Parameter | SE Bootstrap | SE Jackknife | ER | Information
γ1 | 0.030 | 0.034 | 0.879 | Bootstrap is more efficient
γ2 | 0.037 | 0.035 | 1.062 | Jackknife is more efficient
β1 | 0.038 | 0.032 | 1.410 | Jackknife is more efficient
Mean | 0.035 | 0.034 | 1.129 | Jackknife is more efficient

Based on Table 11, it can be seen that the results of the efficiency tests of the 2nd and 3rd ER values have been quite consistent between the two resampling methods. So it can be concluded that Jackknife resampling is better than Bootstrap resampling.

Table 12. Efficiency Test between Bootstrap and Blindfolding

Parameter | SE Bootstrap | SE Blindfolding | ER | Information
γ1 | 0.030 | 0.090 | 0.111 | Bootstrap is more efficient
γ2 | 0.037 | 0.084 | 0.205 | Bootstrap is more efficient
β1 | 0.038 | 0.081 | 0.220 | Bootstrap is more efficient
Mean | 0.035 | 0.082 | 0.172 | Bootstrap is more efficient

Based on Table 12, it can be seen that the results of the efficiency test from the 2nd to the 3rd ER values are quite consistent between the two resampling methods. So it can be concluded that Bootstrap resampling is better than Blindfolding resampling.


Table 13. Efficiency Test between Jackknife and Blindfolding

Parameter | SE Jackknife | SE Blindfolding | ER | Information
γ1 | 0.034 | 0.090 | 0.116 | Jackknife is more efficient
γ2 | 0.035 | 0.075 | 0.193 | Jackknife is more efficient
β1 | 0.032 | 0.081 | 0.156 | Jackknife is more efficient
Mean | 0.034 | 0.082 | 0.152 | Jackknife is more efficient

Based on Table 13, it can be seen that the results of the efficiency tests for the 2nd and 3rd ER values are quite consistent between the two resampling methods. So it can be concluded that Jackknife resampling is better than Blindfolding resampling.

Based on all pairwise combinations of the resampling methods, the result is that the Jackknife method is the most efficient resampling method, followed by the Stable1, Bootstrap, Stable2, Stable3, and Blindfolding methods.

4. Conclusions

It can be concluded that parameter estimation in the WarpPLS analysis using the Stable1, Stable2, Stable3, Bootstrap, Jackknife, and Blindfolding resampling methods produces different relative efficiency values. The Jackknife resampling method produces a lower variance than the other five resampling methods.

The measures of fit showed that there were no differences in the evaluation of the structural and measurement models across the six resampling methods. The loading values, indicator weights, and hypothesis testing results of the six methods produce the same output. Therefore, based on the two criteria used in this study, namely the calculation of relative efficiency and the measure of model fit, it can be concluded that the Jackknife resampling method is more efficient than the other five methods.

REFERENCES

[1] Solimun, Fernandes, A. A. R. and Nurjannah, "Metode Statistika Multivariat: Pemodelan Persamaan Struktural (SEM) Pendekatan WarpPLS", UB Press, Malang, 2017.

[2] Solimun, "WarpPLS analysis application for effects of personality and commitment on the engagement of ngrebeg mekotek traditional actors in Munggu village, Bali", International Journal of Advanced Science and Technology, 29(4), pp. 2025-2044, 2020.

[3] Himawan, Khabib W., "Pendekatan WarpPLS pada Pemodelan Persamaan Struktural Nilai Perusahaan Real Estate yang Terdaftar di BEI", Skripsi, Universitas Brawijaya, 2017.

[4] Kock, N., "Advanced mediating effects tests, multi-group analyses, and measurement model assessments in PLS-based SEM", International Journal of e-Collaboration (IJeC), 10(1), 1-13, 2014.

[5] Fernandes, A.A.R., Hutahayan, B., Solimun, Arisoesilaningsih, E., Yanti, I., Astuti, A.B., Nurjannah, & Amaliana, L., "Comparison of Curve Estimation of the Smoothing Spline Nonparametric Function Path Based on PLS and PWLS in Various Levels of", IOP Conference Series: Materials Science and Engineering, Forthcoming Issue, 2019.

[6] Fernandes, A.A.R., Widiastuti, D.A., Nurjannah, "Smoothing spline semiparametric regression model assumption using PWLS approach", International Journal of Advanced Science and Technology, 29(4), pp. 2059-2070, 2020.

[7] Fernandes, A.A.R., Solimun, "The Mediation Effect of Strategic Orientation and Innovations on the Effect of Environmental Uncertainties on Performance of Business in the Indonesian Aviation Industry", International Journal of Law and Management, 59(6), pp. 11-20, 2017.

[8] Fernandes, A. A. R., Solimun, "The Consistency of Blindfolding in The Path Analysis Model with Various Number of Resampling", Mathematics and Statistics, 8(3), pp. 233-243, 2020, DOI: 10.13189/ms.2020.080301.

[9] Walpole, R. E., "Pengantar Statistika, edisi ke-3", PT. Gramedia Pustaka Utama, Jakarta, 1995.

[10] Yitnosumarto, S., "Dasar-dasar Statistika, edisi 1", Rajawali, Jakarta, 1990.

[11] Gujarati, D., "Basic Econometrics, Fourth Edition", McGraw Hill, New York, 2004.

[12] Solimun, "Metode Partial Least Square-PLS", CV Citra Malang, Malang, 2010.

[13] Wackerly, D., Mendenhall, W., and Scheaffer, R., "Mathematical Statistics with Applications, 7th edition", Thomson Brooks/Cole, Florida, 2008.

[14] Fernandes, A.R., Solimun, Nurjannah, Hutahayan, B., "Comparison of use of linkage in integrated cluster with discriminal analysis approach", International Journal of Advanced Science and Technology, 29(3), pp. 5654-5668, 2020.

[15] Hasbullah, J., "Social Capital (Menuju Keunggulan Budaya Manusia Indonesia)", MR-United Press, Jakarta, 2006.


[16] Yuwono, T., "Manajemen Otonomi Daerah: Membangun Daerah Berdasar Paradigma Baru", CLoGAPPS, Diponegoro University, Semarang, 2001.

[17] Effendi, B., "Pembangunan Daerah Otonom Berkeadilan", Kurnia Alam Semesta, Uhaindo Media dan Offset, Yogyakarta, 2002.

Mathematics and Statistics 8(5): 493-505, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080502

A Modified Robust Support Vector Regression Approach for Data Containing High Leverage Points and Outliers in the Y-direction

Habshah Midi1, Jama Mohamed2,*

1Faculty of Science and Institute for Mathematical Research, University Putra Malaysia, Malaysia 2Faculty of Mathematics and Statistics, College of Applied and Natural Science, University of Hargeisa, Somaliland

Received May 24, 2020; Revised July 3, 2020; Accepted July 29, 2020

Cite This Paper in the following Citation Styles (a): [1] Habshah Midi, Jama Mohamed , "A Modified Robust Support Vector Regression Approach for Data Containing High Leverage Points and Outliers in the Y-direction," Mathematics and Statistics, Vol. 8, No. 5, pp. 493 - 505, 2020. DOI: 10.13189/ms.2020.080502. (b): Habshah Midi, Jama Mohamed (2020). A Modified Robust Support Vector Regression Approach for Data Containing High Leverage Points and Outliers in the Y-direction. Mathematics and Statistics, 8(5), 493 - 505. DOI: 10.13189/ms.2020.080502. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  The support vector regression (SVR) model is currently a very popular non-parametric method used for estimating linear and non-linear relationships between response and predictor variables. However, there is a possibility of selecting vertical outliers as support vectors that can unduly affect the regression estimates. Outliers from abnormal data points may result in bad predictions. In addition, when both vertical outliers and high leverage points are present in the data, the problem is further complicated. In this paper, we introduce a modified robust SVR technique for the simultaneous presence of these two problems. Three types of SVR models, i.e. eps-regression (ε-SVR), nu-regression (v-SVR) and bound constraint eps-regression (ε-BSVR), with eight different kernel functions are integrated into the new proposed algorithm. Based on 10-fold cross-validation and some model performance measures, the best model with a suitable kernel function is selected. To make the selected model robust, we developed a new double SVR (DSVR) technique based on fixed parameters. This can be used to detect and reduce the weight of influential observations or anomalous points in the data set. The effectiveness of the proposed technique is verified by using a simulation study and some well-known contaminated data sets.

Keywords  Double Support Vector Regression, Fixed Parameters, Vertical Outliers, High Leverage Points, Robust Mahalanobis Distance

1. Introduction

The SVR model was first introduced by [1]. It depends on the powerful concept of the support vector machine (SVM), which is a typical perspective in the field of statistical learning. With a specific and global optimal solution, SVR has become popular in recent years due to its excellent performance and high generalization capability [2]. Some of the reasons for the widespread use of SVR include reduced sensitivity to relative minima, hypothetical output guarantees, and a high degree of versatility to add additional dimensions to the input space, thereby avoiding the model's increasing complexity [3].

The superiority of SVR is its ability to approximate nonlinear relationships by the use of kernel tricks, constructing a sparse model to tackle regression problems [3]. To explain this, consider a data set such that an original input is x ∈ R^n and a target output is y ∈ R^1. The input vector x is first mapped onto a high-dimensional feature space in the SVR method, which is nonlinearly connected to the input space. The idea is to use the kernel trick in a high-dimensional predictor space to approximate the nonlinear relationship within the original input space in linear form [4–5]. The regression function is given by:


$f(x) = \langle w, \varphi(x) \rangle + b$   (1)

where $\varphi(x)$ is a non-linear function, and w and b correspond to the weight and the bias. The goal of SVR is to estimate the values of the parameters w and b, which optimize the expected risk by minimizing the ε-insensitive loss function below:

$L_\varepsilon(y_i) = \begin{cases} 0, & \text{if } |y_i - w^T x_i| \le \varepsilon \\ |y_i - w^T x_i| - \varepsilon, & \text{otherwise} \end{cases}$   (2)

In other words, SVR tries to minimize the bound on the generalization error, so that generalized efficiency is achieved instead of minimizing the actual training error [6]. It is described through the use of kernel functions, a sparse solution, and a margin and number of support vectors controlled by Vapnik-Chervonenkis (VC) theory [7]. The key terminology of SVR is shown in Figure 1.

[Figure 1: illustration of the basic SVR terminology.]
Figure 1. Basic terminology of SVR

According to [8], most applications for real-world data are subject to anomalies and noise, which is a common issue that leads to misleading and false conclusions. Barnett and Lewis [9] have described an outlier as one that appears to deviate significantly from other members of the sample it occurs in. Outliers are observations which are far from the bulk of the rest of the data [10]. In SVR, it is possible to select outlying observations (outliers and high leverage points) as support vectors, which may affect the estimation process [11–12]. It is, therefore, useful to implement some robust methods in SVR to remedy both problems of outliers and high leverage points.

Several studies have been conducted on utilizing SVR to analyze data containing outliers. For instance, Jordaan and Smits [13], using the benefits of the Lagrange multipliers, developed the traditional SVR method for outlier detection in relation to the Karush-Kuhn-Tucker (KKT) requirements. The main drawbacks of this approach are high calculation costs, difficult operation for non-expert users and the possibility of the emergence of masking and swamping problems. To solve the problems of the standard method, [14] suggested the µ-ɛ-SVR outlier detection procedure, which uses a new regularization parameter (abbreviated as μ). The possible shortcomings of this method are the detection of one outlier per iteration, high computational costs and the lack of a clear rule for selecting the threshold value ɛ, which complicates the approach. In order to avoid the disadvantages of the above techniques, [15] presented a functional technique for the detection of outliers, using non-sparse ε-SVR, that helps to minimize the time cost and incorporates fixed parameters. The robust SVR based on this approach is often referred to as Double SVR (DSVR). The objective of this study is to modify the DSVR technique by integrating three types of SVR models, i.e. epsilon-SVR (ε-SVR), nu-SVR (v-SVR), and epsilon bound-constrained SVR (ε-BSVR), with different kernel functions into the algorithm.

2. Methodology

2.1. Standard SVR Models

The SVR concept was suggested by [1] to solve problems of function fitting, and is called ɛ-SVR. Given a set of data points {(x1, y1), ..., (xn, yn)} such that $x_i \in R^n$ is an input and $y_i \in R^1$ is a target output, we want to employ ε-SVR. The optimization problem is:

Minimize $\dfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*)$
subject to $y_i - w^T x_i - b \le \varepsilon + \xi_i$, $w^T x_i + b - y_i \le \varepsilon + \xi_i^*$, $\xi_i, \xi_i^* \ge 0$, $i = 1, 2, ..., n$   (3)

This can be solved as a quadratic programming problem. C is the parameter that controls the amount of influence of the error, $\xi_i, \xi_i^*$ are slack variables and b is the bias term.

In [16], the authors introduced an improved SVR model, called nu-SVR (v-SVR), which uses a 'v' parameter to monitor the amount of training error and support vectors. To put it another way, the parameter v sets the proportion of support vectors we want to hold in our solution relative to the total number of samples in the dataset. The size of v is between 0 and 1. That is, 0 < v < 1. After substituting ɛ for v, (3) is modified as can be seen in (4).

Minimize $\dfrac{1}{2}\|w\|^2 + C\left(v\varepsilon + \dfrac{1}{n}\sum_{i=1}^{n} (\xi_i + \xi_i^*)\right)$
subject to $y_i - w^T x_i - b \le \varepsilon + \xi_i$, $w^T x_i + b - y_i \le \varepsilon + \xi_i^*$, $\xi_i, \xi_i^* \ge 0$, $i = 1, 2, ..., n$   (4)
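The ε-SVR and v-SVR formulations in (3) and (4) correspond to standard implementations, so the two parameterizations can be illustrated with scikit-learn (ε-BSVR has no off-the-shelf scikit-learn counterpart and is not shown). This is only a sketch on synthetic data with arbitrary parameter values, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVR, NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sinc(X).ravel() + rng.normal(scale=0.1, size=100)

# eps-regression: epsilon fixes the width of the insensitive tube, C the error penalty.
eps_svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

# nu-regression: nu (0 < nu <= 1) controls the fraction of support vectors instead of epsilon.
nu_svr = NuSVR(kernel="rbf", C=10.0, nu=0.5).fit(X, y)

print(len(eps_svr.support_), len(nu_svr.support_))  # number of support vectors in each fit
```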


The difference between ε-SVR and v-SVR is that in v-SVR we can control the number of support vectors, while in ε-SVR we can control the amount of error in the model instead of the support vectors.

Finally, the method proposed in [17], in which the squared bias term is applied to the objective function, is known as the epsilon bound-constrained SVR (ε-BSVR) formulation. The optimization problem of ε-BSVR is:

Minimize $\dfrac{1}{2}\|w\|^2 + \dfrac{1}{2}b^2 + \dfrac{C}{n}\sum_{i=1}^{n} (\xi_i + \xi_i^*)$
subject to $y_i - w^T x_i - b \le \varepsilon + \xi_i$, $w^T x_i + b - y_i \le \varepsilon + \xi_i^*$, $\xi_i, \xi_i^* \ge 0$, $i = 1, 2, ..., n$   (5)

This can be solved via Wolfe's dual as a quadratic programming problem with box constraints only. The parameters of ε-BSVR are the same as the parameters of ε-SVR, except that the parameter b, the bias term, is restricted to a given value.

2.2. Kernel Functions

To employ SVR models, kernel functions are to be used. The kernel function measures the dot product of two vectors $x_i$ and $x_j$ in the feature space $\varphi(x_i)$ and $\varphi(x_j)$. That is,

$K(x_i, x_j) = (\varphi(x_i) \cdot \varphi(x_j))$   (6)

In the SVR procedure, $K(x_i, x_j)$ is used in some form of nonlinear relationship to map the input space into a high dimensional predictor space. Several kernel function types are employed in SVR. The type of kernel function chosen affects the estimates and the computational complexity of implementing the designed SVR. Table 1 outlines eight of the common kernel function types that are considered for this research study.

Table 1. Eight kernel functions considered for the study

Kernel type | Function
Linear | $k(x_i, x) = \langle x_i, x \rangle$
RBF or Gaussian | $k(x_i, x) = \exp(-\sigma \|x_i - x\|^2)$
Polynomial | $k(x_i, x) = (\text{scale} \cdot \langle x_i, x \rangle + \text{offset})^{\text{degree}}$
Hyperbolic Tangent | $k(x_i, x) = \tanh(\text{scale} \cdot \langle x_i, x \rangle + \text{offset})$
Bessel function | $k(x_i, x) = \text{Bessel}^{n}_{(\nu+1)}(\sigma \|x_i - x\|)\,(\|x_i - x\|)^{-n(\nu+1)}$
Laplace RBF | $k(x_i, x) = \exp(-\sigma \|x_i - x\|)$
ANOVA Radial Basis | $k(x_i, x) = \left(\sum_{k=1}^{n} \exp(-\sigma (x_i^k - x^k)^2)\right)^{d}$
Spline (one dimension) | $k(x_i, x) = 1 + x_i x + x_i x \min(x_i, x) - \dfrac{x_i + x}{2}\min(x_i, x)^2 + \dfrac{\min(x_i, x)^3}{3}$
Spline (multidimensional) | $k(x_i, x) = \prod_{k=1}^{n} k(x_i^k, x^k)$
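Off-the-shelf libraries typically ship only a few of these kernels, but most SVR implementations accept a user-supplied kernel. The sketch below defines the Laplace RBF from Table 1 as a callable and passes it to scikit-learn's SVR; it is an illustrative stand-in only (the study itself was not run with this code), and σ = 1 is chosen simply to mirror the fixed-parameter setting used later.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVR

def laplace_kernel(X, Y, sigma=1.0):
    # Laplace RBF from Table 1: k(x, x') = exp(-sigma * ||x - x'||)
    return np.exp(-sigma * cdist(X, Y, metric="euclidean"))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

# SVR accepts a callable kernel that returns the Gram matrix between two sets of rows.
model = SVR(kernel=laplace_kernel, C=10.0, epsilon=0.1).fit(X, y)
print(model.predict(X[:5]))
```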


2.3. Double SVR (DSVR) Algorithm

The Double SVR (DSVR) is proposed by [12]. It is a practical method for detecting influential observations by considering three directions: type of transformation, sparseness and robustness. The efficiency of this approach arises from the fact that it decreases the computational cost and can identify outliers without having to delete them. This is achieved by minimizing the weights of outlying points in the data set. The DSVR is based on the fixed parameters SVR (FPSVR) method [8]. The benefit of this technique is to control the free parameters ε, C and σ, where σ is the hyperparameter of the RBF kernel.

The method of DSVR based on fixed parameters can be summarized as follows:

Step 1: Based on FPSVR (the fixed parameters of ε-SVR are ε = 0, C = 10000 and σ = 1), find the fitted values of the ε-SVR model.

$z = y_{SVR} = \sum_{i=1}^{n} (\alpha_i - \alpha_i^{*})\,k(x_i, x) + b$   (7)

where $\alpha_i, \alpha_i^{*} \in [0, C]$ are Lagrange multipliers, $k(x_i, x)$ is the kernel function and b is the constant (bias term). Or alternatively, find the absolute residuals based on the FPSVR (the fixed parameters are ε = 0, C = 0.0001 and σ = 1 in the case of using residuals, z). The absolute residuals are given in (8).

$z = |y - y_{SVR}|$   (8)

Step 2: Any point with an absolute z value larger than the cutoff point is considered to be an outlier. The cutoff point is given in (9).

$CP = 2\,\text{median}(z) + 2\sqrt{\dfrac{\text{var}(z)}{2n}}$   (9)

Step 3: Compute the weight function as given in (10).

$\omega_i = \min[1, CP/z_i]$   (10)

Step 4: Estimate the final robust ε-SVR by substituting (10) into (7) as follows:

$f(x) = \sum_{i=1}^{n} \omega_i (\alpha_i - \alpha_i^{*})\,k(x_i, x) + b$   (11)

2.4. The Proposed Robust SVR Method

As the standard SVR models are not robust against outliers and high leverage points, DSVR was developed. Nevertheless, the DSVR considers only ε-SVR with the RBF kernel. Hence, we improved the DSVR by combining all three forms of SVR into the algorithm with different kernel functions. The hyperparameter of any kernel is abbreviated as h. In this method, the value of h is set equal to 1. If the kernel function has more than one hyperparameter, like the polynomial kernel, they all need to be equal to 1. The proposed robust SVR approach based on fixed parameters can be summarized as follows:

Step 1: Based on FPSVR (in ε-SVR and ε-BSVR the fixed parameters are ε = 0, C = 10,000 and h = 1, and in v-SVR, ε = 0, v = 1 and h = 1), find the fitted values of the selected SVR model.

$z = y_{SVR} = \sum_{i=1}^{n} (\alpha_i - \alpha_i^{*})\,k(x_i, x) + b$   (12)

Or alternatively, based on the FPSVR (in ε-SVR and ε-BSVR the fixed parameters are ε = 0, C = 0.0001 and h = 1, and in v-SVR, ε = 0, v = 0.01 and h = 1), find the absolute residuals.

$z = |y - y_{SVR}|$   (13)

The values of the parameters C and v change between (12) and (13) because in both equations we consider that the fitted values and the estimated errors should be large enough to detect the vertical outliers.

Step 2: Any point with an absolute z value larger than the cutoff point is considered to be an outlier.

$CP = \text{Median}(z) + 3\,\text{MAD}(z)$   (14)

where $\text{MAD}(z) = b\,\text{Med}\{|z_i - \text{Med}(z)|\}$ and b is 1.4826 for the normal distribution. For other distributions, b is the inverse of the 75th percentile (Q3) of the raw MAD [18–19].

Step 3: The suspect high leverage points are detected by using the robust Mahalanobis distance (RMD) based on the Minimum Volume Ellipsoid (MVE) developed by [20] as:

$RMD_i = \sqrt{(X_i - T_R(X))^{T} C_R(X)^{-1} (X_i - T_R(X))}$, for i = 1, 2, .., n   (15)

where $T_R(X)$ and $C_R(X)$ are the robust location and shape estimates of the MVE, respectively. Rahmatullah Imon [21] suggested a cut-off point for the robust Mahalanobis distances as:

$CP = \text{Median}(RMD_i) + 3\,\text{MAD}(RMD_i)$   (16)

The RMD is only valid for small dimensional data (P < n), where P is the number of predictor variables and n is the number of data points.

Step 4: Compute the weight function:

$w_i = \begin{cases} \min[1, CP/z_i] & \text{for outlier cases} \\ \min[1, CP/RMD_i] & \text{for HLPs} \end{cases}$   (17)

where z is the estimated, predicted or fitted values, as in (12), and HLPs are high leverage points.

Step 5: Estimate the final robust SVR model as follows:


2.5. Performance Measures

To measure the accuracy and determine the best SVR model, this study considers the following three types of errors, as stated in [22]:
1. The training error is the error obtained when the trained model is run on the training data.
2. The test error is the error obtained when the trained model is run on the test data.
3. The k-fold cross-validation error is the average error obtained when the trained model is run on each of k subsets (folds) of the data.

We can also determine the goodness of the model by using the following measures:
1. The R-square (R²) gives the percentage of the total variation of the response variable that is explained by the predictor variable(s):

R^2 = \left(1 - \frac{RSS}{TSS}\right) \times 100    (19)

where RSS is the Residual Sum of Squares and TSS is the Total Sum of Squares.
2. The predicted R-squared (R²pred) reveals how well a regression model predicts outcomes for new data:

R^2_{pred} = \left(1 - \frac{PRESS}{TSS}\right) \times 100    (20)

where PRESS is the Predicted Residual Error Sum of Squares,

PRESS = \sum_{i=1}^{n} (y_i - \hat{y}_{i,-i})^2    (21)

and \hat{y}_{i,-i} is the prediction for the i-th observation from a model fitted without it.

Finally, the assessment of the merits of the selected SVR, the DSVR and the proposed robust SVR methods can be made by using the following performance measures:
1. The Mean Square Error (MSE) is used as an efficiency test to determine which methods are best in the various cases:

MSE = \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n}    (22)

where \hat{y}_i is the fitted value and n is the number of data points.
2. The Root Mean Square Error (RMSE) indicates the closeness of the data points to the predicted values of the model; lower RMSE values indicate a better fit:

RMSE = \sqrt{\frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n}}    (23)

3. Results and Discussion

3.1. Belgian Phone Data Set

The Belgian phone data, collected from the Belgian Statistical Survey, is a real data set that contains vertical outliers. It consists of the total number of international phone calls made between the years 1950 and 1973 (in tens of millions of calls) [20]. The cross-validation error is determined using 10-fold cross-validation; we used it only to identify the best model for this data set.

Tables 2, 3 and 4 present the performance of the three different SVR types with eight different kernel functions on the Belgian phone data set, obtained by tuning the parameters ε, C and v with 10-fold cross-validation in the grid search. The optimum parameters obtained for ε-SVR and ε-BSVR by the grid search are ε = 0.3 and C = 251, while for v-SVR the optimum parameters are ε = 0.4 and v = 0.1. These results show that the MSE is lowest when the Laplace kernel is used. In addition, the determination coefficient (R²) is presented to measure the performance (goodness of fit) of each model. For this data set, ε-SVR with the Laplace kernel is the best model, since its MSE and cross-validation errors are minimal; for the other SVR models, R² is also relatively high.

Figure 2 displays the outlier detection of the ε-SVR technique with the Laplace kernel based on the fixed parameters (C = 10,000, σ = 1 and ε = 0), using the median absolute deviation of the absolute fitted values. We can see that cases 15-20 are vertical outliers.

In Figure 3, we compared the performance of the three methods by considering different values of the parameters (ε = 0.1, 0.2, 0.3; C = 1, 20, 50; and σ = 0.5, 1, 5). Values of C greater than 50 give approximately the same results as those shown. Furthermore, Figure 3 shows that ε-SVR is less efficient in reducing the estimation effects of the outliers, whereas the proposed robust SVR and DSVR methods compete with each other. However, for this data set, our proposed method is marginally better than DSVR, attaining the lowest MSE and RMSE values.
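For reference, a minimal Python sketch of the measures in (19)-(23) is given below; it is not the authors' code. The PRESS statistic in (21) is computed here by leave-one-out refitting, which is one common way of obtaining \hat{y}_{i,-i}; the model, X and y are assumed to be supplied by the user (X and y as NumPy arrays).

```python
import numpy as np
from sklearn.base import clone
from sklearn.svm import SVR


def mse(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.mean((y - y_hat) ** 2)                      # Eq. (22)


def rmse(y, y_hat):
    return np.sqrt(mse(y, y_hat))                         # Eq. (23)


def r_square(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    rss = np.sum((y - y_hat) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return (1 - rss / tss) * 100                          # Eq. (19)


def predicted_r_square(model, X, y):
    """Eqs. (20)-(21): PRESS from leave-one-out refits of the given (unfitted) model."""
    y = np.asarray(y)
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        fitted = clone(model).fit(X[mask], y[mask])
        press += (y[i] - fitted.predict(X[i:i + 1])[0]) ** 2
    tss = np.sum((y - y.mean()) ** 2)
    return (1 - press / tss) * 100


# Example usage with an epsilon-SVR (illustrative parameter values):
# model = SVR(kernel="rbf", C=15, epsilon=0.0)
# print(predicted_r_square(model, X, y))
```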


Table 2. ε-SVR (ε = 0.3, C = 251) of Belgian phone data set

Kernel Type Linear Poly RBF Tanh Laplace Bessel ANOVA Spline

No. of Support Vectors Used 13 13 13 24 24 16 9 19

MSE 36.860 36.860 5.307 1.1×10⁷ 3.8261 16.562 5.861 1.2×10⁵

Cross Validation Error 39.118 39.529 20.561 1.2×10⁷ 12.882 48.754 9.726 5.2×10⁴

R² 0.296 0.296 0.8711 0.004 0.990 0.608 0.860 0.062

Table 3. v-SVR (ε = 0.4, v = 0.1) of Belgian phone data set

Kernel Type Linear Poly RBF Tanh Laplace Bessel ANOVA Spline

No. of Support Vectors Used 4 4 10 4 18 4 5 4

MSE 56.119 56.119 30.501 42.775 34.664 55.622 45.006 49.173

Cross Validation Error 55.493 56.157 33.513 53.721 38.373 56.366 47.303 48.729

R² 0.296 0.296 0.846 0.292 0.845 0.343 0.737 0.295

Table 4. ε-BSVR (ε = 0.3, C = 251) of Belgian phone data set

Kernel Type Linear Poly RBF Tanh Laplace Bessel ANOVA Spline

No. of Support Vectors Used 13 13 14 24 24 16 9 20

MSE 36.864 36.864 3.4204 1.3×10⁷ 3.862 16.589 5.942 4.2×10⁵

Cross Validation Error 46.320 46.731 17.504 1.5×10⁷ 41.738 26.773 9.356 2.1×10⁵

R² 0.296 0.296 0.937 0.006 0.989 0.6077 0.859 0.060

Figure 2. Outlier detection of ε-SVR with Laplace kernel of Belgian phone data set


Figure 3. The MSE and RMSE of ε-SVR, DSVR and the proposed robust SVR methods for Belgian phone data set

3.2. Hawkins-Bradu-Kass Data Set

In this example, we used the well-known artificial data set created by [23]. It consists of 75 observations with one dependent and three independent variables, and the first 14 cases are influential observations. Seventy percent of this data set was randomly selected as training data and the remaining thirty percent as test data. The cross-validation error is determined using 10-fold cross-validation; we used it only to determine the appropriate model and used the complete data set for the rest of the analysis.

Tables 5, 6 and 7 show the results of the three different forms of SVR with eight different kernel functions on the HBK data set, obtained by tuning the parameters ε, C and v in the grid search with a 10-fold cross-validation error. The optimum parameters obtained for ε-SVR and ε-BSVR are ε = 0 and C = 15, while the optimum parameters for v-SVR are ε = 0.2 and v = 0.1. Three types of errors, i.e. training, test and cross-validation errors, are considered to optimize the accuracy of the model selection. From the results, we can see that when the RBF (Gaussian) kernel is used, these errors are small. Moreover, the predicted R² (the R² of the test data) is provided to determine the goodness of fit of each model. Based on these findings, the suitable SVR model for this data set is ε-BSVR with the RBF kernel, as its test and cross-validation errors are minimal and its predicted R² is reasonably large compared to the other SVR models.

The results in Figure 4, using the RMD based on the MVE, show the detection of high leverage points for the HBK data set. We can observe that cases 1-14 are detected as influential observations.

In Figure 5, we considered different values of the parameters (ε = 0.1, 0.2, 0.3; C = 1, 50, 100; and σ = 0.5, 1, 5) for evaluating the three methods' performances. Values of C greater than 100 yield approximately the same results as those reported. Furthermore, the results in Figure 5 clearly show the efficiency of the proposed robust SVR approach over the DSVR and ε-BSVR techniques in terms of achieving lower MSE and RMSE values. Based on these outcomes, we can infer that the use of the proposed robust SVR method is recommended for the HBK data set.
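The tuning procedure described above (a grid search over ε, C and v with 10-fold cross-validation and a 70/30 train-test split) can be sketched in Python as follows. The data below are a synthetic placeholder for the HBK variables, the parameter grids are illustrative rather than the exact grids used in the paper, and scikit-learn's SVR/NuSVR stand in for the R implementation.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR, NuSVR

# Placeholder data with 75 cases and 3 predictors, standing in for the HBK data set.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(75, 3))
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=75)

# 70/30 train-test split, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=1)

# Illustrative grids; the paper searches over epsilon and C (epsilon-SVR / epsilon-BSVR)
# and over nu and C (v-SVR).
eps_grid = {"epsilon": [0.0, 0.1, 0.2, 0.3], "C": [1, 15, 50, 100, 251]}
nu_grid = {"nu": [0.1, 0.2, 0.5], "C": [1, 15, 50, 100]}

eps_search = GridSearchCV(SVR(kernel="rbf"), eps_grid, cv=10,
                          scoring="neg_mean_squared_error").fit(X_train, y_train)
nu_search = GridSearchCV(NuSVR(kernel="rbf"), nu_grid, cv=10,
                         scoring="neg_mean_squared_error").fit(X_train, y_train)

print("epsilon-SVR best:", eps_search.best_params_, "CV MSE:", -eps_search.best_score_)
print("v-SVR best:      ", nu_search.best_params_, "CV MSE:", -nu_search.best_score_)
print("test MSE:        ", np.mean((y_test - eps_search.predict(X_test)) ** 2))
```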


Table 5. ε-SVR (ε = 0, C = 15) of HBK data set

Kernel Type Linear Poly RBF Tanh Laplace Bessel ANOVA Spline

No. of Support Vectors Used 53 53 53 53 53 53 53 53

MSE (Training) 6.988 6.987 0.025 2.3×10³ 0.000 0.527 0.333 0.299

MSE (Test) 5.800 5.800 0.974 1.5×10³ 7.990 1.360 0.853 1.086

Cross Validation Error 9.426 8.761 1.488 3.1×10³ 7.534 3.053 0.940 4.213

R²pred 0.670 0.670 0.931 0.181 0.731 0.913 0.948 0.931

Table 6. v-SVR (ε = 0.2, v = 0.1) of HBK data set

Kernel Type Linear Poly RBF Tanh Laplace Bessel ANOVA Spline

No. of Support Vectors Used 8 8 18 7 28 9 10 12

MSE (Training) 15.325 16.325 4.120 8.785 14.33 9.701 0.548 0.350

MSE (Test) 16.260 16.260 5.183 10.382 18.563 10.785 1.187 1.150

Cross Validation Error 15.786 15.222 6.0804 9.641 17.520 10.803 1.267 0.621

R²pred 0.672 0.673 0.946 0.419 0.847 0.821 0.926 0.928

Table 7. ε-BSVR (ε = 0, C = 15) of HBK data set

Kernel Type Linear Poly RBF Tanh Laplace Bessel ANOVA Spline

No. of Support Vectors Used 53 53 53 53 53 53 53 53

MSE (Training) 6.499 6.491 0.082 2.0×10³ 0.000 1.056 0.254 0.299

MSE (Test) 9.443 9.508 0.636 2.1×10³ 5.550 0.892 0.797 1.086

Cross Validation Error 12.235 11.100 1.190 1.7×10³ 6.733 2.368 1.237 1.086

R²pred 0.654 0.654 0.956 0.001 0.827 0.927 0.937 0.931

Figure 4. Detection of high leverage points based on robust Mahalanobis distance for HBK data set


Figure 5. The MSE and RMSE of ε-BSVR, DSVR and the proposed robust SVR methods for HBK data set

3.3. Simulation Study

Two simulation studies are conducted to examine the merits of our newly developed robust SVR technique relative to the existing methods (standard SVR and DSVR) in the presence of outliers and high leverage points. The first simulation study deals with the linear case and the second with the nonlinear case. The simulation studies were performed using R software.

3.3.1. Simulation I: Linear Case

A model with two independent variables and different sample sizes, namely n = 50, 100 and 150, is considered in this simulation study. For each sample size, the following relationship is used to produce the clean data [24]:

y = 1 + 2x_1 + 3x_2 + r_i    (24)

where x_1 and x_2 are generated from a uniform distribution, U(-5, 5), and the errors r_i are generated from a standard normal distribution, N(0, 1). Some clean observations are then replaced by contaminated observations in order to create outliers in the y-direction and high leverage points. For each sample, αn observations of x_1, x_2 and y are replaced by outlying values, where α is the amount of contamination. For high leverage points, the last good observations of x_1 and x_2 are replaced with contaminated observations where x_1 ~ U(44.5, 50) and x_2 ~ U(50.5, 55), holding the corresponding y values the same as in the clean data. To create vertical outliers, the third and subsequent good observations are replaced with y ~ U(55.5, 60), with x_1 and x_2 the same as in the clean data. We also considered different contamination percentages (10%, 15% and 20%) and different values of the parameters (ε = 0.1, 0.3 (as small as possible) and C = 50, 100).

For this simulation study, ε-SVR with a linear kernel is considered. On the basis of (22) and (23), MSE and RMSE are obtained for ε-SVR, DSVR and the proposed robust SVR methods in each of the 1000 simulation runs, and the average MSE and RMSE over the 1000 iterations are recorded. These averages are used to assess the merits of the methods mentioned above.

The complete results of this simulation study, based on the proposed robust SVR method, the DSVR and ε-SVR, are presented graphically in Figures 6, 7 and 8, which show the estimates of these techniques for the different sample sizes and levels of contamination. The DSVR successfully detected the vertical outliers in this simulation study based on the absolute residuals of the SVR; however, it does not detect high leverage points. On the other hand, by using the RMD based on the MVE, the proposed robust SVR approach detects both the vertical outliers and the high leverage points. Additionally, we can clearly see that the proposed robust SVR approach achieves very low values of MSE and RMSE relative to DSVR and ε-SVR for all combinations of sample size and percentage of contamination. This reveals that the proposed robust SVR method is a more efficient technique than the other approaches, despite the presence of various percentages of contamination points in both the x- and y-directions.
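The data-generating process in (24), including the contamination scheme, can be sketched as follows (the study itself used R). The description does not fully specify how the αn contaminated cases are split between high leverage points and vertical outliers, so the sketch splits them evenly; the random seed and the exact positions of the vertical outliers are likewise illustrative assumptions.

```python
import numpy as np


def simulate_linear(n=100, alpha=0.10, seed=0):
    """Clean data from (24): y = 1 + 2*x1 + 3*x2 + r, x1, x2 ~ U(-5, 5), r ~ N(0, 1).
    alpha*n cases are then contaminated; the split between high leverage points and
    vertical outliers is taken as 50/50 here (an assumption)."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(-5, 5, n)
    x2 = rng.uniform(-5, 5, n)
    y = 1 + 2 * x1 + 3 * x2 + rng.normal(0, 1, n)

    m = int(alpha * n)
    m_h, m_v = m // 2, m - m // 2

    # High leverage points: replace the last m_h x-observations, keep their y values.
    x1[n - m_h:] = rng.uniform(44.5, 50, m_h)
    x2[n - m_h:] = rng.uniform(50.5, 55, m_h)

    # Vertical outliers: replace the y values of m_v other (good) observations,
    # here starting from the third observation as the text describes.
    y[2:2 + m_v] = rng.uniform(55.5, 60, m_v)

    return np.column_stack([x1, x2]), y


# Example: 1000 Monte Carlo replicates at 10% contamination for n = 100.
# for rep in range(1000):
#     X, y = simulate_linear(n=100, alpha=0.10, seed=rep)
#     ...fit epsilon-SVR, DSVR and the proposed robust SVR, then record MSE/RMSE...
```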


Figure 6. The MSE and RMSE of the ɛ-SVR, the DSVR and the proposed robust SVR methods for different sample sizes and 10% contamination

Figure 7. The MSE and RMSE of the ɛ-SVR, the DSVR and the proposed robust SVR methods for different sample sizes and 15% contamination

Figure 8. The MSE and RMSE of the ɛ-SVR, the DSVR and the proposed robust SVR methods for different sample sizes and 20% contamination

3.3.2. Simulation II: Nonlinear Case

A model with two independent variables and different sample sizes, namely n = 50, 100 and 250, is considered in this simulation study. For each sample size, the following relationship is used to generate the clean data [12]:

y = \frac{\sin(x_1^2 + x_2^2)}{x_1^2 + x_2^2} + r_i    (25)

The x_1 and x_2 values are generated from a standard uniform distribution, U(0, 1), and the residuals (additive errors) r_i, i = 1, 2, ..., n, are generated from a standard normal distribution, N(0, 1). Some good observations are then replaced by contaminated observations in order to generate outliers and high leverage points. Different percentages of contamination, 10%, 15% and 20%, are considered. Contamination is achieved by replacing some clean observations of x_1, x_2 and y by contaminated observations where x_1 ~ U(4.9, 5), x_2 ~ U(29.9, 30) and y ~ U(9.9, 10). Also, specific values of the parameters (ε = 0.1, 0.3 (as small as possible), v = 0.2, 0.5 and σ = 1, 5) are taken into consideration.

In this simulation study, v-SVR with the RBF kernel is considered. The MSE and RMSE based on v-SVR, DSVR and the proposed robust SVR methods are obtained for each of the 1000 simulation runs on the basis of (22) and (23), and the MSE and RMSE averages over the 1000 iterations are recorded. These averages are used to evaluate the effectiveness of the techniques considered.
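Similarly, a sketch of the nonlinear data-generating process in (25) is given below. Which clean observations are replaced is chosen at random here, since the text says only that some good observations are contaminated; the function name and seed are illustrative.

```python
import numpy as np


def simulate_nonlinear(n=100, alpha=0.10, seed=0):
    """Clean data from (25): y = sin(x1^2 + x2^2)/(x1^2 + x2^2) + r with
    x1, x2 ~ U(0, 1) and r ~ N(0, 1); alpha*n cases are then replaced by
    contaminated observations in both the x- and y-directions."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(0, 1, n)
    x2 = rng.uniform(0, 1, n)
    s = x1 ** 2 + x2 ** 2
    y = np.sin(s) / s + rng.normal(0, 1, n)

    m = int(alpha * n)
    idx = rng.choice(n, size=m, replace=False)   # which cases to contaminate (arbitrary)
    x1[idx] = rng.uniform(4.9, 5.0, m)
    x2[idx] = rng.uniform(29.9, 30.0, m)
    y[idx] = rng.uniform(9.9, 10.0, m)

    return np.column_stack([x1, x2]), y
```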


Figures 9, 10 and 11 present graphically the results of the comparison of the proposed robust SVR technique, the DSVR and v-SVR in this simulation study, describing the estimates of these methods for the different sample sizes and levels of contamination. In this simulation study, the vertical outliers and high leverage points are effectively identified by both the DSVR and the proposed robust SVR approach, using the absolute fitted values and the robust Mahalanobis distance based on the MVE, respectively. Furthermore, we can clearly see that the proposed robust SVR method has smaller MSE and RMSE than the DSVR and v-SVR for all combinations of sample size and percentage of contamination. This demonstrates that the proposed robust SVR approach is the best of the compared techniques in the presence of different levels of contamination.

Figure 9. The MSE and RMSE of the v-SVR, the DSVR and the proposed robust SVR methods for different sample sizes and 10% contamination

Figure 10. The MSE and RMSE of the v-SVR, the DSVR and the proposed robust SVR methods for different sample sizes and 15% contamination

Figure 11. The MSE and RMSE of the v-SVR, the DSVR and the proposed robust SVR methods for different sample sizes and 20% contamination


4. Conclusions

The regular SVR models are good models for estimating the linear and nonlinear relationships in nonparametric regression problems when there are no anomalous data points. However, the approximation of the standard SVR models may no longer be efficient because the abnormal observations may be chosen as support vectors. The DSVR was proposed as an alternative technique for rectifying this problem. Nonetheless, only ɛ-SVR with the RBF kernel was considered in the DSVR approach, and its weights are based on the sample variance of the fitted values, which can still be influenced by outliers. In order to incorporate the DSVR algorithm into all three kinds of SVR models (ε-SVR, v-SVR and ε-BSVR) with eight different kernel functions, and to resolve the limitations of the DSVR procedure, we have proposed a new robust SVR method based on the MAD and on the RMD using the MVE approach, which remedies the problems of outliers and high leverage points at the same time, with greater performance, by giving them a lower weight. Two numerical examples and two simulation studies, covering linear and nonlinear cases with several SVR parameter settings and different sample sizes, are employed to investigate the performance of the proposed robust SVR method. MSE and RMSE are used to measure the performances of the proposed robust SVR, the DSVR and the selected SVR methods. The overall results show that the newly proposed technique performs well, as it produces the smallest MSE and RMSE. Thus, our proposed robust SVR method is a good alternative for dealing with contaminated data sets. For future research, we suggest further work on a robust SVR for high-dimensional data (P > n) and on utilizing various loss functions, such as the quadratic, Huber and Tukey loss functions, to make a significant contribution to the performance of the estimation process.

Acknowledgements

This research paper is part of an MSc final year project under the School of Graduate Studies, University Putra Malaysia.

REFERENCES

[1] C. Cortes, V. Vapnik. Support-vector networks, Machine Learning, Vol. 20, No. 3, pp. 273-297, 1995.

[2] H. Yang, K. Huang, L. Chan, I. King, M. R. Lyu. Outliers treatment in support vector regression for financial time series prediction, International Conference on Neural Information Processing, Springer, pp. 1260-1265, 2004.

[3] V. Ceperic, G. Gielen, A. Baric. Sparse ɛ-tube support vector regression by active learning, Soft Computing, Vol. 18, No. 6, pp. 1113-1126, 2014.

[4] A. J. Smola, B. Schölkopf. A tutorial on support vector regression, Statistics and Computing, Vol. 14, No. 3, pp. 199-222, 2004.

[5] V. N. Vapnik. The nature of statistical learning theory, Statistics for Engineering and Information Science, New York, Vol. 21, pp. 1003-1008, 2000.

[6] D. Basak, S. Pal, D. C. Patranabis. Support Vector Regression, Neural Information Processing, Vol. 11, No. 10, 2007.

[7] A. Mariette, R. Khanna. Support vector regression, in Efficient Learning Machines, Apress, Berkeley, CA, pp. 67-80, 2015.

[8] S. Rana, W. Dhhan, H. Midi. Fixed parameters support vector regression for outlier detection, Economic Computation & Economic Cybernetics Studies & Research, Vol. 52, No. 2, 2018.

[9] V. Barnett, T. Lewis. Outliers in Statistical Data, John Wiley and Sons, New York, 1994.

[10] P. J. Rousseeuw, B. C. Van Zomeren. Unmasking multivariate outliers and leverage points, Journal of the American Statistical Association, Vol. 85, No. 411, pp. 633-639, 1990.

[11] C. C. Chuang, S. F. Su, J. T. Jeng, C. C. Hsiao. Robust support vector regression networks for function approximation with outliers, IEEE Transactions on Neural Networks, Vol. 13, No. 6, pp. 1322-1330, 2002.

[12] W. Dhhan, H. Midi, T. Alameer. Robust Support Vector Regression Model in the Presence of Outliers and Leverage Points, Modern Applied Science, Vol. 11, No. 8, 2017.

[13] E. M. Jordaan, G. F. Smits. Robust outlier detection using SVM regression, Paper presented at the IEEE International Joint Conference on Neural Networks, 2004.

[14] J. Nishiguchi, C. Kaseda, H. Nakayama, M. Arakawa, Y. Yun. Modified support vector regression in outlier detection, Paper presented at the 2010 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-5, 2010.

[15] W. Dhhan, S. Rana, H. Midi. Non-sparse ϵ-insensitive support vector regression for outlier detection, Journal of Applied Statistics, Vol. 42, No. 8, pp. 1723-1739, 2015.

[16] B. Schölkopf, A. J. Smola, R. C. Williamson, P. L. Bartlett. New support vector algorithms, Neural Computation, Vol. 12, No. 5, pp. 1207-1245, 2000.

[17] O. L. Mangasarian, D. R. Musicant. Successive overrelaxation for support vector machines, IEEE Transactions on Neural Networks, Vol. 10, pp. 1032-1037, 1999.

[18] C. Leys, C. Ley, O. Klein, P. Bernard, L. Licata. Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median, Journal of Experimental Social Psychology, Vol. 49, No. 4, pp. 764-766, 2013.

[19] P. J. Rousseeuw, C. Croux. Alternatives to the median absolute deviation, Journal of the American Statistical Association, Vol. 88, No. 424, pp. 1273-1283, 1993.


[20] A. M. Leroy, P. J. Rousseeuw. Robust Regression and Outlier Detection, Wiley Series in Probability and Mathematical Statistics, New York, 1987.

[21] A. Rahmatullah Imon. Identifying multiple influential observations in linear regression, Journal of Applied Statistics, Vol. 32, No. 9, pp. 929-946, 2005.

[22] T. Hastie, R. Tibshirani, J. Friedman. The Elements of Statistical Learning: Prediction, Inference and Data Mining, Springer-Verlag, New York, 2009.

[23] D. M. Hawkins, D. Bradu, G. V. Kass. Location of several outliers in multiple-regression data using elemental sets, Technometrics, Vol. 26, No. 3, pp. 197-208, 1984.

[24] H. Midi, L. H. Ann, S. Rana. On the Robust Parameter Estimation for Linear Model with Autocorrelated Errors, Advanced Science Letters, Vol. 19, No. 8, pp. 2494-2496, 2013.

Mathematics and Statistics 8(5): 506-519, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080503

Test Efficiency Analysis of Parametric, Nonparametric, Semiparametric Regression in Spatial Data

Diah Ayu Widyastuti*, Adji Achmad Rinaldo Fernandes, Henny Pramoedyo, Nurjannah, Solimun

Department of Statistics, Faculty of Mathematics and Natural Science, Brawijaya University, Indonesia

Received June 8, 2020; Revised July 16, 2020; Accepted August 10, 2020

Cite This Paper in the following Citation Styles (a): [1] Diah Ayu Widyastuti, Adji Achmad Rinaldo Fernandes, Henny Pramoedyo, Nurjannah, Solimun , "Test Efficiency Analysis of Parametric, Nonparametric, Semiparametric Regression in Spatial Data," Mathematics and Statistics, Vol. 8, No. 5, pp. 506 - 519, 2020. DOI: 10.13189/ms.2020.080503. (b): Diah Ayu Widyastuti, Adji Achmad Rinaldo Fernandes, Henny Pramoedyo, Nurjannah, Solimun (2020). Test Efficiency Analysis of Parametric, Nonparametric, Semiparametric Regression in Spatial Data. Mathematics and Statistics, 8(5), 506 - 519. DOI: 10.13189/ms.2020.080503. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  Regression analysis has three approaches to estimating the regression curve, namely the parametric, nonparametric, and semiparametric approaches. Several studies have discussed modeling with the three approaches in cross-section data, where observations are assumed to be independent of each other. In this study, we propose a new method for estimating parametric, nonparametric, and semiparametric regression curves in spatial data. In spatial data, each observation point has coordinates that indicate its position, so the observations are assumed to have different variances. The model developed in this research accommodates the influence of the predictor variables on the response variable globally for all observations, as well as adding the coordinates at each observation point locally. Based on the Mean Square Error (MSE) as the criterion for selecting the best model, the results show that modeling with a nonparametric approach produces the smallest MSE value, so this application data is modeled more precisely by the nonparametric truncated spline approach. There are eight possible models formed in this research, and the nonparametric model is better than the parametric model because the MSE value of the nonparametric model is smaller. As for the semiparametric regression models that are formed, the best one takes the variable X2 as a parametric component while X1 and X3 are the nonparametric components (Model 2). The regression curve estimation model with a nonparametric approach tends to be more efficient than Model 2 because the linearity assumption test results show that the relationship of all the predictor variables to the response variable is non-linear. So in this study, spatial data that have a non-linear relationship between the predictor variables and the response tend to be better modeled with a nonparametric approach.

Keywords  Parametric, Nonparametric, Spatial, Semiparametric, Heteroscedasticity

1. Introduction

Regression analysis is one method that can be used to determine the relationship between the variables involved in a study (Draper & Smith, 1992). Regression analysis that involves one response variable and several predictor variables is multiple linear regression analysis. According to Kutner et al (2005), multiple linear regression analysis requires several assumptions that must be fulfilled, namely linearity, normality of the errors, homogeneity of the error variances, non-autocorrelation, and non-multicollinearity. There are three approaches to regression analysis, namely the parametric, nonparametric, and semiparametric approaches. The parametric approach is used when the shape of the regression curve is known, such as linear, quadratic, cubic, or a polynomial of degree k (Fernandes et al, 2014). A nonparametric approach is used when the shape of the regression curve is unknown, while the semiparametric approach is used when part of the regression curve's shape is known and part is unknown. When


the research data show an unknown shape of the regression curve and the linearity assumption is not met, it is necessary to do statistical modeling with a nonparametric approach (Fernandes et al, 2015).

Some regression models with nonparametric approaches that are often used by researchers include Spline, Kernel, Fourier, and others (Eubank, 1999). Spline regression is a regression analysis method that can be used to estimate nonparametric regression models. Data that have a changing pattern over certain subintervals are very well modeled with splines (Hardle, 1990). A spline has piecewise polynomial properties, in which the polynomial pieces are segmented over the k intervals formed at the knot points. One approach that can be used for parameter estimation in nonparametric regression models is the truncated spline, which can accommodate data patterns that change over certain subintervals.

There is a development of multiple linear regression, namely statistical modeling based on regional characteristics, where the model that is formed is influenced by the geographical location of the regions (Lu et al, 2014). Differences in geographical location affect the potential that is owned or used by an area. Therefore, we need a statistical modeling method that considers geographical location, or observation location, factors.

Based on the three approaches in regression analysis, this study discusses the comparison of the three approaches when they are used to model data in which the model that is formed is influenced by the geographical location of the regions. The selection of the best model is based on the value of the Mean Square Error (MSE). With this research, it is expected to be possible to show the right regression analysis approach to use when the research data do not follow a certain pattern. The regression curve estimation is done by the Weighted Least Square (WLS) method, and the estimated regression curve obtained applies both globally and locally.

2. Literature Review

2.1. Spatial Data

Spatial data is data that contains geographical information so that it can be described on a map. In spatial data, there is a dependency between observation locations. The difference between spatial data and other data is that there are coordinates that indicate the location points according to the geographical conditions (Anselin, 1988).

2.2. Regression Analysis

There are several regression analysis approaches based on the data pattern, namely the parametric, nonparametric, and semiparametric regression approaches. If the pattern of the relationship between the response variable and the predictor is known, then it is called parametric regression analysis (Kutner et al, 2005); if the shape of the regression curve of the relationship between the response variable and the predictor is not known, or there is no past information about the data pattern, then the approach used is nonparametric regression (Fernandes et al, 2014). In addition to these two approaches, there is the semiparametric regression approach, which is used when the shape of the regression curve is partially known and partly unknown (Eubank, 1999).

2.2.1. Parametric Regression

In parametric regression, several classical assumptions must be fulfilled. One such assumption is that the shape of the regression curve is known, for example linear, quadratic, cubic, a p-degree polynomial, exponential, and so on. The parametric regression function can be written with the following equation:

y_i = f(x_i) + \varepsilon_i ;  i = 1, 2, ..., n    (1)

where f(x_i) is a parametric regression function and \varepsilon_i are random errors that follow a normal distribution with zero mean and variance \sigma^2. The parametric regression function with a linear form is as follows (Pramoedyo, 2013):

y_i = \beta_0 + \beta_1 X_{i1} + \ldots + \beta_p X_{ip} + \varepsilon_i    (2)

where:
y_i : response variable at the i-th observation
\beta_0 : intersection point between the regression line and the y axis (intercept)
\beta_1, ..., \beta_p : regression coefficients for each of the p predictor variables
X_{ip} : value of the i-th observation on the p-th predictor variable
\varepsilon_i : error at the i-th observation
n : number of observations; i = 1, 2, ..., n
p : number of predictor variables

If parametric regression is applied to spatial data, the following equation is obtained:

y_i = \beta_0(u_i, v_i) + \sum_{k=1}^{p} \beta_k(u_i, v_i) X_{ik} + \varepsilon_i    (3)

The coordinates (u_i, v_i) indicate the location of each observation. In parametric modeling with spatial data, we obtain as many models as there are observation locations, which shows that at each observation location the predictor variables have a different effect on the response variable (Lu et al, 2014). If equations (2) and (3) are combined, a parametric regression curve estimation model is obtained that holds globally for all locations and locally for each observation location, with the following equation:


p qq mr m k yi0  0 u ii,, v   1 kik X  2 kiiik u v X    i y  X  X k   i jk ij  j() m h ij hj i k1 j1 k  1 j  1 h  1 (4) (7) where: and truncated functions as follows: X ik : the value of the i-th observation on the k-th parametric predictor variable  m  Xij k hj, X ji k hj k : 1, 2, …, p Xk  ij hj   0, Xkij hj 2.2.2. Nonparametric Regression  Nonparametric regression is a statistical method used to where: estimate the pattern of relationships between predictor X : j-th nonparametric predictor variable variables and responses when no information is obtained ji about the shape of a function or regression curve. In khj : the h-knot point of the j-th nonparametric nonparametric regression, there are no classical predictor variable assumptions as in parametric regression (Fernandes et al, 2019). Based on n independent observations, the  jk : the k-th parameter in the j-th nonparametric predictor variable relationship from variables yi and xi is unknown.  : parameters of the truncated spline function According to (Hardle, 1990), a nonparametric regression j m h model with more than one nonparametric predictor variable is as follows: The truncated spline nonparametric regression model in q data with coordinate locations in the development of ; in1,2,..., (5) nonparametric regression, where the model is applied to yij f xij  i j1 spatial data (Sifriyani et al., 2018), so the estimated parameters generated are local for each observation where: location. The truncated spline approach is used to solve yi : response variable from i-th observations spatial data analysis problems for which the regression curve is unknown (Sifriyani et al, 2017). fxj  ij  : nonparametric regression function i-th observation on the j-th predictor variable q m y u,, v u v X k  : error is assumed independent with zero mean i0  i i  jk i i ij i jk11 2 (8) q r and variance  m Truncated Spline is one of the approach methods in the u , v X  K   j() m h i i ij hj i nonparametric regression model that is often used. jh11 Truncated Spline is polynomial pieces that have segmented Equation (9) shows a global nonparametric model so and continuous properties. One of the advantages of the that in all locations of observation the predictor variables truncated spline approach in nonparametric regression have the same effect on the response variable. If equation tends to find its estimation of the regression function (9) is applied to spatial data, we will obtain a global according to the data. nonparametric regression curve estimation model for all f xi  is a function of a regression curve whose locations and locally for each observation location with shape is unknown and assumed to be additive. If the the following equation: nonparametric regression function is approximated by the q  X Xk    u, v X truncated spline function, then it can be written in the 12j ijj 1 i x j  j i i ij y   u, v   equation as follows: i0 0  i i  i j1  uv,  Xk jx11i i  ji j1  mr  m fx  Xk X  k (6) (9)  i  m ihm   i h  kh11 2.2.3. Semiparametric Regression If the nonparametric predictor variable is more than one, Semiparametric regression combined between the data pair arrangement is y ,xx , ,..., x so the  i i12 i iq  parametric regression and nonparametric regression. nonparametric regression model that is formed is as According to Eubank (1999), semiparametric regression follows. 
states that the regression curve is partly known and partly q unknown. The truncated spline semiparametric regression equation with more than one nonparametric predictor yii  fxj ij   variable is as follows: j1


p yi0  0 u i,, v i    1 k X ik  2 k u i v i X ik  k1 (10) q  1 jiXjjjX1  kx2 j u i,, v i X ij j1  u i v i X j1ix k i   jj 1  j1   where: yi : response variable p : number of parametric predictors variable q : number of nonparametric predictors variable yˆi1 : predicted value of response variable from equation (11) Based on equation (10), it can be seen that in this y : mean value of response variable from equation equation, there is a global and local influence. Global influence does not involve the coordinates of the point of (11) observation so that all observation locations have the same 2. Perform a regression analysis by entering the fitted influence (Fernandes et al, 2020). While the influence value obtained from equation (11) as a new predictor locally gives a different effect at each observation location. variable with the regression equation model as The regression curve estimation method used in the three follows: approaches is the Weighted Least Square (WLS) method (Fernandes et al, 2017), where the weighting indicates that (12) there is an influence of heterogeneity of variance globally for all observations and locally for each observation Based on equation (13) the coefficient determination is location. obtained with the following equation:

n 2.3. Testing Linear Assumptions ˆ 2  yyii 2  The linearity assumption states that the relationship 2 SSE2 i1 R2  1  n between the response variable and the predictor variable is SST 2 2 yy appropriate, which means that the regression curve can be  i  i1 expressed in a linear, quadratic, or cubic form. If the linearity assumption is not met, then the linear regression where: analysis with the parametric approach is not suitable for SSE : sum square error from equation (13) use in data analysis. One method for testing linearity 2 assumptions is the Regression Specification Error Test or SST2 : sum square total from equation (13) RESET, which was first introduced in 1969 by Ramsey. yˆ : predicted value of response variable from According to (Gujarati, 2003) steps to implement the i2 RESET, namely: equation (13) 1. Perform a regression analysis using one predictor : mean value of response variable from equation variable to get the fitted value of the response variable (13) from the following equation. 2 3. Then, the value R1 of equation (11) and the value (11) 2 R2 of equation (12) are obtained. After the values From equation (11), parameter estimation using and of the two equations are known, then Ordinary Least Square (OLS) method and the coefficient calculate the value of the test F with the determination is obtained with following equation: test following equation: n 2 yy ˆ 22  ii1  R21 R / m 2 SSE1 i1 Ftest  (13) R1  1  n 2 2 1/R n k  SST1  2   yyi   i1 where: where: m : number of predictor variables that have been added SSE1 : sum square error from equation (11) n : number of observations

SST1 : sum square total from equation (11) k : number of parameters in the new equation


R2 : coefficient determination from equation (11) response variable. Thus, insufficient evidence of the 1 existence of a linear relationship pattern is used. The next 2 R2 : coefficient determination from equation (12) step is modeling with three approaches at once, namely parametric, nonparametric, and semiparametric approaches.

4. Based on the value Ftest of equation (13), then Based on the three approaches, modeling results will be compare the value with F the following obtained with the most efficient approach to represent data table that has a non-linear relationship between the predictor hypothesis: variable and the response variable. Modeling on these three H0 : 2 3 0 vs approaches is done by combining classical regression models with spatial data regression involving coordinates H1 : at least one of  j 0;j 2,3 at each observation location. If FFtest table or p-value <   0,05 , H0 is Table 1. The Results of Linearity Test rejected. The specification of the model used is a non-linear model. Relationship p-value Information

X1  y 7,8742e-10 Non-linear X  y 1,1029e-11 Non-linear 3. Research Methodology 2 X3  y 1,8683e-14 Non-linear In the study, discussing farmer satisfaction with Note: non-linear relationship (p-value <0.05) subsidized fertilizers from the government with the research variables used are as follows: Courage of a field 4.2. Parametric Regression counselor (Y), Nation Culture (X1), Reward Financial Courage of a field counselor (X2), and Leadership Role Following are the equations obtained with the (X3). The composition of the research data consists of three parametric approach globally for all observations and cultures that exist in East Java Province, wherein each locally according to each coordinate of the observation culture consists of five regencies that have the coordinates location: of the observation location. In each culture, analysis was Table 2. Estimation of Global Parametric Regression Curve carried out to determine the level of farmer satisfaction, research was conducted to model farmers' satisfaction with Parameter Coefficients subsidized fertilizer as a whole culture and in each culture. Modeling is done with three approaches, namely 0,0096 parametric, nonparametric, and semiparametric so that the most appropriate modeling is obtained to represent farmer 0,2454 satisfaction data.

0,1819 4. Results and Discussion

0,3005 4.1. Testing Linear Assumptions

Linearity assumption testing is used to find out whether Based on Table 2, the resulting curve estimates are valid the relationship between response variables and predictor for all observation locations without considering the variables can be stated precisely. This means that the coordinates of the location where the observations are. So regression curve can be expressed in a linear, quadratic, or the model formed is as follows: cubic form. Based on Table 1, it can be seen that all the predictor y0,0096  0,2454 X  0,1819 X  0,3005 X i 1i 2 i 3 i variables involved have a non-linear relationship to the


Table 3. Estimation of Local Parametric Regression Curves

Culture 1 Culture 2 Culture 3

Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1442 -0,0493 -0,0852  01  02  03

-0,0294 0,1419 0,1329  211  212  213

0,0295 0,0720 0,0804  221  222  223

0,0563 0,1313 0,1129  231  232  233

Locally estimated regression curves are obtained Modeling with a nonparametric approach is done because of the coordinates at each observation location. globally and locally so that all the predictor variables Based on Table 3, three regions produce estimations of the involved in the research are assumed to have non-linear regression curve locally, namely regions with culture 1, relationships. Here are the results of estimating global culture 2, and culture 3 as follows: nonparametric regression curves, in which there are three Culture 1: nonparametric predictor variables. Estimation of nonparametric regression curves with a

y1i 0,1442 0,0294 X11i  0,0295 X 21 i  0,0563 X 31 i first-order truncated spline approach with the point of the optimum knot is as follows: Culture 2:

yi  0,0475 0,2182 X1ii  0,1687 X 2  0,2615 X 3i  y2i 0,0493 0,1419 X12i  0,0720 X 22 i  0,1313 X 32 i 0,0512 (XX  0,4671)  0,0232 (  0,4881)  Culture 3: 12i i 0,1345 (X 3i 0,490 5 ) y0,0852  0,1329 X  0,0804 X  0,1129 X 3i 13i 23 i 33 i From Table 5 the estimation of the nonparametric regression curve using the first-order truncated spline 4.3. Nonparametric Regression approach in each culture as follows: Culture 1: Table 4. Estimation of Global Nonparametric Regression Curve

Parameter Coefficients y1i 0,1293 0,0127 X11i 0,0162 X 21 i  0,0335 X 31 i 

 0,0111(XX11i  0,4462)  0,0548 ( 21i  0,6742)  0,0475  0,0168(X  0,6556) 31i  Culture 2: 0,2182

y2i 0,0237 0,1211 X12i  0,0116 X 22 i  0,0664X 32 i  0,1687  0,0025(XX12ii 0,8048) 0,1131 (22  0,7899)   0,0314 (X  0,7357) 32i  0,2615 Culture 3:

yX0,0581 0,1098  0,0195XX  0,0687  0,0512 3i 13i 23 i 33 i  0,0376(XX13ii 0,4684) 0,0937 (23  0,7755)   0,0725 (X  0,6841) 0,0232 33i   2

0,1345  3


Table 5. Estimation of Local Nonparametric Regression Curve

Culture 1 Culture 2 Culture 3 Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1293 -0,0237 -0,0581  01  02  03

-0,0127 0,1211 0,1098  211  212  213

-0,0162 0,0116 0,0195  221  222  223

0,0335 0,0664 0,0687  231  232  233

-0,0111 -0,0025 -0,0376  11  12  13

0,0548 0,1131 0,0937  21  22  23

0,0168 0,0314 0,0725  31  32  33

4.4. Semiparametric Regression Table 6 states that the semiparametric curve estimation is done globally for all observations. Estimation is carried Semiparametric regression modeling is used to model out with the assumption that part of the regression curve parametric and nonparametric regression simultaneously. has a known shape and part that has no / unknown shape. The following will discuss six possible models that were The equation obtained is as follows: formed using the semiparametric approach. yX 0,0649 0,2502XX  0,2266  0,1155  4.4.1. Model 1 i 1ii23i

0,1470 (X 23ii 0,4881) 0,1075(X 0,4950) The first model formed is assuming the variable X1 as a parametric component, while X2 and X3 as the Local semiparametric regression curve estimates cause nonparametric component. each observation location to give different results. The Table 6. Estimation of Global Semiparametric Regression Curve semiparametric model is arranged based on optimal knot points, where each location gives different pieces of knot Parameter Coefficient points with the following results: Culture 1: 0,0649

y1i 0,1789 0,0637 X11i  0,0080 X 21 i 0,0051 X 31 i  0,2502 0, 0424 (XX  0,6742)  0,0180 (  0,6556) 21ii 31 Culture 2: 0,2266

y2i 0,0191  0,1169 X12i  0,0558 X 22 i  0,0069 X 32 i  0,0981 (XX  0,7899)  0,0263 (  0,7357) 0,1155 22ii 32 Culture 3: 0,1470  2 y3i 0,0949  0,1969 X13i  0,1629 X 23 i  0,0267 X 33 i  0,0250(XX 0,7755) 0,0384 ( 0,6841) 0,1075 23ii 33  3


Table 7. Estimation of Local Semiparametric Regression Curves

Culture 1 Culture 2 Culture 3 Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1789 -0,0191 -0,0949  01  02  03

-0,0637 0,1169 0,1969  211  212  213

0,0080 0,0558 0,1629  221  222  223

-0,0051 -0,0069 -0,0267  231  232  233

0,0424 0,0981 -0,0250  21  22  23

0,0180 0,0263 0,0384  31  32  33

4.4.2. Model 2 observation locations so that the estimated curve model The second model formed to compile the estimation of generated by the order 1 truncated spline approach and the point of the optimum knot is as follows: the regression curve is by assuming X2 as a parametric component, while X and X as nonparametric 1 3 yX 0,0787 0,2326  0,2164XX  0,1147  components. i 1ii23i

0,1635 (X13i 0,4671) 0,0974 (X i 0,4950) Table 8. Estimation of Global Semiparametrik Regression Curve

Parameter Coefficients Table 9 presents the results of the estimated semiparametric regression curves for each of the 0,0787 observation sites which include Culture 1, Culture 2, and Culture 3 with the following results: Culture 1: 0,2326

y1i 0,1436 0,0219 X11i 0,0606 X 21 i  0,0293 X 31 i  0,0531 (XX  0,6742)  0,0168 (  0,6556) 0,2164 21ii 31 Culture 2: 0,1147

y2i 0,0127 0,1079 X12i  0,0121 X 22 i  0,0619 X 32 i 

0,0531 (XX21ii  0,6742)  0,0168 ( 31  0,6556) 0,1635 Culture 3:

0,0974  3 y3i 0,0523 0,1465 X13i  0,1298 X 23 i  0,1252 X 33 i  0,0322 (XX 0,7755) 0,0256 ( 0,6841) 23ii 33 Based on Table 8, the estimation is done globally for all


Table 9. Estimation of Local Semiparametric Regression Curves

Culture 1 Culture 2 Culture 3 Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1436 -0,0127 -0,0523  01  02  03

-0,0219 0,1079 0,1465  211  212  213

-0,0606 0,0121 0,1298  221  222  223

0,0293 0,0619 0,1252  231  232  233

0,0531 0,0939 -0,0322  11  12  13

0,0168 0,0240 0,0256  31  32  33

4.4.3. Model 3 the effect of the predictor variables on the response The third equation that may be formed if a variable is the same for all observation locations namely Culture 1, Culture 2, and Culture 3. Based on Table 10, semiparametric approach is used, namely X3 as a then the equation can be written as follows: parametric component, and X1 and X2 as nonparametric components. yXi  0,0702 0,2232XX1ii  0,196723  0,1539 i  Table 10. Estimation of Global Semiparametric Regression Curve 0,1562 (X 0,4671)  0,1258(X 0,4881) 12ii Parameter Coefficients Based on Table 11, the estimated results of the regression curve for each observation location are obtained. 0,0702 The estimation of the regression curve locally shows that the influence of the predictor variable on the response variable at each location is different. 0,2232 Culture 1:

y1i 0,1481  0,0246 X11ii  0,0644 X 21 0,0276 X 31i 0,1967 0,0011 (XX 0,4462)  0,0521 (  0,6742) 11ii 21

0,1539 Culture 2:

y2i  0,0140 0,1077 X12i 0,0109 X 22 i  0,0583 X 32 i  0,1562 0,0030 (XX 0,8048) 0,0951 ( 0,7899) 12ii 22 Culture 3: 0,1258  2 y3i  0,0639 0,1400 X13i 0,1279 X 23 i  0,1109 X 33 i  0,0240 (XX 0,4684) 0,0066 ( 0,6556) 13ii 23 The estimation of the global regression curve states that


Table 11. Estimation of Local Semiparametric Regression Curves

Culture 1 Culture 2 Culture 3 Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1481 -0,0140 -0,0639  01  02  03

-0,0246 0,1077 0,1400  211  212  213

-0,0644 0,0109 0,1279  221  222  223

0,0276 0,0583 0,1109  231  232  233

-0,0011 -0,0030 -0,0240  11  12  13

0,0521 0,0951 0,0066  21  22  23

4.4.4. Model 4 X2 are parametric components, while X3 as nonparametric components are as follows: The fourth model that is formed is by assuming that X1 and X2 as parametric components, while X3 as yX 0,0435 0,2804  0,2568 X  nonparametric components. i 1ii2 0,1138X  0,1225 (X  0,4950) 3i 3i  Table 12. Estimation of Global Semiparametric Regression Curve Based on Table 13, the estimated regression curve can be Parameter Coefficients stated as follows: Culture 1: 0,0435 y1i  0,1742 0,0603 X11ii 0,0144 X 21

0,0491X31i  0,0212 (X31i 0,65 56) 0,2804 Culture 2:

0,2568 yX2i  0,0408  0,133712ii 0,0674 X 22 0,1128 X 0, 0 3 2 3 (X  0,7357) 32i 32i  0,1138 Culture 3:

0,1225 yX 0,0899  0,2069 0,1749X  3 3i 13ii 23 0,0481X 0, 0 3 7 8 (X 0,6841) 33i 33i  Estimates of the global regression curve where X1 and


Table 13. Estimation of Local Semiparametric Regression Curves

Culture 1 Culture 2 Culture 3 Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1742 -0,0408 -0,0899  01  02  03

-0,0603 0,1337 0,2069  211  212  213

0,0144 0,0674 0,1749  221  222  223

0,0491 0,1128 -0,0481  231  232  233

0,0212 0,0323 0,0378  31  32  33

4.4.5. Model 5

The fifth model is by assuming X1 and X3 as parametric components, X2 as nonparametric components.

Table 14. Estimation of Global Semiparametric Regression Curve

Parameter Coefficients

0,0370

0,2637

0,2275

0,1622

0,1580  2

Table 14 shows the results of the global semiparametric regression curve estimation. The effect on all observation locations is assumed to be the same as the following results: yX 0,0370 0,2637  0,2275 X  i 1ii2 0,1622X3i  0,1580 (X 2i  0,4881)  The local semiparametric regression curve estimation based on Table 15, states that each observation location shows a different effect between the predictor variables on the response variable.


Table 15. Estimation of Local Semiparametric Regression Curves

Culture 1 Culture 2 Culture 3 Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1802 -0,0382 -0,1050  01  02  03

-0,0645 0,1308 0,1973  211  212  213

0,0109 0,0612 0,1555  221  222  223

0,0465 0,1120 0,0037  231  232  233

0,0202 0,0324 0,0412  21  22  23

Culture 1: The sixth model that was formed is by assuming variables X2 and X3 as parametric components, while X1 as yX1i  0,1802 0,0645X11ii 0,0109 21  nonparametric components. Global semiparametric regression curve estimates from 0,0465X31i 0,0202 (X 21i  0,6742) the sixth model based on Table 16 are stated as follows. Culture 2: yX 0,0539 0,24451ii  0,2173 X 2  yX 0,0382  0,1308 0,0612 X i 2i 12ii 22 0,1558X  0,1813 (X  0,4671) 0,1120 X 0, 0 3 2 4 (X  0,7899) 3i 1i  32i 22i  Based on Table 17, semiparametric regression curve Culture 3: estimation locally gives a difference with the estimated yX 0,1050  0,1973 0,1555 X parameter globally. The results of the estimation of the 3i 13ii 23 regression curve for each observation location are as 0,0037 X 0, 0 4 1 2 (X  0,7755) 33i 23i  follows: Culture 1: 4.4.6. Model 6

Table 16. Estimation of Global Semiparametric Regression Curve y1i 0,1443  0,0221 X11ii  0,0607 X 21  0,0322X  0,0569 (X  0,4462) Parameter Coefficients 31i 11i 

0,0539 Culture 2:  0

yX2i  0,0298  0,120112ii 0,0138 X 22 0,2445  11 0,0668 X 0, 1 0 6 2 (X  0,8048) 32i 12i 

0,2173 Culture 3:  12

yX2i  0,0298  0,120112ii 0,0138 X 22  0,1558 13 0,0668 X 0, 1 0 6 2 (X  0,8048) 32i 12i 

0,1813  1


Table 17. Estimation of Local Semiparametric Regression Curves

Culture 1 Culture 2 Culture 3 Parameter Coefficients Parameter Coefficients Parameter Coefficients

0,1443 -0,0298 -0,0605  01  02  03

-0,0221 0,1201 0,1465  211  212  213

-0,0607 0,0138 0,1314  221  222  223

0,0322 0,0668 0,1184  231  232  233

0,0569 0,1062 -0,0073  11  12  13

4.5. The Efficiency of the Models

Based on the results obtained, regression curve estimation models from the three approaches can be compiled. From the curve estimation models obtained, the best model is selected according to the Mean Square Error (MSE) value of each model, with the following results.

Table 18. MSE from Parametric and Nonparametric Models

Model MSE
Parametric 0,5519
Nonparametric 0,3995

Based on Table 18, the nonparametric model has a smaller MSE value than the parametric model.

Table 19. MSE from Semiparametric Models

Model MSE
Model 1 0,4188
Model 2 0,4063
Model 3 0,4132
Model 4 0,4132
Model 5 0,4152
Model 6 0,4202

Table 19 shows the semiparametric models that are formed based on the results of the estimation of the regression curve. Among the six models formed, Model 2 has the smallest MSE value compared to the other models.

5. Conclusions

This study discusses the estimation of the regression curve carried out with three approaches, namely the parametric, nonparametric, and semiparametric approaches. In this modeling, global and local curve estimation are combined. In this study, there are three locations, namely Culture 1, Culture 2, and Culture 3. Estimation of the regression curve done globally gives the same effect of the predictor variables on the response variable at all locations, while estimation of the regression curve done locally gives different results at each observation location. Based on the regression curve estimation models that are formed, the best model is then selected with MSE as the selection criterion: the smaller the MSE value, the better the model. Of the eight possible models formed, the nonparametric model is better than the parametric model because the MSE value of the nonparametric model is smaller. As for the semiparametric regression models that are formed, Model 2 has the smallest MSE value; in Model 2, the variable X2 is assumed to be a parametric component while X1 and X3 are the nonparametric components. The regression curve estimation model with a nonparametric approach tends to be more efficient than Model 2 because the linearity assumption test results show that the relationship of all the predictor variables to the response variable is non-linear. So in this study, spatial data that have a non-linear relationship between the predictor variables and the response tend to be better modeled with a nonparametric approach.

REFERENCES

[1] Anselin, L, "Spatial Econometrics: Method and Models", Netherlands, Kluwer Academic Publishers, 1988.

[2] Fernandes, A.A.R., Nyoman Budiantara, I., Otok, B.W., Suhartono, "Reproducing Kernel Hilbert space for penalized regression multi-predictors: Case in longitudinal data", International Journal of Mathematical Analysis, Vol 8 No 40, pp 1951-1961, 2014.


[3] Fernandes, A.A.R, Budiantara, I.N, Otok, B.W., and Suhartono, "Spline Estimator for Bi-Responses and Multi-Predictors Nonparametric Regression Model in Case of Longitudinal Data", Journal of Mathematics and Statistics, Vol 11, No 2, pp. 61-69, 2015.

[4] Fernandes, A.A.R., Solimun, & Arisoesilaningsih, E, "Estimation of spline function in nonparametric path analysis based on penalized weighted least square (PWLS)", AIP Conference Proceedings, Vol 1913 No 1, pp 020037, 2017.

[5] Fernandes, A.A.R., Hutahayan, B., Solimun, Arisoesilaningsih, E., Yanti, I., Astuti, A.B., Nurjannah, & Amaliana, L, "Comparison of Curve Estimation of the Smoothing Spline Nonparametric Function Path Based on PLS and PWLS In Various Levels of Heteroscedasticity", IOP Conference Series: Materials Science and Engineering, Forthcoming Issue, 2019.

[6] Fernandes, A.A.R., Widiastuti, D.A., Nurjannah, "Smoothing spline semiparametric regression model assumption using PWLS approach", International Journal of Advanced Science and Technology, 29(4), pp. 2059-2070, 2020.

[7] Gujarati, D. N, "Basic Econometrics (Fourth Edition)", New York, McGraw-Hill, 2003.

[8] Hardle, W, "Applied Nonparametric Regression", New York, Cambridge University Press, 1990.

[9] Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W, "Applied Linear Statistical Models (Fifth Edition)", New York, McGraw-Hill, 2005.

[10] Lu, B., Charlton, M., Harris, P., & Fotheringham, A. S, "Geographically weighted regression with a non-Euclidean distance metric: A case study using hedonic house price data", International Journal of Geographical Information Science, 28(4), 660-681, 2014, https://doi.org/10.1080/13658816.2013.865739

[11] Sifriyani, Haryatmi, Budiantara, I. N., & Gunardi, "Geographically Weighted Regression with Spline Approach", Far East Journal of Mathematical Sciences (FJMS), 101(6), 1183-1196, 2017, https://doi.org/10.17654/MS101061183

[12] Sifriyani, Kartiko, S. H., Budiantara, I. N., & Gunardi, "Development of nonparametric geographically weighted regression using truncated spline approach", Songklanakarin J. Sci. Technol., 40(4), 909-920, 2018.

Mathematics and Statistics 8(5): 520-526, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080504

Construction of Bivariate Copulas on a Multivariate Exponentially Weighted Moving Average Control Chart

Sirasak Sasiwannapong1, Saowanit Sukparungsee1,*, Piyapatr Busababodhin2, Yupaporn Areepong1

1Department of Applied Statistics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, Thailand 2Department of Mathematics, Faculty of Science, Mahasarakham University, Thailand

Received April 13, 2020; Revised June 24, 2020; Accepted July 10, 2020

Cite This Paper in the following Citation Styles (a): [1] Sirasak Sasiwannapong, Saowanit Sukparungsee, Piyapatr Busababodhin, Yupaporn Areepong , "Construction of Bivariate Copulas on a Multivariate Exponentially Weighted Moving Average Control Chart," Mathematics and Statistics, Vol. 8, No. 5, pp. 520 - 526, 2020. DOI: 10.13189/ms.2020.080504. (b): Sirasak Sasiwannapong, Saowanit Sukparungsee, Piyapatr Busababodhin, Yupaporn Areepong (2020). Construction of Bivariate Copulas on a Multivariate Exponentially Weighted Moving Average Control Chart. Mathematics and Statistics, 8(5), 520 - 526. DOI: 10.13189/ms.2020.080504. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract The control chart is an important tool in 1. Introduction multivariate statistical process control (MSPC), which for monitoring, control, and improvement of the process Multivariate Statistical Process Control (MSPC) is an control. In this paper, we propose six types of copula important method for process monitoring, control and combinations for use on a Multivariate Exponentially improvement in many areas such as engineering, Weighted Moving Average (MEWMA) control chart. economics, environmental statistics, finance and etc. For Observations from an exponential distribution with example, in automotive production depends dependence measured with Kendall’s tau for moderate and on correlated variables such as the lifetimes of the strong positive and negative dependence (where components in the engine, etc. A control chart is a common ) among the observations were generated by tool for MSPC for detecting changes in the vector means of using Monte Carlo simulations to measure the Average the process. Multivariate control charts are generalizations Run Length (ARL) as the performance metric and should of their univariate counterparts [1]. Hotelling’s T2 was the be sufficiently large when the process is in-control on a first multivariate control chart [2], followed by the MEWMA control chart. In this study, we develop an Multivariate Exponentially Weighted Moving Average approach performance on the MEWMA control chart (MEWMA) control chart as a better alternative for based on copula combinations by using the Monte Carlo detecting small shifts in the process vector mean [3,4]. simulations.The results show that the out-of-control (ARL ) 1 Most multivariate detection procedures are based on the values for were less than for in almost assumption that the observations are independent and all cases. The performances of the identically distributed (i.i.d.) and follow a multivariate Farlie-Gumbel-Morgenstern Ali-Mikhail-Haq copula normal distribution. However, many processes are combination was superior to the others for all shifts with non-normal and correlated, so multivariate control charts strong positive dependence among the observations and . Moreover, when the magnitudes of the shift were need to be able to cope with related joint distributions. very large, the performance metric values for observations Hence, Kuvattana et al. [5] and Sukparungsee et al. [6] with moderate and strong positive and negative introduced the copula to address this requirement. dependence followed the same pattern. Copulas are functions that join multivariate distributions to their one-dimensional marginal distribution functions in Keywords Marginal, Joint Distribution, Multivariate which the one-dimensional margins are uniform on the Control Chart, Monte Carlo Simulation interval (0,1) [7]. They are used to explain the dependence between random variables and are based on a Mathematics and Statistics 8(5): 520-526, 2020 521

representation of Sklar’s theorem [8]. A new way of limit chosen for the desired in-control process. Generally, constructing asymmetric copulas was introduced by the Average Run Length (ARL) can be used to measure the Mukherjee et al. [9], and later on copulas have been applied performance of a MEWMA control chart. It depends on the to MSPC [10]. Several other studies have proposed and degree of dependence between the variables measured compared the performance of bivariate copulas on the using the and the scalar-weighted multivariate control charts [11-14]. Herein, we present the associated with the past observations. We consider a efficiency of the combinations of bivariate copulas bivariate EWMA control chart and the control limit H for constructed for shifts in the process vector mean on a the in-control process ARL0 = 370. MEWMA control chart when observations follow an exponential distribution. 2.2. Copulas Function and Constructing Bivariate Copulas 2. Research Methodology Theoretically, for the copula function according to This paper is organized into the following sections: in Sklar’s theorem [8] for a bivariate case, let X and Y be section 2.1 the multivariate exponentially weighted continuous variables with joint distribution function G and moving average (MEWMA) control chart. Section 2.2, we marginal cumulative distributions and , review copulas function and constructing bivariate respectively. Consequently, with copulas. Section 2.3 describes the dependence measure of copula where is a parameter of the data and finally section 2.4 provides the ARL and the copula. Theoretically, let A and B be bivariate copulas. It simulation study. follows that , where 2.1. The Multivariate Exponentially Weighted Moving is a copula with parameters and Average (MEWMA) Control Chart [15]. If , then C1,1 = A, and if then

The MEWMA control chart was first developed by C0,0 = B. Similarly, if C(u, v) ≠ C(v, u) we have an asymmetric Lowry et al. [4]. The given observations from a copula. d-variate Gaussian distribution , for i = 1,2,. . . , In accordance with Khoudraji’s device [16], let C be can be defined as symmetric copula , where is independence copula. A family of asymmetric copulas with (1) parameters , that includes C as a where Zi is a vector of variable values from the data and limiting case is given by

Λ is a diagonal matrix with entries , for . (3) 01≤≤λ and . The quantity plotted on the control chart is 2.3. Dependence Measure of the Data

, (2) Generally, a copula can be used in the study of the dependence of association between random variables by where Kendall’s tau, which we implemented in this study (Table-1). Let X and Y be continuous random variables When on the interval (0,1) (as with copula C, then Kendall’s tau is given by assumed in this study), the control chart signals a shift in the mean vector when where H is the control

Table 1. Kendall’s tau of copula function

Copula Type Kendall’s tau Parameter space of Clayton Asymmetric

Frank Asymmetric

FGM Symmetric [-1,1]

AMH Symmetric [-1,1]

522 Construction of Bivariate Copulas on a Multivariate Exponentially Weighted Moving Average Control Chart

2.4. The ARL and the Simulation Study and 5 (for the out-of-control process). The performance of the MEWMA control chart was assessed for = 0.05 and Theoretically, the ARL is an average number of points 0.10. For all combinations of copulas, setting that must be plotted before the out-of-control condition corresponds to Kendall’s tau for moderate and strong occurs. ARL is classified into ARL0 and ARL1. ARL0 is the positive and negative dependence ( = 0.5, -0.8). average number of observations before the first out-of-control point, while ARL1 is the average number of observations when the process is out-of-control. The 3. Results expectations of ARL0 and ARL1 can be respectively expressed as The simulation results are reported in Tables 2 to 9, in for (4) which the results are only empirical. The aim of the study was to optimize the parameters for constructing bivariate for (5) copulas ( ), as shown in Equation (3), for which we where is the change point time, is the stopping time, used the Maximum pseudo-likelihood estimator method [21]. For the in-control process on the MEWMA control and is the expectation under the assumption that chart, the desired ARL0 = 370 was set for each copula the change point occurs at combination. The results in Tables 2 and 3 indicate We ran a Monte Carlo simulation using R statistical moderate positive dependence among the observations software [17-20] with the 50,000 rounds and a sample size ( ), Tables 4 and 5 strong positive dependence of 6,000. The observations were generated from a copula ( ), Tables 6 and 7 moderate negative dependence based on an exponential distribution with mean = 1 (for the ( ), and Tables 8 to 9 show strong negative in-control process) and shifts at level 0.01, 0.05, 0.1, 0.5, 1, dependence ( ).

Table 2. ARL1 of the MEWMA control chart with moderate positive dependence ( = 0.5, = 0.05)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 329.14 330.20 329.25 332.24 332.61 329.24 0.05 236.15 240.22 233.78 242.38 234.52 241.08 0.10 194.51 197.77 194.46 200.24 197.11 199.02 0.50 12.74 14.41 13.39 10.27 12.87 10.48 1.00 1.74 1.92 1.81 2.10 1.70 2.19 5.00 1.02 1.02 1.09 1.14 1.07 1.03 UCL 10.69 12.24 11.21 14.03 10.66 15.30 0.566 0.858 0.953 0.855 0.161 0.045 0.617 0.466 0.906 0.841 0.128 0.032 Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

Table 3. ARL1 of the MEWMA control chart with moderate positive dependence ( = 0.5, = 0.10)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 330.39 332.88 332.15 334.11 333.3 332.28 0.05 243.41 246.95 245.29 248.99 242.75 251.73 0.10 204.81 209.86 137.03 211.93 208.05 211.82 0.50 15.30 16.52 20.87 17.30 15.69 11.86 1.00 2.06 2.27 2.16 2.43 2.03 2.52 5.00 1.02 1.03 1.01 1.04 1.09 1.20 UCL 13.69 15.56 14.26 17.77 13.73 19.35 0.566 0.858 0.953 0.855 0.161 0.045 0.617 0.466 0.906 0.841 0.128 0.032 Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

Mathematics and Statistics 8(5): 520-526, 2020 523

Table 4. ARL1 of the MEWMA control chart with strong positive dependence ( = 0.8, = 0.05)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 329.43 331.64 328.91 334.11 326.23 331.25 0.05 237.85 243.71 238.09 244.72 210.57 243.13 0.10 194.65 203.84 196.25 205.23 126.87 202.94 0.50 15.03 16.48 14.07 17.48 7.84 11.83 1.00 2.01 2.36 1.93 2.47 1.67 2.47 5.00 1.03 1.07 1.10 1.10 1.02 1.04 UCL 13.11 17.84 11.99 20.52 10.24 20.70 0.457 0.567 0.635 0.95 0.405 0.069 0.457 0.779 0.652 0.949 0.676 0.007

Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

Table 5. ARL1 of the MEWMA control chart with strong positive dependence ( = 0.8, = 0.10)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 331.30 330.08 333.33 335.59 330.10 336.65 0.05 246.74 249.45 246.12 256.45 240.26 232.98 0.10 207.94 211.95 139.75 217.58 203.87 152.70 0.50 16.52 18.31 10.68 19.68 15.17 13.33 1.00 2.37 2.67 2.24 2.87 2.01 2.88 5.00 1.04 1.08 1.13 1.12 1.00 1.32 UCL 16.57 22.50 15.15 26.31 13.22 26.65 0.457 0.567 0.635 0.95 0.405 0.069 0.457 0.779 0.652 0.949 0.676 0.007

Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

Table 6. ARL1 of the MEWMA control chart with moderate negative dependence ( = -0.5, = 0.05)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 324.71 328.54 326.98 325.05 330.92 328.38 0.05 232.22 235.22 235.14 233.45 235.47 235.28 0.10 192.74 192.98 191.74 191.16 193.04 193.90 0.50 14.47 16.13 14.30 15.92 13.97 16.06 1.00 1.85 2.45 1.82 2.36 1.80 2.29 5.00 1.02 1.02 1.02 1.02 1.02 1.02 UCL 11.32 14.53 11.1 13.95 10.97 13.40 0.982 0.999 0.99 0.919 1.000 0.149 0.999 0.999 0.998 1.000 0.915 0.029

Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

524 Construction of Bivariate Copulas on a Multivariate Exponentially Weighted Moving Average Control Chart

Table 7. ARL1 of the MEWMA control chart with moderate negative dependence ( = -0.5, = 0.10)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 328.63 323.78 326.92 330.17 326.36 326.79 0.05 233.24 228.14 233.79 231.26 233.64 227.13 0.10 191.54 185.83 190.78 187.44 191.02 186.32 0.50 16.39 16.20 16.10 16.32 16.11 16.27 1.00 2.26 2.80 2.22 2.73 2.19 2.65 5.00 1.02 1.03 1.02 1.03 1.02 1.02 UCL 14.25 17.68 14.00 17.14 13.86 16.38 0.982 0.999 0.990 0.919 1.000 0.149 0.999 0.999 0.998 1.000 0.915 0.029

Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

Table 8. ARL1 of the MEWMA control chart copulas with strong negative dependence ( = -0.8, = 0.05)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 326.37 328.95 327.01 326.35 326.71 322.98 0.05 234.28 232.98 232.22 235.52 235.31 233.37 0.10 192.17 191.68 194.22 192.31 194.65 191.90 0.50 14.44 16.25 14.31 15.92 14.28 15.89 1.00 1.84 2.58 1.82 2.50 1.82 2.42 5.00 1.02 1.01 1.01 1.01 1.02 1.01 UCL 11.32 15.53 11.09 14.95 11.12 14.25 0.996 1.000 0.995 0.806 0.999 0.261 0.987 0.858 0.998 0.999 0.999 0.001

Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

Table 9. ARL1 of the MEWMA control chart with strong negative dependence ( = -0.8, = 0.10)

Copula combinations Shift [1] [2] [3] [4] [5] [6] 0.01 326.44 327.60 325.84 324.42 328.11 327.16 0.05 234.44 226.75 232.27 228.30 235.30 229.35 0.10 193.59 187.43 190.09 186.49 192.91 185.96 0.50 16.35 15.83 16.02 15.91 16.34 15.99 1.00 2.26 2.88 2.22 2.82 2.22 2.75 5.00 1.02 1.01 1.02 1.01 1.02 1.01 UCL 14.25 18.77 13.98 18.12 14.05 17.40 0.996 1.000 0.995 0.806 0.999 0.261 0.987 0.858 0.998 0.999 0.999 0.001 Note that: Copula combinations i.e. [1] Clayton × FGM [2] Clayton × Frank [3] Clayton × AMH [4] FGM × Frank [5] FGM × AMH [6] Frank × AMH

Mathematics and Statistics 8(5): 520-526, 2020 525

The results in Tables 2 to 9 show that the ARL1 values parameters, Journal of Statistical Computation and Simulation, Vol.83, No.4, 721-738, 2013. for were less than those for in almost all cases. The results in Tables 2 and 3 indicate that the [2] H. Hotelling. Multivariate Quality Control Illustrated by Air Clayton Ali-Mikhail-Haq (AMH) copula combination Testing of Sample Bombsights, In: Eisenhart, C., Hastay, was superior to the others in almost all cases. Meanwhile, M.W. and Wallis, W.A., Eds., Techniques of Statistical with strong positive dependence ( ) and , Analysis, McGraw Hill, New York, 111-184. 1947. Farlie-Gumbel-Morgenstern (FGM) AMH attained the [3] H. Midi, A. Shabbak. Robust multivariate control chart to minimum ARL1 with all shifts (Table 4). Meanwhile, for detect small shifts in mean, Mathematical Problems in moderate negative dependence ( ), Clayton Engineering, Vol.2011, 1-19, 2011. FGM attained the minimum ARL1 with shift values at 0.01 [4] C. A. Lowry, W. H. Woodall, C. W. Champ, S. E. Rigdon. A and 0.05 (Table 6). For the results for strong negative multivariate exponentially weighted moving average control dependence ( ) and (Table 8), the chart, Technometrics, Vol.34, No.1, 46-53, 1992. performance of FGM AMH was superior to the others [5] S. Kuvattana, S. Sukparungsee, P. Busbabodhin, Y. with shift values at 0.5 and 1. However, when the Areepong. Bivariate copulas on the exponentially weighted magnitude of the shift was large ( ), the performances moving average control chart, Songklanakarin Journal of of all of the copula combinations for moderate and strong Science and Technology, Vol.38, No.5, 569-574, 2016. positive and negative dependence were the same. [6] S. Sukparungsee, S. Kuvattana, P. Busbabodhin, Y. Areepong. Bivariate copulas on the Hotelling’s T2 control chart, Communications in Statistics-Simulation and 4. Conclusions Computation, Vol.47, No.2, 413-419, 2018. [7] R. B. Nelson. An introduction to copulas, 2nd ed, Springer, In this study, we investigated closed-form New York, 2006. approximations of the ARL for MEWMA control charts using bivariate copulas constructed via Khoudraji’s device, [8] A. Sklar. Random variables, Joint distribution function and and we used Monte Carlo simulation when the marginal of copulas, Kybernetica, Vol.9, 449-460, 1973. the variables was exponential with . The simulation [9] S. Mukherjee, Y. S. Lee, J. M. Kim, J. Jang, J. S. Park. results suggest that there were no meaningful differences Construction of bivariate asymmetric copulas, between the performances of the bivariate copulas at a very Communications for Statistical Applications and Methods, large shift ( ) when the observations had moderate and Vol.25, No.2, 217-234, 2018. strong positive and negative dependence. In addition, the [10] P. Busababodhin, P. Amphanthong. Copula modelling for performances of the constructed bivariate copulas were multivariate statistical process control: a review, superior to a single copula [5] for a moderate shift in a Communications for Statistical Applications and Methods, process on a MEWMA control chart. For further research, Vol.23, No.6, 497-515, 2016. we could use the real data to compare the simulation [11] S. Tiengket, S. Sukparungsee, P. Busababodhin, Y. results. Areepong. Construction of bivariate copulas on the Hotelling’s T2 control chart, Thailand , Vol.18, No.1, 1-15, 2020. Acknowledgements [12] S. Sasiwannapong, S. Sukparungsee, P. Busababodhin, Y. Areepong. 
The efficiency of constructed bivariate copulas The authors are grateful to the Ministry of Science and for MEWMA and Hotelling’s T2 control charts, Technology, Thailand, and the Graduate College, King Communications in Statistics-Simulation and Computation, Mongkut’s University of Technology, North Bangkok, Online 25 Nov 2019, doi: 10.1080/03610918.2019.1687719 Thailand, for financially supporting this study. This [13] S. Kuvattana, S. Sukparungsee, P. Busbabodhin, Y. research was funded by Office the Higher Education Areepong. Efficiency of bivariate copula on the CUSUM Commission-National Research University with contract chart, The 2nd International Multi Conference of Engineers no. KMUTNB-NRU-59-11. Finally, this research was and Computer Scientist, Hong Kong, 1-4, 2015. inspired by the manuscript, “Construction of bivariate [14] S. Kuvattana, S. Sukparungsee, P. Busbabodhin, Y. asymmetric copulas” [9]. Moreover, we would like to Areepong. A comparison of efficiency between multivariate thank Professor Dr. Jeong-Soo Park for the idea proposed Shewhart and multivariate CUSUM control chart for in this paper. bivariate copula, The International Conference on Applied Statistics, Pattaya, Thailand, 219-223, 2015. [15] G. C. Salvadori, C. De Michele, N. T. Kottegoda, R. Rose. Extremes in nature: an approaching using copulas, Water Science and Technology Library, Vol.56, 266, 2007. REFERENCES [16] F. A. Khoudraji. Contributions a` l’e ́tude des copules et a` la [1] M. A. Mahmoud, P. E. Maravelakis. The performance of ́lisation des valeurs extreˆmes bivarie ́es’, Ph.D. thesis, multivariate CUSUM control charts with estimated Universite ́ Laval, Que ́bec, Canada, 1995.

526 Construction of Bivariate Copulas on a Multivariate Exponentially Weighted Moving Average Control Chart

[17] J. Yan. Enjoy the joy of copulas with a package copula, Framework, Journal of Statistical Software, Vol. 82, 1-26, Journal of Statistical Planning and Inference, Vol.21, No.4, 2016. 1-21, 2007. [20] K. F. Vajargah. Comparing Ridge Regression and Principal [18] I. Kojadinovic, J. Yan. Modeling Multivariate Distributions Components Regression by Monte Carlo Simulation Based with Continuous Margins Using the copula R Package. on MSE, Journal of Computer Science & Computational Journal of Statistical Software, Vol.34, No.9, 1-20, 2010. Mathematics, Vol.3, No.2, 25-29, 2013. [19] V. N. Nyaga, M. Arbyn, M. Aerts. CopulaDTA: An R [21] J. A. Nelder, R. Mead. A simplex algorithm for function Package for Copula-Based Bivariate Beta-Binomial Models minimization, Computer journal, Vol.7, 308-313, 1965. for Diagnostic Text Accuracy Studies in a Baysian

Mathematics and Statistics 8(5): 527-534, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080505

Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions

Ali F Jameel1,*, Akram H. Shather2, N.R. Anakira3, A. K. Alomari4, Azizan Saaban1

1School of Quantitative Sciences, College of Art and Sciences, Universiti Utara Malaysia (UUM), Malaysia 2Department of Computer Engineering Techniques, Al-Kitab University Altun Kupri, Iraq 3Department of Mathematics, Faculty of Science and Technology, Irbid National University, Jordan 4Department of Mathematics, Faculty of Science, Yarmouk University, Jordan

Received April 13, 2020; Revised June 19, 2020; Accepted July 10, 2020

(a): [1] Ali F Jameel, Akram H. Shather, N.R. Anakira, A. K. Alomari, Azizan Saaban , "Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions," Mathematics and Statistics, Vol. 8, No. 5, pp. 527 - 534, 2020. DOI: 10.13189/ms.2020.080505. (b): Ali F Jameel, Akram H. Shather, N.R. Anakira, A. K. Alomari, Azizan Saaban (2020). Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions. Mathematics and Statistics, 8(5), 527 - 534. DOI: 10.13189/ms.2020.080505. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract This research focuses on the approximate displayed in the form of tables and figures. solutions of second-order fuzzy differential equations with fuzzy initial condition with two different methods Keywords Fuzzy Set Theory, Fuzzy Differential depending on the properties of the fuzzy set theory. The Equations, Nonlinear Fuzzy Initial Value Problem, methods in this research based on the Optimum homotopy Optimum Homotopy Asymptotic Method (OHAM), Homotopy Analysis Method (HAM) asymptotic method (OHAM) and homotopy analysis method (HAM) are used implemented and analyzed to obtain the approximate solution of second-order nonlinear fuzzy differential equation. The concept of topology homotopy is used in both methods to produce a convergent 1. Introduction series solution for the propped problem. Nevertheless, in contrast to other destructive approaches, these methods do As a mathematical model, a large number of dynamic not rely upon tiny or large parameters. This way we can real-life problems can be formulated in mathematical easily monitor the convergence of approximation series. equations. These models may take the form of ordinary or Furthermore, these techniques do not require any partial differential equations. Fuzzy differential equations discretization and linearization relative with numerical are an important tool to model a dynamical system when methods and thus decrease calculations more that can solve information about its behavior is inadequate. Fuzzy high order problems without reducing it into a first-order differential equations with fuzzy initial conditions appear system of equations. The obtained results of the proposed when the modeling of these problems was imperfect and its problem are presented, followed by a comparative study of nature is under uncertainty that involves fuzzy parameters the two implemented methods. The use of the methods that cannot be detected through ordinary measurement [1]. investigated and the validity and applicability of the Thus, fuzzy differential equations are suitable methods in the fuzzy domain are illustrated by a numerical mathematical models, these kinds of measurements in example. Finally, the convergence and accuracy of the which there exist uncertainties or vagueness. These models proposed methods of the provided example are presented are raised in many real-life applications such us, population through the error estimates between the exact solutions models [2], physics [3], medicine [4], etc. Some of these 528 Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions fuzzy differential equations have been solved numerically 2. Fuzzy Analysis of Second-Order such as first-order linear fuzzy initial value problems [5,6]. Fuzzy Nonlinear Initial Value Also, some approximate methods are involved with an approximate solution of various types and order of fuzzy Problems differential equations by Nedal et al [7] and Jameel et al By following the defuzzification by Ali et al [19] we [8]. consider second-order nonlinear fuzzy Initial value Both HAM and OHAM are classified as approximate problem: methods that have been used to solve differential equations approximately in various applications [9-14] that have ( ) = ( ) + ( ) + ( ), ( ) + ( ), (1) many advantages such as solving the difficult nonlinear. It ′′ ′ ′ �( ) = , ( ) = . 
̃ (2) often helps an engineer or scientist to better understand a 푦� 𝑡𝑡 푎�푦� 𝑡𝑡 푏푦� 𝑡𝑡 푐̃푁�푦� 𝑡𝑡 푦� 𝑡𝑡 � 푑푤 𝑡𝑡 ′ physical problem and can help to improve future for . According푦 𝑡𝑡0 to [1훼9]푦� the𝑡𝑡0 defuzzifcation훽 and the procedures and designs for solving problems, also solve fuzzy analysis of Eq. (1) with the fuzzy level set [0,1] 0 high order FIVPs directly without reducing it into is as𝑡𝑡 follows:≥ 𝑡𝑡 first-order system, and determine the accuracy of the ( ) : is a fuzzy function [19] of the crisp variable𝑟𝑟 ∈ such approximate solution without needing the exact solution that [ ( )] = [ ( )( ), ( )( )], 푦� 𝑡𝑡 푥 especially the nonlinear equations [14]. Approximate ( ): is the first order푟 fuzzy푟 Hukuhara-derivative [20] 푦� 𝑡𝑡 푟 푦 𝑡𝑡 푦 𝑡𝑡 methods like HAM and OHAM provide a simple way to such′ that [ ( )] = ( )( ), ( )( ) , ensure the convergence of a series solution that comes from 푦� 𝑡𝑡 ( ) ′ ′ 푟 ′ 푟 the great freedom to choose proper base function : is the second푟 order fuzzy Hukuhara-derivative [20] 푦� 𝑡𝑡 �푦 𝑡𝑡 ( )푦 𝑡𝑡 (�) approximating a nonlinear problem [15]. In addition, these such′′ that [ ( )] = ( ) , ( ) , 푦� 𝑡𝑡 ′′ methods have the flexibility to give an approximate and : is fuzzy′′ nonlinear function′′ 푟 of the 푟crisp variable , the 푦� 𝑡𝑡 푟 �푦 𝑡𝑡 푦 𝑡𝑡 � exact solution to both linear and nonlinear problems fuzzy variable and the fuzzy derivative such that: without any need for discretization and linearization as 푁� ′ 푥 ( ) ( ) numerical methods, hence is free rounding off errors and ( ( ), 푦�( )) = ( , , ) , 푦(� , , ) , does not require computer memory or time [16]. The HAM ′ ′ 푟 ′ 푟 � 푟 [ ( )] = can rate the convergence of the solution, by using the where�푁 푦�the𝑡𝑡 푦�nonhomogeneous푥 � �푁 𝑡𝑡 푦 푦 term 푁 is𝑡𝑡 푦 푦 � ( )( ), ( )( ) . The fuzzy coefficients in Eqs. (1) are advantages of h-curves adjusts the accuracy of the solution 푤� 𝑡𝑡 푟 by finding the best values of the convergence control triangular푟 fuzzy푟 numbers [21] that can be identify as: �푤 𝑡𝑡 푤 𝑡𝑡 � parameter [10]. The OHAM has a built-in convergence [ ] = [ , ] , = [ , ] , [ ] = [ , ] and criteria similar to HAM but with a greater degree of = [ , ] for all the fuzzy level set [0,1]. 푎� 푟 푎 푎 푟 �푏��푟 푏 푏 푟 푐̃ 푟 푐 푐 푟 flexibility [16]. Some approximate methods were used to : is the first푟 fuzzy initial condition triangular numbers solve problems in the second-order and above involving �푑̃�푟 푑 푑 𝑟𝑟 ∈ denoted by [ ( )] = ( )( ), ( )( ) , such fuzzy initial values [17]. 푦�0 that[ ] = [ , ] , finally: 푟 푟 Our main motivation is to present a complete fuzzy 푦� 𝑡𝑡0 푟 �푦 𝑡𝑡0 푦 𝑡𝑡0 � analysis of the problem of second-order fuzzy initials, : is the second fuzzy initial condition triangular 훼�0 푟 훼0 훼0 푟 followed by a fuzzy analysis of OHAM and HAM, to numbers′ denoted by [ ( )] = ( )( ), ( )( ) , 푦� obtain an approximate solution to the proposed problem ′ ′ 푟 ′ 푟 such that = [ , ] . 0For푟 simplest0 form of 0Eq. (1) and to present a comparative study of these methods in 푦� 𝑡𝑡 �푦 𝑡𝑡 푦 𝑡𝑡 � let detail. This is the first attempt to present a comparative and �훽�0�푟 훽0 훽0 푟 analytical study of the approximate solution in the ( ) + ( ) + ( ), ( ) + ( ) = , ( ) nonlinear second-order fuzzy initial value problem using ′ ′ HAM and OHAM to the best of our knowledge. where푎�푦� 𝑡𝑡 푏�푦� 𝑡𝑡 푐̃푁�푦� 𝑡𝑡 푦� 𝑡𝑡 � 푑̃푤 𝑡𝑡 푓̃�𝑡𝑡 푦� 𝑡𝑡 � This research outline is as follows: The second-order ( , ( ))( ) = , , , nonlinear fuzzy initial value problem of second necessary 푟 ′ ′ (3) ( ) fuzzy analysis details is recalled. 
An analysis and 푓(𝑡𝑡, 푦�(𝑡𝑡)) = 퐿 �푦 ,푦 ,푦 ,푦 �푟 description of HAM and OHAM general formulas are � 푟 ′ ′ presented in separate sections. In the section 'Illustration then Eq. (1) become푓 𝑡𝑡 as푦� follows𝑡𝑡 푈 �푦 푦 푦 푦 �푟 and Discussion', a numerical example is involved and ( ) = , ( ) results have been compared and displayed by our proposed (4) methods. Finally, there is a summary that contains the ′′ By using the fuzzy푦� extension𝑡𝑡 푓̃� 𝑡𝑡fuzzy푦� 𝑡𝑡 principle� properties, conclusions of this paper. we can define the following membership function [22,23]:

Mathematics and Statistics 8(5): 527-534, 2020 529

( , ( ))( ) = min ( , ( )) ( ) ( ) where for all [0,1], ( )and ( ) are the constants 푟 (5) that can be determined easily from the initial conditions in ( , ( ))( ) = max ( , ( )) ( ) ( ) 1 2 푓 𝑡𝑡 푦� 𝑡𝑡 �푓̃ 𝑡𝑡 휇� 𝑟𝑟 �휇� 𝑟𝑟 ∈ 푦� 𝑡𝑡 � Eq. (7-8). It can𝑟𝑟 be∈ concluded𝐶𝐶̃ 𝑟𝑟 that 𝐶𝐶wheñ 𝑟𝑟 = 0 and = 1 � 푟 where ̃ we have 푓 𝑡𝑡 푦� 𝑡𝑡 �푓 𝑡𝑡 휇� 𝑟𝑟 �휇� 𝑟𝑟 ∈ 푦� 𝑡𝑡 � 푝 푝 ( , ( ))( ) = [ , ( )] = , ( )( ) ( ; 0) = ( ; ), ( ; 1) = ( ; ), (6) (푟) ∗ ∗ 푟( ) (15) ( , ( )) = [ , ( )]푟 = , ( ) 0 푓 𝑡𝑡 푦� 𝑡𝑡 퐿 𝑡𝑡 푦� 𝑡𝑡 퐿 �𝑡𝑡 푦� 𝑡𝑡 � �∅(𝑡𝑡; 0)�푟 = 푦 (𝑡𝑡; 𝑟𝑟). �∅(𝑡𝑡; 1)�푟 = 푦(𝑡𝑡; 𝑟𝑟). � 푟 ∗ ∗ 푟 From Eqs. (5) and (6), we rewrite푟 Eq. (1) as follows � � 푓 𝑡𝑡 푦 𝑡𝑡 푈 𝑡𝑡 푦� 𝑡𝑡 푈 �𝑡𝑡 푦� 𝑡𝑡 � Hence�∅ 𝑡𝑡 as �푟 푦 0increases𝑡𝑡 𝑟𝑟 �∅ from𝑡𝑡 �푟 0 푦 to𝑡𝑡 𝑟𝑟 1 the ( )( ) = , ( )( ) solution ( ; ) , ( ; ) varies from the initial guess (7) 푝 ( ′′)( ) =푟 , ∗ ( )( ) =푟 ( ; ) and ( ; ) to the solution ( ; ) and ( ; ). 푦 𝑡𝑡 퐿 �𝑡𝑡 푦� 𝑡𝑡 � �∅ 𝑡𝑡 푝 �푟 �∅ 𝑡𝑡 푝 �푟 푟 ′ 푟 � By0 expanding ( ; ) and ( ; ) as a Taylor series 푦 𝑡𝑡0 ( )( ) 훼=0 푦 𝑡𝑡0, ( )( )훽0 푦 𝑡𝑡 𝑟𝑟 푦0 𝑡𝑡 𝑟𝑟 푦 𝑡𝑡 𝑟𝑟 푦 𝑡𝑡 𝑟𝑟 ′′ 푟 ∗ 푟 (8) with respect to , we can obtain ( )( ) = , ( )( ) = �∅ 𝑡𝑡 푝 �푟 �∅ 𝑡𝑡 푝 �푟 푦 𝑡𝑡 푈 �𝑡𝑡 푦� 𝑡𝑡 � � 푟 ′ 푟 ( ; ) = ( ; ) + ( ; ) 0 0 0 0 푝 푦 𝑡𝑡 훼 푦 𝑡𝑡 훽 ∞ 푚 (16) ( ; ) = 0( ; ) + 푚=1 푚 ( ; ) �∅ 𝑡𝑡 푝 �푟 푦 𝑡𝑡 𝑟𝑟 ∑ 푦 𝑡𝑡 𝑟𝑟 푝 3. Analysis of Fuzzy Homotopy � ∞ 푚 Analysis Method where �∅ 𝑡𝑡 푝 �푟 푦0 𝑡𝑡 𝑟𝑟 ∑푚=1 푦푚 𝑡𝑡 𝑟𝑟 푝 ( ; ) In the study of this section, we give details and structure ( ; ) = ! 푚 of HAM for the approximate solution of high order FIVPs. 1 휕 �∅ 푡 푝 �푟 푚 (17) The HAM is applied to approximately solve high order ⎧ 푦푚 𝑡𝑡 𝑟𝑟 푚 휕푝( ; ) � ⎪ ( ; ) = 푝=0 linear and nonlinear FIVP. Toward this end, we consider ! 푚 1 �휕 ∅ 푡 푝 �푟 Eq. (7-8) followed by the defuzzification of Eq. (1) In 푚 ⎨ 푦푚 𝑡𝑡 𝑟𝑟 푚 휕푝 � ⎪ 푝=0 Section 2 such that for all [0,1] and according to the If auxiliary⎩ linear operator , the initial HAM we can construct the following correction functions guesses ( ; ) and ( ; ) , the convergence control ̃2 as follows 𝑟𝑟 ∈ parameter , and the auxiliary functionℒ ( ), are properly 푦0 𝑡𝑡 𝑟𝑟 푦0 𝑡𝑡 𝑟𝑟 ( ; ) = , ( )( ) chosen then the series (17) converges to the exact solution (9) at = 1 andℎ we have: 퐻 𝑡𝑡 ′′( ; ) = ∗ , ( )(푟 ) 푦 𝑡𝑡 𝑟𝑟 퐿 �𝑡𝑡 푦� 𝑡𝑡 � ( ; 1) = ( ; ) + ( ; ) � ′′ ∗ 푟 푝 Also, we can write푦 Eq.𝑡𝑡 (10)𝑟𝑟 as푈 follows�𝑡𝑡 푦� 𝑡𝑡 � ∞ (18) ( ; 1) = 0( ; ) + 푚=1 푚 ( ; ) ( ) �∅ 𝑡𝑡 �푟 푦 𝑡𝑡 𝑟𝑟 ∑ 푦 𝑡𝑡 𝑟𝑟 , ( ) = 0 ∞ (10) � 푚=1 ∗ , ( )(푟 ) = 0 According � ∅to 𝑡𝑡[19] �we푟 define푦0 𝑡𝑡 the𝑟𝑟 vectors∑ 푦푚 𝑡𝑡 𝑟𝑟 퐿 �𝑡𝑡 푦� 𝑡𝑡 � � ∗ 푟 ( ) ( ) ( ) ( ; ) = { [ ( ; )] , [ ( ; )] : | ( ; ) = { ; , ; , … , ; } 푈 �𝑡𝑡 푦� 𝑡𝑡 � (19) [ ∗ , ( )( ) ]}, ∗ (11) 푖( ; ) = { 0( ; ), 1( ; ), … , 푚 ( ; )} ℱ��∅� 𝑡𝑡 푝 �� 푚푖푛 퐿 ∅ 𝑡𝑡 푝 푟 푈 ∅ 𝑡𝑡 푝 푟 휇 휇 ∈ 푦⃗ 𝑡𝑡 𝑟𝑟 푦 𝑡𝑡 𝑟𝑟 푦 𝑡𝑡 𝑟𝑟 푦 𝑡𝑡 𝑟𝑟 ∗ 푟 � ( ; ) = {퐿 �𝑡𝑡[ 푦�( 𝑡𝑡; )]�, [ ( ; )] : | Also according푦⃗푖 𝑡𝑡 𝑟𝑟 to푦 0 Section𝑡𝑡 𝑟𝑟 푦1 𝑡𝑡2 𝑟𝑟 differentiating푦푚 𝑡𝑡 𝑟𝑟 (17) [ ∗ , ( )( ) ]}∗, (12) times with respect to the embedding parameter and � 푟 푟 풢��∅ 𝑡𝑡 푝 �� 푚푎푥 퐿∗ ∅ 𝑡𝑡 푝 푟 푈 ∅ 𝑡𝑡 푝 휇 휇 ∈ then setting = 0 and dividing them by !, we have the where represents the membership푈 �𝑡𝑡 푦� 𝑡𝑡 �function of Eq. (1). 
푚 order deformation equation 푝 The zero-order deformation equation can be written as: 푡ℎ 푝 푚 휇 푚 ( ; ) ( ; ) = ( ( ; )) (1 ) ( ; ) ( ; ) = ( ) ( ; ) , (20) (13) 2 푚( ; ) 푚 푚−1( ; ) = 푚( 푚−1( ; )) (1 ) 2 ( ; ) 0( ; ) = ( ) �( ; ) ℒ �푦 𝑡𝑡 𝑟𝑟 − 휒 푦 𝑡𝑡 𝑟𝑟 � ℎℛ 푦⃗ 𝑡𝑡 𝑟𝑟 − 푝 ℒ �∅ 𝑡𝑡 푝 − 푦 𝑡𝑡 𝑟𝑟 � 푝ℏ퐻 𝑡𝑡 ℱ��∅ 𝑡𝑡 푝 �� � � 2 푚 푚 According− 푝 ℒ2� ∅to𝑡𝑡 [9],푝 − 푦0 [𝑡𝑡0,𝑟𝑟1]� is 푝anℏ퐻 embedding𝑡𝑡 풢��∅� 𝑡𝑡 푝parameter,�� whereℒ �푦 푚 𝑡𝑡 𝑟𝑟 − 휒 푦푚−1 𝑡𝑡 𝑟𝑟 � ℎℛ 푦⃗푚−1 𝑡𝑡 𝑟𝑟 and is nonzero convergence- control parameter. The ( ; ) ( ; ) = ( ) is an auxiliary푝 ∈ function while the operators ( )! 푚−1 1 휕 ℱ��∅� 푡 푝 �� ℏ ( ; ) [( ; )] 푚−1 (21) = = 푚 푚−1 푚−1 휕푝 ( ; ) 2 and are auxiliary linear ⎧ℛ �푦⃗ (𝑡𝑡 𝑟𝑟)� �푝=0 퐻 𝑡𝑡 휕 �∅ 푡 푝 � 2 ; = 푟 휕 ∅ 푡 푝 푟 ( )! 푚−1 � operators. 2We can define the2 initial approximation 1 휕 풢��∅ 푡 푝 �� 2 휕푡 2 휕푡 푚−1 ℒ ℒ ⎨ 푚 푚−1 푚−1 휕푝 [ ( )] = ( ; ), ( ; ) , ℛ �푦⃗ 𝑡𝑡 𝑟𝑟 � �푝=0 1 as follows: The⎩ solution of the order deformation for is: 푡ℎ 푦�0 𝑡𝑡 푟 �푦0 𝑡𝑡 𝑟𝑟( ;푦0) 𝑡𝑡=𝑟𝑟 �( ) + ( ) ( ; ) = 푚( ; ) + ( ( ;푚)≥) (14) −1 (22) 푚( ) 푚 푚−1 ( ) 2 푚 푚−1 ( ) 푦0( 𝑡𝑡; 𝑟𝑟) = 𝐶𝐶1 (𝑟𝑟 ) +𝐶𝐶2 (𝑟𝑟 )𝑡𝑡 푦 𝑡𝑡; 𝑟𝑟 = 휒 푦 𝑡𝑡;𝑟𝑟 + ℏℒ ℛ (푦⃗ 𝑡𝑡;𝑟𝑟 ) � −1 � 푚 푚 푚−1 2 푚 푚−1 푦0 𝑡𝑡 𝑟𝑟 𝐶𝐶1 𝑟𝑟 𝐶𝐶2 𝑟𝑟 𝑡𝑡 푦 𝑡𝑡 𝑟𝑟 휒 푦 𝑡𝑡 𝑟𝑟 ℏℒ ℛ 푦⃗ 𝑡𝑡 𝑟𝑟 530 Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions

0, 1 ( ; ) = ( ) + ( ) + = ( ) = (33) 1, > 0 ( ; ) = ( ) + ( ) 2 + = 푛 ( ) 푖 푚 ≤ ℋ 푝 𝑟𝑟 𝐶𝐶1 𝑟𝑟 푝 𝐶𝐶2 𝑟𝑟 푝 ⋯ ∑푖=1 𝐶𝐶푖 𝑟𝑟 푝 휒푚 � � 2 푛 푖 where = [. ] and 푚 = [. ] are whereℋ 푝 (𝑟𝑟 ) , 𝐶𝐶1( 𝑟𝑟) 푝, … 𝐶𝐶are2 𝑟𝑟 the푝 constants⋯ ∑푖= 1that𝐶𝐶푖 𝑟𝑟 become푝 a −1 the inverse− 1fuzzy integral operators, [0,1]. Note that function of r to be found all [0,1]. According to [24], ℒ2 ∫ ∫ 푟푑𝑡𝑡푑𝑡𝑡 ℒ2 ∫ ∫ 푟푑𝑡𝑡푑𝑡𝑡 1 2 we can apply the convergence analysis of HAM that expanding𝐶𝐶̃ 𝑟𝑟 𝐶𝐶̃; 𝑟𝑟, ( ) into Taylor’s series about p, ∀𝑟𝑟 ∈ 𝑟𝑟 ∈ discussed in [24]. we obtain approximate solution series: �∅��𝑡𝑡 푝 𝐶𝐶̌푖 𝑟𝑟 ��푟 ; , ( ) = ( ; ) + y , ( ) (34) 푛 푖 4. Analysis of Fuzzy Optimal 푖 0 푖=1 i 푖 Now,�∅��𝑡𝑡 substitute푝 𝐶𝐶̌ 𝑟𝑟 �� 푟Eq. 푦�(34)𝑡𝑡 into𝑟𝑟 Eqs.∑ (28�� �-𝑡𝑡29),𝐶𝐶̌ 𝑟𝑟and�� 푟equating푝 Homotopy Asymptotic Method the coefficient of like powers of p to obtain the following differential equations: This section presents the analysis of OHPM to obtain an th approximate solution of second order nonlinear FIVP. The zero order problem is given by: According to Section 3, we fuzzify OHAM and then ( ; ) ( )( ) = 0, [ , ] defuzzify for the approximate solution of Eq. (1). Again, (35) 푟 consider Eqs. (7-8) followed by the defuzzification of Eq. ℒ2 �푦 𝑡𝑡 𝑟𝑟 � − 푤 𝑡𝑡 [ ] 𝑡𝑡 ∈ 𝑡𝑡0 푇 (1) in Section 2 such that for all [0,1] we rewrite Eqs. ( ; ), = 0 (8-9) in the following forms 휕 푦 푟 𝑟𝑟 ∈ and for the upperℬ �bound푦 𝑡𝑡 𝑟𝑟 we have� ( ; ) , ( )( ) = 0, [ , ] (23) 휕𝑡𝑡 ∗ 푟 ( ; ) ( )( ) = 0, [ , ] (36) ℒ2 �푦 𝑡𝑡 𝑟𝑟 � − 퐿 �𝑡𝑡 푦� 𝑡𝑡 [ ]� 𝑡𝑡 ∈ 𝑡𝑡0 푇 ( ; ), = 0 (24) 푟 휕 푦 푟 ℒ2�푦 𝑡𝑡 𝑟𝑟 � − 푤 𝑡𝑡 [ ] 𝑡𝑡 ∈ 𝑡𝑡0 푇 ( ; ), = 0 and for the upperℬ bound�푦 𝑡𝑡 we𝑟𝑟 have휕푡 � 휕 푦 푟 ( ; ) , ( )( ) = 0, [ , ] (25) ℬ �푦 𝑡𝑡 𝑟𝑟 � First order Problem: 휕𝑡𝑡 ∗ 푟 2 ( ) [ ] 0 ℒ �푦 𝑡𝑡 𝑟𝑟 � − 푈 �𝑡𝑡 푦� 𝑡𝑡 ; � , 𝑡𝑡 ∈= 0𝑡𝑡 푇 (26) ( ; ) ( ; ) = ( ) ( ; ) 푟 휕 푦 (37) According to OHAM [25.26], Esq.휕푡 (23-26) can also be ℬ �푦 𝑡𝑡 𝑟𝑟 � ℒ푛 � 푦1(𝑡𝑡; 𝑟𝑟)� − 푤(𝑡𝑡; 𝑟𝑟) = 𝐶𝐶1 (𝑟𝑟)ℱ0� 푦�0(𝑡𝑡; 𝑟𝑟)� written as follows: � ℒ̅푛 � 푦1 𝑡𝑡 𝑟𝑟 � − 푤 𝑡𝑡 𝑟𝑟 [𝐶𝐶1 ]𝑟𝑟 풢0� 푦�0 𝑡𝑡 𝑟𝑟 � (1 ) ( ; ) = ( ; ) ( ; ) ( ; ), = 0 1 푟 2 ( ; 2) (27) 휕 푦� − 푝 �ℒ ��∅ 𝑡𝑡 푝 �푟�� ℋ 푝 𝑟𝑟 �ℒ ��∅ 𝑡𝑡 푝 �푟� − Second order ℬProblem�푦�1 𝑡𝑡 𝑟𝑟 � 휕𝑡𝑡 (1 ) ( ; ) = ℱ(��;∅� )𝑡𝑡 푝 �푟��( ; ) ( ; ) ( ; ) = ( ) ( ; ) 2 (2 ; ) (28) − 푝 �ℒ ��∅ 𝑡𝑡 푝 �푟�� ℋ 푝 𝑟𝑟 �ℒ ��∅ 𝑡𝑡 푝 �푟� − 2 2 2 1 2 0 0 ⎧ +ℒ �(푦) 𝑡𝑡 𝑟𝑟 � −(ℒ; �)푦+𝑡𝑡 𝑟𝑟 � ( 𝐶𝐶; )𝑟𝑟, ℱ (� ;푦� ) 𝑡𝑡 𝑟𝑟 � ( ; )풢 ��∅� 𝑡𝑡 푝 �푟�� ⎪ ( ; ) , = 0 (29) ⎪ 1 푛 1 1 0 1 휕�∅� 푡 푝 �푟 𝐶𝐶 𝑟𝑟(�;ℒ )� 푦 𝑡𝑡 𝑟𝑟 � ( ;ℱ )� 푦�= 𝑡𝑡 𝑟𝑟( )푦� 𝑡𝑡 𝑟𝑟(�;� ) ℬ ��∅� 𝑡𝑡 푝 �푟 휕푡 � where ( ; ) and ( ; ) are defined in ⎨ℒ+2 � 푦(2 )𝑡𝑡 𝑟𝑟 � − ℒ(푛;�)푦1+𝑡𝑡 𝑟𝑟 � (𝐶𝐶;2 )𝑟𝑟, 풢0�( 푦�;0 )𝑡𝑡 𝑟𝑟 � Eq.(12-13), p [0, 1] is an embedding parameter, and ⎪ ℱ ��∅� 𝑡𝑡 푝 �푟� 풢 ��∅� 𝑡𝑡 푝 �푟� 1 2 1 1 [ ]0 1 ( ; ) is a nonzero starting fuzzy function, for p≠0, and ⎩ 𝐶𝐶 𝑟𝑟 �ℒ � 푦 𝑡𝑡 𝑟𝑟 (�; )풢, � 푦� 𝑡𝑡=𝑟𝑟 0 푦 � 𝑡𝑡 𝑟𝑟 � � (38) (0; ) = 0, ∈ ( ; ) is an unknown fuzzy function, 휕 푦�2 푟 ℋ� 푝 𝑟𝑟 The general nth orderℬ �푦� 2formula𝑡𝑡 𝑟𝑟 with휕푡 � respect to ( ; ) is respectively. Now, when = 0 and p = 1, we yield: ℋ� 𝑟𝑟 �∅� 𝑡𝑡 푝 �푟 given by: ( ) ( ) ( ) ( ) 푦�푛 𝑡𝑡 𝑟𝑟 ; 0 = ;푝 , ; 1 = ; (31) ( ; ) ( ; ) = ( ) ( ; ) �∅(𝑡𝑡; 0)�푟 = 푦0(𝑡𝑡; 𝑟𝑟), �∅(𝑡𝑡; 1)�푟 = 푦(𝑡𝑡; 𝑟𝑟) 2 푛 2 푛−1 푛 0 0 � ⎧ℒ � 푦 𝑡𝑡 𝑟𝑟 � − ℒ � 푦 𝑡𝑡 𝑟𝑟 � 𝐶𝐶 𝑟𝑟 ℱ � 푦� 𝑡𝑡 𝑟𝑟 � Thus, as 푟p increases0 from 푟0 to 1, the �∅ 𝑡𝑡 � 푦 𝑡𝑡 𝑟𝑟 �∅ 𝑡𝑡 � 푦 𝑡𝑡 𝑟𝑟 ⎪ + 푛−1 ( ) ( ; ) + 푛−1 ( ; ) solution ( ; ) varies from ( ; ) to solution of Eqs. 
⎪ (28-30) denoted by ( ; ), for = 0 we have: ⎪ � 𝐶𝐶푖 𝑟𝑟 �ℒ2 � 푦푛−푖 𝑡𝑡 𝑟𝑟 � ℱ푛−푖 �� 푦푗 𝑡𝑡 𝑟𝑟 �� �∅� 𝑡𝑡 푝 �푟 푦�0 𝑡𝑡 𝑟𝑟 ⎪ 푖 =1 ( ; ) ( ; ) = (푗=)0 ( ; ) [ ] ( ; ) = 0, ( ; ), = 0 (32) 푦� 𝑡𝑡 𝑟𝑟 푝 2 2 푛 0 0 휕 푦�0 푟 ⎨ℒ � 푦푛 𝑡𝑡 𝑟𝑟 � − ℒ � 푦푛−1 𝑡𝑡 𝑟𝑟 � 𝐶𝐶 𝑟𝑟 풢 � 푦� 𝑡𝑡 𝑟𝑟 � The ̃popover2 0 starting function0 (휕푡; ) for equation ⎪ + 푛−1 ( ) ( ; ) + 푛−1 ( ; ) ℒ �푦� 𝑡𝑡 𝑟𝑟 � ℬ �푦� 𝑡𝑡 𝑟𝑟 � ⎪ (28-30) in the form [13]: ⎪ � � 𝐶𝐶푖 𝑟𝑟 �ℒ2 � 푦푛−푖 𝑡𝑡 𝑟𝑟 � 풢푛−푖 �� 푦푗 𝑡𝑡 𝑟𝑟 �� ℋ 푝 𝑟𝑟 ⎪ 푖=1 푗=0 ⎩ Mathematics and Statistics 8(5): 527-534, 2020 531

[ ] ( ; ), = 0 (39) 5. Illustration and Discussion 휕 푦�푛 푟 푛 where [ ( )] is nonhomogeneousℬ �푦� 𝑡𝑡 𝑟𝑟 휕푡 �term of Eq.(1) which Consider the second-order fuzzy nonlinear differential equal to zero if Eq.(1) is homogeneous iand equation given by Jameel et al [27]: 푟 푤� 𝑡𝑡 ( ; ) and ( ; ) are the ( ) = ( ( )) , 0 0.1 푛−1 푛−1 ( ; ) ′′ ′ 2 coefficient푛−푖 푗=0 푗 of in the expansion푛−푖 푗= 0of 푗 and (0) = ( , 2 ), (0) = (1 + , 3 ), [0,1] (45) ℱ �∑ 푦� 𝑡𝑡 𝑟𝑟 � 풢 �∑ 푦� 𝑡𝑡 𝑟𝑟 � 푦 𝑡𝑡 − 푦 𝑡𝑡 ≤ 𝑡𝑡 ≤ ( ; ) about푛 the embedding parameter p ′ 푝 ℱ�∅� 𝑡𝑡 푝 �푟 It푦 can be 𝑟𝑟verified− 𝑟𝑟 that푦 by using 𝑟𝑟the −exact𝑟𝑟 ∀ 𝑟𝑟solution∈ of Eq. � 푟 (45) given in [27] such that 풢�∅ 𝑡𝑡 푝 � ; , ( ) = 푛 ( ; ) = ln [( + ) + ] , ( ; ) = ln [(3 ( ( ; )) + 푖 ( [ ] ) ℱ ��∅��𝑡𝑡 푝 ∑푖=1 𝐶𝐶 𝑟𝑟 ��푟� (40) 푟 푟 ) +푟 ] 2−푟 ⎧ ∞ 푛 푛 ; , ( ) = 푌 𝑡𝑡 𝑟𝑟 푒 푒 2𝑟𝑟−푟𝑡𝑡 푒 2−푌푟 𝑡𝑡 𝑟𝑟 푒 − ⎪ℱ0 푦�0 𝑡𝑡 𝑟𝑟 ∑푛=1 퐹푛 ∑푖=0 푦�푖 푟 푝 푒 𝑟𝑟 𝑡𝑡 푒 ( ( ; )) + 푛 ( [ ] ) Fuzzy OHAM: ⎨ 풢 ��∅��𝑡𝑡 푝 ∑푖=1 𝐶𝐶푖 𝑟𝑟 ��푟� ⎪ ∞ 푛 푛 0 0 푛=1 푛 푖=0 푖 푟 Follow section 4 we have the followings: It has been⎩풢 observed푦� 𝑡𝑡 𝑟𝑟 that∑ the convergence풢 ∑ 푦� of푝 the series Zero order problem: (40) depends upon the auxiliary constants ( ), ( ), … , then at p = 1, we obtain: ( ; ) = 0 (46) ′′ (0; ) = [ , 2 ] 0 (0; ) = [( + 1), (3 )], ̃1 ̃2 푦� 𝑡𝑡 𝑟𝑟 𝐶𝐶 𝑟𝑟 𝐶𝐶 𝑟𝑟 ′ , ( ) ; = ( ; ) + 0 0 푛 푦First� order𝑟𝑟 problem:𝑟𝑟 − 𝑟𝑟 푦� 𝑟𝑟 𝑟𝑟 − 𝑟𝑟 푦�∗ �𝑡𝑡 � 𝐶𝐶̃푖 𝑟𝑟 𝑟𝑟� 푦�0 𝑡𝑡 𝑟𝑟 , ( ); = 1 + ( ) ( ; ) + + 푖=1 , ( )) (41) ′′ ′′ 1 ̃1 ̃1 0 푛 푛 푦� �𝑡𝑡 𝐶𝐶 𝑟𝑟 𝑟𝑟+� (�) 𝐶𝐶( ;𝑟𝑟 )� �푦 𝑡𝑡 𝑟𝑟 (47) ∑푖=1 �푦�푖�𝑡𝑡 ∑푖=1 𝐶𝐶̃푖 𝑟𝑟 ��푟 ′ 2 Substituting Eq. (41) into Eqs. (28-29) it results the ( ) 1 0 ( ) 0; 𝐶𝐶=̃ 0𝑟𝑟 � � 푦 𝑡𝑡0;𝑟𝑟 � = 0 following residual: ′ Second order푦� 1problem:𝑟𝑟 푦�1 𝑟𝑟 , ( ); = , ( ) ; ( ; ) , ( ), ( ); = 1 + ( ) , ( ); + 푛 푛 ′′ ′′ 푖=1 푖 2 ∗ 푖=1 푖 2 ( ) , ( ); ( ; ) + ( ) ( ; ) + ⎧ ℛ�𝑡𝑡 ∑ 𝐶𝐶 𝑟𝑟 𝑟𝑟� ℒ �, 푦 �𝑡𝑡 ∑( )𝐶𝐶; 𝑟𝑟 𝑟𝑟�� − 푤 𝑡𝑡 𝑟𝑟 푦�2 �𝑡𝑡 𝐶𝐶̃1 𝑟𝑟 𝐶𝐶̃2 𝑟𝑟 𝑟𝑟� � 𝐶𝐶̃1 𝑟𝑟 �푦�1 �𝑡𝑡 𝐶𝐶̃1 𝑟𝑟 𝑟𝑟� ⎪ 푛 ′ ′ ′′ ⎪ , ( );−ℱ=�푦� ∗�𝑡𝑡 ∑푖=1,𝐶𝐶̃푖 𝑟𝑟 𝑟𝑟�(�) ; ( ; ) 𝐶𝐶̃1 𝑟𝑟 �푦1 �𝑡𝑡 𝐶𝐶̃1 𝑟𝑟 𝑟𝑟� �푦0 ( 𝑡𝑡; 𝑟𝑟) 𝐶𝐶̃ 2 𝑟𝑟 � 푦� 0 𝑡𝑡 𝑟𝑟(48) 푛 푛 ′ 2 ⎨ℛ�𝑡𝑡 ∑푖=1 𝐶𝐶푖 𝑟𝑟 𝑟𝑟� ℒ̅2,�푦∗�𝑡𝑡 ∑(푖=)1;𝐶𝐶푖 𝑟𝑟 𝑟𝑟�� − 푤 𝑡𝑡 𝑟𝑟 (0; ) = �0� 푦 0 𝑡𝑡 𝑟𝑟(0�; �) = 0 ⎪ 푛 ′ ∗ 푖=1 ̃푖 (42) 2 2 ⎩ −풢 �푦� �𝑡𝑡 ∑ 𝐶𝐶 𝑟𝑟 𝑟𝑟�� Third order 푦problem:� 𝑟𝑟 푦� 𝑟𝑟 ( ) ( ) Again as in previous sections if = 0, then yields , , , ( ); = the exact solution, generally it doesn’t happen, especially 1 + (′′ ) , ( ), ( ); + ∗ 푦�3 �𝑡𝑡 𝐶𝐶̃1 𝑟𝑟 𝐶𝐶̃2 𝑟𝑟 𝐶𝐶̃3 𝑟𝑟 𝑟𝑟� in nonlinear high order FIVPs ℛ�problems. For푦� the ′′ ( ) 2 , ( ), ( ); ̃(1 ; ) +2 ̃1, ( ̃2); + � 𝐶𝐶 𝑟𝑟 �푦� �𝑡𝑡 𝐶𝐶 𝑟𝑟 𝐶𝐶 𝑟𝑟 𝑟𝑟2� determinations auxiliary constants of ( ) , i = 1, 2… n, ′ ′ we choose and T regarding to Eq. (1) such that optimum 𝐶𝐶̃1 𝑟𝑟 � �푦2 �(𝑡𝑡 𝐶𝐶)̃1 𝑟𝑟 𝐶𝐶̃2, 𝑟𝑟 (𝑟𝑟)�;푦�0 +𝑡𝑡 𝑟𝑟 �, �푦1( �)𝑡𝑡; 𝐶𝐶̃1 𝑟𝑟 (𝑟𝑟;��)� + ̃푖 ′′ ′ ′ values of ( ) for the convergent solution𝐶𝐶 𝑟𝑟 of the desired 2 1 1 1 1 0 0 𝐶𝐶̃ 𝑟𝑟 �(푦� ) �𝑡𝑡 𝐶𝐶̃( 𝑟𝑟; )𝑟𝑟+� � 푦 �(𝑡𝑡 ;𝐶𝐶̃) 𝑟𝑟 𝑟𝑟 � � 푦 𝑡𝑡 𝑟𝑟 (49)� 2 problem is 𝑡𝑡obtained. To find the optimal values of ( ) ′′ ′ 푖 here, we apply�𝐶𝐶 𝑟𝑟 the method of as in [11] to 𝐶𝐶̃3 (𝑟𝑟0;�푦�)0 =𝑡𝑡0𝑟𝑟 ��푦(0 ; 𝑡𝑡 )𝑟𝑟=� 0� 푖 obtain Eq. (42), where the residual is given by 𝐶𝐶̃ 𝑟𝑟 ′ Using Mathematica푦�3 𝑟𝑟 11 DSolve푦�3 Package𝑟𝑟 to find the solutions for the lower and the upper bounds of Eqs. 
= ( ;ℛ�) ([ ] ) (46-49), we obtain (43) 2 ∗ ∗ 푟 �ℛ�푟 = ℒ ��푦 �푟� − 푤 (𝑡𝑡 ;𝑟𝑟 )− ℱ ([푦� ] ) ( ; ) = ( ; ) + , ( ); + , ( ), ( ); + � , ( ), ( ), ( ); (50) 2 ∗ ∗ 푟 ∗ 0 1 ̃1 2 ̃1 ̃2 and �ℛ�푟 ℒ̅ ��푦 �푟� − 푤 𝑡𝑡 𝑟𝑟 − 풢 푦� 푦� 𝑡𝑡 𝑟𝑟 푦� 𝑡𝑡 𝑟𝑟 푦� �𝑡𝑡 𝐶𝐶 𝑟𝑟 𝑟𝑟� 푦� �𝑡𝑡 𝐶𝐶 𝑟𝑟 𝐶𝐶 𝑟𝑟 𝑟𝑟� According to Section푦�3�𝑡𝑡 𝐶𝐶 ̃14,𝑟𝑟 the𝐶𝐶̃ 2optimal𝑟𝑟 𝐶𝐶̃3 𝑟𝑟 values𝑟𝑟� of ( ) , ( ) and ( ) can be determined from Eq. (44) as = = = 0 (44) 1 ( ) ( ) ( ) 𝐶𝐶̃ 𝑟𝑟 휕 풮̃ 휕 풮̃ 휕 풮̃ showing in the following tables below 𝐶𝐶̃2 𝑟𝑟 𝐶𝐶̃3 𝑟𝑟 휕퐶̃1 푟 휕퐶̃2 푟 ⋯ 휕퐶̃푛 푟

532 Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions

Table 1. Optimal values of ( ), ( ), and ( ) given by 3 –terms of OHAM of Eq. (45).

𝐶𝐶1 𝑟𝑟 𝐶𝐶2 𝑟𝑟 𝐶𝐶3 𝑟𝑟 r ( ) ( ) ( )

1 2 3 0 0.9351596491831894𝐶𝐶 𝑟𝑟 0.0006082988473094407𝐶𝐶 𝑟𝑟 0.000006999389745013307𝐶𝐶 𝑟𝑟

0.2 −0.9266645756783222 −0.0007742120427816049 −0.000010152801305786626

0.4 −0.9210977315678253 −0.0008932128797667255 −0.000012664712219154883

0.6 −0.9048425010620137 −0.0012862461783431666 −0.000022305602470522420

0.8 −0.8892228272285377 −0.0017260290501527882 −0.000035301876455202014

1 −0.8841507175930892 −0.0018815459669624531 −0.000040411173275566030

− − − Table 2. Optimal values of ( ), ( ), and ( ) given by 3 –terms of OHAM of Eq. (45)

1 2 3 r ( ) 𝐶𝐶 𝑟𝑟 𝐶𝐶 𝑟𝑟 𝐶𝐶( ) 𝑟𝑟 ( )

1 2 3 0 0.8390281381156103𝐶𝐶 𝑟𝑟 0.0035238738570320164𝐶𝐶 𝑟𝑟 0.00010888937024478792𝐶𝐶 𝑟𝑟

0.2 −0.8458057432377328 −0.0032486904248177180 −0.00009567750340935401

0.4 −0.8503931605656544 −0.0030679597757606305 −0.00008736565633093382

0.6 −0.8644987095988346 −0.0025409267849453480 −0.00006483249923558979

0.8 −0.8791435465153755 −0.0020410509805364498 −0.00004591642229799645

1 −0.8841507175930892 −0.0018815459669624531 −0.00004041117327556603

− − − Fuzzy HAM: for (0.01; ; 1) to select the values of since that value of′ for the lower and upper solution of Eq. (45). According to Section 3 the starting function of Eq. (45) 푦� ℎ HAM h curve ℎ is taken as follows 𝑟𝑟 ( ; ) = + (1 + ) (51) 2.00 ( ; ) = (2 ) + (3 ) 푦0 𝑡𝑡 𝑟𝑟 𝑟𝑟 𝑟𝑟 𝑡𝑡 1 � 0 ; 1.99 푦 𝑡𝑡 𝑟𝑟 − 𝑟𝑟 − 𝑟𝑟 𝑡𝑡 h From to Section 3, we have: , 0.01

( ; ) = ( ; ) + ( ( ; )) ' 1.98 −1 (52) y 푦푚(𝑡𝑡; 𝑟𝑟) = 휒푚푦푚−1 (𝑡𝑡;𝑟𝑟) + ℏℒ2 ℛ푚(푦⃗푚−1 (𝑡𝑡;𝑟𝑟)) � −1 1.97 푚 푚 푚−1 2 푚 푚−1 (0푦; ) 𝑡𝑡=𝑟𝑟0, 휒(0푦; ) =𝑡𝑡 0𝑟𝑟, ℏℒ(0; ℛ) = 푦0⃗ , 𝑡𝑡(0𝑟𝑟; ) = 0 ′ ′ 1.96 푚 푚 푚 where푦 𝑟𝑟 푦 𝑟𝑟 푦 푚 𝑟𝑟 푦 𝑟𝑟 2.0 1.5 1.0 0.5 0.0 h

( ; ) = Figure 1. The h-curve of 4- terms Ham approximate solution of Eq. (45) at = 1. ( ; ) +푚 푚−1 ( ; ) ( ; ) ⎧ ℛ �푦⃗ 𝑡𝑡 𝑟𝑟 � ′′ 푚−1 ′ ′ (53) 𝑟𝑟 ⎪ 푗=0 According to [9] and Figure 1, the best convergent ⎪ 푚−1 ∑ 푗 푚−1−푗 푦 𝑡𝑡 𝑟𝑟 푦( ;𝑡𝑡 )𝑟𝑟 =푦 𝑡𝑡 𝑟𝑟 control-parameter = 0.9112 is obtained to adjust the convergence region of the homotopy analysis solution for ⎨ ( ; ) +푚 푚−1 ( ; ) ( ; ) ℛ �푦⃗ 𝑡𝑡 𝑟𝑟 � all [0,1]. ℎ − ⎪ ′′ 푚−1 ′ ′ 푦 푚−1 𝑡𝑡 𝑟𝑟 ∑푗=0 푦 푗 𝑡𝑡 𝑟𝑟 푦 푚−1−푗 𝑡𝑡 𝑟𝑟 Now we can tabulate the absolute From⎩ Eq. (45), we can obtain components of homotopy 𝑟𝑟 ∈ series solution for the lower and the upper bound of Eq. errors [ ] and [ ] of the approximate solutions (45). It is to be noted that the series solution (24) depends (0.1; ) and (0.1; ) obtained by using 4-terms of 퐸 푟 퐸 푟 upon the convergent control-parameter . For a simple HAM series solution for = 1 and = 0.9112 illustration as in [27], we plotted the ћ-curve when = 1 푦compared𝑟𝑟 with 3푦-terms𝑟𝑟 OHAM solution as follows: ℎ ℎ1 − ℎ2 − 𝑟𝑟 Mathematics and Statistics 8(5): 527-534, 2020 533

HAM,OHAM& Exact Y

Exact 2.0 HAM OH AM

1.5

r level set 1.0 0.2 0.4 0.6 0.8 1.0

0.5

0.0

Figure 2. Compression of the approximate solution given by OHAM & HAM with the exact solution of Eq. (45)]

Table 3. Comparison of the accuracy of the results solved by OHAM been formulated under fuzzy set properties to obtain an = 0.1 [0,1] and HAM at for the lower solution of Eq. (45) for all approximate solution of second-order nonlinear FIVPs. r 𝑡𝑡E HAM (0.1; )𝑟𝑟 ∈ OHAM and HAM show that the control and adjustment of the convergence of the series solution using the r 1 푟 2 0 1.53529� � ℎ × 10 3.98956�퐸� ℎ 퐻퐴푀× 10 1퐸.59889𝑟𝑟 ×푂퐻퐴푀10 convergence-control parameters are achieved in a simple −7 −8 −10 0.2 4.51332 × 10 3.91122 × 10 3.91122 × 10 way. The comparison between OHAM and HAM shows −7 −10 −10 that both methods are accurate and powerful in finding the 0.4 0.0000011207 3.79328 × 10 3.79328 × 10 solution of second-order nonlinear FIVP. Therefore, −9 −9 OHAM give a more accurate solution than HAM when 0.6 0.0000024597 4.47279 × 10 2.61947 × 10 −8 −9 = 1 other values of for all fuzzy level sets with 0.8 0.0000049128 6.92457 × 10 6.69669 × 10 fewer terms. Moreover, HAM is faster than OHAM in −8 −8 ℎterms− of analysis and findingℎ the approximate analytical 1 0.00000910987 1.53606 × 10 1.11097 × 10 −7 −8 solution of FIVPs because HAM needs less CPU time. One of the difficulties of OHAM in this study is that we need to Table 4. Comparison of the accuracy of the results solved by OHAM and HAM at = 0.1 for the upper solution of Eq. (45) for all [0,1] determine the convergence control parameters for each r-level sets. The most disadvantage of OHAM is that this r 𝑡𝑡 (0.1; )𝑟𝑟 ∈ method includes many unknowns in some r-level set 1 2 convergence-control parameters and this makes it 0 0�.퐸000096735�푟ℎ 퐻퐴푀 0.�0000067076퐸�푟ℎ 퐻퐴푀 1퐸.26440𝑟𝑟 ×푂퐻퐴푀10 −7 time-consuming for calculating the approximate solution 0.2 0.000064822 0.0000036619 9.18889 × 10 of FIVPs. Toward this end, the results obtained by OHAM −8 and HAM are satisfied by the fuzzy number properties by 0.000042133 0.0000018656 7.34552 × 10 0.4 taking either triangular fuzzy number shape. −8 0.6 0.000026432 8.76047 × 10 3.52946 × 10 −7 −8 0.8 0.000015907 3.76985 × 10 1.51728 × 10 −7 −8 Acknowledgements 1 0.00000910987 1.53606 × 10 1.11097 × 10 −7 −8 This research was funded by a grant research We can conclude from the above tables that the accuracy collaboration project (S/O number 14130) between of the approximate solution solved by 3-terms of OHAM is University Utara Malaysia, Sintok, Kedah, Malaysia, and better than of 4-terms of HAM approximate solution at Al-Kitab University, Kirkuk, Iraq. = 0.9112 and 1 for all [0,1] and

[0,0.01]of Eq. (45). ℏ − − 𝑟𝑟 ∈ 𝑡𝑡 ∈ 6. Summary REFERENCES This study presented and applied two approximate [1] Kaleva, O. (1987), Fuzzy differential equation, Fuzzy Sets analytical methods OHAM and HAM. These methods have Systems, 24, 301-317.

534 Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions

[2] Smita, T., and Chakraverty, S. (2013), Numerical solution of Multistage Optimal Homotopy Asymptotic Method, Journal fuzzy arbitrary order predator-prey equations, Applications of Mathematical and Fundamental Sciences, 50(3). 221-232, and Applied Mathematics, 8(1), 647-673. 2018. [3] El Naschie, M. S. (2005), From Experimental Quantum [15] Liao, S.-J. (2004) ‘On the homotopy analysis method for Optics to Quantum Gravity Via a Fuzzy Kahler Manifold, nonlinear problems’, Applied Mathematics and Chaos, Solitons & Fractals, 25, 969-977. Computation, 147(2), 499–513. [4] Abbod, M. F., Von Keyserlingk, D. G., and Mahfouf, M. [16] Liang, S and Jeffrey, D. J. (2009) ‘An efficient analytical (2001), Survey of utilization of fuzzy technology in approach for solving fourth order boundary value problems’, medicine and healthcare, Fuzzy Sets Systems, 120, Computer Physics Communications, 180, 2034–2040. 331-3491. [17] Fazle, M., (2014). Comparison of optimal homotopy [5] Allahviranloo, T., Ahmady, N., and Ahmady, E. (2007) asymptotic method and homotopy perturbation method for Numerical solution of fuzzy differential equations by non-linear equation. Journal of the Association of Arab predictor–corrector method, 177(7) 1633-1647. Universities for Basic and Applied Sciences, 16(1), 21-26. [6] Necdet. B., Asuman, A., and Sinan, Deniz. (2018) A New [18] Liao, S.-J. (1995) ‘An approximate solution technique not Approach to Fuzzy Differential Equations using depending on small parameters: a special example’, Logarithmic Mean, Journal of Fuzzy Set Valued Analysis, International Journal of Non-Linear Mechanics, (3), 371– 2018 (1), 10-21. 380. [7] Anakira, N. R., Shather, A. H., Jameel, A. F., Alomari, A. K., [19] Ali, F. J., Azizan S., and Hamzeh, H. Z. (2018) Numerical and Saaban, A. (2019) Direct solution of uncertain bratu solution of second-order fuzzy nonlinear two-point initial value problem, International Journal of Electrical and boundary value problems using combination of finite Computer Engineering, 9(6), 221-240. difference and Newton’s methods’, Neural Computing and Applications, 30(10), 3167–3175. [8] Jameel, A. F., Anakira, N. R., Alomari, A. K., Mahameed, M. A., and Saaban, A. (2019), A New Approximate Solution of [20] Fard, O. S. (2009) ‘An Iterative Scheme for the Solution of the Fuzzy Delay Differential Equations, International Generalized System of Linear Fuzzy Differential Equations’, Journal Mathematical Modelling and Numerical World Applied Sciences Journal, 7, 1597-11604. Optimisation, 9(3), 221-240. [21] Guo, X., Shang, D and Lu, X. (2013) ‘Fuzzy Approximate [9] Liao, S.-J. (1992) The Proposed Homotopy Analysis Solutions of Second-Order Fuzzy Linear Boundary Value Techniques for The Solution of Nonlinear Problems, Ph.D. Problems’, Journal of Boundary Value Problems, 2013, pp. dissertation, Shanghai Jiao Tong University, Shanghai, 1-17. China. [22] Dubois, D and Prade, H. (1982) ‘Towards fuzzy differential [10] Rashidi, M. M., Mohimanian pour, S. A., and Abbasbandy, calculus, Part 3: Differentiation’, Fuzzy Sets and Systems, 8, S. (2011) ‘Analytic approximate solutions for heat transfer 225-233. of a micro polar fluid through a porous medium with radiation’, Communications in Nonlinear Science and [23] Zadeh, L. A. (2005) ‘Toward A Generalized Theory of Numerical Simulation, 16, 1874–1889. Uncertainty, Information Sciences’, 172(2), 1–40. [11] Mabood, F., Khan, W.A., Ismail, A.I.M., 2013. Optimal [24] Ghoreishi, M., Ismail A., and Alomari, A. K. 
(2011) homotopy asymptotic method for flow and heat transfer of a ‘Application of the homotopy analysis method for solving a viscoelastic fluid in an axisymmetric channel with a porous model for HIV infection of CD4+ T-cells’, Mathematical wall. PLoS One, 8(12), e83581. and Computer modeling, 54, 3007-3015.

[12] Bogdan, M., Vasile,. M., (2018). Some exact solutions for [25] Idrees, M, Islam, S, Tirmizi, S & Haq, S. (2012), MHD flow and heat transfer to modified second grade fluid ‘Application of the optimal homotopy asymptotic method with variable thermal conductivity in the presence of thermal for the solution of the Korteweg–de Vries equation’, radiation and heat generation/absorption. Computers & Mathematical and Computer Modelling, 55(3),1324-1333. Mathematics with Applications, 76(6), 1515-1524. [26] Iqbal, S., Idrees, M., Siddiqui, A. M., and Ansari, A. R. [13] M. Alipour, M. A. Vali, Appling Homotopy Analysis (2010), ‘Some solutions of the linear and nonlinear Klein– Method to Solve Optimal Control Problems Governed by Gordon equations using the optimal homotopy asymptotic Volterra Integral Equations, Journal of Computer Science & method’, Applied Mathematics and Computation, 216(10), Computational Mathematics, 5(3), 41-47, 2015. 2898–2909.

[14] N. R. Anakira, A. F. Jameel, A. K. Alomari, A. Saaban, M. [27] Jameel, A.F., Ghoreishi, M. and Ismail, A. I. Md. (2014). Almahameed I. Hashim, Approximate Solutions of Approximate Solution of High Order Fuzzy Initial Value Multi-Pantograph Type Delay Differential Equations Using Problems, journal of uncertain systems, 8(2),149-160.

Mathematics and Statistics 8(5): 535-541, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080506

d  -action Induced by Shift Map on 1-Step Shift of Finite Type over Two Symbols and k -type Transitive

Nor Syahmina Kamarudin, Syahida Che Dzul-Kifli*

Faculty of Science and Technology, School of Mathematical Sciences, Universiti Kebangsaan Malaysia, Malaysia

Received April 13, 2020; Revised June 19, 2020; Accepted July 10, 2020

(a): [1] Nor Syahmina Kamarudin, Syahida Che Dzul-Kifli , " d -action Induced by Shift Map on 1-Step Shift of Finite Type over Two Symbols and k -type Transitive," Mathematics and Statistics, Vol. 8, No. 5, pp. 535 - 541, 2020. DOI: 10.13189/ms.2020.080506.

(b): Nor Syahmina Kamarudin, Syahida Che Dzul-Kifli (2020). d -action Induced by Shift Map on 1-Step Shift of Finite Type over Two Symbols and k -type Transitive. Mathematics and Statistics, 8(5), 535 - 541. DOI: 10.13189/ms.2020.080506. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract The dynamics of a multidimensional study of k -chaotic behaviours of d -action on the dynamical system may sometimes be inherited from the multidimensional dynamical system, contributions, and its dynamics of its classical dynamical system. In a application towards symbolic dynamics. multidimensional case, we introduce a new map called a d d -action on space X induced by a continuous map Keywords  -action, Shift of Finite Type, d Topologically Transitive, k -type Transitive fX: → X as Tf :  ×→ XX such that rn() d Tf (,) nx= f () x , where n ∈  , xX∈ and r : d → is a map of the form r() n= hn11 + hn 2 2 ++... hdd n . We then look at how topological transitivity of f effects the behaviour of 1. Introduction d d k -type transitivity of the  -action, Tf . To verify this, The study of  -action has become the current interest we look specifically at spaces called 1-step shifts of finite among researchers which involves the observation of a type over two symbols which are equipped with a map multidimensional dynamical system. Compared to the d called the shift map, σ . We apply some topological classical dynamical system study (or  -action), the d theories to prove the d -action on 1-step shifts of finite behavior of  -action is much more complex and almost impossible to be described. However, there are many type over two symbols induced by the shift map, Tσ is discussions that have an approach study on the chaotic d k -type transitive for all k ∈ 1,2,...,2 whenever σ is d {} behavior of the  -action. topologically transitive. We found a counterexample which The existence of d -action on X originally came shows that not all maps Tσ are k -type transitive for all from group action as the general form. Instead, the main d k ∈{}1,2,...,2d . However, we have also found some focus of group action was replaced with  in the studies. There are many finding results which focus on group sufficient conditions for k -type transitivity for all actions and also some other special kind of group actions. d k ∈{}1,2,...,2 . In conclusions, the map Tσ on 1 -step For instance, Barzanouni et al. [1] had studied group shifts of finite type over two symbols induced by the shift actions on metric space with expansive properties. They found several relations of expansivity in various cases such ∈ d map is k -type transitive for all k {}1,2,...,2 whenever as between subgroup actions and covering maps. The study either the shift map is topologically transitive or satisfies had also introduced orbit expansivity to characterize the the sufficient conditions. This study helps to develop the expansive action. 536 d -action Induced by Shift Map on 1 -Step Shift of Finite Type over Two Symbols and k -type Transitive

Wang and Zhang [2] focused on group actions for which the group is countable and discrete. They defined the notions of local weak mixing and Li-Yorke chaos for this kind of action to show the relation between them. Next, they also studied the topological entropy of actions of an infinite countable amenable group and actions of an infinite countable discrete sofic group on a shift of finite type.

Then, Cairns et al. [3] also studied group actions and defined six notions of dynamical transitivity and mixing in the context of group actions. Interestingly, they highlighted some relations between those six notions, in which they are inherited by subgroups, by taking products and when passing to the induced action on the hyperspace. In addition, their discussions also focused on semi-conjugacy and actions of abelian groups.

There are some studies directed to the chaos of semigroup actions. Wang et al. [4] studied the action of a semigroup and abelian monoid on a Polish space. They learnt about the sensitivity and syndetic transitivity of the system. Then, they had results on which the system was chaotic depending on chaos in the sense of Li-Yorke and Devaney. In [5], they also studied the action of a semigroup and gave a focus on periodicity and transitivity. They also studied some chaotic properties like sensitivity to initial conditions and equicontinuity.

Our major interest in studying the ℤ^d-action is mainly because of some findings from Shah and Das [6,7,8], who studied and introduced the notion of k-type transitivity and some other k-type chaos notions such as k-type periodic point, k-type sensitive dependence on initial conditions, k-type Devaney chaotic and k-type mixing. They also find the relation between k-type sensitive dependence on initial conditions and k-type collective sensitivity for the induced ℤ^d-action. Besides that, Kim and Lee [10] introduced and studied the notions of k-type limit set and k-type non-wandering set of a ℤ^2-action. Their major purpose was to generalize the spectral decomposition theorem for k-type non-wandering points of a ℤ^2-action. On the other hand, Lima [11] had an interest in the study of ℤ^d-actions which are ergodic. The research also tried to find the connection between the ergodic property and positive topological entropy within ℤ^d-actions.

There are also some interests of study on ℤ^d-actions for symbolic dynamical systems. Many studies are interested in looking at shifts of finite type, and therefore they defined the ℤ^d-action on the shift of finite type. In [12], the research learnt about the phenomenon of transition from the classical shifts of finite type to the multidimensional shifts of finite type. The discussion is mainly about an algebraic structure called Wang tiling which appears in certain multidimensional shifts, while the study in [13] was interested in the entropy value of multidimensional shifts of finite type. Next, Boyle and Schraudner [14] extended the result by finding ℤ^d shifts of finite type with positive topological entropy which cannot factor topologically onto the ℤ^d Bernoulli shift on N symbols. However, Pavlov [15] had a different approach which studied ℤ^d-shift spaces. Its main purpose was to give conditions which guarantee a ℤ^d-shift space to be nonsofic.

All of the k-type chaos notions are defined mostly in relation to the chaos notions in the classical dynamical system, and therefore we may see some familiarities through the definitions. The k-type chaos notions do help the research to study the behavior of a ℤ^d-action on a space X with easier understanding and a better way of structure.

The study of Shah and Das in [6] focused on relationships between k-type Devaney chaoticity of a ℤ^d-action and its induced ℤ^d-action. Furthermore, they also highlight some relations especially involving k-type transitivity, dense k-type periodic points, k-type sensitive dependence on the initial condition, k-type weak mixing and k-type mixing. While in [7], their major focus was on the relationship of k-type chaos notions with conjugacy, uniform conjugacy, and product spaces. They also mention the redundancy of k-type sensitive dependence on the initial condition for k-type Devaney chaotic, similar to the finding of Banks et al. in [9] for Devaney chaos on infinite metric spaces in the study of a classical dynamical system. In [8], Shah and Das changed their focus to the notion of k-type collective sensitivity and studied the relation of the new notion with uniform conjugacy and finite products.

In this paper, we introduce a new concept of ℤ^d-action called a ℤ^d-action on X induced by a continuous map f: X → X. Then, we focus on a specific kind of shift of finite type, which is the 1-step shift of finite type over two symbols. Our main purpose is to relate the transitivity of the shift map σ to the k-type transitivity of the ℤ^d-action induced by the shift map on a 1-step shift of finite type over two symbols.

2. ℤ^d-action and Preliminary Definitions

Let d > 0. We let (X, ρ) be a topological dynamical system. A ℤ^d-action on a space X was defined in most of the past studies as a continuous map T: ℤ^d × X → X such that
i. T(0, x) = x, for all x ∈ X,
ii. T(n, T(m, x)) = T(n + m, x), for all n, m ∈ ℤ^d and for all x ∈ X.
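As a quick illustration (not part of the original paper), the following Python sketch checks properties (i) and (ii) numerically for a map of the form T(n, x) = f^{h·n}(x), anticipating the induced actions of Definition 2.1 below; the map f (a cyclic rotation of a tuple) and the coefficient vector h are hypothetical choices made only for this test.

```python
from itertools import product

def rotate(x, k):
    """f^k for f = 'rotate left by one'; defined for any integer k since f is invertible."""
    k %= len(x)
    return x[k:] + x[:k]

def T(n, x, h=(2, -3)):
    """Candidate Z^d-action T(n, x) = f^{r(n)}(x) with r(n) = h1*n1 + ... + hd*nd."""
    r = sum(hi * ni for hi, ni in zip(h, n))
    return rotate(x, r)

x = (0, 1, 1, 0, 1)
d = 2
grid = range(-3, 4)
# property (i): T(0, x) = x
assert T((0,) * d, x) == x
# property (ii): T(n, T(m, x)) = T(n + m, x)
for n in product(grid, repeat=d):
    for m in product(grid, repeat=d):
        nm = tuple(ni + mi for ni, mi in zip(n, m))
        assert T(n, T(m, x)) == T(nm, x)
print("both Z^d-action properties hold on the tested grid")
```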


In addition, T^n: X → X is described by T^n(x) = T(n, x) for all n ∈ ℤ^d, x ∈ X, and clearly T^n is a homeomorphism on X [6].

In the classical system, a given continuous map f: X → X is said to be topologically transitive if for every pair of open sets U and V of X, there exists an integer n > 0 such that f^n(U) ∩ V ≠ ∅.

For a ℤ^d-action, we must let k ∈ {1,2,3,...,2^d} and k'_i ∈ {0,1} such that k = 1 + Σ_{i=1}^{d} k'_i 2^{i−1}. By letting x = (x_1, x_2, ..., x_d) ∈ ℤ^d and y = (y_1, y_2, ..., y_d) ∈ ℤ^d, we say that x >_k y if (−1)^{k'_i} x_i > (−1)^{k'_i} y_i for i ∈ {1,2,...,d}. Then, a ℤ^d-action T: ℤ^d × X → X is said to be k-type transitive if for every open set U and V of X, there exists n >_k 0 such that T^n(U) ∩ V ≠ ∅, where n ∈ ℤ^d [6].

Next, let us introduce the ℤ^d-action on a space X induced by a continuous map f on X into itself in the following definition.

Definition 2.1
Let f: X → X be a continuous map. T_f: ℤ^d × X → X is a ℤ^d-action on X induced by f and is defined by
T_f^n(x) = T_f(n, x) = f^{r(n)}(x)
for n = (n_1, n_2, ..., n_d) ∈ ℤ^d, x ∈ X, and r: ℤ^d → ℤ is a map of the form r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d, where h_i ∈ ℤ for every i ∈ {1,2,...,d}.

We want to highlight a remark that the map r: ℤ^d → ℤ as in Definition 2.1 is a homomorphism and that the map T_f satisfies both properties of a ℤ^d-action.

Next, we let the Full-2-shift Σ_2 be the collection of all two-sided infinite sequences of the symbols 0 and 1. The elements of Σ_2 are of the form x = ...x_{−2}x_{−1}·x_0x_1x_2... = (x_i)_{i∈ℤ}, where x_i ∈ {0,1} for every i ∈ ℤ. A finite string (block) of symbols x_m...x_n is often denoted by x_{[m,n]}, and Σ_2 is endowed with the product topology. In this topology, two points x, y ∈ Σ_2 are regarded as "close" if they agree on a large central block; that is, x_{[−n,n]} = y_{[−n,n]} for some large n. The Full-2-shift Σ_2 is a metric space which is equipped with the metric
ρ(x, y) = 2^{−k}, if x ≠ y and k is maximal so that x_{[−k,k]} = y_{[−k,k]};
ρ(x, y) = 0, if x = y.
Therefore, Σ_2 is a topological space induced by the metric ρ, and the basic open ball is any subset of Σ_2 of the form X_w = {s ∈ Σ_2 | s_{[−n,n]} = w_{[−n,n]}} for any block w of length 2n + 1 [16].

A continuous map on Σ_2, the shift map σ: Σ_2 → Σ_2, is defined by (σx)_i = x_{i+1}; it shifts all sequences to the left, while σ^{−1} is the inverse operation which shifts the sequence to the right. Hence, σ is one-to-one and onto. The composition of σ with itself k > 0 times, σ^k = σ ∘ ... ∘ σ, shifts sequences k places to the left, while σ^{−k} = (σ^k)^{−1} shifts the same amount to the right [16]. A shift space is a closed, shift-invariant subset of Σ_2. Equivalently, let F be any set of blocks (later called the set of forbidden blocks); the set X = X_F of sequences that do not contain any element of F is a shift space. If the set F is finite, then it is called a shift of finite type. A shift of finite type is M-step if the set of forbidden blocks F contains all blocks which have length M + 1. Therefore, a 1-step shift of finite type over two symbols is a shift space whose set of forbidden blocks F contains blocks of length 2.

3. Results and Discussion

3.1. Shift Map on 1-Step Shifts of Finite Type over Two Symbols

In this subsection, we will discuss the shift map on 1-step shifts of finite type over two symbols. With only two symbols, we have four possible different blocks of length two, i.e. 00, 01, 10 and 11; then we have 16 sets of forbidden blocks:
F_1 = ∅, F_2 = {00}, F_3 = {01}, F_4 = {10}, F_5 = {11}, F_6 = {00,01}, F_7 = {00,10}, F_8 = {00,11}, F_9 = {01,10}, F_10 = {01,11}, F_11 = {10,11}, F_12 = {00,01,10}, F_13 = {00,01,11}, F_14 = {00,10,11}, F_15 = {01,10,11}, F_16 = {00,01,10,11}.
For each i ∈ {1,2,...,16}, X_i ⊂ Σ_2 is the 1-step shift of finite type with set of forbidden blocks F_i. However, some of them are singletons, the empty set or the whole Σ_2; they are not in our interest of study due to their trivial dynamics, while some of them are either equal or topologically conjugate. Firstly, it is clear that X_1 = Σ_2 and X_13 = X_14 = X_16 = ∅, while X_6, X_7, X_10, X_11, X_12 and X_15 are singletons. One can show that X_3 and X_4 are topologically conjugate. Then, X_2 and X_5 are topologically conjugate, while X_8 and X_9 are the sets which contain only two elements.
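Before turning to the four specific subshifts, here is a small computational sketch, assuming a finite central window as an approximation of the bi-infinite sequences, of the objects just defined: the metric ρ, the shift map σ and the forbidden-block test. It is illustrative only and not taken from the paper.

```python
# Points of Sigma_2 are modelled as Python functions x(i) -> {0, 1}; all computations
# are truncated to a finite central window [-N, N], so they only approximate the
# bi-infinite objects defined in the text.

def shift(x, n=1):
    """(sigma^n x)_i = x_{i+n}; n may be negative (the inverse shift)."""
    return lambda i: x(i + n)

def rho(x, y, N=50):
    """2^(-k) with k maximal such that x_[-k,k] = y_[-k,k], truncated at the window."""
    if x(0) != y(0):
        return 1.0                      # convention used here when no such k exists
    k = 0
    while k + 1 <= N and all(x(i) == y(i) for i in range(-(k + 1), k + 2)):
        k += 1
    return 0.0 if k == N else 2.0 ** (-k)

def avoids(x, forbidden, N=50):
    """True if no length-2 block of x on the window belongs to the forbidden set."""
    return all(f"{x(i)}{x(i + 1)}" not in forbidden for i in range(-N, N))

# the two-periodic point ...0101.0101... and its shift
p = lambda i: i % 2
q = shift(p)

print(rho(p, q))                 # the two points already differ at coordinate 0
print(avoids(p, {"00", "11"}))   # p lies in X_8 (forbidden blocks 00 and 11)
print(avoids(p, {"01"}))         # p does not lie in X_3 (the block 01 occurs)
```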


From here, we will only look at four different 1-step shifts of finite type, which are X_2, X_3, X_8 and X_9:
X_2 = {(x_i)_{i∈ℤ} | if x_j = 0, then x_{j−1} = x_{j+1} = 1},
X_3 = {(x_i)_{i∈ℤ} | if x_i = 1 and x_{i+1} = 0, then x_j = 1 for all j < i and x_k = 0 for all k > i + 1},
X_8 = {...01·01..., ...10·10...},
X_9 = {...00·00..., ...11·11...}.
It is trivial to prove whether the shift map σ on each of the four 1-step shifts of finite type is topologically transitive or not.

Theorem 3.1
The 1-step shift of finite type (X_2, σ), which has set of forbidden blocks F_2 = {00}, is topologically transitive.

Proof
Let w = w_{−l}...w_{−1}w_0w_1...w_l and v = v_{−k}...v_{−1}v_0v_1...v_k be two allowed blocks in X_2. Then X_w and X_v are two nonempty basic open subsets of X_2. For all possible allowable blocks w and v in X_2, w1v or v1w is also allowable. Let
x = ...w_{−l}...w_{−1}·w_0w_1...w_l 1 v_{−k}...v_{−1}v_0v_1...v_k... ∈ X_w.
Then
σ^{l+k+2}(x) = ...w_{−l}...w_{−1}w_0w_1...w_l 1 v_{−k}...v_{−1}·v_0v_1...v_k... ∈ X_v.
Therefore σ^{l+k+2}(X_w) ∩ X_v ≠ ∅. Hence, σ is topologically transitive.

Theorem 3.2
The 1-step shift of finite type (X_3, σ), which has set of forbidden blocks F_3 = {01}, is not topologically transitive.

Proof
Suppose by contradiction that σ is topologically transitive. Then, for every pair of nonempty basic open subsets U and V of X_3, there exists n > 0 such that σ^n(U) ∩ V ≠ ∅. Let w = 000 and v = 111. Then X_w and X_v are two nonempty basic open subsets of X_3. Since σ is transitive, there exists m > 0 such that σ^m(X_w) ∩ X_v ≠ ∅. Then there exists x ∈ X_w such that σ^m(x) ∈ X_v. Then the sequence x should be x = ...0·00...111.... However, x ∉ X_3, and this is a contradiction since all sequences in X_3 are of the form ...1100.... Therefore σ^s(X_w) ∩ X_v = ∅ for all s > 0. Hence, σ is not topologically transitive.

Theorem 3.3
The 1-step shift of finite type (X_8, σ), which has set of forbidden blocks F_8 = {00,11}, is topologically transitive.

Proof
Since X_8 contains only two elements, there are four possible open subsets of X_8. The subsets are X_8 = {...01·01..., ...10·10...}, U = {...01·01...}, V = {...10·10...} and W = ∅. Let A and B be any pair of the subsets. Then clearly σ(A) ∩ B ≠ ∅. Therefore, σ is topologically transitive.

Theorem 3.4
The 1-step shift of finite type (X_9, σ), which has set of forbidden blocks F_9 = {01,10}, is not topologically transitive.

Proof
Since X_9 contains only two elements, there are four possible open subsets of X_9. The subsets are X_9 = {...00·00..., ...11·11...}, U = {...00·00...}, V = {...11·11...} and W = ∅. However, σ^m(U) ∩ V = ∅ for all m > 0. Therefore, σ is not topologically transitive.
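Theorems 3.1-3.4 can also be cross-checked numerically, assuming the standard correspondence for 1-step shifts of finite type between topological transitivity of the shift and irreducibility (strong connectivity) of the graph of allowed length-2 blocks (cf. Lind and Marcus [16]). The sketch below is only such a sanity check, not a replacement for the proofs above.

```python
# Transition-graph cross-check of Theorems 3.1-3.4 (a sanity check, not a proof).

def allowed_edges(forbidden):
    return {(a, b) for a in "01" for b in "01" if a + b not in forbidden}

def reachable(start, edges):
    seen, stack = {start}, [start]
    while stack:
        a = stack.pop()
        for (u, v) in edges:
            if u == a and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def irreducible(forbidden):
    edges = allowed_edges(forbidden)
    return all(reachable(s, edges) == {"0", "1"} for s in "01")

cases = {"X2": {"00"}, "X3": {"01"}, "X8": {"00", "11"}, "X9": {"01", "10"}}
for name, F in cases.items():
    print(name, "transitive" if irreducible(F) else "not transitive")
# expected: X2 transitive, X3 not, X8 transitive, X9 not
```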

3.2. ℤ^d-action Induced by Shift Map on 1-Step Shifts of Finite Type over Two Symbols

Our main objective is to study the behavior of the ℤ^d-action induced by the shift map on 1-step shifts of finite type over two symbols. Firstly, a ℤ^d-action induced by the shift map on a 1-step shift of finite type over two symbols is given by the following definition.


Definition 3.1
Let X = X_F ⊂ Σ_2 be a 1-step shift of finite type over two symbols and let σ: X → X be the shift map. T_σ: ℤ^d × X → X is a ℤ^d-action on X induced by σ and is defined by
T_σ^n(x) = T_σ(n, x) = σ^{r(n)}(x)
for n = (n_1, n_2, ..., n_d) ∈ ℤ^d, a sequence x ∈ X, and r: ℤ^d → ℤ is a map of the form r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d, where h_i ∈ ℤ for every i ∈ {1,2,...,d}.

We will consider four different 1-step shifts of finite type over two symbols, which are X_2, X_3, X_8 and X_9. We have already seen that the shift map σ on X_2 and X_8 is topologically transitive while it is not on both X_3 and X_9. Now, we want to show whether the ℤ^d-action induced by the shift map, T_σ, on each of the four 1-step shifts of finite type is k-type transitive for all k ∈ {1,2,...,2^d} or not.

Theorem 3.5
The ℤ^d-action on X_2 induced by σ, T_σ: ℤ^d × X_2 → X_2, is k-type transitive for all k ∈ {1,2,...,2^d}.

Proof
Let w and v be two allowed blocks in X_2 with length 2l + 1 and 2j + 1 respectively; that is, w = w_{−l}...w_{−1}w_0w_1...w_l and v = v_{−j}...v_{−1}v_0v_1...v_j. Then X_w and X_v are two nonempty open subsets of X_2. For X_2, with only forbidden block {00}, w11...11v or v11...11w with s ones, for all s ∈ ℕ, is always allowable. Let k ∈ {1,2,...,2^d}. The first case is if h_1 n_1 + h_2 n_2 + ... + h_d n_d > 0 for some n = (n_1, n_2, ..., n_d) >_k 0. Take m = (m_1, m_2, ..., m_d) >_k 0 such that h_1 m_1 + h_2 m_2 + ... + h_d m_d > l + j + 1. Let L = h_1 m_1 + h_2 m_2 + ... + h_d m_d − l − j − 1. Then take
x = ...w_{−l}...w_{−1}·w_0w_1...w_l 11...11 v_{−j}...v_{−1}v_0v_1...v_j... ∈ X_w,
where the middle block 11...11 has length L. Then
T_σ^m(x) = T_σ(m, x) = σ^{r(m)}(x) = σ^{h_1 m_1 + h_2 m_2 + ... + h_d m_d}(x) = σ^{L+l+j+1}(x) = ...w_{−l}...w_{−1}w_0w_1...w_l 11...11 v_{−j}...v_{−1}·v_0v_1...v_j... ∈ X_v.
So T_σ^m(X_w) ∩ X_v ≠ ∅ for m >_k 0. The other case is if h_1 n_1 + h_2 n_2 + ... + h_d n_d < 0 for all n = (n_1, n_2, ..., n_d) >_k 0. Take m = (m_1, m_2, ..., m_d) >_k 0 such that h_1 m_1 + h_2 m_2 + ... + h_d m_d < −l − j − 1. Let L = −l − j − 1 − (h_1 m_1 + h_2 m_2 + ... + h_d m_d). Then take
x = ...v_{−j}...v_{−1}v_0v_1...v_j 11...11 w_{−l}...w_{−1}·w_0w_1...w_l... ∈ X_w,
where the middle block 11...11 again has length L. Then
T_σ^m(x) = T_σ(m, x) = σ^{r(m)}(x) = σ^{h_1 m_1 + ... + h_d m_d}(x) = σ^{−(L+l+j+1)}(x) = ...v_{−j}...v_{−1}·v_0v_1...v_j 11...11 w_{−l}...w_{−1}w_0w_1...w_l... ∈ X_v.
So T_σ^m(X_w) ∩ X_v ≠ ∅ for m >_k 0. By both cases, T_σ is k-type transitive for all k ∈ {1,2,...,2^d}.

Theorem 3.6
The ℤ^d-action on X_3 and X_9 induced by σ is not k-type transitive for all k ∈ {1,2,...,2^d}.

Proof
The proof, which is trivial, uses similar reasons as in Theorems 3.2 and 3.4.

It is complicated to say whether the ℤ^d-action on X_8 induced by σ is k-type transitive for all k ∈ {1,2,...,2^d} or not. Here we illustrate an example of a ℤ^2-action on X_8 induced by σ which is not k-type transitive for all k ∈ {1,2,3,4}.

Example 3.1
Let the ℤ^2-action on X_8 induced by σ be T_σ(n, x) = σ^{2n_1+4n_2}(x) for n = (n_1, n_2) ∈ ℤ^2 and x ∈ X_8. Then T_σ is not k-type transitive for all k ∈ {1,2,3,4}.

Proof
Let k ∈ {1,2,3,4}. Let U = {...01·01...} and V = {...10·10...}. Then σ^m(U) ∩ V ≠ ∅ only for odd integers m. However, 2n_1 + 4n_2 = 2(n_1 + 2n_2) is always even for all entries of n = (n_1, n_2) >_k 0. Therefore T_σ^n(U) ∩ V = ∅ for all n >_k 0. Hence, T_σ is not k-type transitive for all k ∈ {1,2,3,4}.
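The parity obstruction in Example 3.1 is easy to verify by brute force. The sketch below implements the ordering n >_k 0 from Section 2 and searches a finite window; the window size is an arbitrary choice and the check is an illustration, not a proof.

```python
# Numerical illustration of Example 3.1: for r(n) = 2*n1 + 4*n2, no n >_k 0 gives an
# odd r(n), so sigma^{r(n)} can never map U = {...01.01...} onto V = {...10.10...}.
from itertools import product

def k_bits(k, d):
    """(k'_1, ..., k'_d) in {0,1}^d with k = 1 + sum_i k'_i * 2**(i-1)."""
    b = k - 1
    return [(b >> i) & 1 for i in range(d)]

def greater_k(n, k):
    """n >_k 0, i.e. (-1)**k'_i * n_i > 0 for every i."""
    return all((-1) ** bi * ni > 0 for bi, ni in zip(k_bits(k, len(n)), n))

r = lambda n: 2 * n[0] + 4 * n[1]
window = range(-6, 7)
for k in range(1, 5):
    odd_exists = any(r(n) % 2 == 1
                     for n in product(window, repeat=2) if greater_k(n, k))
    print(f"k = {k}: odd r(n) with n >_k 0 found on the window? {odd_exists}")
# every line prints False, matching the conclusion of Example 3.1
```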


Therefore, there is a sufficient condition on the homomorphism r(n) to prove the k-type transitivity of T_σ on X_8. We have a supporting lemma before proving that the ℤ^d-action on X_8 induced by σ is k-type transitive for all k ∈ {1,2,...,2^d}.

Lemma 3.1
If r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d with at least one h_i an odd integer for some i ∈ {1,2,...,d}, then for every k ∈ {1,2,...,2^d}, there exists m >_k 0 such that r(m) is an odd integer.

Proof
Let r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d, where h_i ∈ ℤ for i ∈ {1,2,...,d} and n = (n_1, n_2, ..., n_d) ∈ ℤ^d. Suppose that at least one h_i is an odd integer for some i ∈ {1,2,...,d}. The first case is if h_i is an odd integer for some, but not all, i ∈ {1,2,...,d}. Without loss of generality, let h_1, h_2, ..., h_e be odd integers and h_{e+1}, h_{e+2}, ..., h_d be even integers, for 1 ≤ e < d. Then h_i = 2l_i + 1 for i ∈ {1,2,...,e} and h_t = 2j_t for t ∈ {e+1, e+2, ..., d}, for some l_1, l_2, ..., l_e, j_{e+1}, j_{e+2}, ..., j_d ∈ ℤ. Then,
r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d = (2l_1 + 1)n_1 + (2l_2 + 1)n_2 + ... + (2l_e + 1)n_e + 2j_{e+1} n_{e+1} + 2j_{e+2} n_{e+2} + ... + 2j_d n_d = 2(l_1 n_1 + ... + l_e n_e + j_{e+1} n_{e+1} + ... + j_d n_d) + n_1 + n_2 + ... + n_e.
For k ∈ {1,2,...,2^d}, take m = (m_1, m_2, ..., m_d) >_k 0 such that m_1 = (−1)^{k'_1}(2a + 1), m_i = (−1)^{k'_i}(2b_i) for i ∈ {2,3,...,e} and any arbitrary m_t for t ∈ {e+1, e+2, ..., d}, for some a, b_2, b_3, ..., b_e ∈ ℤ, where (k'_1, k'_2, ..., k'_d) ∈ {0,1}^d is such that k = 1 + Σ_{i=1}^{d} k'_i 2^{i−1}. Then r(m) is an odd integer. The second case is if h_i is an odd integer for all i ∈ {1,2,...,d}. Then h_i = 2l_i + 1 for some l_i ∈ ℤ and all i ∈ {1,2,...,d}. Then,
r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d = (2l_1 + 1)n_1 + (2l_2 + 1)n_2 + ... + (2l_d + 1)n_d = 2(l_1 n_1 + l_2 n_2 + ... + l_d n_d) + n_1 + n_2 + ... + n_d.
For k ∈ {1,2,...,2^d}, take m = (m_1, m_2, ..., m_d) >_k 0 such that m_1 = (−1)^{k'_1}(2a + 1) and m_i = (−1)^{k'_i}(2b_i) for i ∈ {2,3,...,d}, for some a, b_2, b_3, ..., b_d ∈ ℤ, where (k'_1, k'_2, ..., k'_d) ∈ {0,1}^d is such that k = 1 + Σ_{i=1}^{d} k'_i 2^{i−1}. Then r(m) is an odd integer. Based on both cases, there exists m >_k 0 such that r(m) is an odd integer for every k ∈ {1,2,...,2^d}.

Theorem 3.7
Let the ℤ^d-action on X_8 induced by σ be T_σ(n, x) = σ^{r(n)}(x) for n = (n_1, n_2, ..., n_d) ∈ ℤ^d and x ∈ X_8. If r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d with at least one h_i an odd integer for some i ∈ {1,2,...,d}, then T_σ is k-type transitive for all k ∈ {1,2,...,2^d}.

Proof
The four possible open subsets are X_8 = {...01·01..., ...10·10...}, U = {...01·01...}, V = {...10·10...} and W = ∅. Suppose that r(n) = h_1 n_1 + h_2 n_2 + ... + h_d n_d with at least one h_i an odd integer for some i ∈ {1,2,...,d}. By Lemma 3.1, for every k ∈ {1,2,...,2^d}, there exists m >_k 0 such that r(m) = L, where L is an odd integer. Note that σ^n(...01·01...) = ...10·10... and σ^n(...10·10...) = ...01·01... for every odd integer n. Let A and B be any pair of the subsets. Then
T_σ^m(A) ∩ B = T_σ(m, A) ∩ B = σ^{r(m)}(A) ∩ B = σ^L(A) ∩ B ≠ ∅,
since L is an odd integer. Therefore, T_σ is k-type transitive for all k ∈ {1,2,...,2^d}.
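Lemma 3.1 can likewise be illustrated by a finite search. In the sketch below, h is a hypothetical coefficient vector with at least one odd entry (not taken from the paper), and for every k a witness m >_k 0 with r(m) odd is located on a small window.

```python
# Brute-force illustration of Lemma 3.1 on a finite window (not a proof).
from itertools import product

def k_bits(k, d):
    b = k - 1
    return [(b >> i) & 1 for i in range(d)]

def greater_k(m, k):
    return all((-1) ** bi * mi > 0 for bi, mi in zip(k_bits(k, len(m)), m))

h = (3, 2, 4)                        # hypothetical coefficients, one of them odd
d = len(h)
r = lambda m: sum(hi * mi for hi, mi in zip(h, m))
window = range(-3, 4)
for k in range(1, 2 ** d + 1):
    witness = next((m for m in product(window, repeat=d)
                    if greater_k(m, k) and r(m) % 2 == 1), None)
    print(f"k = {k}: m = {witness}, r(m) = {r(witness)}")
```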

We list all the results in Table 1.

Table 1.  The Dynamics of ℤ^d-action on 1-Step Shifts of Finite Type over Two Symbols Induced by Shift Map

Space               | (X_F, σ)                      | (X_F, T_σ)
X_2, F_2 = {00}     | Topologically transitive      | k-type transitive for all k ∈ {1,2,...,2^d}
X_3, F_3 = {01}     | Not topologically transitive  | Not k-type transitive
X_8, F_8 = {00,11}  | Topologically transitive      | k-type transitive for all k ∈ {1,2,...,2^d} with sufficient condition
X_9, F_9 = {01,10}  | Not topologically transitive  | Not k-type transitive


4. Conclusions

In this study, we considered four different kinds of 1-step shifts of finite type over two symbols and observed the influence of the shift map, σ, on the ℤ^d-action induced by the shift map, T_σ. Table 1 shows that among the four spaces, two of them are not topologically transitive and also not k-type transitive for all k ∈ {1,2,...,2^d}. The shift map σ on both X_2 and X_8 is topologically transitive. Then, the ℤ^d-action on X_2 induced by the shift map σ is k-type transitive, while the action on X_8 is k-type transitive only whenever a sufficient condition is satisfied. Therefore, the main conclusion we have here is that a ℤ^d-action on a 1-step shift of finite type over two symbols induced by the shift map is k-type transitive for all k ∈ {1,2,...,2^d} whenever the shift map is topologically transitive and a sufficient condition on the homomorphism r(n) is satisfied.

Acknowledgments

The authors would like to thank Universiti Kebangsaan Malaysia and the Center for Research and Instrumentation (CRIM) for the financial funding through GUP-2019-054.

REFERENCES

[1] A. Barzanouni, M. S. Divandar, E. Shah. On properties of expansive group actions, Acta Mathematica Vietnamica, Vol. 44, No. 4, 923-934, 2019.
[2] Z. Wang, G. Zhang. Chaotic behavior of group actions, Contemporary Mathematics, Vol. 669, 299-316, 2016.
[3] G. Cairns, A. Kolganova, A. Nielsen. Topological transitivity and mixing notions for group actions, Rocky Mountain Journal of Mathematics, Vol. 37, No. 2, 371-397, 2007.
[4] H. Wang, X. Long, H. Fu. Sensitivity and chaos of semigroup actions, Semigroup Forum, Vol. 84, 81-90, 2012.
[5] F. Schneider, S. Kerkhoff, M. Behrisch, S. Siegmund. Chaotic actions of topological semigroups, Semigroup Forum, Vol. 87, 590-598, 2013.
[6] S. Shah, R. Das. A note on chaos for ℤ^d-actions, Dynamics of Continuous, Discrete and Impulsive Systems Series A: Mathematical Analysis, Vol. 22, No. 2, 95-103, 2015.
[7] S. Shah, R. Das. On different types of chaos for ℤ^d-actions, Journal of Mathematics Research, Vol. 7, No. 3, 191-197, 2015.
[8] S. Shah, R. Das. On collective sensitivity for ℤ^d-actions, Dynamical Systems, Vol. 31, No. 2, 1-7, 2016.
[9] J. Banks, J. Brooks, G. Cairns, G. Davis, P. Stacey. On Devaney's definition of chaos, The American Mathematical Monthly, Vol. 99, No. 4, 332-334, 1992.
[10] D. Kim, S. Lee. Spectral decomposition of k-type nonwandering sets for ℤ^2-actions, Bulletin of the Korean Mathematical Society, Vol. 51, No. 2, 387-400, 2014.
[11] Y. Lima. ℤ^d-actions with prescribed topological and ergodic properties, Ergodic Theory and Dynamical Systems, Vol. 32, No. 1, 191-209, 2012.
[12] K. Schmidt. Multi-dimensional symbolic dynamical systems, Codes, Systems and Graphical Models, Vol. 123, 67-82, 2001.
[13] M. Hochman, T. Meyerovitch. A characterization of the entropies of multidimensional shifts of finite type, Annals of Mathematics, Vol. 171, 2011-2038, 2010.
[14] M. Boyle, M. Schraudner. ℤ^d shifts of finite type without equal entropy full shift factors, Journal of Difference Equations and Applications, Vol. 15, No. 1, 47-52, 2009.
[15] R. Pavlov. A class of nonsofic ℤ^d shift spaces, Proceedings of the American Mathematical Society, Vol. 141, No. 3, 987-996, 2013.
[16] D. Lind, B. Marcus. Introduction to Symbolic Dynamics and Coding, Cambridge University Press, Cambridge, 1995.

Mathematics and Statistics 8(5): 542-550, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080507

Modified Average Sample Number for Improved Double Sampling Plan Based on Truncated Life Test Using Exponentiated Distributions

O. S. Deepa

Department of Mathematics, Amrita School of Engineering, Coimbatore Amrita Vishwa Vidyapeetham, India

Received April 18, 2020; Revised June 30, 2020; Accepted July 20, 2020

(a): [1] O. S. Deepa , "Modified Average Sample Number for Improved Double Sampling Plan Based on Truncated Life Test Using Exponentiated Distributions," Mathematics and Statistics, Vol. 8, No. 5, pp. 542 - 550, 2020. DOI: 10.13189/ms.2020.080507. (b): O. S. Deepa (2020). Modified Average Sample Number for Improved Double Sampling Plan Based on Truncated Life Test Using Exponentiated Distributions. Mathematics and Statistics, 8(5), 542 - 550. DOI: 10.13189/ms.2020.080507. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  The reliability of the product has become a dynamic issue in a worldwide business market. Generally, acceptance sampling guarantees the superiority of the product. In an acceptance sampling plan, increasing the sample size may lead to minimization of the customer's risk of accepting bad lots and the producer's risk of rejecting good lots to a certain level, but will increase the cost of inspection. Hence, truncation of the life test time may be introduced to reduce the cost of inspection. The Modified Average Sample Number (MASN) for the Improved Double Sampling Plan (IDSP) based on a truncated life test for the popular exponentiated family, such as the exponentiated gamma, exponentiated Lomax and exponentiated Weibull distributions, is considered. The modified ASN creates a band width for the average sample number which is much useful for the consumer and producer. The interval for the average sample number gives the consumer the choice of a maximum and minimum sample size, which is of much benefit without any loss for the producer. The probability of acceptance and the average sample number based on the modified double sampling plan for the lower and upper limit are computed for the exponentiated family. Optimal parameters of the IDSP under various exponentiated families with different shape parameters were computed. The proposed plan is compared with traditional double sampling and modified double sampling using the Gamma distribution, Weibull distribution and Birnbaum-Saunders distribution, and it is shown that the proposed plan with respect to the exponentiated family performs better than all other plans. The tables were provided for all distributions. A comparative study of tables based on the proposed exponentiated family and earlier existing plans is also done.

Keywords  Exponentiated Gamma, Exponentiated Lomax, Exponentiated Weibull, Exponentiated Exponential, Average Sample Number, Modified Double Sampling Plan

1. Introduction

The main goal of the manufacturing industries is improving the quality of the product, which could be attained by acceptance sampling plans. In the acceptance sampling plan, a sample is taken from the lot and the lot is accepted or rejected based on the inspection of the sample from the lot. The number of failures is static and is decided based on the opinion of both producer and consumer. If the failures that are found after inspection are more than the static number specified, then the lot is rejected. This process is known as a life test. Various studies were made on acceptance sampling [2,3,5,10,11,12,13,14,15,16].

The parameters of the modified double sampling plan are n1, n2, c1 and c2. The probability of acceptance of the MDSP is as given by Aslam et al [2], and

ASN(p) = n1 + n2 Σ_{i=c1+1}^{c2} C(n1, i) p^i (1 − p)^{n1−i}.

Here p is called the probability of failure, which can be determined based on the distribution of the lifetime of a product. Generally, the acceptable reliability level p1, the lot tolerance reliability level p2, the producer's risk α and the consumer's risk β are mostly considered.

2. Methodology

2.1. Modified Average Sample Number - MASN

Generally, Minimize ASN subject to Pa(p1) ≥ 1 − α and Pa(p2) ≤ β is calculated for the average sample number. The modified ASN creates a band for the consumer's confidence level which does not affect the probability of acceptance. The interval for the average sample number gives the consumer the choice of a maximum and minimum sample size, which is of much benefit without any loss for the producer. Another advantage is that in spite of different sample sizes [n1, n2], the probability of acceptance remains constant and the average sample number also remains constant or only slightly deviates. It will be better to use β1 and β2 instead of β, which is beneficial for both producers and consumers, because various values of β with a small difference may make the computation lengthy and cannot predict an interval for the sample size with a fixed average sample number and probability of acceptance. The choice of any sample size within the interval favours both producers and consumers. So the optimized problem for the modified ASN is given as:

Minimize MASN_L(p2)
subject to Pa_L(p1) ≥ 1 − α, Pa_L(p2) ≤ β1, n11 > 1, n21 > 1, c2 > c1 > 0,

and

Minimize MASN_U(p2)
subject to Pa_U(p1) ≥ 1 − α, Pa_U(p2) ≤ β2, n12 > 1, n22 > 1, c2 > c1 > 0,

where β1 − β2 < δ; the threshold δ can be any value depending on the need of the producer and consumer (for example δ = 0.001, 0.01, 0.1).

2.2. Improved Double Sampling Plan: IDSP

The probability of acceptance of the improved plan is given by

 c1 c2 c1  n11 i n11 −i n11 j n11 − j n21 i n21 −i PaL ( p) = ∑ Ci p (1− p) + ∑ i C j p (1− p) ∑ Ci p (1− p)  = = + =  i 1 j c1 1 i 1 

 c1 c2 c1  n12 i n12 −i n12 j n12 − j n22 i n22 −i PaU ( p) = ∑ Ci p (1− p) + ∑ i C j p (1− p) ∑ Ci p (1− p)  = = + =  i 1 j c1 1 i 1 

Pa_L(p) and Pa_U(p) are the probabilities of acceptance with the sample sizes n11, n21, n12 and n22. The modified average sample number for the IMDSP plan is given by

Minimize ASN L (α) = (n11 + n21 (1− PaL ( pα ))) , subject to PaL ( p1 ) ≥ 1−α , PaL ( p2 ) ≤ β1

Minimize ASNU (α) = (n12 + n22 (1− PaU ( pα ))) , subject to PaU ( p1 ) ≥ 1−α , PaU ( p2 ) ≤ β1

Also, PaL ( p) = PaU ( p) ≥ 1−α
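As a numerical sketch, the fragment below evaluates a two-stage acceptance probability of the product form displayed above together with ASN(p) = n1 + n2 Σ_{i=c1+1}^{c2} C(n1, i) p^i (1 − p)^{n1−i}, and then runs a small grid search of the kind of constrained minimization stated in Section 2.1. The risks, quality levels and acceptance numbers are illustrative assumptions, and the zero-failure term is included in the acceptance sums (the usual convention); none of this reproduces the authors' tables.

```python
from math import comb

def b(n, i, p):
    """Binomial pmf term C(n, i) p^i (1-p)^(n-i)."""
    return comb(n, i) * p ** i * (1 - p) ** (n - i)

def pa(p, n1, n2, c1, c2):
    """Acceptance probability of the two-stage plan (zero-failure term included)."""
    stage1 = sum(b(n1, i, p) for i in range(c1 + 1))
    stage2_accept = sum(b(n2, i, p) for i in range(c1 + 1))
    resample = sum(b(n1, j, p) for j in range(c1 + 1, c2 + 1))
    return stage1 + resample * stage2_accept

def asn(p, n1, n2, c1, c2):
    """ASN(p) = n1 + n2 * P(first-stage count falls in (c1, c2])."""
    return n1 + n2 * sum(b(n1, j, p) for j in range(c1 + 1, c2 + 1))

# illustrative search: smallest-ASN plan meeting both risk constraints
alpha, beta, p1, p2 = 0.05, 0.10, 0.02, 0.20   # assumed risks and quality levels
c1, c2 = 1, 3                                  # assumed acceptance numbers
best = None
for n1 in range(2, 60):
    for n2 in range(2, 60):
        if pa(p1, n1, n2, c1, c2) >= 1 - alpha and pa(p2, n1, n2, c1, c2) <= beta:
            cand = (asn(p2, n1, n2, c1, c2), n1, n2)
            if best is None or cand < best:
                best = cand
print("minimal ASN(p2), n1, n2:", best)
```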


3. Results the cumulative distribution function can be rewritten as γ  θ  − 0 ( γ ( + (γ )))  a 2 A1  θ   3.1. Designing the Improved Double Sampling Plan: =  −  θ    γ + γ +  p 1 e  (a )((2 A1 ( )) 1 IDSP  θ 0     The designing of modified double sampling plan is done Hence in case of exponentiated gamma distribution, the with respect to three distributions ARL and the LTRL are attained by considering  θ  (i) Exponentiated Gamma Distribution   > 1 θ 0  (ii) Exponentiated Lomax Distribution and  θ  . (iii) Exponentiated Weibull Distribution   = 1 θ 0  3.1.1. Exponentiated Gamma Distribution These ideal parameters of the plan under exponentiated Gamma distribution are given in Table 1-2 with unequal The significance of exponentiated gamma distribution and equal sample size and γ = 1 and lies in its capability to model monotone and non-monotone γ = 2 . It is noted from the Table 1-2 that functions, which are reasonably widespread in reliability and lifetime data analysis. The cdf and pdf of θ (i) For fixed β1 as increases, the value of ASN exponentiated gamma distribution is given by θ 0 F(x;θ,λ) = [1− e−λx (λx +1)]θ ,θ,λ, x > 0 also increases and the interval of ASN is narrow. For θ 2 −λx −λx θ −1 example if β = 0.25 and = 8,10,12 then the f (x;θ,λ) = θλ xe [1− e (λx +1)] ,θ,λ, x > 0 1 θ 0 where θ is the shape parameter and λ is the scale ASN is given by [6.354,7.926] , [6.356,8.711] , parameter [6.356,9.104] The mean life time under exponentiated gamma (ii) As β increases, the ASN also increases but the distribution is given by 1 interval of ASN is narrow.

γ (iii) In Table 1, for β1 = 0.05 with θ = [2 + A1(γ )] λ n11 = 8,n21 = 9,n12 = 8,n22 = 14 the ASN is where [9.362,10.119]. (iv) As the shape parameter increases from γ = 1 to ∞ j γ −1 j  Γ(r + k + 2) A (γ ) = (−1) j    ,r =1,2.... γ = 2 , the ASN decreases. 1 ∑∑    + r+k+2 j=1 k=0  j k  (1 j) (v) For different values of consumers risk and termination ratios , range of ASN is for According to Asalm and Jun [2] ,the termination life [6,10], time can be expressed as the multiple of specified average c1= 0,c2 = 1 which is a minimum value compared to all existing plan . life θ 0 and the termination ratio a i.e., t0 = aθ 0 . Hence
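One hedged way to reproduce the failure probability p entering the tables for the exponentiated gamma case is sketched below: the termination time is taken as t0 = aθ0, the scale λ is calibrated numerically so that the mean life equals the assumed true mean (θ/θ0)·θ0, and p = F(t0). This bypasses the closed-form series A1(γ) quoted above and is a cross-check under those assumptions, not the authors' computation; the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def cdf_exp_gamma(x, gamma, lam):
    """F(x) = [1 - exp(-lam*x) * (lam*x + 1)]**gamma, the exponentiated gamma cdf."""
    return (1.0 - np.exp(-lam * x) * (lam * x + 1.0)) ** gamma

def mean_exp_gamma(gamma, lam):
    """Mean life via numerical integration of the survival function on (0, inf)."""
    return quad(lambda x: 1.0 - cdf_exp_gamma(x, gamma, lam), 0, np.inf)[0]

def failure_probability(gamma, mean_ratio, a, theta0=1.0):
    """p = F(a*theta0) when the true mean life equals mean_ratio * theta0."""
    m1 = mean_exp_gamma(gamma, 1.0)          # mean at unit scale
    lam = m1 / (mean_ratio * theta0)         # scale giving the desired mean
    return cdf_exp_gamma(a * theta0, gamma, lam)

# illustrative values mirroring the table layout: gamma = 2, a = 1, theta/theta0 = 4
print(failure_probability(gamma=2, mean_ratio=4, a=1))
```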


Table 1. Ideal parameters of specified distribution under exponentiated Gamma distribution with shape parameter γ = 2 and a = 1 .

θ β n ,n P P ASN 11 12 aL ASN L n12 ,n22 aU U θ 0 4 4,5 0.96959 5.963 4,9 0.95429 7.534

0.25 6 4,5 0.99258 5.963 4,10 0.98743 7.926

8 4,6 0.99703 6.354 4,11 0.99518 8.319

10 4,6 0.99871 6.356 4,12 0.99773 8.711

12 4,6 0.99936 6.356 4,13 0.99878 9.104

0.1 6 6,6 0.98592 7.552 6,10 0.98001 8.586

8 6,7 0.99447 7.810 6,11 0.9923 8.844

10 6,8 0.99734 8.069 6,12 0.9963 9.103

12 6,9 0.99854 8.327 6,12 0.99818 9.103

0.05 6 8,8 0.9756 9.211 8,11 0.9701 9.665

8 8,9 0.99053 9.362 8,13 0.98776 9.680

10 8,9 0.99584 9.362 8,14 0.99427 10.119

12 8,10 0.99774 9.514 8,15 0.99694 10.271

0.01 8 12,13 0.97999 12.5691 12,15 0.97805 12.657

10 12,14 0.9906 12.613 12,16 0.9897 12.700

12 12,15 0.99451 12.657 12,17 0.9931 12.744

Table 2. Ideal parameters of specified distribution under exponentiated Gamma distribution with shape parameter γ = 2 and a = 2

θ

β P ASN n11 ,n12 PaL ASN L n12 ,n22 aU U θ 0 0.25 4 3,6 0.9953 5.3434 3,10 0.9933 6.906 6 3,7 0.9997 5.734 3,11 0.9995 7.296 8 3,8 0.99995 6.125 3,12 0.9993 7.686 10 3,9 6.515 3,13 0.9998 8.077 0.1 4 4,4 0.9954 5.087 4,7 0.99305 5.902 6 4,5 0.9996 5.359 4,8 0.99947 6.174 8 4,6 0.9999 5.630 4,9 0.99992 6.445 10 4,6 0.9999 5.630 4,9 0.99992 6.445 0.05 4 5,6 0.99184 6.063 5,9 0.9890 6.594 6 5,7 0.99937 6.2404 5,9 0.99923 6.594 8 5,7 0.99992 6.2404 5,10 0.99989 6.772 10 5,8 0.9998 6.418 5,12 0.99997 6.875 0.01 4 8,11 0.99792 8.4429 8,15 0.97648 8.604 6 8,12 0.99829 8.4832 8,16 0.99788 8.644 8 8,14 0.99974 8.5637 8,18 0.999969 8.725 10 8,16 0.99994 8.6443 8,20 0.99993 8.805


3.1.2. Exponentiated Lomax Distribution These ideal parameters of the MDSP plan under Two parameter exponentiated Lomax distributions are exponentiated Lomax distribution are given in Table 3-5 used in analyzing several lifetime data. with unequal and equal sample size and shape parameter The CDF and PDF of a exponentiated Lomax γ = 1 and γ = 2 , It is noted from the Table 3-5 that θ distribution is given by (i) For fixed β1 as increases, the value of ASN α θ 0 = − + λ −θ > θ α λ > F(x) [1 (1 x) ] , x 0; , , 0 increases and the interval of ASN is very narrow. For −θ α −1 −(θ +1) example if β = and θ then the ASN f (x) = αθλ[1− (1+ λx) ] (1+ λx) 1 0.25 = 6,8,10 θ 0 α,γ are the shape parameters and λ is the scale is given by [10.115,10.738] , [11.049,11.672] , parameter. [12.295,13.229] The mean life time under exponentiated exponential (ii) As β increases, the ASN also increases but the distribution is given by 1 interval of ASN is very narrow. α   1   θ θ = β1− ,α  − β (1,α ) (iii) In Table 7, for β = 0.05 , = 10 with λ   γ   1 θ 0

According to Asalm and Jun [2], the cumulative n11 = 6,n21 = 8,n12 = 6,n22 = 10 the ASN is 7. distribution function can be rewritten as (iv) As the termination ratio increases from a = 0.5 to −γ α     a = 1 , the ASN also increases.       aα   1    (v) For different values of consumers risk, the mean ratio p = 1− 1+ β1− ,α  − β (1−α )  values and termination ratios the maximum value of  θ  γ          ASN is only 23 which is a minimum value compared        θ 0   to all existing plan.  

Table 3. Ideal parameters of specified distribution under exponentiated Lomax distribution with shape parameter γ = 1 and a = 1

θ β n ,n P n ,n P ASN 11 12 aL ASN L 12 22 aU U θ 0 0.25 14 3,3 0.97044 4.125 3,3 0.9547 4.125

16 3,3 0.9547 4.125 3,4 0.9564 4.5

18 3,3 0.96402 4.5 3,5 0.95841 4.875

20 3,4 0.964421 4.5 3,5 0.96536 4.875

0.1 20 4,4 0.9587 5 4,5 0.95162 5.25

22 4,5 0.95897 5.25 4,5 0.95897 5.25

24 4,6 0.96015 5.5 4,7 0.95572 5.75

0.05 26 5,6 0.955004 5.937 5,7 0.950301 6.09

28 5,6 0.960461 6.01 5,7 0.956275 6.09

30 5,7 0.961236 6.09 5,8 0.957609 6.25

0.01 36 7,8 0.95439 7.437 7,9 0.95091 7.492

38 7,10 0.95219 7.547 7,10 0.95219 7.547

40 7,11 0.95347 7.602 7,12 0.950733 7.656


Table 4. Ideal parameters of specified distribution under exponentiated Lomax distribution with shape parameter γ = 2 and a = 1 θ β P ASN n11 ,n12 PaL ASN L n12 ,n22 aU U θ 0 0.25 6 6,7 0.97924 8.4917 6,8 9.88476 8.8476 8 7,10 0.99544 10.1146 7,12 9.96255 10.7375 10 7,13 0.99297 11.0490 7,15 9.992162 11.6719 12 7,17 0.99544 12.2948 7,20 9.99423 13.2292 0.1 6 9,10 0.95734 11.2525 9,12 0.95223 11.7030 8 9,13 0.979818 11.9283 9,15 0.977718 12.3788 10 9,14 0.98949 12.15336 9,15 0.98762 12.3788 12 9,15 0.994173 12.6040 9,18 0.993628 13.0545 0.05 8 11,16 0.970596 13.4777 11,18 0.96818 13.7875 10 11,20 0.9836 14.0972 11,22 0.98243 14.4069 12 11,25 0.980749 14.8715 11,27 0.98906 15.1812 0.01 10 17,30 0.96433 18.2778 18,30 0.961958 19.01478 12 20,30 0.97656 20.7399 20,35 0.973073 20.63424 14 22,33 0.983526 22..4316 22,42 0.980506 22.5494

Table 5. Ideal parameters of specified distribution under exponentiated Lomax distribution with shape parameter γ = 2 and a = 2

θ β n11 ,n12 PaL ASN L n12 ,n22 PaU ASNU θ 0 0.25 6 3,4 0.951275 4.646 3,8 0.92231 6.292 8 3,5 0.97491 5.058 3,9 0.96132 6.704 10 3,6 0.98739 5.078 3,10 0.97839 7.115 0.1 8 4,6 0.96015 5.829 4,8 0.95146 6.439 10 4,7 0.97726 6.134 4,9 0.97267 6.743 12 4,8 0.98589 6.439 4,11 0.08199 7.353 0.05 10 6,8 0.9600 7.129 6,10 0.9537 7.411 12 6,9 0.9753 7.270 6,11 0.97167 7.5573 0.01 12 8,9 0.96531 8.5226 8,11 0.96059 8.6389

3.1.3. Exponentiated Weibull Distribution: 1   α −1α −1 − −1 The distribution function and the probability density −1 1 i γ αλ Γ +1∑ (−1) (i +1) ,α ∈ N function are given by   γ  i=0 i  E(Z ) =  1 γ α  α −1 − −1 = − − λ > α λ γ >  −1 1 α −1 Pi i γ Gα (x) {1 exp( ( x) } , x 0, , , 0 αλ Γ +1∑ (−1) (i +1) ,α ∉ N   γ  i=0 i! α −1 g(x) = αγλγ x−1{1− exp(−(λx)γ } exp{−(λx)γ }, x > 0 According to Asalm and Jun [2], the cumulative α,γ are the shape parameters and λ is the scale distribution function can be rewritten as parameter. α   1  aθ 0  1  α −1 i − The mean life time under exponentiated weibull γ p = 1− exp− αΓ +1∑ Ci (−1) (i +1) −1 ,α ∈ N distribution is  θ  γ     


These ideal parameters of the MDSP plan under (ii) As β1 increases, the ASN also increases but the exponentiated Lomax distribution are given in Table 6-7 interval of ASN is very narrow. with unequal and equal sample size and shape parameter θ γ = 1 and. It is noted from the Table 6-7 that (iii) In Table 7, for, = 10 with θ θ 0 (i) For fixed β1 as increases, the value of ASN n11 = 2,n21 = 2,n12 = 2,n22 = 13 the ASN is 3. θ 0 (iv) As the termination ratio increases from = to increases and the interval of ASN is very narrow. For a 0.5 θ a = 1 , the ASN also increases. example if β1 = 0.25 and = 10,12,14 then the θ (v) For different values of consumers’ risk, the mean ratio 0 values and termination ratios the maximum value of ASN is given by [2.9691,3.4537],[2.5289,3.6612], ASN is only 5 which is a minimum value compared to [2.6612,4.9075] all existing plan.

Table 6. Optimal parameters of MDSP under exponentiated weibull distribution with shape parameter γ = 1 and a = 1

θ β n ,n P P ASN 11 12 aL ASN L n12 ,n22 aU U θ 0

0.1 10 2,2 0.95973 2.9301 2,3 0.96106 3.395

12 2,3 0.96377 3.3952 2,4 0.9519 3.8603

14 2,4 0.96334 4.283 3,6 0.95043 4.5398

0.1 18 3,5 0.95637 4.283 3,6 0.95043 4.53598

20 3,7 0.954 4.7968 3,8 0.95011 5.05315

0.05 20 4,4 0.9562 4.5035 4,4 0.9562 4.5035

22 4,4 0.96306 4.5001 4,5 0.95731 4.629

0.01 24 5,5 0.95218 5.2894 5,6 0.95001 5.3473

Table 7. Optimal parameters of MDSP under exponentiated weibull distribution with shape parameter γ = 2 and a = 1

θ β P P ASN n11 ,n12 aL ASN L n12 ,n22 aU U θ 0

0.25 10 2,2 0.9675 2.9691 2,3 0.95657 3.4537

12 2,4 0.96111 2.5289 2,5 0.9541 3.6612

14 2,5 0.96499 2.6612 2,6 0.95985 4.9075

0.1 16 3,5 0.9566 4.4981 3,6 0.95069 4.7977

18 3,7 0.9648 4.498 3,6 0.95989 4.7977

0.05 20 4,6 0.95393 4.988 4,7 0.95012 5.15274

22 4,7 0.95674 4.988 4,7 0.95001 5.15274


Table 8. Comparison of the existing models and the proposed model

θ β ASN( p ) with γ = 2 and a = 1 θ 2 0

Exponentiated Exponentiated Exponentiated Gamma Weibull Birnbaum-Saunders Gamma Lomax Weibull Distribution Distribution Distribution Distribution Distribution Distribution 0.25 4 (2,3)3.447 (2,4)3.984 (9,10)11.580

6 (2,3)3.447 (2,12)7.953 (6,7)8.219 (6,7)8.4917

8 2,8)5.859 (2,43)23.332 (5,6)9.014 (7,10)10.114

10 (2,19)11.164 (3,71)27.081 (3,4)6.226 (7,17)12.294 (2,2)2.9291

4. Discussion Here α,λ are the shape and scale parameters respectively. Modified double sampling plan was done with respect to The ASN values of Exponentiated Exponential various distributions and the comparison of existing distribution did not give a better result as compared to the models and proposed model is shown in Table 8. proposed model. Exponentiated Lomax distribution is found to be (i) better than Weibull distribution as the ASN value is smaller 5. Conclusions (ii) better than Gamma distribution because in Gamma distribution there is jump in ASN values from 3.447 Modified average sample number (MASN) for improved to 5.859 and then to 11.16 where as in EL distribution double sampling plan-IDSP based on truncated life test for there is slow change in ASN values. Hence as θ popular exponentiated family such as exponentiated θ 0 gamma, exponentiated lomax and exonentiated Weibull increases the ASN of EL will be smaller. distribution were developed and compared. Exponentiated (iii) better than Birnbaum-Saunders distribution because Gamma, Exponentiated Lomax and Exponentiated Weibull θ perform better than the existing model when shape as increases the ASN may not exist. parameter and mean termination ratio increases. Also, θ 0 modified average sample number provides an interval Exponentiated Weibull Distribution is better than which would be much useful for both producer and Gamma, Weibull and and Birnbaum-Saunders distribution consumer regarding the sample size and average sample number even when value of the shape parameter and because the ASN is very smaller for larger values of θ . θ termination ratio increases. Exponentiated distributions 0 performed well compared to other distributions and is Exponentiated Gamma Distribution gives smaller better suited to truncated life test in acceptance sampling ASN values as shape parameter and termination ratio plan. increases.

Exponentiated Exponential Distribution It is a particular case of exponentiated weibull REFERENCES distribution. [1] Aslam M, S. Balamurali, Chi-Hyuck and Aneela-Meer: f (x,α,λ) = αλ(1− e−λx )α −1e−λx , x > 0 Time – truncated attribute sampling plans using EWMA for Weibull and Burr type X distributions, Communication in Statistics – Simulation and Computation 2017 F(x,α,λ) = (1− e−λx )α ; x > 0 [2] Aslam M, S.Balamurali and Touqeer Arif Improved double The mean life time under exponentiated exponential acceptance sampling plan based on truncated life test for some popular statistical distributions. 2016, Journal of distribution is given by statistical Computation and simulation 1 [3] Aslam M, Jun C-H. A double acceptance sampling plan for µ = (ψ (α +1) −ψ (1)) λ generalized log-logistic distributions with known shape parameters. J Appl Stat. 2010;37(3):405–414


[4] Aslam M, Jun C-H, Ahmad M. New acceptance sampling [11] Purkar S, Maheshwari G, Khandwawala AI. Design of a plans based on life tests for Birnbaum–Saunders double acceptance sampling plan to minimize a consumer’s distributions. J Statist Simul Comput. 2011;81(4):461–470. risk considering an OC curve; a case study. Int J Emerg Technol. 2011;2(1):114–118. [5] Aydemir E, Olgun MO. An application of single and double acceptance sampling plans for manufacturing system. J Eng [12] Shruthi. G and O.S.Deepa, 2018: “Average run length for Sci Design. 2010;2(1):65–71. exponentiated distribution under truncated life test, International Journal of Mechanical Engineering and [6] Birnbaum ZW, Saunders SC. A new family of life Technology (IJMET)”, Volume 9, Issue 6, pp.1180-1188. distributions. J. Appl. Probab. 1969;6:637–652. [13] Sreeja M. Krishnan and O. S. Deepa, 2019: Control Charts [7] Deepa O.S. (2015): Application of acceptance sampling plan for Multiple Dependent State Repetitive Sampling Plan in green design and manufacturing in the International Using Fuzzy Poisson Distribution, International Journal of Journal of Applied Engineering Research ,Special issue, Vol Civil Engineering and Technology (IJCIET), Volume 10, 10, No.2, pp. 1498-1499. Issue 1, pp.509-519. [8] Deepa O.S. (2015): Optimal production policy for the design [14] S.Balamurali, P. Jaydurga and M. Usha: of of green supply chain model in the International Journal of repetitive group sampling plans for Weibull and gamma Applied Engineering Research, Special issue,, Vol 10, No.2, distributions with applications and comparisons to the 2015, pp. 1600-1601. Birnbaum-Saunders distribution Jan 2018, Journal of Applied Statistics [9] Deros BM,Peng CY, Ab Rahaman MN,Ismail AR,Sulong AB: Assessing acceptance sampling application in [15] S.Balamurali, P. Jaydurga and M. Usha Designing of manufacturing electrical and electronic products. J Bayesian Multiple Deferred State Sampling plan Based on Achievements Mater Manuf Eng. 2008;31(2):622–628 Gamma- Poisson Distribution, American Journal of Mathematical and Management Sciences, 2016. [10] Kantam RRL, Rosaiah K. Half logistic distribution in acceptance sampling based on life tests. IAPQR Trans. [16] Tsai, T. -R. and Wu, S. -J. (2006). Acceptance sampling 1998;23:117–125 based on truncated life tests for generalized Rayleigh distribution, Journal of Applied Statistics, 33, 595–600.

Mathematics and Statistics 8(5): 551-558, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080508

Homotopy Perturbation Method for Solving Linear Fuzzy Delay Differential Equations Using Double Parametric Approach

Ali F Jameel1,*, Sardar G Amen1,2, Azizan Saaban1, Noraziah H Man1, Fathilah M Alipiah1

1School of Quantitative Sciences, College of Art and Sciences, Universiti Utara Malaysia (UUM), Malaysia 2Department of Financial and Banking, Collage of Business Administration and Financial Science, Al-Kitab University, Iraq

Received April 13, 2020; Revised July 6, 2020; Accepted July 20, 2020

(a): [1] Ali F Jameel, Sardar G Amen, Azizan Saaban, Noraziah H Man, Fathilah M Alipiah , "Homotopy Perturbation Method for Solving Linear Fuzzy Delay Differential Equations Using Double Parametric Approach," Mathematics and Statistics, Vol. 8, No. 5, pp. 551 - 558, 2020. DOI: 10.13189/ms.2020.080508. (b): Ali F Jameel, Sardar G Amen, Azizan Saaban, Noraziah H Man, Fathilah M Alipiah (2020). Homotopy Perturbation Method for Solving Linear Fuzzy Delay Differential Equations Using Double Parametric Approach. Mathematics and Statistics, 8(5), 551 - 558. DOI: 10.13189/ms.2020.080508. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract Delay differential equations (known as Approximate Methods, Single Parametric form Fuzzy DDEs) are a broad use of many scientific researches and Numbers, Double Parametric form Fuzzy Numbers engineering applications. They come because the pace of the shift in their mathematical models relies all the basis not just on their present condition, but also on a certain past cases. In this work, we propose an algorithm of the approximate method to solve linear fuzzy delay differential 1. Introduction equations using the Homotopy Perturbation Method with The variety of life experiments in science and double parametric form fuzzy numbers. The detailed engineering can be formulated in the form of ordinary or algorithm of the approach to fuzzification and partial differential equation models. Models that involve defuzzificationis analysis is provided. In the initial delaying differential equations (DDEs) are defined as type conditions of the proposed problem there are uncertainties of differential equations where the time derivatives at the with regard to the triangular fuzzy number. A double current time depend on the solution and possibly its parametric form of fuzzy numbers is defined and applied derivatives at previous times. A class of such equations, for the first time in this topic for the present analysis. This which involve derivatives with delays as well as the method’s simplicity and ability to overcome delay solution itself, has been called neutral DDEs [1,2]. For differential equations without complicating Adomian polynomials or incorrect nonlinear assumptions. The instance, drives, sensors and field networks involved in approximate solution is compared with the exact solution feedback loops can involve delays. In epidemic dynamics, to confirm the validity and efficiency of the method to time delay system is also used for designing multiple handle linear fuzzy delay differential equation. To show different mechanisms [3]. Most DDEs that arise in the features of this proposed method, a numerical example population dynamics and model intrinsically is illustrated, involving first order fuzzy delay differential have nonnegative quantities. Therefore, it is important to equation. These findings indicate that the suggested establish that nonnegative initial data which give rise to approach is very successful and simple to implement. nonnegative solutions. For modeling a dynamic system where the information Keywords Fuzzy Delay Differential Equations regarding its behavior is insufficient, Fuzzy differential (FDDE), Homotopy Perturbation Method (HPM), equations (FDEs) are known as a useful tool [4]. When

these experiments are not modelled completely and their nature is uncertain or vague, fuzzy models can be used. FDEs are suited to modeling such dynamical systems and have been applied in many applications such as population modelling [5,6], mathematical physics [7] and medical sciences [8]. Approximate methods were generally used to solve fuzzy delay differential equations (FDDEs) with single parametric form fuzzy numbers [9,10].

Many researchers in the field of Science and Engineering have used the Homotopy Perturbation Method (HPM) to achieve an approximate solution for different kinds of linear and nonlinear models [10-13], even the fuzzy partial differential equation models as in Sarmad et al [14]. Solving problems with HPM often helps to better understand a physical problem, and may help improve future procedures and designs used to solve these problems. Also, this method has a useful feature in that it provides the solution in a rapidly convergent power series with the elegantly computable convergence of the solution, without any need for discretization and linearization as in numerical methods [15]. However, in using a single parametric form for HPM, an n × n fully fuzzy system has to be converted to a 2n × 2n crisp system. On the other hand, for the double parametric form, the n × n fully fuzzy system is converted to a crisp system of the same order, hence requiring a lesser amount of computation. The double parametric form, which has been employed in fuzzy differential equations, is more general and straightforward [16].

Our aim here is to construct a new form of HPM based on the approach of the double parametric form of fuzzy numbers to solve first order DDEs using fuzzy-set theory properties.

The outline of this research is as follows: Section 2 presents some tools and definitions of the fuzzy number for the fuzzy analysis of the fuzzy model. Section 3 introduces the defuzzification of the general FDDE in the new fuzzy number form. In Section 4, we modify the standard HPM into the double parametric form of fuzzy numbers. In Section 5, we implement the HPM of Section 4 on a first order linear FDDE to show the capability of the method. Finally, Section 6 presents the conclusions of this research.

2. Fuzzy Numbers

2.1. Triangular Fuzzy Number [17]

Fuzzy numbers are a subset of the real numbers set, and represent uncertain values. Fuzzy numbers are linked to degrees of membership referring to how true it is to say if something belongs or does not belong to a determined set. A fuzzy number µ is called a triangular fuzzy number if it is defined by three numbers α < β < γ, where the graph of μ(x) is a triangle with the base on the interval [α, γ] and vertex at x = β, as illustrated in Figure 1, and where its membership function μ(x; α, β, γ) is given by:

μ(x; α, β, γ) = 0, if x < α;  (x − α)/(β − α), if α ≤ x ≤ β;  (γ − x)/(γ − β), if β ≤ x ≤ γ;  0, if x > γ.

Figure 1. Triangular Fuzzy Number

Its r-level is [μ]_r = [α + r(β − α), γ − r(γ − β)], for r ∈ [0, 1], where the r-level (or r-cut) set of a fuzzy set μ̃, labeled as μ̃_r, is the crisp set of all x ∈ T such that μ̃(x) ≥ r, that is, μ̃_r = {x ∈ T | μ̃(x) > r, r ∈ [0,1]} [18]. Since the r-level set is the link between the fuzzy domain and the crisp domain, we can use the advantages of the theories in the crisp domain and in the fuzzy domain, such that (x − α)/(β − α) = r → r(β − α) = x − α → x = α + r(β − α), the lower bound of the fuzzy number, and (γ − x)/(γ − β) = r → r(γ − β) = γ − x → x = γ − r(γ − β), the upper bound of the fuzzy number.

2.2. Single Parametric Form of Fuzzy Number [19]

The class of all fuzzy subsets of ℝ is denoted by Ẽ, and the solution of a Fuzzy Initial Value Problem (FIVP) satisfies the following properties:
1. μ(x) is normal, that is, ∃ x0 ∈ ℝ with μ(x0) = 1,
2. μ(x) is a convex fuzzy set, that is, μ(λx + (1 − λ)t) ≥ min{μ(x), μ(t)} ∀ x, t ∈ ℝ, λ ∈ [0,1],
3. μ(x) is upper semi-continuous on ℝ,
4. {x ∈ ℝ : μ(x) > 0} is compact.
Ẽ is called the space of fuzzy numbers and ℝ is a proper subset of Ẽ. Define the r-level set [μ]_r = {x ∈ ℝ | μ(x) ≥ r}, 0 ≤ r ≤ 1, where [μ]_0 = {x ∈ ℝ | μ(x) > 0} is compact, which is a closed bounded interval and is denoted by [μ]_r = [μ(x), μ̄(x)]. In the single parametric form, a fuzzy number is represented by an ordered pair of functions [μ(x), μ̄(x)], r ∈ [0,1], which satisfies:
1. μ(x) is a bounded left continuous non-decreasing function over [0,1].
2. μ̄(x) is a bounded right continuous non-increasing function over [0,1].


3. \underline{\mu}(r) \le \overline{\mu}(r), r \in [0,1], where a crisp number r is simply represented by \underline{\mu}(r) = \overline{\mu}(r) = r, r \in [0,1].

2.3. Double Parametric Form of Fuzzy Number [20]

Using the parametric form defined above, \tilde{E} = [\underline{\mu}(r), \overline{\mu}(r)], one may represent a fuzzy number in crisp form using the double parametric form as \tilde{E}(r;\beta) = \beta[\overline{\mu}(r) - \underline{\mu}(r)] + \underline{\mu}(r), where r, \beta \in [0,1]. The parameter \beta is the deformation parameter: if \beta = 0 then \tilde{E}(r;0) = \underline{\mu}(r) (the lower bound of the fuzzy number), and if \beta = 1 then \tilde{E}(r;1) = \overline{\mu}(r) (the upper bound). In this way, the double parametric form requires less computational work than the single parametric form.

3. Fuzzy Delay Differential Equation in Double Parametric Form

Consider the FDDE [9]

\tilde{y}'(x) = \tilde{f}(x, \tilde{y}(x), \tilde{y}(x-\alpha)), \quad x \in [x_0, X], \qquad \tilde{y}(x_0) = \tilde{y}_0,    (1)

where, for all fuzzy level sets r \in [0,1] and \beta \in [0,1], we have the following defuzzifications:

1. The fuzzy function \tilde{y}(x) [19] is denoted by [\tilde{y}(x;r)]_\beta = \beta[\overline{y}(x;r) - \underline{y}(x;r)] + \underline{y}(x;r).
2. The fuzzy delay function \tilde{y}(x-\alpha) is denoted by [\tilde{y}(x-\alpha;r)]_\beta = \beta[\overline{y}(x-\alpha;r) - \underline{y}(x-\alpha;r)] + \underline{y}(x-\alpha;r).
3. The fuzzy first-order H-derivative (see [21]) is [\tilde{y}'(x;r)]_\beta = \beta[\overline{y}'(x;r) - \underline{y}'(x;r)] + \underline{y}'(x;r), and the fuzzy initial condition is [\tilde{y}(x_0;r)]_\beta = \beta[\overline{y}(x_0;r) - \underline{y}(x_0;r)] + \underline{y}(x_0;r), in the form of a triangular fuzzy number.
4. Let the fuzzy function \tilde{f}(x, \tilde{y}(x), \tilde{y}(x-\alpha)) = \tilde{f}(x, \tilde{U}(x)) such that \tilde{f}(x, \tilde{U}(x)) = [\underline{f}(x, \tilde{U}(x)), \overline{f}(x, \tilde{U}(x))].

By using the Zadeh extension principle [22], we have the following membership functions:

F(x, \tilde{U}(x;r)) = \min\{\tilde{y}'(x, \tilde{\mu}(r)) : \mu \in [\tilde{U}(x)]_r\}, \qquad G(x, \tilde{U}(x;r)) = \max\{\tilde{y}'(x, \tilde{\mu}(r)) : \mu \in [\tilde{U}(x)]_r\},

where

\underline{f}(x, \tilde{U}(x;r)) = F(x, \underline{U}(x;r), \overline{U}(x;r)) = F(x, \tilde{U}(x;r)), \qquad \overline{f}(x, \tilde{U}(x;r)) = G(x, \underline{U}(x;r), \overline{U}(x;r)) = G(x, \tilde{U}(x;r)).    (2)

According to [23], for all r \in [0,1] and \beta \in [0,1], the double parametric form is given in one equation as

\beta[\overline{y}'(x;r) - \underline{y}'(x;r)] + \underline{y}'(x;r) = \beta[G - F] + F,    (3)
\beta[\overline{y}(x_0;r) - \underline{y}(x_0;r)] + \underline{y}(x_0;r) = \beta[\overline{y}_0(r) - \underline{y}_0(r)] + \underline{y}_0(r).

4. Fuzzy HPM in Double Parametric Form

This section presents the analysis of HPM for approximately solving the first-order FDDE in Eq. (1) under the double parametric form of fuzzy numbers. Following the fuzzy analysis by Ali et al. in [24], to obtain the approximate solution of the first-order fuzzy initial value problem by HPM we rewrite Eq. (3) as

\beta[L(\overline{y}(x;r) - \underline{y}(x;r))] + L\,\underline{y}(x;r) = \beta[G - F] + F,    (4)
\beta[\overline{y}(x_0;r) - \underline{y}(x_0;r)] + \underline{y}(x_0;r) = \beta[\overline{y}_0(r) - \underline{y}_0(r)] + \underline{y}_0(r),

where L is the linear operator referring to the first-order H-derivative in Eq. (4), L = d/dx, F and G are nonlinear operators, and the inverse operator is L^{-1} = \int_0^x (\cdot)\, d\tau. According to HPM, we construct the following homotopy equation:

H(x,p;r;\beta) = L\big[\beta(\overline{y}(x;r) - \underline{y}(x;r)) + \underline{y}(x;r) - \beta(\overline{y}_0(x;r) - \underline{y}_0(x;r)) - \underline{y}_0(x;r)\big] + p\big[L\big(\beta(\overline{y}_0(x;r) - \underline{y}_0(x;r)) + \underline{y}_0(x;r)\big) - \beta[G - F] - F\big] = 0,    (5)

where p \in [0,1] is an embedding parameter used to deform Eq. (5), and the initial guess can be defined as

[\tilde{y}_0(x;r)]_\beta = \beta(\overline{y}_0(x;r) - \underline{y}_0(x;r)) + \underline{y}_0(x;r) = \beta[\overline{y}_0(r) - \underline{y}_0(r)] + \underline{y}_0(r).    (6)

For all x \in [x_0, X], from Eqs. (5)-(6) we have

H(x,0;r;\beta) = L\big[\beta(\overline{y}(x;r) - \underline{y}(x;r)) + \underline{y}(x;r) - \beta(\overline{y}_0(x;r) - \underline{y}_0(x;r)) - \underline{y}_0(x;r)\big] = 0,
H(x,1;r;\beta) = L\big[\beta(\overline{y}(x;r) - \underline{y}(x;r)) + \underline{y}(x;r)\big] - \beta[G - F] - F = 0,    (7)
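A minimal Python sketch of the single and double parametric forms just described, assuming a triangular fuzzy number with r-level endpoints a + r(b - a) and c - r(c - b); the function names are illustrative and not from the paper.

    def lower(r, a, b, c):
        return a + r * (b - a)

    def upper(r, a, b, c):
        return c - r * (c - b)

    def double_parametric(r, beta, a, b, c):
        # E(r; beta) = beta*[upper(r) - lower(r)] + lower(r), with r, beta in [0, 1]
        return beta * (upper(r, a, b, c) - lower(r, a, b, c)) + lower(r, a, b, c)

    # beta = 0 recovers the lower bound, beta = 1 recovers the upper bound
    assert double_parametric(0.3, 0.0, 0, 1, 2) == lower(0.3, 0, 1, 2)
    assert double_parametric(0.3, 1.0, 0, 1, 2) == upper(0.3, 0, 1, 2)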

where p changes from 0 to 1. Eqs. (6)-(7) are called the deformation homotopy. The parameter p is used as a small parameter to construct the HPM series from Eq. (7), such that \tilde{y}(x;r) in Eq. (3) is expanded as

\tilde{y}(x;r;\beta) = \sum_{k=0}^{\infty} p^k\, \tilde{y}_k(x;r;\beta).    (8)

Finally, substituting Eq. (8) into Eq. (5) and collecting terms of the same powers of p, for r \in [0,1] and \beta \in [0,1], gives

p^0: \quad \beta(\overline{y}_0(x;r) - \underline{y}_0(x;r)) + \underline{y}_0(x;r) = \beta[\overline{y}_0(r) - \underline{y}_0(r)] + \underline{y}_0(r),
p^1: \quad L\big[\beta(\overline{y}_1(x;r) - \underline{y}_1(x;r)) + \underline{y}_1(x;r) + \beta(\overline{y}_0(x;r) - \underline{y}_0(x;r)) + \underline{y}_0(x;r)\big] - \beta[G_0 - F_0] - F_0 = 0, \quad \beta[\overline{y}_1(x_0;r) - \underline{y}_1(x_0;r)] + \underline{y}_1(x_0;r) = 0,
p^2: \quad L\big[\beta(\overline{y}_2(x;r) - \underline{y}_2(x;r)) + \underline{y}_2(x;r)\big] - \beta[G_1 - F_1] - F_1 = 0, \quad \beta[\overline{y}_2(x_0;r) - \underline{y}_2(x_0;r)] + \underline{y}_2(x_0;r) = 0,    (9)
p^{k+1}: \quad L\big[\beta(\overline{y}_{k+1}(x;r) - \underline{y}_{k+1}(x;r)) + \underline{y}_{k+1}(x;r)\big] - \beta[G_k - F_k] - F_k = 0, \quad \beta[\overline{y}_{k+1}(x_0;r) - \underline{y}_{k+1}(x_0;r)] + \underline{y}_{k+1}(x_0;r) = 0.

Then the approximate solution is obtained by setting p = 1:

\beta(\overline{y}(x;r) - \underline{y}(x;r)) + \underline{y}(x;r) = \sum_{i=0}^{m-1} \big[\beta(\overline{y}_i(x;r) - \underline{y}_i(x;r)) + \underline{y}_i(x;r)\big].    (10)

In the standard HPM for fuzzy differential equations with fuzzy numbers in single parametric form, the HPM analysis has to be carried out separately for the lower and upper bound solutions of Eq. (1), which demands more computational work. The advantage of the parameter \beta, which deforms from 0 to 1, is that it reduces the computational and analysis work needed to obtain the solution of the fuzzy differential equation. For illustration, at \beta = 0 Eq. (10) gives the lower solution of Eq. (1),

\underline{y}(x;r) = \sum_{i=0}^{m-1} \underline{y}_i(x;r),    (11)

and at \beta = 1 it gives the upper solution of Eq. (1),

\overline{y}(x;r) = \sum_{i=0}^{m-1} \overline{y}_i(x;r),    (12)

and this applies to the whole HPM analysis in double parametric form fuzzy numbers. Details and an illustration are provided in the next section.

5. Results and Discussion

In this section, we present a numerical problem involving an FDDE whose approximate solution is obtained by the HPM described in Section 4, followed by the fuzzy analysis under the double parametric form of fuzzy numbers. Consider the linear FDDE with fuzzy initial condition [9]:

\tilde{y}'(x) = \tfrac{1}{2} e^{x/2}\, \tilde{y}\!\left(\tfrac{x}{2}\right) + \tfrac{1}{2}\tilde{y}(x), \quad x \in [0,1], \qquad \tilde{y}(0) = [r, 2-r], \quad r \in [0,1].    (13)

The double parametric form of Eq. (13) is

\beta[\overline{y}'(x;r) - \underline{y}'(x;r)] + \underline{y}'(x;r) = \tfrac{1}{2} e^{x/2}\big\{\beta[\overline{y}(\tfrac{x}{2};r) - \underline{y}(\tfrac{x}{2};r)] + \underline{y}(\tfrac{x}{2};r)\big\} + \tfrac{1}{2}\big\{\beta[\overline{y}(x;r) - \underline{y}(x;r)] + \underline{y}(x;r)\big\},
\beta[\overline{y}(0;r) - \underline{y}(0;r)] + \underline{y}(0;r) = \beta[2 - 2r] + r,

where \beta \in [0,1] is a free parameter. According to [8], the exact solution of Eq. (13) in double parametric form is

\tilde{Y}(x;r;\beta) = \big(\beta[2 - 2r] + r\big)e^{x}.

If \beta = 0, the exact lower solution in single parametric form is \underline{y}(x;r) = re^{x}, and if \beta = 1, the exact upper solution in single parametric form is \overline{y}(x;r) = (2-r)e^{x}.

Applying the HPM of Section 4 to Eq. (13) gives

p^0: \quad \beta(\overline{y}_0(x;r) - \underline{y}_0(x;r)) + \underline{y}_0(x;r) = \beta[2 - 2r] + r,
p^1: \quad \beta(\overline{y}_1(x;r) - \underline{y}_1(x;r)) + \underline{y}_1(x;r) = L^{-1}\Big[\tfrac{1}{2} e^{t/2}\big(\beta[\overline{y}_0(\tfrac{t}{2};r) - \underline{y}_0(\tfrac{t}{2};r)] + \underline{y}_0(\tfrac{t}{2};r)\big) + \tfrac{1}{2}\big(\beta[\overline{y}_0(t;r) - \underline{y}_0(t;r)] + \underline{y}_0(t;r)\big)\Big], \quad \beta[\overline{y}_1(x_0;r) - \underline{y}_1(x_0;r)] + \underline{y}_1(x_0;r) = 0,    (14)
p^2: \quad \beta(\overline{y}_2(x;r) - \underline{y}_2(x;r)) + \underline{y}_2(x;r) = L^{-1}\Big[\tfrac{1}{2} e^{t/2}\big(\beta[\overline{y}_1(\tfrac{t}{2};r) - \underline{y}_1(\tfrac{t}{2};r)] + \underline{y}_1(\tfrac{t}{2};r)\big) + \tfrac{1}{2}\big(\beta[\overline{y}_1(t;r) - \underline{y}_1(t;r)] + \underline{y}_1(t;r)\big)\Big], \quad \beta[\overline{y}_2(x_0;r) - \underline{y}_2(x_0;r)] + \underline{y}_2(x_0;r) = 0,
p^{k+1}: \quad \beta(\overline{y}_{k+1}(x;r) - \underline{y}_{k+1}(x;r)) + \underline{y}_{k+1}(x;r) = L^{-1}\Big[\tfrac{1}{2} e^{t/2}\big(\beta[\overline{y}_k(\tfrac{t}{2};r) - \underline{y}_k(\tfrac{t}{2};r)] + \underline{y}_k(\tfrac{t}{2};r)\big) + \tfrac{1}{2}\big(\beta[\overline{y}_k(t;r) - \underline{y}_k(t;r)] + \underline{y}_k(t;r)\big)\Big], \quad \beta[\overline{y}_{k+1}(x_0;r) - \underline{y}_{k+1}(x_0;r)] + \underline{y}_{k+1}(x_0;r) = 0.
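To make the recursion in Eq. (14) concrete, the sketch below (an illustration, not the authors' code) iterates y_{k+1}(x) = \int_0^x [ (1/2) e^{t/2} y_k(t/2) + (1/2) y_k(t) ] dt symbolically with SymPy, starting from the crisp initial guess y_0 = \beta(2 - 2r) + r. Because the recursion is linear, every term is proportional to that initial value, so evaluating at particular (r, \beta) is just a substitution at the end.

    import sympy as sp

    x, t, r, beta = sp.symbols('x t r beta', nonnegative=True)

    def next_term(y_k):
        # y_{k+1}(x) = L^{-1}[ (1/2) e^{t/2} y_k(t/2) + (1/2) y_k(t) ], zero initial value
        integrand = sp.Rational(1, 2) * sp.exp(t / 2) * y_k.subs(x, t / 2) \
                    + sp.Rational(1, 2) * y_k.subs(x, t)
        return sp.integrate(integrand, (t, 0, x))

    y0 = beta * (2 - 2 * r) + r      # p^0 term: the fuzzy initial condition in double parametric form
    terms = [y0]
    for _ in range(5):               # p^1 ... p^5 terms of the series (8)
        terms.append(next_term(terms[-1]))

    y_hpm = sum(terms)               # fifth-order approximation for Eq. (13)
    print(sp.N(y_hpm.subs({x: 1, beta: 0, r: 1})))   # should be close to 2.7181499..., cf. Table 1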


The fifth-order HPM approximate solution series is

\tilde{y}(x;r;\beta) = \sum_{i=0}^{5} \big[\beta(\overline{y}_i(x;r) - \underline{y}_i(x;r)) + \underline{y}_i(x;r)\big].    (15)

The absolute error between the fifth-order HPM solution and the exact solution at x = 1, for all r \in [0,1], is denoted by

[E(r;\beta)]_{HPM} = \big|\tilde{Y}(1;r;\beta) - \tilde{y}(1;r;\beta)\big|.    (16)

The fifth-order HPM approximate solution in double parametric form, for all r \in [0,1] and \beta \in [0,1], is illustrated in Tables 1-6 and Figure 2 as follows:
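Since every term of the series is proportional to the initial value \beta(2 - 2r) + r, the whole fifth-order sum at x = 1 equals that initial value times a single constant. Taking that constant from the r = 1, \beta = 0 entry of Table 1 below, the following sketch (illustrative only, not the authors' code) regenerates rows of Tables 1-6 and the error column of Eq. (16).

    import math

    S5 = 2.7181499099714570          # fifth-order partial sum at x = 1 for unit initial value (Table 1, r = 1)
    EXACT = math.e                   # exact value at x = 1 for unit initial value

    def hpm_row(r, beta):
        c = beta * (2 - 2 * r) + r   # initial value in double parametric form
        y_hpm, y_exact = c * S5, c * EXACT
        return y_hpm, y_exact, abs(y_exact - y_hpm)   # last entry is E(r; beta) of Eq. (16)

    for r in [0, 0.2, 0.4, 0.6, 0.8, 1]:
        print(r, *hpm_row(r, beta=0.2))   # compare with Table 2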

Table 1. Fifth-order HPM solution of Eq. (13) at x = 1 and β = 0 for all r ∈ [0,1]

r    y(1; r)    Y(1; r)    E(1; r)

0 0 0 0

0.2 0.5436299819942914 0.5436563656918091 0.000026383697517728955

0.4 1.0872599639885827 1.0873127313836182 0.000052767395035457910

0.6 1.6308899459828743 1.6309690970754274 0.000079151092553075840

0.8 2.1745199279771654 2.1746254627672363 0.000105534790070915820

1 2.7181499099714570 2.7182818284590450 0.000131918487588311700

Table 2. Fifth-order HPM solution of Eq. (13) at x = 1 and β = 0.2 for all r ∈ [0,1]

r    y(1; r)    Y(1; r)    E(1; r)

0 1.0872599639885827 1.0873127313836182 0.00005276739503545791

0.2 1.4134379531851575 1.4135065507987037 0.00006859761354616190

0.4 1.7396159423817323 1.7397003702137890 0.00008442783205664384

0.6 2.0657939315783070 2.0658941896288745 0.00010025805056734782

0.8 2.3919719207748820 2.3920880090439600 0.00011608826907805181

1 2.7181499099714570 2.7182818284590450 0.00013191848758831170

Table 3. Fifth-order HPM solution of Eq. (13) at x = 1 and β = 0.4 for all r ∈ [0,1]

r    y(1; r)    Y(1; r)    E(1; r)

0 2.1745199279771654 2.1746254627672363 0.00010553479007091582

0.2 2.2832459243760237 2.2833567359055980 0.00011081152957448381

0.4 2.3919719207748820 2.3920880090439600 0.00011608826907805181

0.6 2.5006979171737402 2.5008192821823220 0.00012136500858161980

0.8 2.6094239135725985 2.6095505553206833 0.00012664174808474370

1 2.7181499099714570 2.7182818284590450 0.00013191848758831170

Table 4. Fifth-order HPM solution of Eq. (13) at x = 1 and β = 0.6 for all r ∈ [0,1]

r    y(1; r)    Y(1; r)    E(1; r)

0 3.2617798919657480 3.261938194150854 0.00015830218510570

0.2 3.1530538955668900 3.1532069210124930 0.00015302544560302

0.4 3.0443278991680320 3.0444756478741306 0.00014774870609857

0.6 2.9356019027691733 2.9357443747357688 0.00014247196659544

0.8 2.8268759063703150 2.8270131015974070 0.00013719522709187

1 2.7181499099714570 2.7182818284590450 0.00013191848758831170


Table 5. Fifth-order HPM solution of Eq. (13) at x = 1 and β = 0.8 for all r ∈ [0,1]

r    y(1; r)    Y(1; r)    E(1; r)

0 4.349039855954331 4.3492509255344730 0.00021106958014183164

0.2 4.022861866757756 4.0230571061193880 0.00019523936163157174

0.4 3.696683877561181 3.6968632867043016 0.00017940914312042366

0.6 3.370505888364607 3.3706694672892160 0.00016357892460927560

0.8 3.044327899168032 3.0444756478741306 0.00014774870609857160

1 2.718149909971457 2.7182818284590450 0.00013191848758831170

Table 6. Fifth-order HPM solution of Eq. (13) at x = 1 and β = 1 for all r ∈ [0,1]

r    y(1; r)    Y(1; r)    E(1; r)

0 5.436299819942914 5.436563656918090 0.00026383697517662340

0.2 4.8926698379486220 4.8929072912262810 0.00023745327765922752

0.4 4.3490398559543310 4.3492509255344730 0.00021106958014183164

0.6 3.8054098739600395 3.8055945598426630 0.00018468588262354757

0.8 3.2617798919657480 3.2619381941508540 0.00015830218510570760

1 2.7181499099714570 2.7182818284590450 0.00013191848758831170


Figure 2. Comparison of fifth-order HPM and the exact solution at x = 1 for different values of β ∈ [0,1] and for all fuzzy level sets r ∈ [0,1].


From Tables 1 and 6, the lower and upper bound solutions of Eq. (13) are obtained from the fifth-order HPM solution at β = 0 and β = 1, respectively, and they satisfy the fuzzy differential equation solution for all level sets r ∈ [0,1] at x = 1. Additionally, the solutions for β = 0.2, 0.4, 0.6 and 0.8 also follow the fuzzy differential equation solution for all level sets r ∈ [0,1] at x = 1, and corresponding results can be obtained for all values of x ∈ [0,1]. This analysis is illustrated in Figure 2 in the form of a triangular fuzzy number, presenting and comparing, under the double parametric form of fuzzy numbers, the fifth-order HPM results and the exact solution of Eq. (13) for different values of β ∈ [0,1].

6. Conclusions

A new form of HPM has been constructed based on the double parametric form of fuzzy numbers to solve first-order DDEs using fuzzy-set-theory properties. In this work, the homotopy perturbation method (HPM) has been analyzed, adapted and applied to the approximate solution of a first-order FDDE using the double parametric form of fuzzy numbers. The proposed double parametric approach is found to be easy and straightforward, with less computational work. The performance of the method is shown using a triangular fuzzy number. It is interesting to note that all values of β ∈ [0,1] satisfy the fuzzy differential equation solution in the form of a triangular fuzzy number for all fuzzy level sets.

Acknowledgments

The authors are very grateful to the Ministry of Higher Education of Malaysia for providing the Fundamental Research Grant Scheme (FRGS) S/O number 14188 that enabled this research.

REFERENCES

[1] A. Alomari, S. M. Noorani, and R. Nazar, Solution of delay differential equation by means of homotopy analysis method, Acta Applicandae Mathematicae, 108, 395-412, 2009.
[2] F. Salehi, M. A. Asadi, M. M. Hosseini, Solving system of DAEs by Modified Homotopy Perturbation Method, Journal of Computer Science & Computational Mathematics, Vol. 22(6), 29-33, 2012.
[3] S. Vilu, R. Rozita Ahmad, and U. Khair Salma Din, Variational Iteration Method and Sumudu Transform for Solving Delay Differential Equation, International Journal of Differential Equations, 2019, 1-6, 2019.
[4] Buckley, J., & Feuring, T., Introduction to fuzzy partial differential equations, Fuzzy Sets and Systems, 105, 241-248, 1999.
[5] Omer, A. and Omer, O., A Prey and Predator Model with Fuzzy Initial Values, Hacettepe Journal of Mathematics and Statistics, 41(2), pp. 387-395, 2013.
[6] Tapaswini, S. and Chakravery, S., Numerical Solution of Fuzzy Arbitrary Order Predator-Prey Equations, Applications and Applied Mathematics, 8(1), 647-673, 2013.
[7] El Naschie, M. S., From Experimental Quantum Optics to Quantum Gravity Via a Fuzzy Kahler Manifold, Chaos, Solitons and Fractals, 25, 969-977, 2005.
[8] Abbod, M. F., Von Keyserlingk, D. G., Linkens, D. A. and Mahfouf, M. (2001), Survey of Utilization of Fuzzy Technology in Medicine and Healthcare, Fuzzy Sets and Systems, Vol. 120, pp. 331-349.
[9] S. Narayanamoorthy and T. L. Yookesh, Approximate Method for Solving the Linear Fuzzy Delay Differential Equations, Discrete Dynamics in Nature and Society, Volume 2015, pp. 1-9, 2015.
[10] N. Mikaeilvand and L. Hossieni, The Taylor Method for Numerical Solution of Fuzzy Generalized Pantograph Equations with Linear Functional Argument, International Journal of Industrial Mathematics, 2(2), 115-127, 2010.
[11] J.-H. He, Homotopy perturbation technique, Computer Methods in Applied Mechanics and Engineering, 178(3-4), 257-262, 1999.
[12] J.-H. He, A coupling method of a homotopy technique and a perturbation technique for non-linear problems, International Journal of Non-Linear Mechanics, 35(1), 37-43, 2000.
[13] C. Chun, H. Jafari and Y.-I. Kim, Numerical method for the wave and non-linear diffusion equations with the homotopy perturbation method, Computers & Mathematics with Applications, 57(7), 1226-1231, 2009.
[14] Sarmad A. Altaie, Ali F. Jameel, Azizan Saaban, Homotopy Perturbation Method Approximate Analytical Solution of Fuzzy Partial Differential Equation, IAENG International Journal of Applied Mathematics, 49(1), 22-28, 2019.
[15] J. Biazar and H. Ghazvini, Convergence of the Homotopy Perturbation Method for Partial Differential Equations, Nonlinear Analysis, 10(5), 2633-2640, 2009.
[16] Tapaswini, S. and Chakraverty, S. (2013), Numerical solution of uncertain beam equations using double parametric form of fuzzy numbers, Applied Computational Intelligence and Soft Computing, 2013, pp. 100-111.
[17] Dubois, D. and Prade, H., Towards fuzzy differential calculus, Part 3: Differentiation, Fuzzy Sets and Systems, 8, pp. 225-233, 1982.
[18] Bodjanova, S., Median Alpha-Levels of A Fuzzy Number, Fuzzy Sets and Systems, 157(7), 879-891, 2006.
[19] Fard, O. S., An Iterative Scheme for the Solution of Generalized System of Linear Fuzzy Differential Equations, World Applied Sciences Journal, 7, 1597-1604.
[20] Diptiranjan, B. and S. Chakraverty, New approach to solve fully fuzzy system of linear equations using single and double parametric form of fuzzy numbers, Indian Academy of Sciences, 40(1), 35-49, 2015.
[21] Guo, X., Shang, D. and Lu, X., Fuzzy Approximate Solutions of Second-Order Fuzzy Linear Boundary Value Problems, Journal of Boundary Value Problems, 2013, 1-17, 2013.
[22] Zadeh, L. A., Toward A Generalized Theory of Uncertainty, Information Sciences, 172(2), 1-40, 2005.
[23] Smita, T. and Chakraverty, S., Numerical Solution of Uncertain Beam Equations Using Double Parametric Form of Fuzzy Numbers, Applied Computational Intelligence and Soft Computing, 2013, 1-8, 2013.
[24] Jameel, A. F., Saaban, A., Ahadkulov, H. and Alipiah, F. M., Approximate solution of fuzzy pantograph equation by using homotopy perturbation method, Journal of Physics: Conference Series, 980(1), 2017.

Mathematics and Statistics 8(5): 559-565, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080509

Integration of Cluster Centers and Gaussian Distributions in Fuzzy C-Means for the Construction of Trapezoidal Membership Function

Siti Hajar Khairuddin*, Mohd Hilmi Hasan, Manzoor Ahmed Hashmani

Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Perak, Malaysia

Received April 13, 2020; Revised July 14, 2020; Accepted August 5, 2020

Cite This Paper in the following Citation Styles (a): [1] Siti Hajar Khairuddin, Mohd Hilmi Hasan, Manzoor Ahmed Hashmani , "Integration of Cluster Centers and Gaussian Distributions in Fuzzy C-Means for the Construction of Trapezoidal Membership Function," Mathematics and Statistics, Vol. 8, No. 5, pp. 559 - 565, 2020. DOI: 10.13189/ms.2020.080509. (b): Siti Hajar Khairuddin, Mohd Hilmi Hasan, Manzoor Ahmed Hashmani (2020). Integration of Cluster Centers and Gaussian Distributions in Fuzzy C-Means for the Construction of Trapezoidal Membership Function. Mathematics and Statistics, 8(5), 559 - 565. DOI: 10.13189/ms.2020.080509. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract Fuzzy C-Means (FCM) is one of the mostly used techniques for fuzzy clustering and proven to be robust and more efficient based on various applications. Image segmentation, stock market and web analytics are 1. Introduction examples of popular applications which use FCM. One limitation of FCM is that it only produces Gaussian Fuzzy logic is an idea whereby we apply the membership function (MF). The literature shows that uncertainties in the real world to be applied in the different types of membership functions may perform computing world which represents the degree of truth. It better than other types based on the data used. This means contradicts with the crisp value or Boolean value (0 or 1) to that, by only having Gaussian membership function as an produce a certain result, which is not realistic [1][2]. The option, it limits the capability of fuzzy systems to produce idea of fuzzy logic was introduced by Dr Lotfi Zadeh when accurate outcomes. Hence, this paper presents a method to he was working in natural language which cannot be easily generate another popular shape of MF, the trapezoidal translated into absolute terms of true or false [3]. An shape (trapMF) from FCM to allow more flexibility to application of fuzzy logic can be found in fuzzy inference FCM in producing outputs. The construction of trapMF is system (FIS). Fuzzification is a component in an FIS where using mathematical theory of Gaussian distributions, an input variable is compared to a membership function and inflection points. The cluster (MF) to obtain the membership degree [4]. The centers or mean (μ) and standard deviation (σ) from the membership degree will go through the fuzzy rule engine Gaussian output are fully used to determine four for processing, such as decision making. Membership trapezoidal parameters; lower limit a, upper limit d, lower degree of a fuzzifier can be constructed by two methods: support limit b, and upper support limit c with the expert opinion and generated via data [5]. One of the assistance of function trapmf() in Matlab fuzzy toolbox. common methods for MF construction from data is through The result shows that the mathematical theory of Gaussian clustering. distributions can be applied to generate trapMF from Fuzzy clustering is a technique to handle unlabelled data, FCM. which may contain outliers and unusual patterns. Thus, Keywords Fuzzy C Means, Gaussian Distribution, membership functions can provide the possibility of one Normal Distribution Membership Function, Trapezoidal data point to belong to other groups or clusters [6]. The MF clusters of data are generated by a possibility distribution 560 Integration of Cluster Centers and Gaussian Distributions in Fuzzy C-Means for the Construction of Trapezoidal Membership Function or collected from various resources. The measurement used mathematical representation of trapMF is as follows: in most clustering algorithms to determine the cluster centres is Euclidian distance [7]. Fuzzy C-Means (FCM) is (1) one of the mostly used techniques for fuzzy clustering [8]. Based on various applications such as web usage mining, In a simple explanation, as an example, for a property stock market, web analytics and image segmentation, FCM such as “small”, different values namely x are assigned, is proven to be more efficient, robust and reliable by its along with their degrees, μ(x). The main motivation in performance [9]. 
However, the resultant MF of FCM is of fuzzy implementation is to ensure the value x and x’ are only Gaussian shape due to a straightforward nature of close, along with their corresponding MFs μ(x) and μ(x’) clusters to be distributed [10]. There are other regularly which should be close too [19]. used parameterized MFs such as triangular and trapezoidal MFs. These MFs are used in specific cases such as antenna positioning fuzzy controller [11] and crime prevention analysis [12]. Thus, FCM should also have a capability to produce linear MFs. The construction of Gaussian MFs is straightforward and discussed in [13]. The construction of trapezoidal MFs is not quite straightforward and a method of convex hull is proposed in [12][13]. However, the implementation is unclear and it depends on specific cases [16]. In this paper, we present a method to produce trapezoidal MFs based on the integration of cluster centres produced Figure 1. Trapezoidal MF by FCM and mathematical theory of Gaussian distributions. This paper is organized as follows: In section 2, literature Gaussian Distribution review on trapezoidal MF and Gaussian distribution is Since trapMF in this paper is constructed via FCM, the introduced and an approximation of trapezoidal MFs is output of the data clustering is in the form of Gaussian MF. explained. The result and testing are in Section 3, and Hence, it is natural to use Gaussian distribution to convert Section 4 presents the conclusion. the Gaussian MF into trapMF. Gaussian distribution is also called normal distribution which states that a is normally distributed. A normal distribution is 2. Literature Review informally called the bell curve. The function to calculate the probability of a random variable to be within a Trapezoidal MF particular range of values, instead of taking on any one value is called the probability density function, shown in eq. According to [17], with experience, one can decide (2). which shape of MF is good for certain application and cases under consideration. This is where the degrees of freedom is offered in the fuzzy system environment since (2) the MFs can be of any shape and form as long as it could map the given datasets with the desirable membership where µ is the mean, σ is the standard deviation and σ2 is degrees. It also depends entirely on the size and type of the the . Since the mean and standard deviation are problem. The MF shapes are not the only concern as setting provided from the output of Gaussian MF, it is possible to up the interval and the numbers of MFs are considered mathematically construct the trapMF by using the important too [17]. In addition, trial and error method is Gaussian distribution. often used to determine the shape of MF. However, In a standard normal distribution, µ = 0 and σ = 1, and it trapezoidal MF (trapMF), which represents fuzzy intervals, is described by the probability density function (3), is proven to be easy to implement with fast computation [17]. In most practical applications, trapMF functions work well since they use linear interpolation to get both (3) endpoints of the interval. The theoretical explanation which proves the practicality of trapMFs is discussed in [18]. An intuitive explanation on how trapMF is whereby the factor 1 / 2 in this expression makes sure functioning well is explained in simple terms in [19]. In that the area under curve is equal to 1. 
Since the curve is short, a trapMF (Figure 1) is defined by a lower limit a, an symmetric, the inflection√ points휋 are x = +1 and x = -1. The upper limit d, a lower support limit b, and an upper support standard normal probability density function plot is as limit c, where a < b < c < d [18]. Compactly, the Figure 2.
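The trapezoidal membership function in Eq. (1) can be written directly from the four parameters a < b < c < d. The short sketch below is a hand-rolled Python illustration (equivalent in spirit to Matlab's trapmf(), but not taken from the paper's code).

    def trapmf(x, a, b, c, d):
        # membership rises on [a, b], equals 1 on [b, c], falls on [c, d], is 0 outside [a, d]
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if a < x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    print(trapmf(1.5, 0, 1, 2, 3))   # -> 1.0, inside the core [b, c]
    print(trapmf(0.5, 0, 1, 2, 3))   # -> 0.5, on the rising edge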


produce a symmetrical trapezoid shape. As shown in Figure 3, each tail of the curve has the area which is equal to (1-C) / 2. The area in each tail is equal to 0.05/2 = 0.025 for a 95% confidence interval [22]. The value z* in Figure 3 is representing the inflection points on the standard normal density Gaussian curve which shows that the probability of observing a value greater than z* which equals to p is known as the upper p critical value of the standard normal distribution. For a confidence interval with level C, the value p is equal to (1-C)/2. The interval is (-1.96, 1.96), since 95% of the area under the curve falls within this interval [22].

Inflection Points Approximations for TrapMF Figure 2. Probability Density Function Inflection points are where the curve turns inwards or Based on the standard normal distribution in eq. (3), the concave where for a Gaussian shape, it will concave near value of the mean (cluster centers) and standard deviation the peak in a downward manner [23]. The inflection points obtained from the Gaussian MF will give information to need to be obtained from the probability density function of the construction of trapMF based on the area under curve. the Gaussian in order to locate the x-axis points. According Through the theory of inflection point, we can use the to [23], inflection points are normally at ± σ, ±α and 2. interval estimate such as the confidence interval and These points are applied for symmetrical and normal inflection points to approximate the range of lower and Gaussian shape. α√ upper limits a, b, c and d of the trapezoidal shape. The probability density function (PDA) for a normally distributed random variable with a known mean μ and standard deviation is in eq. (4). Confidence Interval f( x ) =1/ (σ √(2 π) )exp[-(x - μ)2/(2σ2)] (4) The notation exp[y] = ey is used where e is a constant of approximately by 2.71828 [24]. The first derivative of the PDA is found by getting the derivative for ex and the chain rule is applied in eq. (5). f’ (x ) = -(x - μ)/ (σ3 √(2 π) )exp[-(x -μ) 2/(2σ2)] = -(x - μ) f( x )/σ2 (5) The second derivative of the PDA is calculated by using the product rule in eq. (6): f’’( x ) = - f( x )/σ2 - (x - μ) f’( x )/σ2 (6) The simplified expression is in eq. (7). f’’( x ) = - f( x )/σ2 + (x - μ)2 f( x )/(σ4) (7) The expression in (7) is set to zero to solve for x. Since f(x) is a non zero function [24], both sides of the equation Figure 3. Confidence Interval of 95% can be divided by the function in eq. (8): 2 2 4 A confidence interval is a range of values where it is 0 = - 1/σ + (x - μ) /σ (8) commonly known that the true value lies in [21]. This can Both sides can be multiplied by σ4 to eliminate the also be obtained by a known mean and standard deviation fractions [24] as shown in eq. (9). (sigma) which makes the range to be helpful to 0 = - σ2 + (x - μ)2 (9) approximate the trapMF data distributions. It is suitable to be used to estimate the range calculated from a given In order to solve x (inflection point for each side), by dataset [22] The common choices for the confidence level, using σ2 = (x - μ)2, calculate the square root for both sides, C are 0.90, 0.95 and 0.99 which correspond to a normal to produce eq. (10). Gaussian curve area percentage. Calculations for both left ±σ = x – μ (10) half and the right half of the curve will be the same since the Gaussian curve is symmetrical, which naturally will Based on the equation, the inflection points occurred

562 Integration of Cluster Centers and Gaussian Distributions in Fuzzy C-Means for the Construction of Trapezoidal Membership Function when x = μ ± σ [24]. It is located one standard deviation shows the trapMF generated from the approximations for above the mean and one standard deviation below the mean. the respective GaussianMF. Hence, the cluster center obtained from FCM clustering is used in eq. (10) and it is subtracted from the standard deviation. It is applied for both left and right side of the curve. To get the inflection points of lower limit a, and upper limit d, the theory of confidence interval (-1.96, 1.96) is used whereby an approximation of x = μ ± 2σ suits the result for the datasets in testing process.
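A minimal Python sketch of the mapping just described: the support limits b and c are placed at the inflection points x = μ ± σ, and the lower and upper limits a and d at the 95% confidence bounds x = μ ± 2σ. The multipliers are exposed as parameters because the paper tunes the approximation by trial and error; treat the default factors and the example values as assumptions.

    def trapezoid_from_gaussian(mu, sigma, k_support=1.0, k_limit=2.0):
        # b, c from the inflection points (mu -/+ k_support*sigma);
        # a, d from the confidence-interval bounds (mu -/+ k_limit*sigma)
        a = mu - k_limit * sigma
        b = mu - k_support * sigma
        c = mu + k_support * sigma
        d = mu + k_limit * sigma
        return a, b, c, d

    # example with assumed values, not the paper's clusters
    print(trapezoid_from_gaussian(mu=45.16, sigma=1.70))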

3. Result and Testing Figure 5. TrapMF and GaussianMF For the purpose of testing the theory, a dataset which contains the response time of a web service is used in order Figure 6 shows the trapMF after being separated by its to generate the GaussianMF from FCM. The dataset respective GaussianMF. The trapMF will be compared consists of 6145 points and is milliseconds (ml) in unit. The with trapMF generated by toolbox in Matlab to validate its dataset is one of the attributes to access the quality of web parameters. service and it is obtained from online resources. FCM will generate two outputs, mean and standard deviation, which will be used for trapMF approximation. Figure 4 shows the result from FCM. The number of clusters (three in this particular sample) is determined by using Clustering Validity Index (CVI) [5].

Figure 4. Gaussian MF Figure 6. TrapMF By using the cluster centers (μ) and sigma (σ) of the In Figure 7, fuzzy toolbox in Matlab is used to validate Gaussian output, trapMF approximation is performed. the trapMF generated from the mathematical calculation Both values are maintained while the inflection points are with the trapMF generated by toolbox. A function called tested in trial and error manner based on eq. (10). Figure 5 evalmf() is used to evaluate the result.


Figure 7. Matlab implementation of trapMF

Table 1 shows the cluster centers and sigma for the chosen dataset. From the obtained results, the approximation from the GaussianMF to the trapMF is carried out, producing the parameters a, b, c and d shown in Table 2. To obtain the parameters, the function trapmf() in the Matlab Fuzzy toolbox, which produces trapezoidal membership functions, is used: the membership function range is taken from the FCM output, applied in the toolbox, and the type is set to trapmf(). The availability of the toolbox to generate and edit membership functions, as well as to design fuzzy inference systems, may help users apply fuzzy logic. However, a mathematical solution to generate membership functions remains important, since working with a toolbox usually involves software compatibility issues and the toolbox is not always publicly available.

Table 1. Mean and Sigma for GaussianMF

Cluster      Mean (μ)   Sigma (σ)
Cluster 1    2.6558     57.8258
Cluster 2    2.8754     53.1434
Cluster 3    3.3927     45.1600

Table 2. TrapMF Parameters

Cluster      a          b          c          d
Cluster 1    38.3746    43.4637    46.8564    51.9455
Cluster 2    47.3926    51.7057    54.5812    58.8943
Cluster 3    52.5141    56.4979    59.1538    63.1377

Next, to test the significance of the difference between the manually generated and the Matlab toolbox-generated trapMF, a t-test is performed. The p-value, the probability that the observed difference occurs by chance, determines the validity of the result; p < 0.05 means the data have a statistically significant difference [25]. Our target is to validate that the output produced by the manually generated trapMF is not significantly different from the toolbox-generated trapMF. Tables 3, 4 and 5 show the t-test results from the web response time dataset for each cluster of the FCM output. Since all p-values exceed 0.05, the manually generated and toolbox-generated trapMFs are not significantly different, which validates the proposed construction.

Table 3. T-test for cluster 1 (t-Test: Two-Sample Assuming Unequal Variances)

                                Variable 1     Variable 2
Mean                            0.17954918     0.190601
Variance                        0.11108443     0.116411
Observations                    445            445
Hypothesized Mean Difference    0
df                              888
t Stat                          -0.48880072
P(T<=t) one-tail                0.31255174
t Critical one-tail             1.64657139
P(T<=t) two-tail                0.62510348
t Critical two-tail             1.96263904
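The validation step above (a two-sample t-test assuming unequal variances, i.e. Welch's test) can be reproduced with SciPy. The sketch below uses randomly generated membership values purely as placeholders for the 445 evaluations per cluster; the variable names are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    manual_mf = rng.uniform(0.0, 1.0, size=445)    # placeholder: memberships from the manual trapMF
    toolbox_mf = rng.uniform(0.0, 1.0, size=445)   # placeholder: memberships from Matlab's trapmf()

    # Welch's t-test (two-sample, unequal variances), as in Tables 3-5
    t_stat, p_two_tail = stats.ttest_ind(manual_mf, toolbox_mf, equal_var=False)
    print(t_stat, p_two_tail)   # p > 0.05 means no statistically significant difference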


Table 4. T-test for cluster 2 (t-Test: Two-Sample Assuming Unequal Variances)

                                Variable 1    Variable 2
Mean                            0.1405656     0.149223
Variance                        0.0924348     0.09733
Observations                    445           445
Hypothesized Mean Difference    0
df                              887
t Stat                          -0.41925
P(T<=t) one-tail                0.3375675
t Critical one-tail             1.6465733
P(T<=t) two-tail                0.6751351
t Critical two-tail             1.9626421

Table 5. T-test for cluster 3 (t-Test: Two-Sample Assuming Unequal Variances)

                                Variable 1    Variable 2
Mean                            0.152251      0.161556
Variance                        0.098316      0.103371
Observations                    445           445
Hypothesized Mean Difference    0
df                              887
t Stat                          -0.43705
P(T<=t) one-tail                0.331092
t Critical one-tail             1.646573
P(T<=t) two-tail                0.662184
t Critical two-tail             1.962642

4. Conclusions

In this paper, we present a method to generate trapezoidal MFs based on the integration of the cluster centers produced by FCM and the mathematical theory of Gaussian distributions. The MF is developed using the FCM clustering method in the Matlab environment, and the mean and sigma of the Gaussian output are then used to mathematically construct the trapMF. Overall, the proposed method provides more flexibility to FCM because it allows the generation of other membership function types. For future work, the proposed method may be further explored to generate trapezoidal fuzzy type-2 membership functions.

Acknowledgements

This research is an ongoing research supported by the Fundamental Research Grant Scheme (FRGS/1/2018/ICT02/UTP/02/1), a grant funded by the Ministry of Education, Malaysia.

REFERENCES

[1] Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3), 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X
[2] Mendel, J. M., John, R. I., & Liu, F. (2006). Interval Type-2 Fuzzy Logic Systems Made Simple. IEEE Transactions on Fuzzy Systems, 14(6), 808-821. https://doi.org/10.1109/TFUZZ.2006.879986
[3] Li, Jiamin & W. Lewis, Harold. (2016). Fuzzy Clustering Algorithms — Review of the Applications. 282-288. 10.1109/SmartCloud.2016.14.
[4] Jang, J.-R. (1993). ANFIS: adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics, 23(3), 665-685. https://doi.org/10.1109/21.256541
[5] M.H. Hasan, J. Jaafar, M.F. Hassan (2016). Fuzzy C-Means and Two Clusters' Centers Method for Generating Interval Type-2 Membership Function, International Conference on Computer and Information Sciences (ICCOINS) 2016.
[6] Li, Jiamin & W. Lewis, Harold. (2016). Fuzzy Clustering Algorithms — Review of the Applications. 282-288. 10.1109/SmartCloud.2016.14.
[7] Dunn, J. (1973). A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact, Well-Separated Clusters. Journal of Cybernetics, 3(3), 32-57.
[8] Kaymak, U. and Setnes, M. (2000). Extended Fuzzy Clustering Algorithm. ERIM Report Series Research in Management, 1-23.
[9] Singh, T., & Mahajan, M. (2014). Performance Comparison of Fuzzy C Means with Respect to Other Clustering Algorithm. International Journal of Advanced Research in Computer Science and Software Engineering, 4(5). Retrieved from https://pdfs.semanticscholar.org/3720/5c8fe390d36bde67a2e0f614d5ce8bba829b.pdf
[10] Castillo, O. and Melin, P. (2008). Design of Intelligent Systems with Interval Type-2 Fuzzy Logic. Type-2 Fuzzy Logic: Theory and Applications - Studies in Fuzziness and Soft Computing, 223, pp. 53-76.
[11] Kalist, V., Ganesan, P., Sathish, B.S., Jenitha, J.M.M. (2015). Possiblistic-Fuzzy C-Means Clustering Approach for the Segmentation of Satellite Images in HSL Color Space. Procedia Computer Science, 57, pp. 49-56.
[12] Wang, L. and Wang, J. (2012). Feature Weighting fuzzy clustering integrating rough sets and shadowed sets. International Journal of Pattern Recognition and Artificial Intelligence, 26(4).
[13] Rajen Bhatt and M. Gopal (2006). "Neuro-fuzzy decision trees". International Journal of Neural Systems, vol. 16, no. 1, pp. 63-78.
[14] M. Sugeno and T. Yasukawa (1993). "A fuzzy-logic based approach to qualitative modeling", IEEE Transactions on Fuzzy Systems, vol. 1, pp. 6-31.
[15] M.R. Emami, I.B. Turksen, and A.A. Goldenberg (1998). Development of a systematic methodology of fuzzy logic modeling. IEEE Transactions on Fuzzy Systems, vol. 6, no. 3, pp. 346-361.
[16] Bhatt, R.B., Narayanan, S.J., Paramasivam, I. and Khalid, M. (2012). Approximating Fuzzy Membership Functions from Clustered Raw Data. 2012 Annual IEEE India Conference (INDICON).
[17] Sadollah, A. (2018, October 31). Introductory Chapter: Which Membership Function is Appropriate in Fuzzy System? Retrieved from https://www.intechopen.com/books/fuzzy-logic-based-in-optimization-methods-and-control-systems-and-its-applications/introductory-chapter-which-membership-function-is-appropriate-in-fuzzy-system-
[18] Barua, A., Mudunuri, L.S. & Kosheleva, Olga. (2014). Why Trapezoidal and Triangular Membership Functions Work So Well: Towards a Theoretical Explanation. Journal of Uncertain Systems, 8, 164-168.
[19] Kreinovich, V., Kosheleva, O., & Shabazova, S. (2018). Why Triangular and Trapezoid Membership Functions: A Simple Explanation. Retrieved March 8, 2019, from http://www.cs.utep.edu/vladik/2018/tr18-59.pdf
[20] Reyna Vargas, M. E. (2018). Fuzzy Analytical Hierarchy Process Approach for Multicriteria Decision-Making with an Application to developing an 'Urban Greenness Index'. (Masters Thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/91594
[21] P.A., Wasserman, S.S., and Levine, M.M. (1992), "A Critical Appraisal of 98.6 Degrees F, the Upper Limit of the Normal Body Temperature, and Other Legacies of Carl Reinhold August Wunderlich," Journal of the American Medical Association, 268, 1578-1580.
[22] Cox, D.R., Hinkley, D.V. (1974). Theoretical Statistics, Chapman & Hall, p. 49, p. 209.
[23] DeBruyne, D., & Sorensen, L. (2018). Quantum Mechanics I. Walter de Gruyter GmbH.
[24] Taylor, Courtney. (2019, April 28). How to Find the Inflection Points of a Normal Distribution. Retrieved from https://www.thoughtco.com/inflection-points-of-a-normal-distribution-3126446
[25] T Test (Student's T-Test): Definition and Examples. (n.d.). Retrieved from https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/t-test/
[26] Nazirah Ramli, Siti Musleha Ab Mutalib, Daud Mohamad, Fuzzy Forecasting Model based on Centre of Gravity Similarity Measure, Journal of Computer Science & Computational Mathematics, Vol. 8, No. 4, pp. 121-124, 2018.

Mathematics and Statistics 8(5): 566-569, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080510

Hankel Determinant H2(3) for Certain Subclasses of Univalent Functions

Andy Liew Pik Hern1, Aini Janteng1,∗, Rashidah Omar2

1Faculty of Science and Natural Resources, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah, Malaysia 2Faculty of Computer and Mathematical Sciences, Universiti Teknologi Mara Cawangan Sabah, 88997 Kota Kinabalu, Sabah, Malaysia

Received June 9, 2020; Revised July 19, 2020; Accepted August 10, 2020 Cite This Paper in the following Citation Styles

(a): [1] Andy Liew Pik Hern, Aini Janteng, Rashidah Omar, ”Hankel Determinant H2(3) for Certain Subclasses of Univalent Functions,” Mathematics and Statistics, Vol. 8, No. 5, pp. 566-569, 2020. DOI: 10.13189/ms.2020.080510.

(b): Andy Liew Pik Hern, Aini Janteng, Rashidah Omar, (2020). Hankel Determinant H2(3) for Certain Subclasses of Univalent Functions. Mathematics and Statistics, 8(5), 566-569. DOI: 10.13189/ms.2020.080510.

Copyright ©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License Abstract Let S to be the class of functions which 1 Introduction are analytic, normalized and univalent in the unit disk U = {z : |z| < 1}. The main subclasses of S are starlike Let S denotes the class of normalized analytic univalent functions, convex functions, close-to-convex functions, quasi- functions f of the form convex functions, starlike functions with respect to (w.r.t.) ∞ X n symmetric points and convex functions w.r.t. symmetric points f(z) = z + anz (1) ∗ ∗ ∗ which are denoted by S , K, C, C , SS, and KS respectively. n=2 In recent past, a lot of mathematicians studied about Hankel determinant for numerous classes of functions contained in where z ∈ U = {z : |z| < 1}. In a few years, many mathematicians still looking the result S. The qth Hankel determinant for q ≥ 1 and n ≥ 0 is th 2 on Hankel determinants for various subclasses of S. The q defined by Hq(n). H2(1) = a3 − a2 is greatly familiar so called Fekete-Szego¨ functional. It has been discussed Hankel determinant for the conditions of q ≥ 1 and n ≥ 0 since 1930’s. Mathematicians still have lots of interest to (see [17]) as follows: this, especially in an altered version of a − µa 2. Indeed, 3 2 an an+1 ··· an+q−1 there are many papers explore the determinants H (2) and 2 an+1 an+2 ··· an+q H3(1). From the explicit form of the functional H3(1), Hq(n) = . . . . (2) . . .. . it holds H2(k) provided k from 1-3. Exceptionally, one . . . 2 of the determinant that is H2(3) = a3a5 − a4 has not an+q−1 an+q ··· an+2q−2 been discussed in many times yet. In this article, we deal This determinant has been studied by several authors. The with this Hankel determinant H (3) = a a − a 2. From 2 3 5 4 classical Fekete-Szego¨ functional is H (1). This functional this determinant, it consists of coefficients of function f 2 has been studied since 1930s and until now (see [1–3, 7, 8, which belongs to the classes S∗ and K so we may find S S 10, 14, 16, 19, 20, 23]). Fekete and Szego¨ then farther gener- the bounds of |H2(3)| for these classes. Likewise, we got 2 alized the estimation a3 − µa2 where µ is real and f ∈ S. the sharp results for S∗ and K for which a = 0 are obtained. S S 2 For example, in [1] and [3], the researchers generalised the Fekete-Szego¨ problems by using different operator such as q- Ruscheweyh operator and Komatu integral operator. In fact, H (2) H (1) Keywords Univalent Functions, Starlike Functions the determinants 2 and 3 have been discussed by many mathematicians (see [4, 5, 11–13, 18, 21, 22, 24]). Espe- w.r.t. Symmetric Points, Convex Functions w.r.t. Symmetric Points, Hankel Determinant cially, from [5] and [18], mathematicians extend their ideas to bi-univalent functions instead of univalent functions. Lately, Zaprawa [25] premeditated the determinant H2(3) for S. This included well known subclasses of S which are starlike functions, convex functions and functions whose Mathematics and Statistics 8(5): 566-569, 2020 567 derivative has a positive real part, denoted by S∗, K and R re- Lemma 2.6. [25] If p ∈ P then spectively. The determinant H2(3) is from the explicit form of 2 1 2 1 3 H3(1) where H3(1) = H2(3) + a2(a3a4 − a2a5) + a3H2(2). p2p4 − p3 ≤ 4 − |p2| + |p2| . Inspired by this, particularly, in section 3, we obtained the 2 4 bounds of |H2(3)| for the class of starlike functions w.r.t. 
sym- ∗ ∗ metric points, SS and the class of convex functions w.r.t. sym- 3 Results on H2(3) for SS and KS metric points, KS. Furthermore, in section 4, we obtained the ∗ bounds of |H2(3)| for the class of starlike functions w.r.t. sym- In this section, we obtained the bounds of H2(3) for SS and ∗ K . Let f be given by (1). Then metric conjugate points, SSC and the class of convex functions S w.r.t symmetric conjugate points, K . SC 2zf 0(z) f ∈ S∗ ⇔ ∈ P (9) S f(z) − f(−z) 2 Preliminary Results and 2 (zf 0(z))0 P p First, let denotes the class of functions consisting of , f ∈ KS ⇔ 0 ∈ P. (10) such that (f(z) − f(−z)) ∗ ∞ As f ∈ SS, we have (9) and ∃ p ∈ P , which yields X p(z) = 1 + p z + p z2 + ... = 1 + p zn. (3) 1 2 n 2zf 0(z) = (f(z) − f(−z)) p(z) (11) n=1 which are regular in the open unit disc U and satisfy Re p(z) > for some z ∈ U. By comparing and equating the coefficients 0 for any z ∈ U. Here p(z) is called the Caratheodory` function in (11) yields [6]. 1 a = p , (12) Lemma 2.1. [9] If p ∈ P then |pn| ≤ 2 for each n. 2 2 1 Lemma 2.2. [9] If p ∈ P then the sharp estimate 1 |p − µp p | ≤ 2 holds for n, k = 1, 2, ..., n > k, a3 = p2, (13) n k n−k 2 µ ∈ [0, 1]. 1 1  From this Lemma 2.2, Livingston [15] proved a = p p + p (14) 4 4 2 1 2 3 |pn − pkpn−k| ≤ 2.   [25] If p ∈ P then for µ ∈ the following sharp 1 1 Lemma 2.3. R a = p 2 + p (15) estimate holds 5 4 2 2 4 ( ∗ 2 1 Therefore, if f ∈ SS then 2 2 − µ|p1| ; µ ≤ 2 p2 − µp1 ≤ 2 1 (4) 2 − (1 − µ)|p1| ; µ ≥ . 1 2 H (3) = 4p 3 + 8p p − p 2p 2 − 4p p p − 4p 2 2 64 2 2 4 1 2 1 2 3 3 Lemma 2.4. [25] If p ∈ P then (16) f ∈ K 1 Similarly, since S, we have (10) that |p − p p | ≤ 8 − 2|p |2 + |p |3 . ∃ p ∈ P such that 3 1 2 4 1 1 Remark 2.1 Considering p(zn), we can obtain related versions 2 (zf 0(z))0 = (f(z) − f(−z))0 p(z) (17) of these lemmas writing pkn instead of pk, k = 1, 2, ··· . For example for some z ∈ U.
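The closed form (16) follows from substituting the coefficient relations (12)-(15) into H2(3) = a3 a5 - a4^2. A quick symbolic check with SymPy (illustrative only, not part of the paper) is:

    import sympy as sp

    p1, p2, p3, p4 = sp.symbols('p1 p2 p3 p4')

    a3 = p2 / 2
    a4 = (p1 * p2 / 2 + p3) / 4      # relation (14)
    a5 = (p2**2 / 2 + p4) / 4        # relation (15)

    H23 = sp.expand(a3 * a5 - a4**2)
    target = (4*p2**3 + 8*p2*p4 - p1**2*p2**2 - 4*p1*p2*p3 - 4*p3**2) / 64
    print(sp.simplify(H23 - target) == 0)   # True, matching (16)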

( 2 1 2 2 − µ|p2| ; µ ≤ 2 By comparing and equating coefficients in (17) yields p4 − µp2 ≤ (5) 2 − (1 − µ)|p |2; µ ≥ 1 . 2 2 1 a = p (18) and 2 4 1 1 |p − p p | ≤ 8 − 2|p |2 + |p |3 (6) 6 2 4 4 2 2 1 a3 = p2 (19) 2 6 Lemma 2.5. [25] If p ∈ P then p2p4 − p3 ≤ 4. The equal- ity holds only for functions 1  1  a4 = p3 + p1p2 (20) 1 + z3 16 2 p(z) = , (7) 1 − z3   1 1 2 a5 = p4 + p2 (21) 1 + z2 20 2 p(z) = 2 (8) 1 − z By substituting (18)-(21) into the definition of H2(3), we ob- and their rotations. tain 568 Hankel Determinant H2(3) for Certain Subclasses of Univalent Functions

In both situations, we applied the triangle inequality and Lem- mas 2.1, 2.2, 2.3 and 2.6. Let q = |p |, then from (25), we 1 2 H (3) = 128p p + 64p 3 − 60p 2 have 2 15360 2 4 2 3 2 2 "   −60p p p − 15p p (22) 1 1 2 1 3 1 2 3 1 2 |H (3)| ≤ 4 4 − |p | + |p | + 4 |p | (2) 2 64 2 2 4 2 2 # 3 2 From the expression of H2(3), it is not easy to get the bounds +3|p2| + |p2| (2) (27) of equations (16) and (22). Thus, we begin with a peculiar case. 1 = 4q3 − 2q2 + 10q + 16 (28) 64 Theorem 3.1. Let f be given by (1) with assumption that a2 = 0. It achieves the maximum value in the range of [0, 2] for q = ∗ 2. Thence, we may get the result as shown in the theorem. (a). If f ∈ SS then |H2(3)| ≤ 1. 1 Similarly, from (26), we have (b). If f ∈ KS then |H2(3)| ≤ 15 . "   1 1 2 1 3 |H2(3)| ≤ 60 4 − |p2| + |p2| + 60|p2|(2) Proof. Let f be given by (1). With assumption that a2 = 0, 15360 2 4 ∗ (12) gives p1 = 0. If f ∈ SS, then (16) yields # 2 3 +15|p2| (2) + 8|p2|(2) + 57|p2| (29) 1 3 2 |H2(3)| = p2 + p2p4 + p2p4 − p3 (23) 16 1 = 72q3 + 136q + 240 (30) 15360 Similarly to f ∈ KS, (22) gives which reaches the maximum value in the range of [0, 2] for 1 3 2 q = 2. The result follows. The proof of Theorem 3.2 has been |H2(3)| = 64p2 + 68p2p4 + 60 p2p4 − p3 15360 shown. (24) By applying Lemma 2.1 and 2.5, we obtain the bounds 1 and ∗ 1/15 for SS and KS respectively. ∗ The expression (23) equals to 1 if and only if |p2| = 2, 4 Results on H (3) for S and K 2 2 SC SC |p4| = 2 and |p2p4 − p3 | = 4. It is particularly for the rota- z ∗ tions of (8) so we have the extremal functions f(z) = 1−z2 In this section, we acquired the bounds of H2(3) for SSC and its rotations. and KSC . Let f be given by (1). Then 2zf 0(z) This may apply to the estimation of |H2(3)| for KS and this f ∈ S∗ ⇔ ∈ P (31) 1 1+z SC f(z) − f(−z) drives that for the particularly rotations of f(z) = 2 log 1−z there is |H2(3)| = 1/15. and 2 (zf 0(z))0 f ∈ KSC ⇔ 0 ∈ P. (32) f(z) − f(−z) Next, two general theorems are proven. Similar lines of proof in Theorem 3.1 and Theorem 3.2. The ∗ Theorem 3.2. Let f be given by (1). bounds of |H2(3)| for SSC and KSC are given as follows: ∗ 13 (a). If f ∈ SS then |H2(3)| ≤ 16 . 17 Theorem 4.1. Let f be given by (1) with assumption that a2 = (b). If f ∈ KS then |H2(3)| ≤ . 240 0 and an are real numbers. ∗ (a). If f ∈ SSC then |H2(3)| ≤ 1. 1 ∗ (b). If f ∈ KSC then |H2(3)| ≤ 15 . Proof. Let f ∈ SS. From (16), it follows that ∗ Next, we get two general theorems for SSC and KSC . 1  2 3 H2(3) = 4 p2p4 − p3 + 4p2(p4 − p1p3) + 3p2 64 Theorem 4.2. Let f be given by (1) with assumption that an +p 2 p − p 2 (25) are real numbers. 2 2 1 ∗ 13 (a). If f ∈ SSC then |H2(3)| ≤ 16 . 17 (b). If f ∈ KSC then |H2(3)| ≤ 240 .

Similarly, if f ∈ KS, then from (22) gives 5 Conclusions 1  2 H2(3) = 60 p2p4 − p3 + 60p2(p4 − p1p3) 15360 In conclusion, this article has obtained the bounds of Hankel 2 2 2 3 +15p p − p  + 8p p − p  + 57p  2 2 2 1 2 4 2 2 determinant H2(3) = a3a5 − a4 for functions f belongs to ∗ ∗ (26) the class of SS, KS, SSC and KSC . Mathematics and Statistics 8(5): 566-569, 2020 569

Conflicts of Interest [12] A. Janteng, S. A. Halim, M. Darus, Hankel determinant for star- like and convex functions, Int. J. Math. Anal., Vol. 1, No. 13, We as authors affirm that, there are no conflicts of interest as 619–625, 2007. regards to the publication of this article. [13] A. Janteng, S. A. Halim, M. Darus, Estimate on the second Han- kel functional for functions whose derivative has a positive real Acknowledgements part, J. Qual. Meas. Anal.(JQMA), Vol. 4, No. 1, 189-195, 2008. [14] J. A. Jenkins, On certain coefficients of univalent functions, An- We thank to UMSGreat grant, GUG0269-2/2018 for finan- alytic Functions, Princeton Univ. Press, 159-194, 1960. cial support and all the anonymous papers as references. [15] A. E. Livingston. The coefficients of multivalent close-to- convex functions, Proc. Am. Math. Soc. Vol. 21, No. 3, 545–552, 1969. REFERENCES [16] R. R. London, Fekete-Szego¨ inequalities for close-to-convex [1] A. Alsoboh, M. Darus. On Fekete–Szego¨ problems for certain functions, Proceedings of the American Mathematical Society, subclasses of analytic functions defined by differential operator Vol. 117, No. 4, 947–950, 1993. involving q-Ruscheweyh operator, Journal of Function Spaces, Hindawi, Vol. 2020, 2020. [17] J. Noonan, D. K. Thomas. On the second Hankel determinant of a really mean p-valent functions, Transactions of the American [2] S¸. Altinkaya, S. Yalcin. The Feketa-Szego¨ problem for a gen- Mathematical Society, Vol. 223, 337-346, 1976. eral class of bi-univalent functions satisfying subordinate condi- tions, Sahand Communications in Mathematical Analysis, Vol. [18] H. Orhan, N. Magesh, J. Yamini. Bounds for the second Hankel 5, No. 1, 1-7, 2017. determinant of certain bi-univalent functions, Turkish J. Math., [3] H. Arıkan, H. Orhan, M. C¸aglar.˘ Fekete-Szego¨ inequality for a Vol. 40, No. 3, 679–687, 2016. subclass of analytic functions defined by Komatu integral oper- ator, AIMS Mathematics, Vol. 5, No. 3, 1745-1756, 2020. doi: [19] A. Pfluger, The Fekete-Szego¨ inequality by a variational 10.3934/math.2020118 method, Annales Academiae Scientiorum Fennicae Seria A. I., Vol. 10, 447–454, 1985. [4] D. Bansal, S. Maharana, J. K. Prajapat. Third order Hankel de- terminant for certain univalent functions, J. Korean Math. Soc. [20] A. C. Shaeffer, D. C. Spencer, The coefficient of schlicht func- Vol.52, No. 6, 1139–1148, 2015. tions, Duke Math. J., Vol. 10, 611-635, 1943. [5] E. Deniz, M. Caglar, H. Orhan. Second Hankel determinant for bi-starlike and bi-convex functions of order beta, Appl. Math. [21] L. Shi, I. Ali, M. Arif, N. E. Cho, S. Hussain, H. Comput., Vol. 271, 301-307, 2015. Khan. A Study of third Hankel determinant problem for certain subfamilies of analytic functions involving car- [6] P. L. Duren. Univalent functions, Grundlehren der Mathema- dioid domain, Mathematics, MDPI, Vol. 7, No. 5, 2019. tischen Wissenschaften, Vol. 259, Springer, New York, USA, https://doi.org/10.3390/math7050418 1983. [22] D. Vamshee Krishna, B. Venkateswarlu, T. RamReddy. Third [7] M. Fekete, G.Szego,¨ Eine Bermerkung uber¨ ungerade schlichte Hankel determinant for bounded turning functions of order al- Funktionen, J. Lond. Math. Soc., Vol. 8, 85-89, 1933. pha, Journal of the Nigerian Mathematical Society, Vol. 34, Is- [8] G. M. Goluzin, Some questions in the theory of univalent func- sue 2, 121-127, 2015. tions, Trudy Mat. Inst. Steklova, Vol. 27, 1-112, 1949. [23] P. Zaprawa. Estimates of initial coefficients for bi-Univalent [9] T. Hayami, S. Owa. 
Generalized Hankel determinant for certain functions, Hindawi Publishing Corporation Abstract and Ap- classes, Int. J. Math. Anal., Vol. 4, No. 52, 2573–2585, 2010. plied Analysis, Vol. 2014 ,2014.

[10] A. Janteng, S. A. Halim, M. Darus. Fekete-Szego¨ problem for [24] P. Zaprawa. Third Hankel determinants for subclasses of uni- certain subclass of quasi-convex functions, Int. J. Contemp. valent functions. Mediterr. J. Math., Vol. 14, No. 19, 2017. Math. Sci., Vol. 1, No. 1, 45-51, 2006. https://doi.org/10.1007/s00009-016-0829-y [11] A. Janteng, S. A. Halim, M.Darus, Hankel determinant for func- tions starlike and convex with respect to symmetric points, Jour- [25] P. Zaprawa. On Hankel determinant H2(3) for univa- nal of Quality Measurement and Analysis, Vol. 2, No. 1, 37–43, lent functions, Results Math, Vol. 73, No. 89, 2018. 2006. https://doi.org/10.1007/s00025-018-0854-1 Mathematics and Statistics 8(5): 570-576, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080511

Fuzzy Sumudu Decomposition Method for Fuzzy Delay Differential Equations with Strongly Generalized Differentiability

N. A. Abdul Rahman

School of Mathematical Sciences Universiti Sains Malaysia, 11800 USM, Penang, Malaysia

Received April 13, 2020; Revised June 19, 2020; Accepted July 10, 2020 Cite This Paper in the following Citation Styles (a): [1] N. A. Abdul Rahman, "Fuzzy Sumudu Decomposition Method for Fuzzy Delay Differential Equations with Strongly Generalized Differentiability," Mathematics and Statistics, Vol. 8, No. 5, pp. 570-576, 2020. DOI: 10.13189/ms.2020.080511. (b): N. A. Abdul Rahman, (2020). Fuzzy Sumudu Decomposition Method for Fuzzy Delay Differential Equations with Strongly Generalized Differentiability. Mathematics and Statistics, 8(5), 570-576. DOI: 10.13189/ms.2020.080511.

Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License Abstract Fuzzy delay differential equation has always 1 Introduction been a tremendous way to model real-life problems. It has been developed throughout the last decade. Many types of Delay differential equations (DDEs) have long being used fuzzy derivatives have been considered, including the recently to model real-world problems comprising many fields such as introduced concept of strongly generalized differentiability. mathematics, physics and biology. There are useful whenever However, considering this interpretation, very few methods the model possesses functional terms. Some of recent works have been introduced, obstructing the potential of fuzzy using DDEs are done in [1] where the authors studied DDEs delay differential equations to be developed further. This under dissipative type conditions. Meanwhile, An et al. in- paper aims to provide solution for fuzzy nonlinear delay vestigated the impulsive hybrid interval-valued delay integro- differential equations and the derivatives considered in this differential equations [2]. As useful as DDEs are, there are paper is interpreted using the concept of strongly generalized still limitations encountered. Ordinary or classical DDEs can- differentiability. Under this method, the calculations will lead not handle real-worlds problems in the most ideal way due to to two cases i.e. two solutions, and one of the solutions is the presence of uncertainties and fuzziness. This is a common decreasing in the diameter. To fulfil this, a method resulting occurence when dealing with our surrounding since we cannot from the elegant combination of fuzzy Sumudu transform be certain on the measurement taken due to some constraint in and Adomian decomposition method is used, it is termed as our knowledge and senses. At instance, when we consider a fuzzy Sumudu decomposition method. A detailed procedure model of a population, uncertainties might arise due to birth, for solving fuzzy nonlinear delay differential equations with death and migrations. the mentioned type of derivatives is constructed in detail. A To handle the shortcoming, scientist come to many concept numerical example is provided afterwards to demonstrate the that have been proposed to handle uncertainty, some of which applicability of the method. It is shown that the solution is are probabilistic and stochastic theory. One of the newest tool not unique, and this is in accord with the concept of strongly for handling uncertainty is using fuzzy set theory pioneered by generalized differentiability. The two solutions can later be Zadeh [3]. At instance, fuzzy differential equations have been chosen by researcher with regards to the characteristic of the established over the years and have been opened to many types problems. Finally, conclusion is drawn. of interpretations of derivative. For example, using Zadeh ex- tension principle, differential inclusion and Seikkala derivative. Keywords Fuzzy Differential Equations, Fuzzy Sumudu The latest addition to the list is the concept of strongly general- Transform, Sumudu Decomposition Method, Nonlinear Delay ized differentiability. Soon after, fuzzy DDEs is coined out as Differential Equations a result from the combination of DDEs and fuzzy differential equations. 
One of the earliest results on fuzzy DDEs was discussed by Lupulescu, and results on existence and uniqueness were presented [4]; the works are driven by the Liu process. Since then, many works have considered fuzzy DDEs in many areas, such as [5], focusing on fuzzy derivatives under the concept of strongly generalized differentiability. On the other hand, in [6] the existence of local and global solutions of fuzzy delay differential inclusions was studied. Recent works on the topic are [7], discussing a prey-predator model with delay terms, and [8], investigating fuzzy DDEs under granular differentiability. In [9], a Runge-Kutta method was used to solve DDEs with uncertain parameters and spatial pattern formation was analysed. If we compare fuzzy DDEs based on the Zadeh extension principle with fuzzy DDEs interpreted under the concept of strongly generalized differentiability, it is obvious that very few methods are available for the latter. This serves as a motivation for the study in this paper: to pinpoint the extensions and modifications that need to be made in order to handle fuzzy DDEs under strongly generalized differentiability.

One of the methods that has been used to solve fuzzy differential equations in the recent literature is the fuzzy Sumudu transform (FST); this can be explored in [10], [11] and [12]. The method possesses the unity property, often referred to as the scale-preserving property. Using this property, researchers can gain insight into the behaviour of the solution as the variable approaches certain values. In other words, the transformed function can be treated as a replica of the original function rather than as a dummy, which happens when considering other types of fuzzy integral transform such as the fuzzy Laplace transform [13]. Furthermore, the FST has been applied to several types of fuzzy differential equations, for example fuzzy differential equations with fractional-order derivatives in [14] and [15], fuzzy partial differential equations [13], as well as fuzzy integral equations [16]. The method, however, suffers a drawback, because an integral transform alone can only be used to solve linear problems, while many real-world problems are modelled using nonlinear terms.

In this paper, we focus on the combination of the FST with a decomposition method, termed the fuzzy Sumudu decomposition method (FSDM). This method has been successfully applied to linear fuzzy differential equations [17]. In this work, we implement the method on nonlinear fuzzy differential equations with delay terms to exhibit its practicality on a wider class of fuzzy differential equations. Besides extending the applicability to a wider range of DDEs, the FSDM also helps to simplify the working, since the differential equations are reduced by the FST part of the method. This is illustrated throughout the procedures and the example provided in this paper.

This paper is divided into several sections. The next section provides some basic concepts necessary to understand the paper. This is followed by Section 3, which provides a step-by-step procedure for solving nonlinear fuzzy delay differential equations. Section 4 demonstrates the application of the proposed method on a numerical example. Finally, a conclusion is drawn.

2 Preliminaries

In this section, some important prerequisites are revisited. These involve the concept of fuzzy numbers, fuzzy functions and some previous results on the FST.

Definition 1. [3] A fuzzy number is a mapping ũ : ℝ → [0, 1] that satisfies the following conditions.

1. ∀ ũ ∈ F(ℝ), ũ is upper semicontinuous,
2. ∀ ũ ∈ F(ℝ), ũ is fuzzy convex, i.e. ũ(γs + (1 − γ)t) ≥ min{ũ(s), ũ(t)} for all s, t ∈ ℝ and γ ∈ [0, 1],
3. ∀ ũ ∈ F(ℝ), ũ is normal,
4. supp ũ = {t ∈ ℝ | ũ(t) > 0} is the support of ũ, and it has a compact closure cl(supp ũ).

Definition 2. [18] Let ũ ∈ F(ℝ) and α ∈ ]0, 1]. The α-level set of ũ is the crisp set ũ^α that contains all the elements with membership degree greater than or equal to α, i.e.

\[ \tilde u^{\alpha} = \{ t \in \mathbb{R} \mid \tilde u(t) \ge \alpha \}, \]

where ũ^α denotes the α-level set of the fuzzy number ũ.

Definition 3. [19] A parametric form of an arbitrary fuzzy number ũ is an ordered pair [u^α, ū^α] of functions u^α and ū^α, for any α ∈ [0, 1], that fulfil the following conditions.

i. u^α is a bounded left-continuous monotonic increasing function in [0, 1],
ii. ū^α is a bounded left-continuous monotonic decreasing function in [0, 1],
iii. u^α ≤ ū^α.

Definition 4. [20] Let f̃ : ]a, b[ → F(ℝ) be a fuzzy function and t₀ ∈ ]a, b[. We say that f̃ is strongly generalized differentiable at t₀ if there exists an element f̃′(t₀) ∈ F(ℝ) such that

i. for all h > 0 sufficiently small, the Hukuhara differences f̃(t₀ + h) ⊖_H f̃(t₀) and f̃(t₀) ⊖_H f̃(t₀ − h) exist, as do the limits (in the metric D)

\[ \lim_{h\to 0}\frac{\tilde f(t_0+h)\ominus_H \tilde f(t_0)}{h} = \lim_{h\to 0}\frac{\tilde f(t_0)\ominus_H \tilde f(t_0-h)}{h} = \tilde f'(t_0), \]

or

ii. for all h > 0 sufficiently small, the Hukuhara differences f̃(t₀) ⊖_H f̃(t₀ + h) and f̃(t₀ − h) ⊖_H f̃(t₀) exist, as do the limits (in the metric D)

\[ \lim_{h\to 0}\frac{\tilde f(t_0)\ominus_H \tilde f(t_0+h)}{-h} = \lim_{h\to 0}\frac{\tilde f(t_0-h)\ominus_H \tilde f(t_0)}{-h} = \tilde f'(t_0). \]

Theorem 1. [21] Let f̃ : ℝ → F(ℝ) be a continuous fuzzy function and f̃(t) = [f^α(t), f̄^α(t)] for every α ∈ [0, 1]. Then

1. if the fuzzy function f̃ is (i)-differentiable, then f^α(t) and f̄^α(t) are both differentiable and
\[ \tilde f'(t) = [(\underline f')^{\alpha}(t), \; (\overline f')^{\alpha}(t)], \]

2. if the fuzzy function f̃ is (ii)-differentiable, then f^α(t) and f̄^α(t) are both differentiable and
\[ \tilde f'(t) = [(\overline f')^{\alpha}(t), \; (\underline f')^{\alpha}(t)]. \]

Definition 5. [10] Let f̃ : ℝ → F(ℝ) be a continuous fuzzy function. Suppose that f̃(ut)e^{−t} is improper fuzzy Riemann-integrable on [0, ∞[; then ∫₀^∞ f̃(ut)e^{−t} dt is called the fuzzy Sumudu transform and is denoted by

\[ \tilde G(u) = \mathcal{S}[\tilde f(t)](u) = \int_0^{\infty} \tilde f(ut)\, e^{-t}\, dt, \qquad u \in [-\tau_1, \tau_2], \]

where the variable u is used to factor the variable t in the argument of the fuzzy function and τ₁, τ₂ > 0.

Theorem 2. [13] Let f̃ : ℝ → F(ℝ) be a continuous fuzzy-valued function and let f̃ be the primitive of f̃′ on [0, ∞). Then

\[ \mathcal{S}[\tilde f'(t)](u) = \frac{\mathcal{S}[\tilde f(t)] \ominus_H \tilde f(0)}{u}, \]

where f̃ is (i)-differentiable, or

\[ \mathcal{S}[\tilde f'(t)](u) = \frac{(-\tilde f(0)) \ominus_H (-\mathcal{S}[\tilde f(t)])}{u}, \]

where f̃ is (ii)-differentiable.
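The transform in Definition 5 and the derivative rule in Theorem 2 are easy to check numerically in the crisp (non-fuzzy) case. The short Python sketch below is an illustration added for this purpose and is not part of the original paper; it verifies that S[tⁿ](u) = n! uⁿ and that, for a crisp function, the (i)-differentiable rule of Theorem 2 reduces to the classical Sumudu property S[f′](u) = (S[f](u) − f(0))/u.

```python
import numpy as np
from scipy.integrate import quad
from math import factorial

def sumudu(f, u, upper=60.0):
    """Crisp Sumudu transform S[f](u) = int_0^inf f(u*t) * exp(-t) dt (Definition 5)."""
    value, _ = quad(lambda t: f(u * t) * np.exp(-t), 0.0, upper)
    return value

# S[t^n](u) should equal n! * u^n
u = 0.3
for n in range(4):
    print(n, sumudu(lambda t, n=n: t**n, u), factorial(n) * u**n)

# Theorem 2, crisp (i)-differentiable case: S[f'](u) = (S[f](u) - f(0)) / u
f, fprime, u = np.sin, np.cos, 0.4
lhs = sumudu(fprime, u)
rhs = (sumudu(f, u) - f(0.0)) / u
print(lhs, rhs)  # both approximate 1/(1 + u**2) to quadrature accuracy
```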

3 Fuzzy Sumudu decomposition method for fuzzy nonlinear delay differential equations

In this section, we revisit the required procedure for solving fuzzy DDEs. The method is a sophisticated combination of the fuzzy Sumudu transform and the Adomian decomposition method of George Adomian [22]. The procedure for solving linear fuzzy differential equations with this method has been provided in [17]. In this paper, we modify the procedure to handle nonlinear terms in the fuzzy DDE. Consider

\[ \frac{d\tilde y}{dt} + \tilde R(y) + \tilde N(t-\tau) = \tilde f(t), \qquad (1) \]

with the initial condition

\[ \tilde y(0) = \tilde y_0 = [\underline{y}_0, \overline{y}_0], \qquad (2) \]

where ỹ = ỹ(t), R̃ is a linear bounded fuzzy operator, f̃(t) a continuous fuzzy function, Ñ a nonlinear bounded fuzzy operator and dỹ/dt the first-order fuzzy derivative. Applying the FST to both sides of Eq. (1),

\[ \mathcal{S}\!\left[\frac{d\tilde y}{dt}\right]\!(u) + \mathcal{S}[\tilde R(y)](u) + \mathcal{S}[\tilde N(t-\tau)](u) = \mathcal{S}[\tilde f(t)](u). \qquad (3) \]

From Theorems 1 and 2, Eq. (3) is separated into two cases to indicate the type of differentiability possessed. First, consider (i)-differentiable ỹ(t); then

\[ \frac{\mathcal{S}[\tilde y(t)](u) \ominus_H \tilde y_0}{u} + \mathcal{S}[\tilde R(y)](u) + \mathcal{S}[\tilde N(t-\tau)](u) = \mathcal{S}[\tilde f(t)](u), \qquad (4) \]

represented parametrically as

\[ \frac{s[\underline{y}(t)](u) - \underline{y}_0}{u} + s[\underline{R}(y)](u) + s[\underline{N}(t-\tau)](u) = s[\underline{f}(t)](u), \quad \frac{s[\overline{y}(t)](u) - \overline{y}_0}{u} + s[\overline{R}(y)](u) + s[\overline{N}(t-\tau)](u) = s[\overline{f}(t)](u), \qquad (5) \]

and we obtain

\[ s[\underline{y}(t)](u) = \underline{y}_0 - u\,s[\underline{R}(y)](u) - u\,s[\underline{N}(t-\tau)](u) + u\,s[\underline{f}(t)](u), \quad s[\overline{y}(t)](u) = \overline{y}_0 - u\,s[\overline{R}(y)](u) - u\,s[\overline{N}(t-\tau)](u) + u\,s[\overline{f}(t)](u). \qquad (6) \]

Then, we define

\[ \underline{y}(t) = \sum_{n=0}^{\infty} \underline{y}_n(t), \qquad \overline{y}(t) = \sum_{n=0}^{\infty} \overline{y}_n(t). \qquad (7) \]

The fuzzy nonlinear operator can be further decomposed as follows:

\[ \underline{N}(t-\tau) = \sum_{n=0}^{\infty} \underline{A}_n(t), \qquad \overline{N}(t-\tau) = \sum_{n=0}^{\infty} \overline{A}_n(t), \qquad (8) \]

where [A_n, Ā_n] is the fuzzy Adomian polynomial of ỹ₀, ỹ₁, ..., ỹ_n:

\[ \underline{A}_n = \frac{1}{n!}\left[\frac{d^n}{d\lambda^n}\, \underline{N}\!\left(\sum_{i=0}^{\infty}\lambda^i \underline{y}_i\right)\right]_{\lambda=0}, \qquad \overline{A}_n = \frac{1}{n!}\left[\frac{d^n}{d\lambda^n}\, \overline{N}\!\left(\sum_{i=0}^{\infty}\lambda^i \overline{y}_i\right)\right]_{\lambda=0}. \qquad (9) \]

For instance,

\[ \underline{A}_0 = f(\underline{y}_0), \quad \overline{A}_0 = f(\overline{y}_0), \quad \underline{A}_1 = \underline{y}_1 f'(\underline{y}_0), \quad \overline{A}_1 = \overline{y}_1 f'(\overline{y}_0). \qquad (10) \]

Substituting (7)–(10) into (6),

\[ s\!\left[\sum_{n=0}^{\infty}\underline{y}_n(t)\right]\!(u) = \underline{y}_0 - u\,s\!\left[\underline{R}\sum_{n=0}^{\infty}\underline{y}_n(t)\right]\!(u) - u\,s\!\left[\sum_{n=0}^{\infty}\underline{A}_n(t)\right]\!(u) + u\,s[\underline{f}(t)](u), \]
\[ s\!\left[\sum_{n=0}^{\infty}\overline{y}_n(t)\right]\!(u) = \overline{y}_0 - u\,s\!\left[\overline{R}\sum_{n=0}^{\infty}\overline{y}_n(t)\right]\!(u) - u\,s\!\left[\sum_{n=0}^{\infty}\overline{A}_n(t)\right]\!(u) + u\,s[\overline{f}(t)](u). \qquad (11) \]

Comparing coefficients of ỹ, we obtain

\[ s[\underline{y}_0](u) = \underline{y}_0 + u\,s[\underline{f}(t)](u), \qquad s[\overline{y}_0](u) = \overline{y}_0 + u\,s[\overline{f}(t)](u), \]
\[ s[\underline{y}_n](u) = -u\,s[\underline{R}\,\underline{y}_{n-1}(t)](u) - u\,s[\underline{A}_{n-1}(t)](u), \qquad s[\overline{y}_n](u) = -u\,s[\overline{R}\,\overline{y}_{n-1}(t)](u) - u\,s[\overline{A}_{n-1}(t)](u). \qquad (12) \]

Applying the inverse FST, we conclude

\[ \underline{y}_0 = s^{-1}[\underline{y}_0] + s^{-1}\!\left[u\,s[\underline{f}(t)](u)\right], \qquad \overline{y}_0 = s^{-1}[\overline{y}_0] + s^{-1}\!\left[u\,s[\overline{f}(t)](u)\right], \]
\[ \underline{y}_n = -s^{-1}\!\left[\left(u\,s[\underline{R}\,\underline{y}_{n-1}(t)] + u\,s[\underline{A}_{n-1}(t)]\right)\!(u)\right], \qquad \overline{y}_n = -s^{-1}\!\left[\left(u\,s[\overline{R}\,\overline{y}_{n-1}(t)] + u\,s[\overline{A}_{n-1}(t)]\right)\!(u)\right]. \qquad (13) \]

Next, we consider ỹ(t) to be (ii)-differentiable. From Theorems 1 and 2, then

\[ \frac{(-\tilde y_0) \ominus_H (-\mathcal{S}[\tilde y(t)](u))}{u} + \mathcal{S}[\tilde R(y)](u) + \mathcal{S}[\tilde N(t-\tau)](u) = \mathcal{S}[\tilde f(t)](u). \qquad (14) \]

This can be represented in parametric form as follows:

\[ \frac{-\underline{y}_0 - (-s[\underline{y}(t)](u))}{u} + s[\underline{R}(y)](u) + s[\underline{N}(t-\tau)](u) = s[\underline{f}(t)](u), \quad \frac{-\overline{y}_0 - (-s[\overline{y}(t)](u))}{u} + s[\overline{R}(y)](u) + s[\overline{N}(t-\tau)](u) = s[\overline{f}(t)](u). \qquad (15) \]

Analogous to the previous case, we have

\[ \underline{y}_0 = s^{-1}[\underline{y}_0] + s^{-1}\!\left[u\,s[\underline{f}(t)](u)\right], \qquad \overline{y}_0 = s^{-1}[\overline{y}_0] + s^{-1}\!\left[u\,s[\overline{f}(t)](u)\right], \]
\[ \underline{y}_n = -s^{-1}\!\left[\left(u\,s[\underline{R}\,\underline{y}_{n-1}(t)] + u\,s[\underline{A}_{n-1}(t)]\right)\!(u)\right], \qquad \overline{y}_n = -s^{-1}\!\left[\left(u\,s[\overline{R}\,\overline{y}_{n-1}(t)] + u\,s[\overline{A}_{n-1}(t)]\right)\!(u)\right]. \qquad (16) \]

4 Numerical example

Consider the following delay differential equation under fuzzy settings, adapted from [23]:

\[ \tilde y' = 1 - 2\tilde y^2(t/2), \qquad (17) \]

where

\[ \tilde y(0) = [\alpha - 1, \; 1 - \alpha]. \qquad (18) \]

We apply the FST on both sides of (17) to get

\[ \mathcal{S}(\tilde y') = \mathcal{S}\!\left(1 - 2\tilde y^2(t/2)\right) = \mathcal{S}(1) - \mathcal{S}\!\left(2\tilde y^2(t/2)\right) = 1 - 2\,\mathcal{S}\!\left(\tilde y^2(t/2)\right). \qquad (19) \]

Then, Equation (19) can be divided into two cases, as stated previously.

Case 1: First, suppose ỹ(t) is (i)-differentiable; then

\[ s[\underline{y}'(t)](u) = 1 - 2\,s(\underline{y}^2(t/2))(u), \qquad s[\overline{y}'(t)](u) = 1 - 2\,s(\overline{y}^2(t/2))(u). \qquad (20) \]

From part (i) of Theorem 2, we have

\[ \frac{s[\underline{y}(t)](u) - \underline{y}(0)}{u} = 1 - 2\,s(\underline{y}^2(t/2))(u), \qquad \frac{s[\overline{y}(t)](u) - \overline{y}(0)}{u} = 1 - 2\,s(\overline{y}^2(t/2))(u). \qquad (21) \]

Rearranging to solve for s[y(t)](u) and s[ȳ(t)](u), we hence obtain

\[ s[\underline{y}(t)](u) = (\alpha - 1) + u - 2u\,s(\underline{y}^2(t/2))(u), \qquad s[\overline{y}(t)](u) = (1 - \alpha) + u - 2u\,s(\overline{y}^2(t/2))(u). \qquad (22) \]

Then the inverse FST is applied to both sides of (22):

\[ \underline{y}(t) = (\alpha - 1) + t - 2\,s^{-1}\!\left(u\,s(\underline{y}^2(t/2))(u)\right), \qquad \overline{y}(t) = (1 - \alpha) + t - 2\,s^{-1}\!\left(u\,s(\overline{y}^2(t/2))(u)\right), \qquad (23) \]

and

\[ \underline{y}_0(t) = (\alpha - 1) + t, \quad \overline{y}_0(t) = (1 - \alpha) + t, \quad \underline{y}_{n+1}(t) = -2\,s^{-1}\!\left[u\,s[\underline{A}_n](u)\right], \quad \overline{y}_{n+1}(t) = -2\,s^{-1}\!\left[u\,s[\overline{A}_n](u)\right]. \qquad (24) \]

From the proposed FSDM,

\[ \underline{A}_0 = \underline{y}_0^2(t/2) = (\alpha-1)^2 + \frac{t^2}{4} + 2(\alpha-1)\frac{t}{2}, \qquad (25) \]
\[ \overline{A}_0 = \overline{y}_0^2(t/2) = (1-\alpha)^2 + \frac{t^2}{4} + 2(1-\alpha)\frac{t}{2}, \qquad (26) \]
\[ \underline{A}_1 = 2\,\underline{y}_0(t/2)\,\underline{y}_1(t/2), \qquad (27) \]
\[ \overline{A}_1 = 2\,\overline{y}_0(t/2)\,\overline{y}_1(t/2). \qquad (28) \]

For n = 0,

\[ \underline{y}_1(t) = -2\left[(\alpha-1)^2 t + \frac{t^3}{12} + (\alpha-1)\frac{t^2}{2}\right] \qquad (29) \]

and

\[ \overline{y}_1(t) = -2\left[(1-\alpha)^2 t + \frac{t^3}{12} + (1-\alpha)\frac{t^2}{2}\right]. \qquad (30) \]

From (29) and (30),

\[ \underline{y}_1\!\left(\tfrac{t}{2}\right) = -2\left[(\alpha-1)^2\frac{t}{2} + \frac{t^3}{96} + (\alpha-1)\frac{t^2}{8}\right], \qquad \overline{y}_1\!\left(\tfrac{t}{2}\right) = -2\left[(1-\alpha)^2\frac{t}{2} + \frac{t^3}{96} + (1-\alpha)\frac{t^2}{8}\right]. \qquad (31) \]

For n = 1,

\[ \underline{y}_2(t) = 8\left[(\alpha-1)^3\frac{t^2}{4} + 3(\alpha-1)^2\frac{t^3}{24} + 7(\alpha-1)\frac{t^4}{384} + \frac{t^5}{960}\right] \qquad (32) \]

and

\[ \overline{y}_2(t) = 8\left[(1-\alpha)^3\frac{t^2}{4} + 3(1-\alpha)^2\frac{t^3}{24} + 7(1-\alpha)\frac{t^4}{384} + \frac{t^5}{960}\right]. \qquad (33) \]

The fuzzy series solution is given as follows:

\[ \underline{y}(t) = \underline{y}_0(t) + \underline{y}_1(t) + \underline{y}_2(t) + \cdots = (\alpha-1) + t - 2\left[(\alpha-1)^2 t + \frac{t^3}{12} + (\alpha-1)\frac{t^2}{2}\right] + 8\left[(\alpha-1)^3\frac{t^2}{4} + 3(\alpha-1)^2\frac{t^3}{24} + 7(\alpha-1)\frac{t^4}{384} + \frac{t^5}{960}\right] + \cdots \qquad (34) \]

and

\[ \overline{y}(t) = \overline{y}_0(t) + \overline{y}_1(t) + \overline{y}_2(t) + \cdots = (1-\alpha) + t - 2\left[(1-\alpha)^2 t + \frac{t^3}{12} + (1-\alpha)\frac{t^2}{2}\right] + 8\left[(1-\alpha)^3\frac{t^2}{4} + 3(1-\alpha)^2\frac{t^3}{24} + 7(1-\alpha)\frac{t^4}{384} + \frac{t^5}{960}\right] + \cdots \qquad (35) \]

The result obtained is illustrated in Figure 1. For the sake of simplicity, only the first, second and third terms of ỹ(t) are taken into account.

Case 2: Now we consider ỹ(t) to be (ii)-differentiable; then

\[ s[\underline{y}'(t)](u) = 1 - 2\,s(\overline{y}^2(t/2))(u), \qquad s[\overline{y}'(t)](u) = 1 - 2\,s(\underline{y}^2(t/2))(u). \qquad (36) \]

As in part (ii) of Theorem 2,

\[ \frac{s(\underline{y}(t)) - (1-\alpha)}{u} = 1 - 2\,s(\overline{y}^2(t/2))(u), \qquad \frac{s(\overline{y}(t)) - (\alpha-1)}{u} = 1 - 2\,s(\underline{y}^2(t/2))(u). \qquad (37) \]

Computing Case 2 using a step-by-step procedure analogous to Case 1, we have

\[ \underline{y}(t) = \underline{y}_0(t) + \underline{y}_1(t) + \underline{y}_2(t) + \cdots = (1-\alpha) + t - 2\left[(\alpha-1)^2 t + \frac{t^3}{12} + (\alpha-1)\frac{t^2}{2}\right] + 8\left[(\alpha-1)^3\frac{t^2}{4} + 3(\alpha-1)^2\frac{t^3}{24} + 7(\alpha-1)\frac{t^4}{384} + \frac{t^5}{960}\right] + \cdots \qquad (38) \]

and

\[ \overline{y}(t) = \overline{y}_0(t) + \overline{y}_1(t) + \overline{y}_2(t) + \cdots = (\alpha-1) + t - 2\left[(1-\alpha)^2 t + \frac{t^3}{12} + (1-\alpha)\frac{t^2}{2}\right] + 8\left[(1-\alpha)^3\frac{t^2}{4} + 3(1-\alpha)^2\frac{t^3}{24} + 7(1-\alpha)\frac{t^4}{384} + \frac{t^5}{960}\right] + \cdots \qquad (39) \]

Considering the first, second and third terms of our fuzzy series solution, the result is depicted in Figure 2.
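For polynomial terms, the operation s⁻¹[u s[·]] appearing in the recursion (24) is equivalent to integration from 0 to t, since the Sumudu transform satisfies S[∫₀ᵗ g(τ)dτ](u) = u S[g](u). The SymPy sketch below, added here as an illustration and not part of the original paper, uses this shortcut to recompute y₁ and y₂ of Case 1 and, for the crisp core α = 1, compares the three-term truncation with sin t, which solves the crisp problem y′ = 1 − 2y²(t/2), y(0) = 0.

```python
import sympy as sp

t, tau, alpha = sp.symbols('t tau alpha')

def next_term(A_n):
    # For polynomial A_n, -2 * s^{-1}[ u * s[A_n] ] equals -2 * int_0^t A_n(tau) d tau,
    # because S[int_0^t g] = u * S[g] for the Sumudu transform.
    return sp.expand(-2 * sp.integrate(A_n.subs(t, tau), (tau, 0, t)))

y0 = (alpha - 1) + t                                        # Eq. (24), lower branch, Case 1
A0 = sp.expand(y0.subs(t, t / 2) ** 2)                      # Eq. (25)
y1 = next_term(A0)                                          # reproduces Eq. (29)
A1 = sp.expand(2 * y0.subs(t, t / 2) * y1.subs(t, t / 2))   # Eq. (27)
y2 = next_term(A1)                                          # reproduces Eq. (32)
print(sp.factor(y1))
print(sp.expand(y2))

# Crisp core (alpha = 1): the truncation should agree with sin t up to order t^5,
# since sin t satisfies y' = cos t = 1 - 2*sin(t/2)**2 with y(0) = 0.
print(sp.expand((y0 + y1 + y2).subs(alpha, 1)))   # t - t**3/6 + t**5/120
print(sp.series(sp.sin(t), t, 0, 7))
```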

5 Discussion

From the results, we may conclude that the solutions are in line with the concept of strongly generalized differentiability in general. For Case 1 the solutions diverge as the value of t increases, while for Case 2 the solutions contract as higher values of t are chosen. These two behaviours of the solutions are in accord with the concept of strongly generalized differentiability. We can see that, unlike other concepts of the fuzzy derivative, the solution we obtain is not unique. This is characteristic of the type of differentiability we chose. Even so, this situation allows engineers and researchers to choose the best solution according to the characteristics of the problem. We can also see that the solution obtained when α = 1 is a crisp solution, and it represents the solution of the non-fuzzy DDE.

We can also see that a switching point happens at t = 0. This means that for values of t greater than 0 the solution is (i)-differentiable, while for values of t less than 0 the solution is (ii)-differentiable. For further discussion on switching points, please refer to [24].

Figure 1. The solutions when ỹ(t) is (i)-differentiable.

Figure 2. The solutions when ỹ(t) is (ii)-differentiable.

6 Conclusion and Recommendations

In this paper, the FSDM has been successfully used to find the solutions of fuzzy nonlinear DDEs. There may be previous papers discussing the same topic, but this paper considered a more recent type of fuzzy derivative, the strongly generalized differentiability. A procedure for finding the solutions has been constructed, and the numerical example has illustrated that the method is applicable in practice. It has also been shown that this approach simplifies the working by reducing the fuzzy DDEs under strongly generalized differentiability, whereas, as can be seen in the literature, previous studies on the topic always involve tedious calculations. For future research, we recommend further discussion on the switching point of the fuzzy solutions obtained, as well as the application to more complex fuzzy differential equations such as those involving higher-order derivatives. Other types of initial conditions, such as nonlinear fuzzy numbers, can also be used in the future to examine the behaviour of the solutions.

Acknowledgements

This research is funded by Universiti Sains Malaysia Short Term grant under the code 304/PMATHS/6315269.

References

[1] T. Donchev and A. Nosheen, "Fuzzy functional differential equations under dissipative-type conditions," Ukrainskyi Matematychnyi Zhurnal, vol. 65, no. 06, pp. 787–795, 2013.

[2] T. V. An, N. Van Hoa, and N. A. Tuan, "Impulsive hybrid interval-valued functional integro-differential equations," Journal of Intelligent & Fuzzy Systems, vol. 32, no. 1, pp. 529–541, 2017.

[3] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338–353, 1965.

[4] V. Lupulescu, "On a class of fuzzy functional differential equations," Fuzzy Sets and Systems, vol. 160, no. 11, pp. 1547–1562, 2009.

[5] A. Khastan, J. J. Nieto, and R. Rodríguez-López, "Fuzzy delay differential equations under generalized differentiability," Information Sciences, vol. 275, pp. 145–167, 2014.

[6] C. Min, N.-j. Huang, and L.-H. Zhang, "Existence of local and global solutions of fuzzy delay differential inclusions," Advances in Difference Equations, vol. 2014, no. 1, 2014.

[7] D. Pal, G. Mahapatra, and G. Samanta, "A study of bifurcation of prey–predator model with time delay and harvesting using fuzzy parameters," Journal of Biological Systems, vol. 26, no. 02, pp. 339–372, 2018.

[8] N. T. K. Son, H. V. Long, and N. P. Dong, "Fuzzy delay differential equations under granular differentiability with applications," Computational and Applied Mathematics, vol. 38, no. 3, p. 107, 2019.

[9] S. Indrakumar and K. Kanagarajan, "Runge-Kutta method of order four for solving fuzzy delay differential equations under generalized differentiability," Journal of Applied Nonlinear Dynamics, vol. 7, no. 2, pp. 131–146, 2018.

[10] N. A. Abdul Rahman and M. Z. Ahmad, "Applications of the fuzzy Sumudu transform for the solution of first order fuzzy differential equations," Entropy, vol. 17, no. 7, pp. 4582–4601, 2015.

[11] M. Z. Ahmad and N. A. Abdul Rahman, "Explicit solution of fuzzy differential equations by mean of fuzzy Sumudu transform," International Journal of Applied Physics and Mathematics, vol. 5, no. 2, pp. 86–93, 2015.

[12] N. A. Khan, O. A. Razzaq, and M. Ayyaz, "On the solution of fuzzy differential equations by fuzzy Sumudu transform," Nonlinear Engineering, vol. 4, no. 1, pp. 49–60, 2015.

[13] N. A. Abdul Rahman and M. Z. Ahmad, "Fuzzy Sumudu transform for solving fuzzy partial differential equations," The Journal of Nonlinear Science and Applications, vol. 9, no. 5, pp. 3226–3239, 2016.

[14] N. A. Abdul Rahman and M. Z. Ahmad, "Solving fuzzy fractional differential equations using fuzzy Sumudu transform," The Journal of Nonlinear Science and Applications, vol. 6, pp. 19–28, 2017.

[15] M. A. Najeeb Alam Khan, Oyoon Abdul Razzaq, "Notes on fuzzy fractional Sumudu transform," vol. 18, no. 1, pp. 63–73, 2018.

[16] N. A. Abdul Rahman and M. Z. Ahmad, "Solving fuzzy Volterra integral equations via fuzzy Sumudu transform," Applied Mathematics and Computational Intelligence, vol. 10, no. 5, pp. 2620–2632, 2017.

[17] N. A. A. Rahman, "Fuzzy Sumudu decomposition method for solving differential equations with uncertainty," AIP Conference Proceedings, vol. 2184, no. 1, p. 060042, 2019.

[18] O. Kaleva, "A note on fuzzy differential equations," Nonlinear Analysis: Theory, Methods & Applications, vol. 64, no. 5, pp. 895–900, 2006.

[19] M. Friedman, M. Ma, and A. Kandel, "Numerical solutions of fuzzy differential and integral equations," Fuzzy Sets and Systems, vol. 106, no. 1, pp. 35–48, 1999.

[20] B. Bede, I. J. Rudas, and A. L. Bencsik, "First order linear fuzzy differential equations under generalized differentiability," Information Sciences, vol. 177, no. 7, pp. 1648–1662, 2007.

[21] Y. Chalco-Cano and H. Román-Flores, "On new solutions of fuzzy differential equations," Chaos, Solitons & Fractals, vol. 38, no. 1, pp. 112–119, 2008.

[22] G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method. Kluwer Academic Publishers, Boston, 1994.

[23] H. Eltayeb and E. Abdeldaim, "Sumudu decomposition method for solving fractional delay differential equations," Research in Applied Mathematics, vol. 1, pp. 1–13, 2017.

[24] L. Stefanini and B. Bede, "Generalized Hukuhara differentiability of interval-valued functions and interval differential equations," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 3, pp. 1311–1328, 2009.
Mathematics and Statistics 8(5): 577-582, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080512

Construction a Diagnostic Test in the Form of Two-tier Multiple Choice on Calculus Material

Edy Nurfalah1,*, Irvana Arofah2, Ika Yuniwati3, Andi Haslinah4, Dwi Retno Lestari5

1Study Program of Mathematics Education, Universitas PGRI Ronggolawe Tuban, Indonesia 2Study Program of Mathematics, Universitas Pamulang, Indonesia 3Study Program of Mechanical Engineering, Politeknik Negeri Banyuwangi, Indonesia 4Study Program of Mechanical Engineering, Universitas Islam Makassar, Indonesia 5Ministry of Research and Technology/ National Agency for Research Dan Innovation, Indonesia

Received July 5, 2020; Revised August 21, 2020; Accepted September 11, 2020

Cite This Paper in the following Citation Styles (a): [1] Edy Nurfalah, Irvana Arofah, Ika Yuniwati, Andi Haslinah, Dwi Retno Lestari , "Construction a Diagnostic Test in the Form of Two-tier Multiple Choice on Calculus Material," Mathematics and Statistics, Vol. 8, No. 5, pp. 577 - 582, 2020. DOI: 10.13189/ms.2020.080512. (b): Edy Nurfalah, Irvana Arofah, Ika Yuniwati, Andi Haslinah, Dwi Retno Lestari (2020). Construction a Diagnostic Test in the Form of Two-tier Multiple Choice on Calculus Material. Mathematics and Statistics, 8(5), 577 - 582. DOI: 10.13189/ms.2020.080512. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  This work is a research and development study of two-tier multiple choice diagnostic test instruments on calculus material. The purposes of this study are: 1) obtaining the construction of a two-tier multiple choice diagnostic test based on content and construct validity, and 2) obtaining the quality of the two-tier multiple choice diagnostic test based on its reliability value. The method used is focused on the construction of diagnostic tests, and the development research was adapted from the Retnawati development model. The research generated: 1) the construction of a two-tier multiple choice diagnostic test based on content and construct validity, showing that the two-tier multiple choice diagnostic test is proven valid; and 2) the quality of the two-tier multiple choice diagnostic test based on the reliability value obtained, showing that the compiled two-tier diagnostic test instrument is reliable. The content validity is evidenced by the average validity index (V): the two-tier multiple choice diagnostic test instrument obtained an average validity index (V) of 0.9333 and the interview guideline instrument acquired a validity index (V) of 0.7556, both of which approach the value 1. For the construct validity, three dominant factors were acquired based on the scree plot, corresponding to the number of factors in the calculus material examined in this study. The quality of the two-tier multiple choice diagnostic test is supported by the reliability value gained.

Keywords  Two-tier Multiple Choice, Calculus Material, Retnawati Development Model, Reliability Value, Construct Validity

1. Introduction

Mathematics education has a very important role, because mathematics is a fundamental science that is used widely in various areas of life. Good education is capable of producing quality output and achievement, and abilities that can be beneficial for others [1]. Chambers [2] mentions that mathematics is a science of abstract patterns that has characteristics as a tool to solve problems, as a foundation of scientific and technological studies, and can provide ways to model situations in real life.

In addition, as students learn mathematics, they will learn about the power of mathematics, which will later develop their learning-to-learn skills. The students' reasoning ability developed through the mathematical learning process will increase their readiness to become lifelong learners. Mathematics also plays a very important role in the development of science, because mathematics is the basis

of science and technology and is also one of the fields of knowledge that play an important role in thinking, namely as a tool to solve problems in everyday life.

Based on this explanation, mathematics is clearly important and needs to be learned, understood, and mastered by students. However, the results of mathematics learning in schools are still not optimal, as can be seen from the number of students who still experience difficulties in learning mathematics. The difficulties experienced by students are explained in the research conducted by Yeo [3]: from the interview results, the difficulties experienced by students in understanding mathematics are a lack of understanding of the problem posed, lack of knowledge of strategies for resolving a problem, inability to translate problems into mathematics, and the inability of students to use correct mathematics.

Wijaya, Heuvel-Panhuizen, Doorman, and Robitzsch [4], in their research on students' difficulties in resolving context-based problems, focused on student error analysis. In that study, students' difficulties were analysed in terms of four things: (1) comprehension, (2) transformation, (3) mathematical processing, and (4) encoding. Based on the research, it was found that most students experience difficulty at the transformation stage (inability to transform context-based problems into mathematical models) and the comprehension stage (inability to understand the meaning of the problem). From this description, it can be concluded that students still experience difficulties in resolving mathematical problems, so that they make mistakes in the process of solving them.

Based on the study of the literature, there are several kinds of mistakes that students make in solving mathematics problems. Research conducted by Herutomo and Saputro [5] showed that, for algebraic material in one of the junior high schools in Semarang, students still make mistakes in solving problems on algebraic operations, and students are also wrong in interpreting the meaning of 'scribbling out' (cancelling) between numerator and denominator. These things show that students do not use their knowledge of integer and fraction operations when working with algebraic material. Students are still struggling and many make mistakes in resolving algebraic material. The most basic difficulty experienced by students is translating a story problem into a mathematical model. The consequence is obvious: if the mathematical model is wrong, then the next process will also be wrong.

The two-tier multiple choice diagnostic test is a form of test instrument. Before an instrument is used, it must first be analyzed, which will then show the quality of the instrument and show that the instrument is appropriate and feasible for use. The quality of an instrument can be seen from two main criteria: validity and reliability. Suryabrata [6] defines the validity of an instrument as the extent to which the instrument measures what it is supposed to measure. The reliability of an instrument shows the consistency of the measurement results, whether the instrument is tested on the same person or group of people at different times, or on a person or group of people at the same time. In addition to validity and reliability, several other things also need to be analyzed for the instrument, such as difficulty level, discriminating power, and the effectiveness of distractors for multiple choice questions.

Based on what has been described above, the authors were interested in conducting research by constructing a diagnostic test in the form of two-tier multiple choice to analyze or diagnose the mistakes students make in completing mathematical problems. Based on this, the researchers titled this study "Development of Two-Tier Multiple Choice Diagnostic Test Instruments on Calculus Material".

2. Methodological Research

The research method used in this research is development research focused on the construction of diagnostic tests. The product developed in this development research is a test instrument constructed in a two-tier multiple-choice format, used to uncover the mathematical errors that occur among high school students. The procedure of this research is divided into two phases, namely the product development stage and the product application stage [7]. The development was adapted from the Retnawati development model [8], with nine steps in it, namely (1) determining the purpose of the instrument preparation, (2) looking for relevant theories or material coverage, (3) drafting the instrument item indicators, (4) arranging the instrument items, (5) content validation, (6) revisions based on the validators' input, (7) conducting a trial with respondents to obtain participants' response data, (8) performing analysis (reliability, difficulty level, and discriminating power), and (9) assembling the instrument. Once the product development stage is completed, the research proceeds to the product application stage. The product application stage includes two things: carrying out the tests using the resulting product and then interpreting the test results.

Aiken formulated Aiken's formula V to compute a content-validity coefficient based on the assessment results from an expert panel of n people toward an item, in terms of how far the item represents the measured construct. The formula submitted by Aiken is shown below [10,11]:

\[ V = \frac{\sum s}{n(c-1)}, \]

where


V = validity index,
s = the score assigned by each rater minus the lowest score in the category used (s = r − lo, with r = rater score and lo = lowest score in the scoring categories),
n = number of raters,
c = number of criteria/rating categories.

Reliability

The instrument's reliability is intended to show the consistency of the test results if the observation is repeated. The level of the instrument's reliability is empirically proven by the size of the reliability coefficient, which lies in the range of 0 to 1 [10,11]. The higher the coefficient value, the higher the reliability, and vice versa. The Cronbach's Alpha coefficient formula is used to estimate the test reliability, calculated using the Iteman 4.3 computer program. The reliability estimation is based on the instrument reliability index, which is good if it is greater than 0.7 [10,11].

3. Results

3.1. Compile the Test Specification

The initial step in the preparation of diagnostic tests in this research is determining the test objectives. The test objectives developed in this study are to identify the mistakes students often make in solving mathematics problems. The tested material is calculus material selected according to the low percentage of absorption in the national examination (UN) in the 2015/2016 and 2016/2017 school years.

The next step is to compile the test grid. The test grid contains the material, the indicators, and the item number of each question. The selected calculus material is the limit of algebraic functions, derivatives of algebraic functions, and the indefinite integral of algebraic functions. The item indicator is an indicator that is adjusted to the standard competency and basic competencies of mathematics subjects on the calculus material. The diagnostic test is developed in the form of two-tier multiple choice, i.e. the first tier is a question with five answer options, while the second tier contains the reason or the student's calculation for the answer.

Based on the grid specified in the previous step, the questions are written by following the indicators in the created grid. From the grid that has been created, the 11 indicators are expanded into 15 items, each item having five alternative answers. The fifteen items developed consist of four items on the limit of algebraic functions, six items on the derivative of algebraic functions, and five items on the indefinite integral of algebraic functions.

The study of the contents, or content validation, is done by three experts from the Mathematics Education Study Program. The results are analyzed quantitatively and qualitatively. Quantitatively, each test item is scored and analyzed with the Aiken formula; the score of each test item on the validation sheet is between 1 and 5. The qualitative part takes the form of summaries of the opinions of each expert for the improvement of the items. In addition to the test instrument, the interview guideline instrument is also validated in the same way; the score of each question on the interview guidelines is between 1 and 4. For the assessment of each item, a validation sheet is given to each panel member. The result of filling in the expert validation sheet is then analyzed with the Aiken formula, which yields the validity index (V).

The range of V values that may be obtained is between 0 and 1. The higher the V value, or the closer the value is to 1, the higher the validity of an item; if the V value approaches 0, then the eligibility of the item is also lower. The following is the result of the validity index (V) calculation.

Table 1. Index validity of Two-Tier diagnostic test Instruments Items Validator 1 Validator 2 Validator 3 Index validity (V) Description 1 4 5 5 0,9167 Valid 2 5 4 5 0,9167 Valid 3 5 5 4 0,9167 Valid 4 3 5 5 0,8333 Valid 5 5 5 4 0,9167 Valid 6 4 5 5 0,9167 Valid 7 4 4 5 0,8333 Valid 8 4 5 5 0,9167 Valid 9 5 5 5 1 Valid 10 5 4 5 0,9167 Valid 11 5 5 5 1 Valid 12 5 5 5 1 Valid 13 5 5 5 1 Valid 14 5 4 5 0,9167 Valid 15 5 5 5 1 Valid Average 0,9333 Valid
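The per-item values in Table 1 can be recomputed directly from the three validator scores with Aiken's formula. The short Python sketch below is an illustration added here (not part of the paper); it implements V = Σs/(n(c − 1)) with s = r − lo, using lo = 1 and c = 5 for the test items and c = 4 for the interview guideline items of Table 2.

```python
def aiken_v(ratings, lo=1, c=5):
    """Aiken's content-validity index V = sum(r - lo) / (n * (c - 1))."""
    n = len(ratings)
    s = sum(r - lo for r in ratings)
    return s / (n * (c - 1))

print(round(aiken_v([4, 5, 5]), 4))       # item 1 of Table 1 -> 0.9167
print(round(aiken_v([3, 5, 5]), 4))       # item 4 of Table 1 -> 0.8333
print(round(aiken_v([5, 5, 5]), 4))       # item 9 of Table 1 -> 1.0
print(round(aiken_v([4, 4, 3], c=4), 4))  # item 1 of Table 2 (1-4 scale) -> 0.8889
```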


Table 2. Validity Index Items Validator 1 Validator 2 Validator 3 Index validity (V) Description 1 4 4 3 0,8889 Valid 2 3 4 3 0,7778 Valid 3 2 4 3 0,6667 Valid 4 3 3 3 0,6667 Valid 5 2 4 3 0,6667 Valid 6 3 3 3 0,6667 Valid 7 3 3 3 0,6667 Valid 8 3 4 4 0,8889 Valid 9 3 4 3 0,7778 Valid 10 4 4 3 0,8889 Valid Average 0,7556 Valid

Based on the validation result of the diagnostic test Table 3. instrument two-tier multiples choice calculus material as in Based on the value of Eigen and component variance Table 1 it is obtained that each validator provides an analysis result factors can be obtained that the student's assessment with the final result of its validity index of more response data to the diagnostic test of two-tier multiples than 0.8 which means high validity. So in general it can be choice material calculus SMA contains 3 Eigen values concluded that the diagnostic test instrument two-tier greater than 1, so it can be said that the two-tier multiples multiples choice calculus material in this study is valid, choice diagnostic test contains 3 factors. It is also which means the diagnostic test instrument two-tier strengthened by the results of the scree-plot of Eigen value, multiples choice calculus material has fulfilled each which is derived graph from three components while the indicator of the problem and is valid for analyzing the other shows the ramps graph. These results indicate that mistakes of students. there are 3 dominant factors measured in the diagnostic test Non-test instruments in the form of interview guidelines instrument of two-tier multiples choice calculus material. are also validated by experts. Based on the validation results it is obtained that the interview guidelines are provided with valid categories. Guidelines validation results along with an interview instrument for each criterion are met. It indicates that the guidelines and the interview instruments are valid for use.

3.2 Validity of Construction of Diagnostic Test Instruments The product trials were carried out at SMA Yogyakarta in 35 students of the grade XII IPA. Based on the results of the test product data obtained will be used to analyze the validity of the construction and reliability, Figure 1. Table 3. Results of KMO and Bartlet test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy. .753 The number of factors contained in the instrument can be Approx. Chi-Square 349.916 known from the scree-plot as in Figure 1. Many factors are Bartlett's Test of df 105 characterized by the pouring of the chart of Eigen value Sphericity acquisition. Figure 1 shows that there are 3 factors Sig. .000 measured in the diagnostic instrument of two-tier multiples The validity of the construction is evidenced by the choice calculus material. Exploratory Factor Analysis (EFA) using SPSS. The result The next Eigen value can be presented with a scree plot of the analysis of the factors on the adequacy of the on Figure 1. Based on the results of the plot scree can be samples showed Khi-squared value in the Bartlet test of seen that the value of Eigen began to rise in the 1st factor. 349.916 with a degree of freedom 105 and a P-value of less Meanwhile starting from the 4th Factor until 15th factor than 0.01. Also acquired Kaiser-Meyer-Olkin measure of show that the value of Eigen is stably decrease. It indicates sampling adequacy (KMO) of 0.753. These two points that a diagnostic test device of two-tier multiples choice indicate that the sample size used in the analysis of this calculus material measures 3 dominant factors. Here are factor has been adequate. More results can be seen in the given a list table of Eigen values.


Table 4. Eigen value and component variance result factor analysis Component Eigen value Difference values of Eigen Proportion (%) Cumulatif (%) 1 6,040 3,62 40,268 40,268 2 2,420 0,661 16,135 56,403 3 1,759 0,775 11,728 68,130 4 0,984 0,116 6,559 74,689 5 0,868 0,191 5,789 80,478 6 0,677 0,178 4,526 84,993 7 0,499 0,05 3,326 88,320 8 0,449 0,082 2,990 91,310 9 0,367 0,068 2,448 93,758 10 0,299 0,041 1,993 95,751 11 0,258 0,106 1,717 97,468 12 0,152 0,032 1,017 98,485 13 0,120 0,043 0,801 99,286 14 0,077 0,047 0,511 99,797 15 0,030 0,203 100,000
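The factor-retention decision can be reproduced from the eigenvalues listed in Table 4: with 15 items, the proportion of variance explained by each component is its eigenvalue divided by 15, and the Kaiser criterion retains the components whose eigenvalues exceed 1. The snippet below is a small illustration added here, not part of the paper, using the Table 4 values.

```python
eigenvalues = [6.040, 2.420, 1.759, 0.984, 0.868, 0.677, 0.499, 0.449,
               0.367, 0.299, 0.258, 0.152, 0.120, 0.077, 0.030]
n_items = len(eigenvalues)                      # 15 items in the instrument

retained = [ev for ev in eigenvalues if ev > 1.0]
print(len(retained))                            # 3 dominant factors (Kaiser criterion)

cumulative = 0.0
for i, ev in enumerate(eigenvalues, start=1):
    proportion = 100.0 * ev / n_items           # proportion of variance (%)
    cumulative += proportion
    print(i, round(proportion, 3), round(cumulative, 3))
# Component 1 explains about 40.27% and the first three together about 68.13%,
# matching the Proportion and Cumulative columns of Table 4 up to rounding.
```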

After determining the number of factors contained, the concluded that the diagnostic test instrument two-tier next will be the naming factor. The naming factor is done multiple choice of calculus High school material is valid to based on the load factor after rotation, taking into account measure the student's mathematical skills in calculus SMA the magnitude of the payload of the most factors on each material. component or item. The naming factor contained in the instrument test of two-tier multiples choice calculus 3.3 Reliability of Diagnostic Test Instruments material is carried out by researchers based on the indicators and the arrangement of the grid instrument. The Reliability refers to the consistency of the test score or load of unrotated factors is presented in table 4 and the other measurement results of a measurement to another payload of the rotated factor is presented in the following measurement. In other words, a test is said to be reliable if table 5. the results of its measurements approach the actual state of the student or is able to distinguish between students who Table 5. Charge factor after rotation are clever and not. According to Ebel and Frisbie [9] Item Integral Limit Derivative reliability of the instrument is fulfilled if Cronbach's value Item_1 .076 .880 .213 is alpha ≥ 0.65. The result of the reliability of the two-tier diagnostic test instruments using SPSS was obtained by Item_2 .188 .934 .211 Cronbach's Alpha 0.780, making it larger than 0.65. Based Item_3 .108 .903 .273 on this, there is a conclusion that the prepared two-tier Item_4 .370 .620 .079 diagnostic test instruments are reliable. The results of Item_5 .100 .019 .559 reliability estimation using SPSS can be seen in the following table. Item_6 -.123 .159 .752 Item_7 .276 .307 .566 Table 6. Reliability Estimation Results Item_8 -.013 .241 .852 Cronbach's Alpha N of Items Item_9 .246 .268 .714 0,803 15 Item_10 .809 .393 -.129 Item_11 .861 .270 .099 Item_12 .855 .098 .062 4. Discussion Item_13 .877 .034 .194 Diagnostic test Instruments Two-tier multiples choice Item_14 .442 .009 .516 calculus material is made as many as 15 questions. Item_15 .578 .116 .399 Diagnostic test Instruments Two-tier multiples choice calculus material is validated at 35 students in one of the Based on the exploratory factor analysis can be state high school in Yogyakarta City. In addition to being


validated on students, such test instruments are also the scree-plot and corresponds to many factors on the validated by experts and validation results by experts calculus material examined in this study. The quality of mentioning that two-tier multiples choice diagnostic two-tier multiples choice diagnostic tests is compiled of instrument calculus material is worth using to analyze two-tier diagnostic test instruments based on the reliability students' mistakes on calculus material. value gained. It is shown with the obtained value of Furthermore, to prove the validity of the construct Cronbach's alpha 0.780 which is greater than 0.65. proved with exploratory factor analysis using the help of SPSS program. The analysis of the factors conducted using Bartlett test resulted in a KMO value of 0753. The KMO value is already more than 0.5 which means the samples used in this study were sufficient. Moreover, to see the REFERENCES Eigen value that is above 1. There are 3 factors that have an [1] Santrock, J. W. (2011). Educational Psychology. (5th ed). Eigen value above 1 and the difference between the three New York: McGraw-Hill Company. factors is also quite a lot. As for the other factors the difference is not more than 0.2. The same thing is also [2] Chambers, P. (2008). Teaching Mathematician, Developing as A Reflective Secondary. London: SAGE noticeable when noticing the scree plot in Figure 4 which indicates there are 3 dominant factors measured on this test. [3] Yeo, K.K.J. (2009). Secondary Students’ Difficulties in It is thus evident that this two-tier multiple-choice Solving Non-Routine Problems. Research in Mathematics diagnostic test device is valid to measure students' mistakes Education in Singapore. Retrieved from https://eric.ed.gov/ ?id=EJ904874 in calculus high school material. Furthermore, after the two-tier multiple-choice [4] Wijaya, A., Heuvel-Panhuizen, M., Doorman, M., Robitzsch, diagnostic test instrument on the calculus material proved A. (2014). Difficulties in Solving Context-based PISA to be valid, researchers tested the test instrument to analyze Mathematics Tasks: An Analysis of Students’ Errors. The Mathematics Enthusiait, 11(3). hlm: 555-584 students' mistakes on calculus material. The student's fault in solving the diagnostic test problem of two-tier [5] Herutomo, R.A. & Saputro, T.E.M. (2014). Analisis multiple-choice calculus material is seen based on the Kesalahan dan Miskonsepsi Siswa Kelas VIII pada Materi results of the diagnostic test provided. The test was given to Aljabar. Jurnal Ilmu Pendidikan dan Pengajaran, 1(2), 134-145. https://doi.org/10.17509/edusentris.v1i2.140. 5 SMA Negeri in Yogyakarta with 551 students as the subject of research. After accumulated all the students' [6] Suryabrata, S. (2008). Metodologi Penelitian. Jakarta: answer sheets, the researcher then corrected to see how Rineka Cipta. many students answered correctly and answered wrong in [7] Non Syafriafdi, Ahmad Fauzan, I Made Arnawa, Syafri each item. Once corrected for the wrong student answers it Anwar, Wahyu Widada, (2019). The Tools of Mathematics will be analyzed deeper to see the types of mistakes Learning Based on Realistic Mathematics Education students are doing. Approach in Elementary School to Improve Math Abilities. Universal Journal of Educational Research, 7(7), 1532 - 1536. DOI: 10.13189/ujer.2019.070707. 5. Conclusions [8] Retnawati, H. (2016). Validitas reliabilitas & karakteristik butir. Yogyakarta: Parama Publishing. 
Based on the results of research and discussion above it can be concluded that the construction of a two-tier [9] Ebel, R. L. & Frisbie, D. A. (1991). Essentials of Education Measurement. New jersey: prentice hall multiples choice diagnostic test based on the validity of the contents and the construct was obtained that the two-tier [10] Ramadhan, S., Mardapi, D., Prasetyo, Z. K., & Utomo, H. B. multiples choice diagnostic test is proven valid. The (2019). The Development of an Instrument to Measure the Higher Order Thinking Skill in Physics. European Journal validity of the content is evidenced by the average validity of Educational Research, 8(3), 743-751. doi: index (V), for the two-tier multiples choice diagnostic test 10.12973/eu-jer.8.3.743 instrument obtained an average validity index (V) of 0.9333 and for an interview guideline instrument acquired [11] Ramadhan, S., Sumiharsono, R., Mardapi, D., & Prasetyo, Z. K. (2020). The Quality of Test Instruments Constructed by the validity index (V) 0.7556 in which both the validity Teachers in Bima Regency, Indonesia: Document Analysis. index (V) approaches the value 1. Whereas for the validity International Journal of Instruction, 13(2). doi: of the construct acquired three dominant factors based on 10.29333/IJI.2020.13235A

Mathematics and Statistics 8(5): 583-589, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080513

Stochastic Latent Residual Approach for Consistency Model Assessment

Hani Syahida Zulkafli1,*, George Streftaris2, Gavin J. Gibson2

1Department of Mathematics, Faculty of Science, Universiti Putra Malaysia, Selangor, Malaysia 2School of Mathematics and Computer Sciences, Heriot-Watt University, United Kingdom

Received July 13, 2020; Revised August 22, 2020; Accepted September 17, 2020

Cite This Paper in the following Citation Styles (a): [1] Hani Syahida Zulkafli, George Streftaris, Gavin J. Gibson , "Stochastic Latent Residual Approach for Consistency Model Assessment," Mathematics and Statistics, Vol. 8, No. 5, pp. 583 - 589, 2020. DOI: 10.13189/ms.2020.080513. (b): Hani Syahida Zulkafli, George Streftaris, Gavin J. Gibson (2020). Stochastic Latent Residual Approach for Consistency Model Assessment. Mathematics and Statistics, 8(5), 583 - 589. DOI: 10.13189/ms.2020.080513. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  Hypoglycaemia is a condition in which blood sugar levels in the body are too low. This condition is usually a side effect of insulin treatment in diabetic patients. Symptoms of hypoglycaemia vary not only between individuals but also within individuals, making it difficult for patients to recognize their hypoglycaemia episodes. Given this, and because the symptoms are not exclusive to hypoglycaemia, it is very important for patients to be able to identify that they are having a hypoglycaemia episode. Consistency models are statistical models that quantify the consistency of individual symptoms reported during hypoglycaemia. Because there are variations of the consistency model, it is important to identify which model best fits the data. The aim of this paper is to assess and verify the models. We developed an assessment method based on stochastic latent residuals and performed posterior predictive checking as the model verification. It was found that a grouped-symptom consistency model with a multiplicative form of symptom propensity and episode intensity threshold fits the data better and has more reliable predictive ability compared to other models. This model can be used to assist patients and medical practitioners in quantifying patients' symptom-reporting capability, hence promoting awareness of their hypoglycaemia episodes so that corrective actions can be quickly taken.

Keywords  Latent Residual, Posterior Predictive Checking, Model Verification, Model Assessment

1. Introduction

Hypoglycaemia is a condition of low glucose level in the blood, i.e. below 4 mmol/L. It is a common side effect of insulin treatment in diabetic patients. It is crucial to treat a hypoglycaemia episode promptly to avoid a severe hypoglycaemia episode, in which the patient needs other people's help to recover. However, it is not easy for the patient to identify a hypoglycaemia episode, because symptoms of hypoglycaemia vary within individuals. A given symptom does not covary equally with blood glucose levels [1], implying a degree of between-subject variability. Individuals experiencing various symptoms of hypoglycaemia are not necessarily able to recognize a hypoglycaemic episode, because an individual's ability to recognize hypoglycaemia is significantly correlated with the number of symptoms reported per episode [2]. There is marked variability of the reported symptoms between episodes of hypoglycaemia [3], but that study is limited to children respondents. A consistency model was developed to quantify the consistency of reporting the symptoms of hypoglycaemia by adult patients [4]. Zulkafli et al. [5] then introduced the grouped symptoms model as one of the consistency estimation models. This model adds another source of variation to symptom reporting by distributing the 26 symptoms into several groups according to their causes. Another functional form was briefly introduced as an alternative to be used in the consistency models [5].

With several consistency models developed, the challenge is to evaluate the performance of each model before making decisions on which model can give better consistency estimates.

Residual analysis is one prominent way of validating a statistical model. Cox and Snell [3] introduced a general definition of residuals for non-linear models. Pearson and Anscombe residuals are examples of the types of residuals commonly used in residual analysis. However, these residuals have unknown distributions, which will affect the interpretation of the analysis [7].

Among the works that have been done in measuring the performance of statistical models related to diabetic data are those using the coefficient of determination (R²) goodness-of-fit measure [8] and a robust method [9]. However, these works do not apply the concept of latent residuals.

Latent residual analysis has been used in analyzing binary response variables in a regression framework [10,11]. The χ² test for latent model testing is sensitive to the distributional properties of the observed variables [12]. The test will also have a high probability of Type 1 error with complex models [13]. Therefore, the intent of this paper is to present a method for assessing the adequacy of a stochastic model with latent variables, utilising the concept of stochastic latent residuals, z_ijk.

Also, one of the important aims of this work is to develop a model which can be used to make predictions of values of interest with quantified confidence. A good predictive model enables us to predict how consistent a patient is in reporting hypoglycaemia when given some of his/her specific characteristics. This can be used to assist early detection of hypoglycaemia and to give necessary advice to the patient. Therefore, the second objective of this paper is to examine the consistency model's predictive capability by employing a validation approach relying on the posterior predictive distribution.

2. Materials and Methods

The methods of model assessment discussed in this paper are applied to data collected from 66 diabetic patients, where each subject is given a unique ID number [14]. Each patient recorded his/her symptoms in each hypoglycaemia episode experienced, for a duration of 9 to 12 months.

2.1. The Consistency Model

A consistency model was developed under a Bayesian approach [4]. The observed variable Y_ijk takes value 1 if patient i = 1, ..., I reports symptom j = 1, ..., J in episode k = 1, ..., K_i; otherwise, Y_ijk takes value 0. Y_ijk ~ Bernoulli(p_ijk), where p_ijk is the probability that symptom j is reported in episode k by patient i. A threshold τ_ijk is defined for patient i reporting symptom j at episode k, and symptom j is considered as reported when the threshold τ_ijk is exceeded by a functional form h(α_ij, β_ik), i.e. τ_ijk ≤ h(α_ij, β_ik). Here α_ij and β_ik are latent variables which correspond to the propensity of symptom j for patient i and the intensity of episode k in patient i, respectively.

The threshold τ_ijk is assumed to follow a log-normal distribution, τ_ijk ~ Log-Normal(0, σ_i²), where σ_i² is the parameter associated with the variability of symptoms reported by individual patient i. The consistency estimate is defined as c_i = 1/(100 + σ_i²).

Each of the parameters is assigned a prior distribution as follows:

α_ij ~ Gamma(1, 0.1), i = 1, ..., 66 and j = 1, ..., 26,
β_ik ~ Gamma(1, 0.1), i = 1, ..., 66 and k = 1, ..., K_i,
σ_i² ~ Inv-Gamma(1, 0.1), i = 1, ..., 66.

This consistency model was later expanded by separating the symptoms into different groups according to their causes, in order to have an additional source of variation [5]. Therefore, the prior corresponding to the symptom propensity then becomes

α_ijl ~ Gamma(θ, θ/u_l), l = 1, ..., 6,

giving E(α_ijl) = u_l and Var(α_ijl) = u_l²/θ, l = 1, ..., 6.

Earlier work on the consistency model assumed a threshold form h(α_ij, β_ik) = α_ij β_ik [4]. Later, another option for the functional form was introduced, i.e. h(α_ij, β_ik) = α_ij + β_ik, and their differences were briefly discussed [5].

2.2. Stochastic Latent Residual

The stochastic latent residuals, z_ijk, would give rise to the observed data under the considered model. Following the concept of generalised residuals [6], the data can be regarded as generated through a functional model, g(·) [15], depending on the vector of all model parameters and latent variables, say θ, i.e.

y = g_θ(z),   (1)

where z ~ U(0, 1) are generalised residuals. Then, in the general case, (1) can be inverted to give the stochastic latent residuals

z = g_θ⁻¹(y).   (2)

For the assumed discrete model we have

y_ijk = I{z_ijk ≤ p_ijk},

where z_ijk ~ U(0, 1) and I{·} is the indicator function. This implies that, under the assumed model,

z_ijk = y_ijk u₁ + (1 − y_ijk) u₂,

where u₁ ~ U(0, p_ijk) and u₂ ~ U(p_ijk, 1). Therefore, if the model is adequate, z_ijk ~ U(0, 1), and a p-value for testing the hypothesis of this uniform distribution can be obtained. To implement this method, 10,000 MCMC iterations were run for this model, and the latent residual, z_ijk, was obtained for each subject such that


(1) the positive predictive value (PPV) and negative predictive 푦푖푗푘 = 1, 푧푖푗푘 ~푈(0, 푝̂푖푗푘) If { (0) value (NPV). These four measures were calculated using 푦푖푗푘 = 0, 푧푖푗푘 ~푈(푝̂푖푗푘, 1) (푝) 푌푖푗푘 and 푌푖푗푘 in the validation sample and are defined as where 푝̂푖푗푘 is the estimated probability of patient 푖 follows reporting symptom 푗 at episode 푘 at each iteration. (푝) ∑푖푗푘 푌푖푗푘푌푖푗푘 Therefore, if the tested hypothesis is correct, a) PPV= (푝) for 푖, 푗, 푘 in the sample ∑푖푗푘 푌 (1) (0) 푖푗푘 푧푖푗푘 = (푧 , 푧 ) ∼ 푈(0, 1). 푖푗푘 푖푗푘 PPV is the proportion of symptoms with positive A Kolmogorov-Smirnov goodness-of-fit test was prediction that was correctly classified as reported. PPV conducted on each posterior sample of residuals obtained measures the probability of patient 푖 truly experiencing in each MCMC iteration, resulting in a corresponding 푝- symptom 푗 at episode 푘 given that the model predicts the value, 휋훾, where 훾 = 1, 2, 3, . . . , 10, 000 iterations. This symptom is likely to be experienced. will give a posterior distribution 푓(휋|푦푖푗푘) where 푦푖푗푘 (푝) ∑푖푗푘(1−푌푖푗푘)(1−푌 ) denotes the observation data. b) NPV= 푖푗푘 for 푖, 푗, 푘 in the sample. ∑ (푝) 푖푗푘(1−푌푖푗푘 ) 2.3. Posterior Predictive Checking NPV is the proportion of symptoms with negative reporting prediction that was correctly classified as absent. This approach is commonly used for checking the NPV measures the chance of patient 푖 having symptom 푗 model’s suitability, and is based on work that was not present at episode 푘 given that the model predicts that elaborated in [16] and later expanded in [17]. The purpose it is not likely to be reported. of the analysis is to compare the observed data with values (푝) predicted from the model. ∑푖푗푘 푌푖푗푘푌 c) TPR= 푖푗푘 for 푖, 푗, 푘 in the sample. The observations, 푌푖푗푘 are binary data that take value 1 ∑푖푗푘 푌푖푗푘 if patient 푖 reported symptom 푗 in episode 푘 value zero (푝) True Positive Rate (TPR), also known as the sensitivity otherwise. 푌푖푗푘 is defined as the predicted data, such that of the predictive model, measures the ability of the model these are the data that will be obtained if we use the same to correctly predict if symptom 푗 occurs at episode 푘. model to do prediction. 10% of the total number of (푝) ∑푖푗푘(1−푌푖푗푘)(1−푌 ) observations are randomly selected, which are then used as d) TNR= 푖푗푘 for 푖, 푗, 푘 in the sample. the validation sample. Then, the examined model is fitted ∑푖푗푘(1−푌푖푗푘) to the remaining data. The fitted model is subsequently True Negative Rate (TNR), or also called specificity, used to do prediction on symptom reporting for the sampled represents the capacity of the model to predict that (푝) patients episodes, 푌푖푗푘 for 푖, 푗, 푘 in the sample. Recall that, symptom 푗 is not reported at episode 푘 when the 푌푖푗푘 is Bernoulli distributed with probability, 푝푖푗푘 . The symptom is truly absent. , 푝푖푗푘 , is sampled from the fitted (푝) model and is used to obtain the reporting prediction, 푌푖푗푘 . Consequently, we compare the total number of predicted 3. Results and Discussion reportings, 푁푝 to the total number of observed reportings, (푝) 3.1. Model Assessment 푁표푏푠. Accordingly, the distributions of 푌푖푗푘 and 푌푖푗푘 were compared. As preliminary checking, we observe the of Four other measures are used to assess and describe the the residuals 풛 for each patient 푖 for grouped symptom usefulness of the model’s predictions [18]. The measures model with threshold ℎ(훼 , 훽 ) = 훼 훽 . Recall that are related to sensitivity, specificity and predictive values. 
푖푗 푖푘 푖푗 푖푘 Here, sensitivity is defined as the proportion of experienced patient 푖 reports symptom 푗 at episode k when 휏 푖푗푘 ≤ symptoms that are correctly predicted as being reported by 훼푖푗훽푖푘. Thus, the observed variable, 푌푖푗푘 is equal to 1 when the models whereas specificity is the proportion of symptom 푗 is reported at episode 푘 by patient 푖 . symptoms that have not been experienced which are Otherwise, 푌푖푗푘 takes value zero. Figure 1 presents the correctly predicted as not reported by the model. Ideally, a histogram for one patient, Subject 4028. The distribution good predictive model should have high sensitivity and pattern suggests that the residuals do follow a Uniform (0,1) specificity. However, these two measures are often distribution. To further confirm the distribution of 풛 we inversely proportional, meaning as sensitivity increases also check on the histogram of 푝-values for this patient, specificity decreases and vice versa. The probability of the 휋(4028) (Figure 2). From this histogram, we can say there model giving correct prediction were evaluated by using is no evidence against the adequacy of fit of the model.
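The four measures defined above (PPV, NPV, TPR and TNR) can be computed directly from the observed indicators Y_ijk and the model-based predictions Y_ijk^(p) on the validation sample. The Python sketch below is an illustration added here, not the authors' code; it assumes the observed and predicted reportings are available as binary arrays of equal length.

```python
import numpy as np

def predictive_measures(y_obs, y_pred):
    """PPV, NPV, TPR (sensitivity) and TNR (specificity) for binary reporting data.

    y_obs, y_pred : 0/1 indicators over the validation triples (i, j, k).
    """
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ppv = np.sum(y_obs * y_pred) / np.sum(y_pred)                   # correct among predicted-reported
    npv = np.sum((1 - y_obs) * (1 - y_pred)) / np.sum(1 - y_pred)   # correct among predicted-absent
    tpr = np.sum(y_obs * y_pred) / np.sum(y_obs)                    # sensitivity
    tnr = np.sum((1 - y_obs) * (1 - y_pred)) / np.sum(1 - y_obs)    # specificity
    return ppv, npv, tpr, tnr

# Hypothetical example with 10 validation observations
y_obs  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
print(predictive_measures(y_obs, y_pred))
```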


3. Results and Discussion

3.1. Model Assessment

As preliminary checking, we observe the distribution of the residuals $z$ for each patient $i$ for the grouped symptoms model with threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$. Recall that patient $i$ reports symptom $j$ at episode $k$ when $\tau_{ijk} \le \alpha_{ij}\beta_{ik}$. Thus, the observed variable $Y_{ijk}$ is equal to 1 when symptom $j$ is reported at episode $k$ by patient $i$; otherwise, $Y_{ijk}$ takes value zero. Figure 1 presents the histogram for one patient, Subject 4028. The distribution pattern suggests that the residuals do follow a Uniform(0,1) distribution. To further confirm the distribution of $z$, we also check the histogram of $p$-values for this patient, $\pi(4028)$ (Figure 2). From this histogram, we can say there is no evidence against the adequacy of fit of the model.

Figure 1. Histogram of stochastic latent residuals for the model with grouped symptoms using threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ for patient 4028.

Figure 2. Posterior distribution of $p$-values, $\pi$, for fit of the model with grouped symptoms using threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ for patient 4028.

For comparison purposes, Figure 3 presents the posterior distributions of $p$-values, $\pi_\gamma$, for another patient, Subject 5088, when using the two thresholds $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ and $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij} + \beta_{ik}$. Observing the posterior distributions of $\pi_\gamma$ for Subject 5088, it can be seen that there is no strong evidence against the models tested, although there appears to be more evidence against the model when the threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij} + \beta_{ik}$ is fitted. This is evidenced by the higher concentration of $p$-values close to zero.

As implied earlier, to have strong evidence against a tested model, i.e. to reject the hypothesis that the model is adequate, the posterior $p$-values $\pi_\gamma$ should be very small. Therefore, as a measure of model goodness of fit, the proportion of $\pi_\gamma$ less than 0.05, $\Pr(\pi_\gamma < 0.05)$, is calculated for each subject. For comparison purposes, cases with greater $\Pr(\pi_\gamma < 0.05)$ show stronger evidence against the model fit. For 67% of the 66 subjects, the proportions of $\pi_\gamma < 0.05$ suggest a better fit of the model with threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$. Bar plots in Figure 4 display the $\Pr(\pi_\gamma < 0.05)$ obtained from the grouped symptoms model when using different thresholds for Subjects 3022, 4028, 5088, 4045, 4023 and 2013. For these patients, $\Pr(\pi_\gamma < 0.05)$ when using threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij} + \beta_{ik}$ (yellow bars) is higher than when using $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$, which is indicated by the red bars.

The same procedure was repeated for comparing the models with and without grouped symptoms. For both models the $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ threshold is used, and the proportion $\Pr(\pi_\gamma < 0.05)$ is calculated. Only seven patients show higher $\Pr(\pi_\gamma < 0.05)$ when the grouped symptoms model is used compared to the model without grouped symptoms. This suggests that the model with grouped symptoms fits the data better.
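The residual check and the summary $\Pr(\pi_\gamma < 0.05)$ can be illustrated with simulated data. The sketch below is mine (it is not the authors' code): the reporting probabilities, the number of observations and the number of iterations are all hypothetical, and the posterior draws of $p_{ijk}$ are replaced by the true probabilities so that the model is adequate by construction.

```python
# Sketch: stochastic latent residuals, per-iteration KS p-values, and Pr(pi_gamma < 0.05).
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(7)
n_obs, n_iter = 120, 1000                          # hypothetical sizes (the paper uses 10,000 iterations)
p_true = rng.uniform(0.05, 0.95, size=n_obs)       # hypothetical reporting probabilities p_ijk
y = (rng.uniform(size=n_obs) < p_true).astype(int) # simulated observed indicators

p_values = np.empty(n_iter)
for g in range(n_iter):
    # In the real analysis this would use the g-th posterior draw of p_ijk; here we reuse p_true.
    z = np.where(y == 1, rng.uniform(0, p_true), rng.uniform(p_true, 1))
    p_values[g] = kstest(z, "uniform").pvalue      # test the residuals against U(0, 1)

print(np.mean(p_values < 0.05))                    # Pr(pi_gamma < 0.05); near 0.05 when the model is adequate
```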

Figure 3. Posterior distribution of 푝-values, 휋, for fit of model with thresholds ℎ(훼푖푗, 훽푖푘) = 훼푖푗훽푖푘 (left) and ℎ(훼푖푗, 훽푖푘) = 훼푖푗 + 훽푖푘 (right) for patient 5088 in grouped symptoms model.


Figure 4. Bar plots comparing the proportion of $p$-values $\pi_\gamma < 0.05$ between different thresholds when using the grouped symptoms model.

3.2. Model Verification

The posterior predictive checking approach was applied to study the predictive ability of the core model (without grouped symptoms) and the grouped symptoms model, using threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$. The plots in Figure 5 show the posterior distributions of the total predicted number of reported symptoms, with blue (dotted) lines marking the total number of predicted reportings, $N_p$, whereas the red (solid) lines refer to the total observed value, $N_{obs}$, for Subject 4045. The reported symptoms for patient 4045 are very well predicted by the grouped symptoms model, as indicated by the blue and red lines that almost overlap. The prediction made was $N_p = 15.26$ with 95% CI (9, 22), while the total observed value, $N_{obs}$, is 15. However, the non-grouped symptoms model also made a good prediction, although it is slightly overestimated ($N_p = 17.63$).

We also test the performance of different thresholds with the core model. Figure 6 gives the posterior distributions of the total predicted number of symptom reportings for Subject 5009. With each threshold, the predicted distributions comfortably contain the total number of observations, $N_{obs}$ (represented by red solid lines). This indicates that for this patient we cannot distinguish between the three threshold models in terms of their predictive ability. Note that the graphs for all patients exhibit a similar trend.
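The comparison of $N_p$ against $N_{obs}$ amounts to summarising posterior predictive draws and checking interval coverage. A small sketch of my own (the draws below are simulated from a Poisson stand-in, not taken from the fitted models):

```python
# Sketch: summarise posterior predictive draws of N_p and check whether the interval covers N_obs.
import numpy as np

rng = np.random.default_rng(3)
n_obs_total = 15                            # observed total for a patient (e.g. Subject 4045)
draws = rng.poisson(15.3, size=10_000)      # hypothetical posterior predictive draws of N_p
low, high = np.percentile(draws, [2.5, 97.5])
print(draws.mean(), (low, high), low <= n_obs_total <= high)
```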

Figure 5. Posterior density plots of the number of reportings, $N_p$, for patient 4045 under the non-grouped symptoms model (left) and the grouped symptoms model (right) using threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$. Blue dotted lines show the number of predicted symptom reportings of each model, and red lines represent the true number of reported symptoms, $N_{obs}$.


Figure 6. Posterior density plots of the number of reportings, $N_p$, for patient 5009 under the non-grouped symptoms model using thresholds $\alpha_{ij}\beta_{ik}$ (left) and $\alpha_{ij} + \beta_{ik}$ (right). Blue dotted lines show the number of predicted symptom reportings of each model, and red lines represent the true number of reported symptoms, $N_{obs}$.

Finally, the performance of prediction for the different models is compared when using data from all patients in the analysis, i.e. the models with and without grouped symptoms using thresholds $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ and $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij} + \beta_{ik}$. The results are provided in Table 1. Among the four models, the model with grouped symptoms and threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ gives the predicted value, $N_p$, closest to the observed number of symptoms reported. Figure 7 shows the posterior distributions of the predicted number of symptoms reported. The total number of symptoms predicted to be reported is 754.3, with a 95% CI of (713, 797), which contains the observed number of reported symptoms, 771. The other three models considered here do not perform well in terms of this prediction, with the corresponding posterior predictive distributions failing to contain the true value.

Table 1. Model predictions for the validation sample for all subjects (number of reported symptoms, $N_{obs}$ = 771).

(a) Non-grouped symptoms model

            $\alpha_{ij}\beta_{ik}$              $\alpha_{ij} + \beta_{ik}$
            mean      95% CI                     mean       95% CI
  PPV       0.413     (0.390, 0.436)             0.330      (0.310, 0.348)
  NPV       0.933     (0.930, 0.936)             0.931      (0.928, 0.935)
  TPR       0.464     (0.435, 0.489)             0.469      (0.437, 0.498)
  TNR       0.919     (0.913, 0.925)             0.883      (0.876, 0.890)
  $N_p$     866.63    (820, 908)                 1095.22    (1044, 1142)

(b) Grouped symptoms model

            $\alpha_{ij}\beta_{ik}$              $\alpha_{ij} + \beta_{ik}$
            mean      95% CI                     mean       95% CI
  PPV       0.408     (0.383, 0.432)             0.380      (0.359, 0.402)
  NPV       0.926     (0.923, 0.930)             0.931      (0.928, 0.935)
  TPR       0.399     (0.435, 0.489)             0.457      (0.431, 0.492)
  TNR       0.929     (0.913, 0.925)             0.908      (0.902, 0.914)
  $N_p$     754.3     (713, 797)                 928.084    (876, 969)

Figure 7. Posterior density plots of the number of reportings, $N_p$, for all patients under the grouped symptoms model with threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$. Blue dotted lines show the number of predicted symptom reportings of the model, and red lines represent the true number of reported symptoms, $N_{obs}$.

Regarding the other four predictive measures presented in Table 1, the models explored here do not display substantial differences. It is also obvious that prediction referring to symptoms not being experienced (NPV, TNR) is much more successful than prediction for reported symptoms (PPV, TPR). The fact that the developed models perform better in predicting that symptoms will not be reported may be explained by the nature of the data, where the frequency of reporting symptoms is relatively low (771/7033). The proportion of symptoms with positive prediction that was correctly classified as reported is highest when using the core model with threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$, i.e. PPV = 0.413, whereas the chance of a symptom not being present in an episode given that the model predicts it will not be reported is also highest with this model (NPV = 0.933).


4. Conclusions

This paper discusses the assessment of models with different thresholds using the concept of stochastic latent residuals. It was found that the grouped symptoms model with threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ fits the data better. Performing the model verification and posterior predictive checking to establish which model is best at predicting symptom reporting, it is concluded that the grouped symptoms model with threshold $h(\alpha_{ij}, \beta_{ik}) = \alpha_{ij}\beta_{ik}$ has more reliable predictive ability compared to the other models.

Acknowledgements

This research is supported by the Ministry of Education (MOE) through the Fundamental Research Grant Scheme (FRGS/1/2019/STG06/UPM/02/10) and Universiti Putra Malaysia, Putra-IPM grant GP-IPM/2018/9656900.

REFERENCES

[1] Pennebaker, J. W., Cox, D. J., Gonder-Frederick, L., Wunsch, M. G., Evans, W. S., & Pohl, S. "Physical symptoms related to blood glucose in insulin-dependent diabetics," Psychosomatic Medicine, 43(6), 489-500, 1981.

[2] Cox, D. J., Gonder-Frederick, L., Antoun, B., Cryer, P. E., Clarke, W. L. "Perceived symptoms in the recognition of hypoglycemia," Diabetes Care, 16(2):519-527, 1993.

[3] Macfarlane, P. I., Smith, C. S. "Perceptions of hypoglycaemia in childhood diabetes mellitus: a questionnaire study," Pract Diabetes, 5:56-58, 1988.

[4] Zammitt, N., Streftaris, G., Gibson, G., Deary, I., and Frier, B. "Modeling the consistency of hypoglycemic symptoms: high variability in diabetes," Diabetes Technology & Therapeutics, 13(5):571-578, 2011.

[5] Zulkafli, H., Streftaris, G., Gibson, G., and Zammitt, N. "Bayesian modelling of the consistency of symptoms reported during hypoglycaemia for individual patients," Malaysian Journal of Mathematical Sciences, 10(S):27-39, 2016.

[6] Cox, D. R. and Snell, E. J. "A general definition of residuals," Journal of the Royal Statistical Society, Series B (Methodological), pages 248-275, 1968.

[7] McCullagh, P. and Nelder, J. A. "Generalized Linear Models," 2nd Edition, Chapman and Hall, London, UK, 1989.

[8] McEwan, P., Foos, V., Palmer, J. L., Lamotte, M., Lloyd, A. and Grant, D. "Validation of the IMS CORE diabetes model," Value in Health, 17(6), pp. 714-724, 2014.

[9] Mohd Saifullah Rusiman, Siti Nasuha Md Nor, Suparman, Siti Noor Asyikin Mohd Razali, "Robust Method in Multiple Linear Regression Model on Diabetes Patients," Mathematics and Statistics, Vol. 8, No. 2A, pp. 36-39, 2020. DOI: 10.13189/ms.2020.081306.

[10] Albert, J. and Chib, S. "Bayesian residual analysis for binary response regression models," Biometrika, 82(4), pp. 747-769, 1995.

[11] Farias, R. B. and Branco, M. D. "Efficient algorithms for Bayesian binary regression model with skew-probit link," In Recent Advances in : False Discovery Rates, , and Related Topics (pp. 143-168), 2011.

[12] Fouladi, R. T. "Performance of modified test statistics in covariance and correlation structure analysis under conditions of multivariate nonnormality," Structural Equation Modeling, 7(3):356-410, 2000.

[13] Moshagen, M. "The model size effect in SEM: Inflated goodness-of-fit statistics are due to the size of the covariance matrix," Structural Equation Modeling: A Multidisciplinary Journal, 19(1), pp. 86-98, 2012.

[14] UK Hypoglycaemia Study Group. "Risks of hypoglycaemia in types 1 and 2 diabetes: effects of treatment modalities and their duration," Diabetologia, 50:1140-1147, 2007.

[15] Dawid, A. P. and Stone, M. "The functional-model basis of fiducial inference," The Annals of Statistics, pages 1054-1067, 1982.

[16] Rubin, D. B. "Bayesianly justifiable and relevant frequency calculations for the applied statistician," The Annals of Statistics, pages 1151-1172, 1984.

[17] Gelman, A., Meng, X.-L., and Stern, H. "Posterior predictive assessment of model fitness via realized discrepancies," Statistica Sinica, 6(4):733-760, 1996.

[18] Streftaris, G., Wallerstein, N., Gibson, G., and Arthur, S. "Modelling probability of blockage at culvert trash screens using Bayesian approach," Journal of Hydraulic Engineering, 139(7):716-726, 2013.

Mathematics and Statistics 8(5): 590-595, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080514

Determining Day of Given Date Mathematically

R. Sivaraman

National Awardee for Popularizing Mathematics among Masses, D G Vaishnav College, India

Received July 14, 2020; Revised August 20, 2020; Accepted September 17, 2020

Cite This Paper in the following Citation Styles (a): [1] R. Sivaraman , "Determining Day of Given Date Mathematically," Mathematics and Statistics, Vol. 8, No. 5, pp. 590 - 595, 2020. DOI: 10.13189/ms.2020.080514. (b): R. Sivaraman (2020). Determining Day of Given Date Mathematically. Mathematics and Statistics, 8(5), 590 - 595. DOI: 10.13189/ms.2020.080514. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  Computation of the day of the week from a given date belonging to any century has been a great quest among astronomers and mathematicians for a long time. In recent centuries, thanks to the efforts of some great mathematicians, we now know methods of accomplishing this task. In doing so, people have developed various methods, some of which are very concise and compact, but not much accessible explanation is provided. The chief purpose of this paper is to address this issue. Also, almost all known calculations involve either the usage of tables or some pre-determined codes usually assigned to months, years or centuries. In this paper, I have established the mathematical proof of determining the day of any given date, which is applicable for any number of years, even to the time of BCE. I have provided the detailed mathematical derivation of the month codes, which are key factors in determining the day of any given date. Though the procedures for determining the day of a given date are quite well known, the way in which they are arrived at is not so well known. This paper will throw great detail on that aspect. To be precise, I have explained the formula obtained by the German mathematician Zeller in detail and tried to simplify it further, which will reduce its complexity and, at the same time, be as effective as the original formula. The explanations of leap years and other astronomical facts are clearly presented in this paper to aid the derivation of the compact form of Zeller's Formula. Some special cases and illustrations are provided wherever necessary to clarify the computations for better understanding of the concepts.

Keywords  Leap Years, Congruence, Modulo Arithmetic, Ceiling Function, Floor Function, Centurial Years, Month Codes

1. Introduction

Ever since humans came to understand the functioning of the universe, various forms of calendars have been put into use for various purposes. Different civilizations used different calendars. Since October 1582, when Pope Gregory XIII introduced a new calendar as a correction to the then existing Julian Calendar, various parts of the globe adopted the new calendar at different times. At present, the calendar introduced by Pope Gregory XIII is followed throughout the globe, and it was named in his honour the "Gregorian Calendar".

Before we start our actual mathematical investigation, we glance through the basic structure of the Gregorian calendar system. The Gregorian calendar is a solar calendar with 12 months of 28-31 days each. A regular Gregorian year consists of 365 days, but in certain years known as leap years, a leap day is added to February. Gregorian years are identified by consecutive year numbers. Various countries had their own beginning of the year until, in recent centuries, everyone accepted January 1 as the beginning of a new year. Thus, the present Gregorian calendar considers one year to run from 1st January to 31st December, containing 12 months and 365/366 days.

In ancient times, astronomers knew only about seven planets. Probably this might be the reason for fixing seven days for a week. Hence, all calendar calculations regarding the determination of the day of a given date are based on the number 7. Thanks to various mathematicians and astronomers, we now have a global calendar satisfying our needs.

2. Leap Years

In the Gregorian calendar, the orbital period of the Earth around the Sun is not 365 days, but closer to 365.2425 days. To account for this longer period, every four years we add an extra day (since 0.25 × 4 = 1) in February, to make 366 days. But observe that 365.25 − 365.2425 = 0.0075. So adding 1 day to February every 4 years produces an overestimate of 0.0075 × 400 = 3 days in every 400 years. To bring down these 3 days, it is suggested that the centurial years like 1600, 1700, 1800, 1900, 2000, 2100, ..., which are divisible by 4, be considered as leap years only if they are divisible not only by 4 but also by 400. This arrangement ensures that, among the four centurial years in any consecutive 400-year period like 1301-1600, 1601-2000, 2001-2400, etc., only one is counted as a leap year and the other three are left out.

Thus, in the period of 400 years from 1601-2000, the years 1700, 1800 and 1900 were not leap years, because although they are divisible by 4 they are not divisible by 400, whereas 2000 is a leap year since it is divisible by both 4 and 400. From this viewpoint, we present the following rule for a year being a leap year (which will be crucial for our task of finding the day of a given date):

"Every year that is exactly divisible by 4 is a leap year, except for years that are exactly divisible by 100; these centurial years are leap years only if they are exactly divisible by 400."

As a consequence of the above rule, we see that in the century 1601-1700 there are 24 leap years (as 1700 is not a leap year). Similarly, the centuries 1701-1800 and 1801-1900 each contain 24 leap years (since 1800 and 1900 are not leap years), but the century 1901-2000 contains 25 leap years, as 2000 is a leap year. Hence, in every period of four consecutive centuries there are (24 × 3) + 25 = 97 leap years per 400 years in the Gregorian calendar.

3. Cyclic Property of Gregorian Calendar

In every period of 400 years, say 1701-2000 or 2001-2400, ..., we can find the total number of days. As there are 97 leap years in every 400 years, the total number of days is (365 × 303) + (366 × 97) = 110595 + 35502 = 146097. Since 146097/7 = 20871, the total number of days in every 400 years, namely 146097, is exactly divisible by 7, and it follows that there are exactly 20871 weeks in every period of 400 years. Hence the whole system of Gregorian calendar years repeats every 400 years. This phenomenon can be termed the "Cyclic Property" of the Gregorian calendar.

Due to this, we see that the calendar for the year 1582 (in which the modern Gregorian calendar was introduced) is the same as for the years 1982, 2382, 2782, .... Similarly, the calendars for the years 1487, 1887, 2287, 2687, ... are identical. Note that when we apply the cyclic property rule for years occurring in BCE time, we should add 1 to 400 and make it 401, since there is no year 0 between 1 BCE and 1 CE. Thus, the calendar for the year 44 BCE, in which Julius Caesar was assassinated, would be the same as for −44 + 401 = 357 CE, which in turn is the same as for the years 757, 1157, 1557, 1957, 2357, ....

4. Known Formula for Determining Day of Given Date

Exactly 300 years after the Gregorian calendar was introduced, the problem of finding the day of a given date in a compact computable form was studied by the German mathematician Julius Christian Johannes Zeller, who published an elegant algorithm for it in 1882. This algorithm was named after him as "Zeller's Congruence". According to Zeller's Congruence rule, the day of any date of the form d/m/Y (d - date, m - month, Y - year) is given by

$\left( d + \left\lfloor \dfrac{13(m+1)}{5} \right\rfloor + Y + \left\lfloor \dfrac{Y}{4} \right\rfloor - \left\lfloor \dfrac{Y}{100} \right\rfloor + \left\lfloor \dfrac{Y}{400} \right\rfloor \right) \pmod 7 \quad\rightarrow\quad (1)$

Here $\lfloor x \rfloor$ is called the Floor Function or Greatest Integer Function of $x$: $\lfloor x \rfloor$ is defined to be the greatest integer $\le x$. As a consequence of this definition, we find that if $x > 0$ with $x = a.d_1d_2d_3d_4\ldots$ then $\lfloor x \rfloor = a$. Similarly, if $x < 0$ with $a \ge 0$ and $x = -a.d_1d_2d_3d_4\ldots$ (with nonzero fractional part), then $\lfloor x \rfloor = -a - 1$. For example, $\lfloor \sqrt{2} \rfloor = 1$, $\lfloor \pi/4 \rfloor = 0$, $\lfloor -\pi/2 \rfloor = -2$. Similarly, $(\mathrm{mod}\ 7)$ denotes the remainder when the whole term inside the bracket is divided by 7.
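The leap-year rule of Section 2 and formula (1) are easy to implement. The short sketch below is my own illustration (function names and the day encoding are mine; the paper itself gives no code), using the January/February convention explained in Section 5 below.

```python
# Sketch: Gregorian leap-year rule and Zeller's congruence, Equation (1).
# Day encoding follows the paper: 0 = Saturday, 1 = Sunday, ..., 6 = Friday.
DAYS = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def is_leap(year: int) -> bool:
    """Divisible by 4, except centurial years that are not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def zeller_day(d: int, m: int, Y: int) -> str:
    """Equation (1); January and February are treated as months 13 and 14 of year Y - 1."""
    if m in (1, 2):
        m += 12
        Y -= 1
    value = (d + (13 * (m + 1)) // 5 + Y + Y // 4 - Y // 100 + Y // 400) % 7
    return DAYS[value]

print(is_leap(1900), is_leap(2000))  # False True, as in Section 2
print(zeller_day(1, 1, 401))         # 'Monday' -- 1/1/401, used for 1/1/1 via the cyclic property
```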


5. Explanation of Zeller's Congruence Rule

Let us first rewrite Zeller's Congruence Formula and try to understand it from a better perspective:

$\left( d + \left\lfloor \dfrac{13(m+1)}{5} \right\rfloor + Y + \left\lfloor \dfrac{Y}{4} \right\rfloor - \left\lfloor \dfrac{Y}{100} \right\rfloor + \left\lfloor \dfrac{Y}{400} \right\rfloor \right) \pmod 7, \quad 3 \le m \le 14 \quad\rightarrow\quad (1)$

For a given date of the form d/m/Y, where d, m, Y represent the date, month and year respectively, we note that Zeller's formula in (1) contains six terms inside the bracket. The first and third terms, d and Y, are included as they are for the computation. Using the leap year rule mentioned in Section 2, we get the fourth, fifth and sixth terms, $\lfloor Y/4 \rfloor - \lfloor Y/100 \rfloor + \lfloor Y/400 \rfloor$. The second term, concerning the month m, needs a little explanation.

When making these calculations, Zeller took a novel approach, beginning the year with the month of March (instead of January) and ending with February of the next year. With this assumption, the twelve months of a year in formula (1) are indexed by m = 3, 4, 5, ..., 14, where m = 3 corresponds to March, m = 4 to April, and so on until m = 12 for December, m = 13 for January and m = 14 for February; but Zeller took Y − 1 instead of Y for the months January and February. Thus, for example, for calculating the day corresponding to any date in January 2020, according to Zeller's formula we should take m = 13 and Y = 2019 (as, beginning with March 2019, January 2020 is viewed as the 13th month of 2019). Similarly, for any date in February 2020 we should take m = 14, since this is considered the 14th and last month of the year 2019. This explains the condition 3 ≤ m ≤ 14 associated with formula (1).

Considering the shift of the year's beginning from January to March, the number of days in each month is as follows: {31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 31, 28/29}, corresponding to the months from March to February. Since any week contains seven days, we reduce modulo 7 (that is, divide each number by 7 and take the remainder), giving {3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 3, 0}. If we now take the sum of five consecutive numbers from the above list we get 3 + 2 + 3 + 2 + 3 = 13, 2 + 3 + 2 + 3 + 3 = 13, 3 + 2 + 3 + 3 + 2 = 13, and so on. In general, any set of five consecutive numbers from the above modulo list consists of three 3's and two 2's, always giving a sum of 13. So for every 5 numbers we get a sum of 13. This explains the term 13/5 in the second term of formula (1). Since any weekday must be one of seven days, we finally divide the sum of the six terms by 7 and take the remainder, explaining the (mod 7) in the formula.

We know that when any integer is divided by 7, the possible remainders are 0, 1, 2, 3, 4, 5, 6. Depending upon the remainder, we shall consider the following assignment of days to decide the day of a given date: 0 - Saturday, 1 - Sunday, 2 - Monday, 3 - Tuesday, 4 - Wednesday, 5 - Thursday, 6 - Friday.

6. Sample Computations

(i) Let us determine the day of 1/1/1, the first day of the Common Era (CE). By the cyclic property of the Gregorian calendar, we can find the day for 1/1/401 instead of 1/1/1. Here d = 1, m = 13, Y = 400 (since the month is January). Zeller's Congruence yields the following value:

$\left( 1 + \left\lfloor \dfrac{13 \times (13+1)}{5} \right\rfloor + 400 + \left\lfloor \dfrac{400}{4} \right\rfloor - \left\lfloor \dfrac{400}{100} \right\rfloor + \left\lfloor \dfrac{400}{400} \right\rfloor \right) \pmod 7 = (1 + 36 + 400 + 100 - 4 + 1) \pmod 7 = 2$

Since 2 corresponds to Monday, it follows that the first day of the Common Era (CE) was a Monday.

(ii) Let us now find on what day the most famous physicist of the 20th century, Albert Einstein, was born. It is known that he was born on the 14th of March (incidentally, now celebrated as World Pi Day) of 1879. Thus we have d = 14, m = 3, Y = 1879. Using Zeller's formula we get:

$\left( 14 + \left\lfloor \dfrac{13 \times (3+1)}{5} \right\rfloor + 1879 + \left\lfloor \dfrac{1879}{4} \right\rfloor - \left\lfloor \dfrac{1879}{100} \right\rfloor + \left\lfloor \dfrac{1879}{400} \right\rfloor \right) \pmod 7 = 6$

Since 6 corresponds to Friday, we know that Albert Einstein was born on a Friday.

Similar to these calculations, it is possible to determine the day of any given date. Note that if the date corresponds to BCE, then add 401, convert it into CE, and proceed in the same way as presented above. Now that I have explained the formation and application of Zeller's Congruence Formula, I will present ways to reduce the complexity of the formula presented above and also derive the month codes, which can be used for several centuries at a time thanks to the cyclic property of the Gregorian calendar.
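The two results above can also be cross-checked against Python's built-in proleptic Gregorian calendar. This tiny check is my own addition, not part of the paper (note that datetime encodes Monday as 0, unlike the 0 = Saturday convention used here).

```python
# Cross-check of the Section 6 computations using the standard library.
from datetime import date
print(date(401, 1, 1).weekday())    # 0 -> Monday, matching computation (i)
print(date(1879, 3, 14).weekday())  # 4 -> Friday, matching computation (ii)
```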


7. Rewriting Zeller's Formula

We see that if the year number Y is quite large, as in the case of Einstein's date, it is usually difficult to perform the calculation mentally. We can reduce the size of Y suitably and put Zeller's original formula in a much more compact form. In this part, I provide mathematical proofs for doing that.

Let Y = 100c + y, where c denotes the first two digits of Y and y denotes the last two digits of Y. Note here that y varies from 0 to 99 (both 0 and 99 inclusive). With this assumption, we will calculate the third, fourth, fifth and sixth terms involved in Zeller's formula.

(i) Reduction of Third Term:

$Y \pmod 7 = (100c + y) \pmod 7 \equiv (2c + y) \pmod 7$

(ii) Reduction of Fourth Term:

$\left\lfloor \dfrac{Y}{4} \right\rfloor \pmod 7 = \left\lfloor \dfrac{100c + y}{4} \right\rfloor \pmod 7 = \left( 25c + \left\lfloor \dfrac{y}{4} \right\rfloor \right) \pmod 7 \equiv \left( 4c + \left\lfloor \dfrac{y}{4} \right\rfloor \right) \pmod 7$

(iii) Reduction of Fifth Term:

$\left\lfloor \dfrac{Y}{100} \right\rfloor \pmod 7 = \left\lfloor \dfrac{100c + y}{100} \right\rfloor \pmod 7 = \left( c + \left\lfloor \dfrac{y}{100} \right\rfloor \right) \pmod 7 \equiv c \pmod 7$

(iv) Reduction of Sixth Term:

$\left\lfloor \dfrac{Y}{400} \right\rfloor \pmod 7 = \left\lfloor \dfrac{100c + y}{400} \right\rfloor \pmod 7 \equiv \left\lfloor \dfrac{c}{4} \right\rfloor \pmod 7$

Substituting these in the original Zeller's formula we get

$\left( d + \left\lfloor \dfrac{13(m+1)}{5} \right\rfloor + 2c + y + 4c + \left\lfloor \dfrac{y}{4} \right\rfloor - c + \left\lfloor \dfrac{c}{4} \right\rfloor \right) \pmod 7 = \left( d + \left\lfloor \dfrac{13(m+1)}{5} \right\rfloor + y + \left\lfloor \dfrac{y}{4} \right\rfloor + \left\lfloor \dfrac{c}{4} \right\rfloor + 5c \right) \pmod 7 \equiv \left( d + \left\lfloor \dfrac{13(m+1)}{5} \right\rfloor + y + \left\lfloor \dfrac{y}{4} \right\rfloor + \left\lfloor \dfrac{c}{4} \right\rfloor - 2c \right) \pmod 7$

Thus the original Zeller's formula becomes

$\left( d + \left\lfloor \dfrac{13(m+1)}{5} \right\rfloor + y + \left\lfloor \dfrac{y}{4} \right\rfloor + \left\lfloor \dfrac{c}{4} \right\rfloor - 2c \right) \pmod 7 \quad\rightarrow\quad (2)$

Equation (2) is a concise formula for finding the day of any given date compared to the calculation involved in Equation (1). Moreover, Equation (2) is the form of Zeller's formula usually quoted in many sources, for which we have now obtained a mathematical derivation. We shall call Equation (2) the modified Zeller's Formula.

8. Derivation of Month Codes

In Section 7 we saw how the year number Y is reduced considerably to the smaller numbers c and y. Here we use Equation (2) to derive month codes which will further reduce the calculation needed to find the day of any given date. First, let us substitute each value of m from 3 to 14 (from March to February) successively in the second term of Zeller's formula, which contains the information about the months.

$m = 3 \rightarrow \left\lfloor \dfrac{13(3+1)}{5} \right\rfloor \pmod 7 \equiv 3$ (March);  $m = 4 \rightarrow \left\lfloor \dfrac{13(4+1)}{5} \right\rfloor \pmod 7 \equiv 6$ (April);  $m = 5 \rightarrow \left\lfloor \dfrac{13(5+1)}{5} \right\rfloor \pmod 7 \equiv 1$ (May);  $m = 6 \rightarrow \left\lfloor \dfrac{13(6+1)}{5} \right\rfloor \pmod 7 \equiv 4$ (June);  $m = 7 \rightarrow \left\lfloor \dfrac{13(7+1)}{5} \right\rfloor \pmod 7 \equiv 6$ (July);  $m = 8 \rightarrow \left\lfloor \dfrac{13(8+1)}{5} \right\rfloor \pmod 7 \equiv 2$ (August);  $m = 9 \rightarrow \left\lfloor \dfrac{13(9+1)}{5} \right\rfloor \pmod 7 \equiv 5$ (September);  $m = 10 \rightarrow \left\lfloor \dfrac{13(10+1)}{5} \right\rfloor \pmod 7 \equiv 0$ (October);  $m = 11 \rightarrow \left\lfloor \dfrac{13(11+1)}{5} \right\rfloor \pmod 7 \equiv 3$ (November);  $m = 12 \rightarrow \left\lfloor \dfrac{13(12+1)}{5} \right\rfloor \pmod 7 \equiv 5$ (December).

Since January and February are treated as months of the previous year, we have to subtract 1 from the original second term to get the correct codes for these months. Thus we obtain

$m = 13 \rightarrow \left( \left\lfloor \dfrac{13(13+1)}{5} \right\rfloor - 1 \right) \pmod 7 \equiv 0$ (January);  $m = 14 \rightarrow \left( \left\lfloor \dfrac{13(14+1)}{5} \right\rfloor - 1 \right) \pmod 7 \equiv 3$ (February).

Thus the month codes, beginning from January through December, are the following numbers respectively:

0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5   → (3)
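The modified formula (2) and the month-code list (3) can be checked with a few lines of code. The sketch below is my own (the function name and the day encoding are mine, not the paper's).

```python
# Sketch: modified Zeller formula, Equation (2), and the month codes of Equation (3).
DAYS = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def modified_zeller(d: int, m: int, Y: int) -> str:
    """d + floor(13(m+1)/5) + y + floor(y/4) + floor(c/4) - 2c (mod 7), with Y = 100c + y
    and January/February counted as months 13/14 of Y - 1."""
    if m in (1, 2):
        m += 12
        Y -= 1
    c, y = divmod(Y, 100)
    return DAYS[(d + (13 * (m + 1)) // 5 + y + y // 4 + c // 4 - 2 * c) % 7]

# Month codes (3): March..December from floor(13(m+1)/5) mod 7; Jan/Feb subtract 1.
march_to_dec = [((13 * (m + 1)) // 5) % 7 for m in range(3, 13)]
jan_feb = [(((13 * (m + 1)) // 5) - 1) % 7 for m in (13, 14)]
print(jan_feb + march_to_dec)        # [0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5]
print(modified_zeller(14, 3, 1879))  # 'Friday' -- the Einstein example of Section 6 revisited
```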


Now, keeping the Cyclic Property in mind, we first segregate the centuries of every 400-year block into four classes as follows:

Class I: (1-100, 401-500, 801-900, 1201-1300, 1601-1700, 2001-2100, 2401-2500, 2801-2900, ...)
Class II: (101-200, 501-600, 901-1000, 1301-1400, 1701-1800, 2101-2200, 2501-2600, 2901-3000, ...)
Class III: (201-300, 601-700, 1001-1100, 1401-1500, 1801-1900, 2201-2300, 2601-2700, 3001-3100, ...)
Class IV: (301-400, 701-800, 1101-1200, 1501-1600, 1901-2000, 2301-2400, 2701-2800, 3101-3200, ...)

Note that, according to the cyclic property, the days of each century in the corresponding class are the same. Hence, I derive the month codes for each of the above four classes, which will eventually cover all centuries. Now considering the current century, 2001 to 2100, belonging to Class I of our segregation, we find that c = 20. With this value of c, we compute the fifth and sixth terms of the modified Zeller's formula given in Equation (2):

$c = 20 \rightarrow \left( \left\lfloor \dfrac{c}{4} \right\rfloor - 2c \right) \pmod 7 = \left( \left\lfloor \dfrac{20}{4} \right\rfloor - 2 \times 20 \right) \pmod 7 \equiv 0$ (Class I).

Similarly, considering the century 1701-1800, belonging to Class II, we get c = 17. Doing as above, we obtain

$c = 17 \rightarrow \left( \left\lfloor \dfrac{17}{4} \right\rfloor - 2 \times 17 \right) \pmod 7 \equiv 5$ (Class II).

Similarly, considering 1801-1900 corresponding to Class III and 1901-2000 of Class IV, we obtain

$c = 18 \rightarrow \left( \left\lfloor \dfrac{18}{4} \right\rfloor - 2 \times 18 \right) \pmod 7 \equiv 3$ (Class III),
$c = 19 \rightarrow \left( \left\lfloor \dfrac{19}{4} \right\rfloor - 2 \times 19 \right) \pmod 7 \equiv 1$ (Class IV).

The final answers, namely 0, 5, 3, 1 for centuries belonging to the corresponding classes, fix the month codes required for easy computation of the day of a given date. Since for Class I the answer is 0, the month codes for any century belonging to Class I are given precisely by Equation (3), namely 0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5. Since the answer for Class II is 5, the month codes for any century belonging to Class II are obtained by simply adding 5 (modulo 7) to each of the Class I month codes given above. Thus the month codes for Class II centuries are: 5, 1, 1, 4, 6, 2, 4, 0, 3, 5, 1, 3. In similar fashion, by adding 3 and 1 respectively to the month codes of Class I, we get the month codes of each century belonging to Class III and Class IV. The month codes for Class III centuries are: 3, 6, 6, 2, 4, 0, 2, 5, 1, 3, 6, 1. The month codes for Class IV centuries are: 1, 4, 4, 0, 2, 5, 0, 3, 6, 1, 4, 6.

We thus summarize the month codes of each class for quick reference:

Class I : 0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5
Class II : 5, 1, 1, 4, 6, 2, 4, 0, 3, 5, 1, 3 (Class I codes + 5) (mod 7)
Class III : 3, 6, 6, 2, 4, 0, 2, 5, 1, 3, 6, 1 (Class I codes + 3) (mod 7)
Class IV : 1, 4, 4, 0, 2, 5, 0, 3, 6, 1, 4, 6 (Class I codes + 1) (mod 7)

Remarks: Note that when applying these month codes to find the day of a given date, we should follow the three rules given below:
(i) Subtract 1 from the month codes corresponding to January/February of normal leap years, i.e. years divisible by 4 other than the centurial leap years 400, 800, 1200, 1600, 2000, 2400, ...
(ii) Subtract 2 from the month codes corresponding to any month for all centurial non-leap years like 100, 200, 300, 500, 600, 700, 900, 1000, 1100, 1300, ...
(iii) Subtract 1 for all months from March to December, and 2 for January/February, for centurial leap years like 400, 800, 1200, 1600, 2000, ...

9. Simplification of Actual Formula

Using these month codes, we can greatly simplify the modified Zeller's Formula given in Equation (2). If we do so, we get the following compact formula:

$\left( d + M + y + \left\lfloor \dfrac{y}{4} \right\rfloor \right) \pmod 7 \quad\rightarrow\quad (4)$

where M is the month code for the century belonging to one of the four classes mentioned above. The formula described in Equation (4) is the one usually presented in many books and online sources for determining the day of a given date; I have just provided the mathematical proof for arriving at that result. We now consider three illustrations to justify the formula described by Equation (4).

(i) Let us consider Srinivasa Ramanujan's birthdate, which is 22/12/1887. First we notice that the year 1887 lies in Class III. Hence M = 1 (the month code for December in Class III). Also d = 22, y = 87. Hence by Equation (4) we have $\left( 22 + 1 + 87 + \left\lfloor \dfrac{87}{4} \right\rfloor \right) \pmod 7 \equiv 5$. Since 5 corresponds to Thursday, we can conclude that one of the greatest mathematicians of India, Srinivasa Ramanujan, was born on a Thursday.

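The compact rule (4), the class month codes and the Remarks rules can be collected into a short routine. The sketch below is my own illustration; in particular, the class lookup `((Y - 1) // 100) % 4` is my assumption for encoding the Class I-IV segregation above, and is not notation from the paper.

```python
# Sketch: Equation (4) with class month codes and the Remarks rules (i)-(iii).
DAYS = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
CLASS_I = [0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5]   # Jan..Dec month codes, Equation (3)
SHIFT = [0, 5, 3, 1]                             # class constants for Class I..IV

def day_by_month_code(d: int, m: int, Y: int) -> str:
    cls = ((Y - 1) // 100) % 4                   # 0..3 <-> Class I..IV (my encoding)
    M = (CLASS_I[m - 1] + SHIFT[cls]) % 7
    if Y % 400 == 0:                             # rule (iii): centurial leap year
        M -= 2 if m <= 2 else 1
    elif Y % 100 == 0:                           # rule (ii): centurial non-leap year
        M -= 2
    elif Y % 4 == 0 and m <= 2:                  # rule (i): normal leap year, Jan/Feb only
        M -= 1
    y = Y % 100
    return DAYS[(d + M + y + y // 4) % 7]

print(day_by_month_code(22, 12, 1887))  # 'Thursday' -- Ramanujan, illustration (i)
print(day_by_month_code(1, 1, 2001))    # 'Monday'   -- an extra check of my own
```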

(ii) Let us consider the birthdate of another famous living great Indian mathematician, C. S. Seshadri, who was born on 29/2/1932. We notice that 1932 corresponds to Class IV and that 1932 is a normal leap year. Hence M = 4 − 1 = 3 (month code of February minus 1), according to rule (i) presented in the Remarks above. Also d = 29, y = 32. Hence by Equation (4) we have $\left( 29 + 3 + 32 + \left\lfloor \dfrac{32}{4} \right\rfloor \right) \pmod 7 \equiv 2$. Since 2 corresponds to Monday, we know that C. S. Seshadri was born on a Monday, though unfortunately he can celebrate his birthday only once every four years.

(iii) Let us now consider the date 18/8/1900, the birthdate of Smt. Vijayalakshmi Pandit, sister of Jawaharlal Nehru, the first Prime Minister of India. Note that 1900 is a centurial year and the date falls in the 8th month of a Class III century. Hence M = 5 − 2 = 3 (month code of August minus 2). Here d = 18, y = 0. Hence by Equation (4) we have $(18 + 3 + 0 + 0) \pmod 7 \equiv 0$. Since 0 corresponds to Saturday, we can conclude that Smt. Vijayalakshmi Pandit was born on a Saturday.

Thus, depending on whether the given date falls in a normal year, a normal leap year or a centurial year, and knowing the class to which it belongs, we can immediately compute the day quite easily without using any tools.

10. Proving an Important Fact

Using Equation (4) along with the rules presented in the Remarks, we can prove the following interesting but important calendar fact mathematically.

Theorem: "The last day of a century cannot be a Tuesday, Thursday or Saturday."

Proof: Because of the cyclic property, it is enough to consider the dates 31/12/1700, 31/12/1800, 31/12/1900 and 31/12/2000, as these dates correspond to the ends of the 17th, 18th, 19th and 20th centuries. Since y = 0 for all four dates, the third and fourth terms in Equation (4), namely $y + \lfloor y/4 \rfloor$, become 0. We also note that d = 31 for all four dates. Since 31 ≡ 3 (mod 7), from Equation (4) we see that the day corresponding to these four dates is of the form $(3 + M) \pmod 7$, where M is the month code corresponding to December of the respective century. We know that the years 1700, 1800, 1900 and 2000 correspond to Classes I, II, III and IV respectively. Now, for the three centurial non-leap years 1700, 1800 and 1900 we need to subtract 2 from the month codes given by their respective classes. In doing so, we get M = 5 − 2 = 3 (for 1700), M = 3 − 2 = 1 (for 1800), and M = 1 − 2 = −1 ≡ 6 (mod 7) (for 1900). For the centurial leap year 2000, according to rule (iii) in the Remarks of Section 8, since the month is December, we should subtract 1 from the actual code to get M = 6 − 1 = 5. Using these values of M, we can determine the days for the required four dates as follows:

31/12/1700 → (3 + 3) (mod 7) = 6, which is a Friday;
31/12/1800 → (3 + 1) (mod 7) = 4, which is a Wednesday;
31/12/1900 → (3 + 6) (mod 7) = 2, which is a Monday;
31/12/2000 → (3 + 5) (mod 7) ≡ 1, which is a Sunday.

Thus, by the cyclic property of every 400 years in the Gregorian calendar, we see that the end of a century cannot occur on a Tuesday, Thursday or Saturday.

11. Conclusions

The construction of the Gregorian calendar and the process of arriving at the formulas and month codes were explained mathematically in this paper. With a little practice, one can easily determine the day of any given date without having to resort to electronic equipment.

REFERENCES

[1] Zeller, Christian, "Die Grundaufgaben der Kalenderrechnung auf neue und vereinfachte Weise gelöst", Württembergische Vierteljahrshefte für Landesgeschichte (in German), Issue V, pp. 313-314, 1882.

[2] Zeller, Christian, "Kalender-Formeln", Mathematisch-naturwissenschaftliche Mitteilungen des mathematisch-naturwissenschaftlichen Vereins in Württemberg (in German), Volume 1 (1), pp. 54-58, 1885.

[3] V. F. Rickey, "Mathematics of the Gregorian calendar", Math. Intelligencer, Volume 7, pp. 53-56, 1985.

[4] J. Dutka, "On the Gregorian revision of the Julian calendar", Math. Intelligencer, Volume 10, pp. 56-64, 1988.

[5] G. Moyer, "The Gregorian calendar", Sci. Amer., Vol. 246, Issue 5, pp. 144-152, May 1982.

[6] W. M. Feldman, "Rabbinical Mathematics and Astronomy", M. L. Cailingold, London, 1931; 3rd corrected ed., Sepher-Hermon, New York, 1978.

[7] E. M. Reingold, J. Nievergelt and N. Deo, "Combinatorial Algorithms: Theory and Practice", Prentice-Hall, Englewood Cliffs, NJ, 1977.

[8] J. V. Uspensky and M. A. Heaslet, "Elementary Number Theory", McGraw-Hill, New York, 1939.

[9] Black, Paul E., "Zeller's Congruence", Dictionary of Algorithms and Data Structures, NIST, webpage.

[10] Data Genetics blog, "Zeller's Congruence", http://datagenetics.com/blog/november12019/index.html

Mathematics and Statistics 8(5): 596-609, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080515

Probabilistic Inventory Model under Flexible Trade Credit Plan Depending upon Ordering Amount

Piyali Mallick1,*, Lakshmi Narayan De2

1Department of Mathematics, Government General Degree College, Kharagpur-II, West Bengal, India 2Department of Mathematics, Haldia Govt. College, Haldia, West Bengal, India

Received July 27, 2020; Revised August 31, 2020; Accepted September 29, 2020

Cite This Paper in the following Citation Styles (a): [1] Piyali Mallick, Lakshmi Narayan De , "Probabilistic Inventory Model under Flexible Trade Credit Plan Depending upon Ordering Amount," Mathematics and Statistics, Vol. 8, No. 5, pp. 596 – 609, 2020. DOI: 10.13189/ms.2020.080515. (b): Piyali Mallick, Lakshmi Narayan De (2020). Probabilistic Inventory Model under Flexible Trade Credit Plan Depending upon Ordering Amount. Mathematics and Statistics, 8(5), 596 – 609. DOI: 10.13189/ms.2020.080515. Copyright©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  In this work, we propose a stochastic inventory model for situations in which delay in payment is acceptable. Most inventory models on this topic suppose that the supplier offers the retailer a fixed delay period and that the retailer can sell the goods, accumulate revenue and earn interest within the credit period. They also assume that the trade credit period is independent of the order quantity. A limited number of investigators have developed EOQ models under permissible delay in payments in which the trade credit is connected with the order quantity: when the order quantity is less than the quantity at which delay in payment is permitted, payment for the items must be made immediately; otherwise, the fixed credit period applies. However, all these models were completely deterministic in nature. In reality, this trade credit period cannot be fixed; if it were fixed, the retailer would not be interested in buying a higher quantity than the fixed quantity at which delay in payment is permitted. To reflect this situation, we assume that the trade credit period is not static but fluctuates with the ordering quantity. The demand throughout any scheduling period follows a probability distribution. We calculate the total variable cost per unit of time. The optimum ordering policy of the scheme can be found with the aid of three theorems (proofs are provided). An algorithm to determine the best ordering rule with the assistance of these propositions is established, and numerical examples are provided for clarification. A sensitivity investigation of all the parameters of the model is presented and discussed. Some previously published results are special cases of the results obtained in this paper.

Keywords  Probabilistic Inventory Model, Trade Credit, Permissible Delay in Payments

1. Introduction

In developing the traditional optimal ordering policy of an inventory model, it is generally assumed that the retailer must pay the supplier for the products at the time of receiving them, as every business owner would like to have all sales on a cash basis. However, in practice this is not always possible in a competitive market place. The supplier allows the retailer a certain delay period (credit period) for settling the account, and no interest is charged on the unsettled account if it is settled by the end of the trade credit period. The supplier will charge higher interest if the account is not settled within the trade credit period. Using this trade credit policy, suppliers can attract additional customers by not demanding cash up front. Trade credit can be advantageous for a new retailer who is unable to raise capital or secure business loans, yet needs stock quickly. Trade credit also allows the business to be flexible, adapting to market demands and seasonal variations, so that the retailer has a constant supply of goods even when his or her finances are not stable. The supplier can mix trade credit with bulk discounting to encourage buyers to spend more. The supplier's trade credit can prevent buyers from looking elsewhere and

strengthen the supplier-buyer relationship. Most suppliers frequently exercise this plan to promote their commodities, though there are some disadvantages of trade credit such as late payment, cash flow problems, customer assessment, account handling, etc. Goyal [10] first established an EOQ model under permissible delay in payments. In his model the supplier allows a fixed time period for settling the account; the supplier is essentially giving his customer a loan without interest throughout this period. Chung et al. [8] developed an alternative method to determine the optimal ordering procedure under the condition of delay in payments. Shah and Shah [21] studied the same model allowing shortages. Shah and Shah [22] first considered a probabilistic model where delay in payment is tolerable; they made the more realistic assumption that demand is not deterministic but follows a probability distribution. Shah et al. [24] developed the equivalent model where time was treated as a continuous variable. In another paper, Shah and Shah [23] also established a discrete-time probabilistic inventory model under permitted delay in payments. Many scholars, such as Aggarwal and Jaggy [1], Hwang and Shinn [13], Jamal et al. [14], Sarker et al. [20], Huang [12], Mahato [17], Jiang Wu et al. [15], Musa and Sani [18], Li et al. [16] and Pramanick and Maity [19], also developed inventory models taking acceptable delay in payment into account.

All the inventory models cited above were developed under the consideration that the trade credit plan is fixed. The extent and pattern of trade credit in an industry or business sector depend on a number of factors, including the average rate of turnover of stock, the nature of the goods involved (e.g. their perishability), the relative size of the buying and selling firms, and the degree of competition. Several researchers carried out their work by assuming that the delay period depends on the size of the purchase. Chang et al. [2], Chung et al. [8], Chung et al. [5], Chang et al. [3], Chung et al. [7], Teng et al. [25], Chen et al. [4] and Tiwari et al. [26] developed economic models under permitted delay in payment where the trade credit period is linked to the order quantity: once the order quantity is smaller than the amount at which the delay in payment is allowed, payment for the items must be made instantly; if not, a fixed trade credit is allowed. The supplier practices this strategy to encourage the retailer to order a larger quantity. However, these aforementioned models were entirely deterministic in nature. In reality, this trade credit period cannot be fixed. If it were fixed, the retailer would not be interested in purchasing a higher quantity than the fixed quantity at which delay in payment is permitted. To reflect this circumstance, an inventory model is developed here under the assumption that the trade credit period is not only linked to the ordering quantity but also fluctuates with it. It is also supposed that the demand is a continuous random variable following some probability distribution. As it is seen in the paper of De and Goswami [10] that continuous cycle time produces better results than discrete, only continuous cycle time is considered in this paper. It is also shown that the optimal ordering strategy can be determined by means of our Theorems 1, 2 and 3. The outcomes found in this paper are exemplified with the support of a set of numerical examples, and the sensitivity of the different parameters is also examined.

2. Assumption and Notation

Our proposed inventory model is framed with the following conventions and notations:

a). The time period is infinite, i.e., there is no restriction on the continuation of cycles.

b). The length of time between two successive orders is T, which is known as the cycle time.

c). Items in the inventory of the system are reviewed regularly at the fixed time interval T between two successive orders. At the termination of each interval of length T, items are ordered so as to bring the on-hand inventory up to a level Q.

d). In the time interval T, the demand x follows a probability density function (p.d.f.) $f(x \mid T)$, $a(T) \le x \le b(T)$, with mean $\mu(T) = E(x \mid T) = \int_{a(T)}^{b(T)} x f(x \mid T)\,dx = RT$ (say)   (1), where $\mu(T)$ is the mean demand during T and $R = \mu(T)/T$ denotes the average expected demand per unit time during a cycle. It is also assumed that the p.d.f. $f(x \mid T)$ of the demand x during T is sufficiently well behaved that all the expected costs discussed below exist. Correspondingly, the distribution of the demand is assumed to be fixed over the planning horizon T.

e). In the procedure of obtaining the definite result, it is assumed that the maximum demand takes the form $b(T) = PRT$, where $P \ge 1$ is a known constant.

f). The replenishment or renewal rate is infinite. Lead-time is zero. Shortages are not acceptable.

g). The supplier offers a delay period when the ordering quantity is greater than or equal to W.

h). A, C, S and H are the cost of placing an order, the unit buying cost per item, the unit retailing/selling cost per item and the unit stock holding cost per item per unit time, respectively, and are known constants. It is also presumed that $S \ge C$.

i). To encourage the retailer to buy a larger amount, it is supposed that if the retailer buys fewer products from the supplier than a fixed amount W (say), then the retailer does not receive facilities such as delay in payment. Consequently, the delay period is an increasing function of Q. For simplicity, in this paper it is assumed that the delay period is linearly dependent on the ordering quantity, i.e., if $Q \ge W$, a variable credit period $M$ ($= M_0 + \alpha Q$, $\alpha \in [0,1]$) is allowed; otherwise delay in payment is not permitted.


The motives behind selecting such a range for $\alpha$ are as follows: if $\alpha < 0$, then $M_0 + \alpha Q$ would be a decreasing function of Q, which is an unrealistic supposition; if $\alpha > 1$, then the delay period would be so long that the supplier might face problems capitalizing his own turnover, i.e. a cash flow problem. So $\alpha \in [0,1]$ is assumed, although normally $\alpha$ should lie in $[0, a]$, where a is close to 0 and less than 1.

j). The supplier provides a credit period M to the retailer to settle the accounts, and the retailer, in turn, also offers a credit period N to each of its customers to settle the accounts, where $M \ge N$.

k). When the retailer must pay the buying cost to the supplier, the retailer borrows 100% of the purchasing cost from the bank to pay back the account at rate $I_p$. When $T \ge M$, the retailer returns the money to the bank at the termination of the inventory cycle. However, when $T < M$, the retailer returns the money to the bank at $T = M$.

l). If the credit period is shorter than the cycle time, the retailer can sell the items, gather sales revenue and receive interest at rate $I_e$ throughout the inventory cycle, where $I_p \ge I_e$.

m). TVC(T), a function of T, is the total relevant cost, and $T^*$ is the optimal cycle time.

3. Model Formulation

The differential equation describing the inventory position $Q_x(t)$ ($0 \le t \le T$) of the system during the scheduling period T is

$\dfrac{dQ_x(t)}{dt} = -\dfrac{x}{T}$   (2)

Using the boundary condition $Q_x(0) = Q$, the solution of equation (2) is

$Q_x(t) = Q - \dfrac{x}{T}t, \quad 0 \le t \le T$   (3)

Since shortages are not permissible, using the condition $Q_x(T) = 0$ when $x = b(T)$, we find

$Q = b(T)$   (4)

By means of equation (4), (3) turns into

$Q_x(t) = b(T) - \dfrac{x}{T}t$   (5)

The average expected inventory in the system per unit time is $\dfrac{1}{T}\displaystyle\int_0^T E(Q_x(t))\,dt = (2P-1)\dfrac{RT}{2}$.

The total annual variable cost involves the following elements. Two circumstances may arise:

I. $\dfrac{W}{PR} \le M = M_0 + \alpha PRT$;  II. $\dfrac{W}{PR} > M = M_0 + \alpha PRT$.

Case I: $\dfrac{W}{PR} \le M = M_0 + \alpha PRT$.

(a) Ordering cost per unit time $= \dfrac{A}{T}$.

(b) Stock holding cost per unit time $= (2P-1)\dfrac{RTH}{2}$.

(c) According to the norms, three probable cases can arise, namely $0 < T < \dfrac{W}{PR}$, $\dfrac{W}{PR} \le T \le M$ and $T \ge M$. These three cases are treated separately below.

Case (i): $0 < T < \dfrac{W}{PR}$. Expected interest payable per unit time $= \dfrac{CQTI_p}{T} = CI_pPRT$. Expected interest earned per unit time $= \dfrac{SI_e}{T}\displaystyle\int_0^T E\!\left(\dfrac{x}{T}\right) t\,dt = \dfrac{RTSI_e}{2}$.

Case (ii): $\dfrac{W}{PR} \le T \le M$. Expected interest payable per unit time $= 0$. Expected interest earned per unit time $= \dfrac{SI_e}{T}\left[\dfrac{RT^2}{2} + RT(M - T)\right] = RSI_e\left[M_0 + \alpha PRT - \dfrac{T}{2}\right]$.

Case (iii): $T \ge M = M_0 + \alpha PRT$. Expected interest payable per unit time $= \dfrac{CQ(T-M)I_p}{T} = CPR(T - M_0 - \alpha PRT)I_p$. Expected interest earned per unit time $= \dfrac{SI_e}{T}\displaystyle\int_0^T E\!\left(\dfrac{x}{T}\right) t\,dt = \dfrac{RTSI_e}{2}$.

From the above arguments, the appropriate total cost per unit time for the retailer can be stated as

$TVC(T) = \begin{cases} TVC_1(T), & \text{if } 0 < T < \frac{W}{PR} \\ TVC_2(T), & \text{if } \frac{W}{PR} \le T \le M_0 + \alpha PRT \\ TVC_3(T), & \text{if } M_0 + \alpha PRT \le T \end{cases}$   (6)

where

$TVC_1(T) = \dfrac{A}{T} + (2P-1)\dfrac{RTH}{2} + CI_pPRT - \dfrac{RTSI_e}{2}$   (7)

$TVC_2(T) = \dfrac{A}{T} + (2P-1)\dfrac{RTH}{2} - RSI_e\left[M_0 + \alpha PRT - \dfrac{T}{2}\right]$   (8)

$TVC_3(T) = \dfrac{A}{T} + (2P-1)\dfrac{RTH}{2} + CPR(T - M_0 - \alpha PRT)I_p - \dfrac{RTSI_e}{2}$   (9)

All of $TVC_1(T)$, $TVC_2(T)$ and $TVC_3(T)$ are defined on $T > 0$.
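As a numerical illustration of how the three branches of (6)-(9) fit together for Case I, the following sketch is my own; all parameter values are hypothetical and are not taken from the paper's numerical examples, and the crude grid search is only for illustration, not the optimization procedure established by the theorems below.

```python
# Sketch: the piecewise total cost TVC(T) of Equations (6)-(9) under Case I.
A, C, S, H = 100.0, 5.0, 8.0, 0.5      # ordering, purchase, selling, holding costs (assumed)
R, P, W = 400.0, 1.2, 150.0            # demand rate, demand bound factor, quantity cutoff (assumed)
Ip, Ie = 0.12, 0.09                    # interest payable / earned rates (assumed)
M0, alpha = 0.05, 0.001                # base credit period and its sensitivity to Q (assumed)

def tvc1(T):  # Equation (7): 0 < T < W/(P R)
    return A/T + (2*P - 1)*R*T*H/2 + C*Ip*P*R*T - R*T*S*Ie/2

def tvc2(T):  # Equation (8): W/(P R) <= T <= M0 + alpha*P*R*T
    return A/T + (2*P - 1)*R*T*H/2 - R*S*Ie*(M0 + alpha*P*R*T - T/2)

def tvc3(T):  # Equation (9): T >= M0 + alpha*P*R*T
    return A/T + (2*P - 1)*R*T*H/2 + C*P*R*Ip*(T - M0 - alpha*P*R*T) - R*T*S*Ie/2

def tvc(T):   # Equation (6): pick the branch relevant to the cycle time T
    M = M0 + alpha*P*R*T
    if T < W/(P*R):
        return tvc1(T)
    return tvc2(T) if T <= M else tvc3(T)

Ts = [0.01*k for k in range(1, 201)]   # crude grid of cycle times
T_best = min(Ts, key=tvc)
print(T_best, tvc(T_best))
```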

Equations (7)-(9) yield

$TVC_1'(T) = -\dfrac{A}{T^2} + \dfrac{R\left(H(2P-1) + 2CI_pP - SI_e\right)}{2}$   (10)


2퐴 푊 푇푉퐶′′(푇) = > 0 (11) 푇푉퐶 (푇), 𝑖푓 0 < 푇 < 1 푇3 1 푃푅 푇푉퐶(푇) = { (18) ′ 퐴 푅(퐻(2푃 −1)− 2푆퐼푒훼푃푅+푆퐼푒) 푊 푇푉퐶2(푇) = − + (12) 푇푉퐶 (푇), 𝑖푓 푇 ≥ 푇2 2 2 푃푅 ′′ 2퐴 푊 푇푉퐶2 (푇) = > 0 (13) Here also 푇푉퐶(푇) is continuous except at 푇 = . 푇3 푃푅 퐴 푅(퐻(2푃 −1)+ 2퐶퐼푝푃− 2퐶퐼 훼푃2푅−푆퐼푒) 푇푉퐶′(푇) = − + 푝 (14) In this case Equations (10) and (12) yield 3 푇2 2

′′ 2퐴 ∗ 푊 ′ 푊 푇푉퐶 (푇) = > 0 (15) 푇1 ≥ implies 푇푉퐶1 ( )≤ 0 and hence푇푉퐶1(푇) is 3 푇3 푃푅 푃푅 푊 decreasing on (0, ) (19) Equations (11), (13) and (15) imply that 푇푉퐶1(푇) , 푃푅 푇푉퐶2(푇) and 푇푉퐶3(푇) are convex for 푇 > 0. 푊 푊 푇∗< implies 푇푉퐶′ ( ) > 0 and hence푇푉퐶 (푇) is 2 푃푅 2 푃푅 2 푾 푊 Case II: > 푴 = 푴 + 휶푷푹푻. increasing on [ , ∞ ) (20) 푷푹 ퟎ 푃푅 In this case equation (6) can be written as follows: Furthermore, it follows the result 푊 Theorem 1. (A) Suppose that 퐻(2푃 − 1) − 푇푉퐶1(푇), 𝑖푓 0 < 푇 < 푃푅 ∗ ∗ 푇푉퐶(푇) = { (16) 2푆퐼푒훼푃푅 + 푆퐼푒 < 0 then 푇 = ∞ and 푇푉퐶( 푇 ) = 푊 푇푉퐶 (푇), 𝑖푓 푇 ≥ −∞ ie., the retailer will try to continue his cycle as much 3 푃푅 as possible. 푊 Here 푇푉퐶(푇) is continuous except at 푇 = . 푃푅 Proof: If 퐻(2푃 − 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 < 0, Equation 푊 ′ (12) implies that 푇푉퐶(푇) is decreasing for 푇 ≥ . Since Now solving 푇푉퐶𝑖 (푇) = 0 for = 1,2,3 , we obtain 푃푅 푅푇 lim 푇푉퐶(푇) = −푅푆퐼푒푀0 + lim (퐻(2푃 − 1) − 푇→∞ 푇→∞ 2 ∗ 2퐴 푇1 = √ if 푅(퐻(2푃 − 1) + 2푆퐼푒훼푃푅 + 푆퐼푒) = −∞ 푎푛푑 lim 푇푉퐶(푇) = ∞ so we 푅(퐻(2푃 −1) + 2퐶퐼푝푃 −푆퐼푒) 푇→0+ 2퐶퐼푝푃 − 푆퐼푒) > 0 conclude that 푇 ∗ = ∞ and 푇푉퐶( 푇 ∗) = −∞

2퐴 (B) Suppose that 퐻(2푃 − 1) − 2푆퐼 훼푃푅 + 푆퐼푒 = 0 푇∗ = if 푅(퐻(2푃 − 1) − 푒 2 √ ( ) 푅(퐻 2푃 −1 − 2푆퐼푒훼푃푅+푆퐼푒) then 푅(퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 (since 훼푃푅 ≥ 2푆퐼푒훼푃푅 + 푆퐼푒) > 0 1) and

푊 ∗ 2퐴 (a) If 푇∗ ≥ , then 푇 ∗ = ∞ and 푇푉퐶( 푇 ∗) = 푇3 = √ 2 if 푅(퐻(2푃 − 1 푅(퐻(2푃 −1)+ 2퐶퐼푝푃− 2퐶퐼푝훼푃 푅−푆퐼푒) 푃푅 2 −푅푆퐼푒푀0 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒) > 0 ∗ 푊 ∗ ∗ (b) If 푇1 < , then 푇푉퐶( 푇 ) = min [푇푉퐶1(푇1 ) − By the convexity of 푇푉퐶𝑖(푇)(𝑖 = 1,2,3), it is detected 푃푅 ∗ ∗ that 푅푆퐼푒푀0] and 푇 = 푇1 or ∞ associated with the least cost). ∗ < 0 , if 푇 < 푇𝑖 푇푉퐶′(푇) = {= 0 , if 푇 = 푇∗ (17) Proof: (a) If 퐻(2푃 − 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 = 0 𝑖 𝑖 푊 ∗ and 푇∗ ≥ then equation (12) and (17) imply that > 0, if 푇 > 푇𝑖 1 푃푅 푇푉퐶(푇) is decreasing on (0, ∞). Consequently 푇∗ = ∞ and 푇푉퐶(푇∗) = ∞.

푊 4. Decision Rule of the Optimal Cycle (b) If 퐻(2푃 − 1) − 2푆퐼 훼푃푅 + 푆퐼푒 = 0 and 푇∗< , 푾 푒 1 푃푅 Time When ≤ 푴 = 푴ퟎ + 휶푷푹푻 then equation (12) and (17) imply that 푇푉퐶(푇) is 푷푹 푊 decreasing on (0, 푇∗) , increasing on [푇∗, ) and 1 1 푃푅 In this case two possibilities may arise namely 훼푃푅 ≥ 푊 decreasing on [ , ∞). Hence 푇 ∗ = 푇∗ or ∞ associated 1 and 푅 < 1 . These two cases are treated separately 푃푅 1 ∗ ∗ which are discussed below with the least cost) and 푇푉퐶( 푇 )= min [푇푉퐶1(푇1 ) − 푅푆퐼푒푀0]. Case (i) 휶푷푹 ≥ ퟏ (C) Suppose that 퐻(2푃 − 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 > 0 Here 푇푉퐶(푇) will be modified as (since 훼푃푅 ≥ 1 then 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 (since 훼푃푅 ≥ 1) and and so 푇 can be grater than or equal to 푀0 + 훼푃푅푇)


푊 푊 푊 푀 (a) If 푇∗ < , 푇∗ < then 푇∗ = 푇∗ and 푇푉퐶 (푇) is decreasing on [ , 0 ] (24) 1 푃푅 2 푃푅 1 2 푃푅 1−훼푃푅 ∗ ∗ 푇푉퐶( 푇 )= 푇푉퐶1(푇1 ). 푀 푀 푇∗ < 0 implies 푇푉퐶′ ( 0 ) > 0 and hence 3 1−훼푃푅 3 1−훼푃푅 ∗ 푊 ∗ 푊 ∗ ∗ ∗ (b) If 푇1 < , 푇2 ≥ then 푇 = 푇1 or 푇2 푀0 푃푅 푃푅 푇푉퐶3(푇) is increasing on [ , ∞) (25) (associated with the least cost) and 푇푉퐶( 푇 ∗) = min 1−훼푃푅 ∗ ∗ [푇푉퐶1(푇1 ), 푇푉퐶2(푇2 )]. Furthermore, the result follows. 푊 푊 푊 (c) If 푇∗ ≥ , 푇∗ < then 푇∗ = and Theorem 2. (A) Suppose that 퐻(2푃 − 1) + 2퐶퐼푝푃 − 1 푃푅 2 푃푅 푃푅 2 ∗ ∗ 푊 2퐶퐼 훼푃 푅 − 푆퐼푒 < 0 then 푇 = ∞ and 푇푉퐶(푇 ) = 푇푉퐶( 푇 ∗)= 푇푉퐶 ( ). 푝 2 푃푅 ∞.ie., the retailer will try to continue his cycle as much as 푊 푊 possible. (d) If 푇∗ ≥ , 푇∗ ≥ then 푇∗ = 푇∗ and 1 푃푅 2 푃푅 2 ∗ ∗ Proof. If 퐻(2푃 − 1) + 2퐶퐼 푃 − 2퐶퐼 훼푃2푅 − 푆퐼 < 푇푉퐶( 푇 )= 푇푉퐶2( 푇2 ). 푝 푝 푒 0, then equations (14) and (21) imply that 푇푉퐶(푇) is 푊 푊 ∗ ∗ 푀0 Proof: (a) If 푇1 < , 푇2 < then Equations (17) decreasing for 푇 ≥ . Since lim 푇푉퐶(푇) = 푃푅 푃푅 1−훼푃푅 ∗ 푇→∞ and (20) imply that 푇푉퐶(푇) is decreasing on (0, 푇1 ], 푅푇 푊 푊 −퐶퐼푝푃푅푀0 + lim (퐻(2푃 − 1) − 2푆퐼푒훼푃푅 + 푆퐼푒) = increasing on [푇∗, ) and decreasing on [ , ∞) . 푇→∞ 2 1 푃푅 푃푅 −∞ and lim 푇푉퐶(푇) = ∞ 푠표 푇∗ = ∗ ∗ ∗ ∗ + Consequently 푇 = 푇1 and 푇푉퐶( 푇 )= 푇푉퐶1(푇1 ). 푇→0 ∞ and 푇푉퐶(푇∗) = ∞. 푊 푊 (b) If 푇∗ < , 푇∗ ≥ then (17) implies that 1 푃푅 2 푃푅 (B) Suppose that 퐻(2푃 − 1) + 2퐶퐼 푃 − 푊 푝 푇푉퐶(푇) is decreasing on (0, 푇∗], increasing on [푇∗, ) , 2 1 1 푃푅 2퐶퐼푝훼푃 푅 − 푆퐼푒 = 0 then 푊 decreasing on [ , 푇∗ ] and increasing on [ 푇∗ , ∞) . 푃푅 2 2 (i) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 = 0 and 퐻(2푃 − ∗ ∗ ∗ 푝 푒 Consequently, 푇 = 푇1 or 푇2 (associated with the least ∗ ∗ ∗ ∗ 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 ≤ 0 then 푇 = cost) and 푇푉퐶( 푇 )= min [푇푉퐶1(푇1 ), 푇푉퐶2(푇2 )]. ∗ ∞ and 푇푉퐶(푇 ) = −퐶퐼푝푃푅푀0. 푊 푊 (c) If 푇∗ ≥ , 푇∗ < then Equations (19) and (20) 1 푃푅 2 푃푅 (ii) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 = 0 and 퐻(2푃 − 푊 imply that 푇푉퐶(푇) is decreasing on (0, ) and 1) − 2푆퐼 훼푃푅 + 푆퐼푒 > 0 then 푃푅 푒 푊 ∗ 푊 increasing on [ , ∞) . Consequently 푇 = and ∗ 푀0 ∗ ∗ 푃푅 푃푅 (a) If 푇2 > then 푇 = ∞ and 푇푉퐶(푇 ) = 푊 1−훼푃푅 푇푉퐶( 푇 ∗)= 푇푉퐶 ( ). 2 푃푅 −퐶퐼푝푃푅푀0.

∗ 푊 ∗ 푊 푊 ∗ 푀0 ∗ ∗ (d) If 푇1 ≥ , 푇2 ≥ then Equations (19) and (17) (b) If ≤ 푇 ≤ then 푇 = 푇 or ∞ (associated 푃푅 푃푅 푃푅 2 1−훼푃푅 2 푊 ∗ ∗ imply that 푇푉퐶(푇) is decreasing on (0, ), decreasing with the least cost) 푇푉퐶(푇 ) = min [ 푇푉퐶2( 푇2 ) , 푃푅 푊 −퐶퐼 푃푅푀 ]. on [ , 푇∗ ], and increasing on [푇∗ , ∞).Consequently 푝 0 푃푅 2 2 ∗ ∗ ∗ ∗ ∗ 푊 ∗ 푊 푇 = 푇2 and 푇푉퐶( 푇 )= 푇푉퐶2( 푇2 ). (c) If 푇 < then 푇 = or ∞ (associated with the 2 푃푅 푃푅 푊 least cost) 푇푉퐶(푇∗) = min [푇푉퐶 ( ), −퐶퐼 푃푅푀 ]. Case (ii) 휶푷푹 < ퟏ 2 푃푅 푝 0

Here 푇푉퐶(푇) will be modified as 푇푉퐶(푇) = (iii) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 and 퐻(2푃 − 푊 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 ≤ 0 then 푇푉퐶1(푇), 𝑖푓 0 < 푇 < 푃푅 푊 푀0 ∗ 푊 ∗ ∗ 푇푉퐶2(푇), 𝑖푓 ≤ 푇 ≤ (21) (a) If 푇1 ≥ then 푇 = ∞ and 푇푉퐶(푇 ) = 푃푅 1−훼푃푅 푃푅 푀 −퐶퐼 푃푅푀 . 푇푉퐶 (푇), 𝑖푓 0 ≤ 푇 푝 0 { 3 1−훼푃푅 푊 (b) If 푇∗ < then 푇∗ = 푇∗ or ∞ (associated with the In this case Equations (10), (12) and (14) yield that 1 푃푅 1 ∗ ∗ least cost) 푇푉퐶(푇 ) = min [푇푉퐶1( 푇1 ), −퐶퐼푝푃푅푀0]. 푊 푊 푇∗≥ implies 푇푉퐶′ ( )≤ 0 and hence 푇푉퐶 (푇) is 1 푃푅 1 푃푅 1 푊 (iv) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 and 퐻(2푃 − decreasing on (0, ) (22) 푃푅 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 > 0 then 푊 푊 ∗ ′ 푊 푀0 푇2 < implies 푇푉퐶2 ( ) > 0 and hence 푇푉퐶2(푇) is (a) If 푇∗ ≥ and 푇∗ > then 푇∗ = 푃푅 푃푅 1 푃푅 2 1−훼푃푅 푊 푀0 ∗ increasing on [ , ] (23) ∞ and 푇푉퐶(푇 ) = −퐶퐼푝푃푅푀0. 푃푅 1−훼푃푅 푊 푊 푀 ∗ 푀0 ′ 푀0 ∗ ∗ 0 ∗ ∗ 푇 > implies 푇푉퐶 ( ) < 0 and hence (b) If 푇1 ≥ and ≤ 푇2 ≤ then 푇 = 푇2 or 2 1−훼푃푅 2 1−훼푃푅 푃푅 푃푅 1−훼푃푅


∞(associated with the least cost) 푇푉퐶(푇∗) = min (12),(14) and (22) imply that 푇푉퐶(푇) is decreasing on (0, ∗ ∗ ∗ [푇푉퐶2( 푇2 ), −퐶퐼푝푃푅푀0]. ∞). Consequently 푇 = ∞ and 푇푉퐶(푇 ) = −퐶퐼푝푃푅푀0.

∗ 푊 ∗ 푊 ∗ 푊 (b) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 > 0, 퐻(2푃 − 1) − (c) If 푇1 ≥ and 푇2 < then 푇 = or ∞ 푝 푒 푃푅 푃푅 푃푅 푊 ∗ 2푆퐼 훼푃푅 + 푆퐼푒 ≤ 0 and 푇∗ < then Equations (12),(14) (associated with the least cost) 푇푉퐶(푇 ) = min 푒 1 푃푅 푊 ∗ [푇푉퐶 ( ), −퐶퐼 푃푅푀 ]. and (17) imply that 푇푉퐶(푇) is decreasing on (0, 푇1 ], 2 푃푅 푝 0 ∗ 푊 푊 increasing on ( 푇1 , ) and decreasing on [ , ∞). 푊 푀 푃푅 푃푅 (d) If 푇∗ < and 푇∗ > 0 then 푇∗ = 푇∗ or ∞ ∗ ∗ 1 푃푅 2 1−훼푃푅 1 Consequently 푇 = 푇1 or ∞ (associated with the least cost) ∗ ∗ ∗ ∗ (associated with the least cost) 푇푉퐶(푇 ) = min [푇푉퐶1( 푇1 ), 푇푉퐶(푇 ) = min [푇푉퐶1( 푇1 ), −퐶퐼푝푃푅푀0]. −퐶퐼푝푃푅푀0]. (iv)(a) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒>0 , 퐻(2푃 − 1) − ∗ 푊 푊 ∗ 푀0 ∗ ∗ ∗ 푊 ∗ 푀0 (e) If 푇1 < and ≤ 푇2 ≤ then 푇 = 푇2 , 2푆퐼 훼푃푅 + 푆퐼푒 >0, 푇 ≥ and 푇 > then 푃푅 푃푅 1−훼푃푅 푒 1 푃푅 2 1−훼푃푅 ∗ ∗ 푇1 or ∞ (associated with the least cost) ) 푇푉퐶(푇 ) = min Equations (14), (22) and (24) imply that 푇푉퐶(푇) is ∗ ∗ ∗ [푇푉퐶1( 푇1 ), 푇푉퐶2( 푇2 ), −퐶퐼푝푃푅푀0]. decreasing on [0,∞). Consequently 푇 = ∞ and 푇푉퐶(푇∗) = −퐶퐼 푃푅푀 . 푊 푊 푊 푝 0 (f) If 푇∗ < and 푇∗ < then 푇∗ = 푇∗ , or ∞ 1 푃푅 2 푃푅 1 푃푅 ∗ ∗ (b) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − (associated with the least cost) 푇푉퐶(푇 ) = min [푇푉퐶1( 푇1 ), 푊 ∗ 푊 푊 ∗ 푀0 푇푉퐶 ( ), −퐶퐼 푃푅푀 ]. 2푆퐼푒훼푃푅 + 푆퐼푒 > 0 , 푇1 ≥ and ≤ 푇2 ≤ then 2 푃푅 푝 0 푃푅 푃푅 1−훼푃푅 Equations (14), (22) and (17) imply that 푇푉퐶(푇) is 2 푀 Proof: (i) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − decreasing on (0, 푇∗ ] , increasing on [푇∗ , 0 ] and 2 2 1−훼푃푅 푆퐼 = 0, 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 = 0 and 퐻(2푃 − 푀 푒 푝 푒 decreasing on [ 0 , ∞). Consequently 푇∗ = 푇∗ or 1) − 2푆퐼 훼푃푅 + 푆퐼푒 ≤ 0 then Equations (10), (12) and (14) 1−훼푃푅 2 푒 ∗ imply that 푇푉퐶(푇) is decreasing on (0, ∞) . ∞(associated with the least cost) 푇푉퐶(푇 ) = min ∗ 푅푇 [푇푉퐶2( 푇2 ), −퐶퐼푝푃푅푀0]. Since lim 푇푉퐶(푇) = −퐶퐼푝푃푅푀0 + lim (퐻(2푃 − 푇→∞ 푇→∞ 2 1) − 2푆퐼 훼푃푅 + 푆퐼푒) = −퐶퐼 푃푅푀 and lim 푇푉퐶(푇) = (c) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 > 0 , 퐻(2푃 − 1) − 푒 푝 0 + 푝 푒 푇→0 푊 푊 ∞ 푠표 푇∗ = ∞ and 푇푉퐶(푇∗) = −퐶퐼 푃푅푀 . 2푆퐼 훼푃푅 + 푆퐼푒 > 0, 푇∗ ≥ and 푇∗< then Equations 푝 0 푒 1 푃푅 2 푃푅 (14), (22) and (23) imply that 푇푉퐶(푇) is decreasing on (0, (ii)(a) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 2퐶퐼 훼푃2푅 − 푆퐼 = 0, 푝 푝 푒 푊 ) , increasing on [ 푊 , 푀0 ] and decreasing on [ 푀0 , ∗ 푀0 푃푅 푃푅 1−훼푃푅 1−훼푃푅 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 = 0 and 푇2 > then 푊 1−훼푃푅 ∞). Consequently 푇∗ = or ∞ (associated with the least Equations (10),(14) and (24) imply that 푇푉퐶(푇) is 푃푅 푊 decreasing on (0, ∞). Consequently 푇∗ = cost) 푇푉퐶(푇∗) = min [푇푉퐶 ( ), −퐶퐼 푃푅푀 ]. 2 푃푅 푝 0 ∗ ∞ and 푇푉퐶(푇 ) = −퐶퐼푝푃푅푀0 (d) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − 2 푊 푀 (b) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 = 0, 2푆퐼 훼푃푅 + 푆퐼푒 > 0, 푇∗ < and 푇∗ > 0 then 푊 푒 1 푃푅 2 1−훼푃푅 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 = 0 and ≤ 푇∗ ≤ 푝 푒 푃푅 2 Equations (14), (17) and (24) imply that 푇푉퐶(푇) is 푀 푊 0 then Equation (10),(14) and (17) imply that 푇푉퐶(푇) decreasing on (0, 푇∗ ] , increasing on [ 푇∗ , ) and 1−훼푃푅 1 1 푃푅 ∗ ∗ 푀0 푊 ∗ ∗ is decreasing on (0, 푇2 ], increasing on [푇2 , ] and decreasing on [ , ∞). Consequently 푇 = 푇 or ∞ 1−훼푃푅 푃푅 1 푀0 ∗ ∗ ∗ ∗ decreasing on [ , ∞) . Consequently 푇 = 푇 or ∞ (associated with the least cost) 푇푉퐶(푇 ) = min [푇푉퐶1( 푇1 ), 1−훼푃푅 2 ∗ ∗ −퐶퐼푝푃푅푀0]. (associated with the least cost) 푇푉퐶(푇 ) = min [푇푉퐶2( 푇2 ), −퐶퐼푝푃푅푀0]. 
(e) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − 푊 푊 푀 ( ) 2 2푆퐼 훼푃푅 + 푆퐼푒 > 0, 푇∗ < and ≤ 푇∗ ≤ 0 then (c) If 퐻 2푃 − 1 + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 = 0, 푒 1 푃푅 푃푅 2 1−훼푃푅 ∗ 푊 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 = 0 and 푇2 < then Equations (14)and (17) imply that 푇푉퐶(푇) is decreasing 푃푅 푊 푊 on (0, 푇∗] , increasing on [푇∗, ), decreasing on [ , 푇∗] , Equations (10), (14) and (23) imply that 푇푉퐶(푇) is 1 1 푃푅 푃푅 2 푊 푊 푀 푀 푀 decreasing on (0 ] , increasing on [ , 0 ] and increasing on [푇∗, 0 ] and decreasing on [ 0 , ∞) . 푃푅 푃푅 1−훼푃푅 2 1−훼푃푅 1−훼푃푅 푀0 ∗ 푊 ∗ ∗ ∗ decreasing on[ , ∞). Consequently 푇 = or ∞ Consequently 푇 = 푇2 , 푇1 or ∞ (associated with the least 1−훼푃푅 푃푅 ∗ ∗ ∗ 푊 cost) T푉퐶(푇 ) =min[푇푉퐶 ( 푇 ),푇푉퐶 ( 푇 ), −퐶퐼 푃푅푀 ] (associated with the least cost) 푇푉퐶(푇∗) = min [푇푉퐶 ( ), 1 1 2 2 푝 0 2 푃푅 −퐶퐼푝푃푅푀0]. (f) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0, 퐻(2푃 − 1) − ∗ 푊 ∗ 푊 2푆퐼푒훼푃푅 + 푆퐼푒 >0, 푇1 < and 푇2 < then Equations (iii) (a)If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0, 퐻(2푃 − 푃푅 푃푅 ∗ 푊 (14),(17) and (23) imply that 푇푉퐶(푇) is decreasing on (0, 1) − 2푆퐼 훼푃푅 + 푆퐼푒 ≤ 0 and 푇 ≥ then Equations 푊 푊 푀 푒 1 푃푅 푇∗], increasing on [푇∗, ), increasing on [ , 0 ) and 1 1 푃푅 푃푅 1−훼푃푅


푀 푊 푊 푊 푀 decreasing on [ 0 , ∞). Since lim 푇푉퐶 (푇) > 푇푉퐶 ( ), (h) If 푇∗ ≥ , 푇∗ < and 푇∗ ≥ 0 then 1−훼푃푅 푊 1 2 푃푅 1 푃푅 2 푃푅 3 1−훼푃푅 푇→ 푃푅 ∗ 푊 ∗ ∗ 푊 푊 푇푉퐶(푇 )= min [푇푉퐶 ( ), 푇푉퐶 ( 푇 )] and 푇 = or so we conclude that 푇∗ = 푇∗, or ∞ (associated with the 2 푃푅 3 3 푃푅 1 푃푅 ∗ ∗ ∗ 푇3 (associated with the least cost). least cost) 푇푉퐶(푇 ) = min[ 푇푉퐶1( 푇1 ) , 푊 ∗ 푊 푊 ∗ 푀0 ∗ 푀0 푇푉퐶2( ), −퐶퐼푝푃푅푀0 ](C) Suppose that 퐻(2푃 − 1) + (i) If 푇 ≥ , ≤ 푇 ≤ and 푇 < then 푃푅 1 푃푅 푃푅 2 1−훼푃푅 3 1−훼푃푅 2 ∗ ∗ ∗ ∗ 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푇푉퐶(푇 )= 푇푉퐶2(푇2 ) and 푇 = 푇2 . 푆퐼푒> 0 and 푊 푊 푀 푀 (j) If 푇∗ ≥ , ≤ 푇∗≤ 0 and 푇∗ ≥ 0 then 1 푃푅 푃푅 2 1−훼푃푅 3 1−훼푃푅 (i) if 퐻(2푃 − 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 ≤ 0 then ∗ ∗ ∗ ∗ ∗ 푇푉퐶(푇 )= min [푇푉퐶2(푇2 ), 푇푉퐶3(푇3 )] and 푇 = 푇2 or 푊 푀 푇∗ (associated with the least cost). (a) If 푇∗ < and 푇∗ ≥ 0 then 푇푉퐶(푇∗) = min 3 1 푃푅 3 1−훼푃푅 ∗ ∗ ∗ ∗ ∗ ∗ 푊 ∗ 푀0 ∗ 푀0 [푇푉퐶1( 푇1 ), 푇푉퐶3( 푇3 )] and 푇 = 푇1 or 푇3 (associated (k) If 푇 ≥ , 푇 > and 푇 < then 1 푃푅 2 1−훼푃푅 3 1−훼푃푅 with the least cost). 푀 푀 푇푉퐶(푇∗)=푇푉퐶 ( 0 ) and 푇∗ = 0 . 2 1−훼푃푅 1−훼푃푅 ∗ 푊 ∗ 푀0 ∗ (b) If 푇1 < and 푇3 < then 푇푉퐶(푇 )= min 푃푅 1−훼푃푅 ∗ 푊 ∗ 푀0 ∗ 푀0 푀 푀 (l) If 푇 ≥ , 푇 > and 푇 ≥ then ∗ 0 ∗ ∗ 0 1 푃푅 2 1−훼푃푅 3 1−훼푃푅 [ 푇푉퐶1( 푇 ) , 푇푉퐶3( ) ] and 푇 = 푇 or 1 1−훼푃푅 1 1−훼푃푅 푇푉퐶(푇∗)= 푇푉퐶 (푇∗) and 푇∗ = 푇∗. (associated with the least cost). 3 3 3

푊 푀 (c) If 푇∗ ≥ and 푇∗ ≥ 0 then 푇푉퐶(푇∗) = Proof: (C) 1 푃푅 3 1−훼푃푅 ∗ ∗ ∗ 2 푇푉퐶3( 푇3 ) and 푇 = 푇3 . (i)(a) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒> 0, 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0, 퐻(2푃 − 1) − ∗ 푊 ∗ 푀0 ∗ (d) If 푇1 ≥ and 푇3 < then 푇푉퐶(푇 ) = ∗ 푊 ∗ 푀0 푃푅 1−훼푃푅 2푆퐼푒훼푃푅 + 푆퐼푒 ≤ 0 and 푇1 < and 푇3 ≥ then 푀 푀 푃푅 1−훼푃푅 푇푉퐶 ( 0 ) and 푇∗ = 0 . 3 1−훼푃푅 1−훼푃푅 Equations (12) and (17) imply that 푇푉퐶(푇) is decreasing 푊 푊 on (0, 푇∗], increasing on [푇∗, ), decreasing on [ , 푇∗], 1 1 푃푅 푃푅 3 (ii) if 퐻(2푃 − 1) − 2푆퐼푒훼푃푅 + 푆퐼푒 > 0 then ∗ ∗ and increasing on [ 푇3 ,∞). Hence 푇푉퐶(푇 ) = min 푊 푊 푀 ∗ ∗ ∗ ∗ ∗ (a) If 푇∗ < , 푇∗ < and 푇∗ < 0 then [푇푉퐶1( 푇1 ), 푇푉퐶3(푇3 )] and 푇 = 푇1 or 푇3 (associated 1 푃푅 2 푃푅 3 1−훼푃푅 푊 with the least cost). 푇푉퐶(푇∗)= min [푇푉퐶 ( 푇∗), 푇푉퐶 ( )] and 푇∗ = 푇∗ or 1 1 2 푃푅 1 푊 (b) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 2퐶퐼 훼푃2푅 − 푆퐼 > 0, (associated with the least cost). 푝 푝 푒 푃푅 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − ∗ 푊 ∗ 푊 ∗ 푀0 ∗ 푊 ∗ 푀0 (b) If 푇 < , 푇 < and 푇 ≥ then 2푆퐼푒훼푃푅 + 푆퐼푒 ≤ 0 and 푇1 < and 푇3 < then 1 푃푅 2 푃푅 3 1−훼푃푅 푃푅 1−훼푃푅 ∗ ∗ ∗ ∗ ∗ 푇푉퐶(푇 )= min [푇푉퐶1( 푇1 ), 푇푉퐶3( 푇3 )] and 푇 = 푇1 or Equations (12) , (17) and (25) imply that 푇푉퐶(푇) is ∗ ∗ ∗ 푊 푇3 (associated with the least cost). decreasing on (0, 푇 ], increasing on [푇 , ), decreasing 1 1 푃푅 푊 푀0 푀0 ∗ 푊 푊 ∗ 푀0 ∗ 푀0 on [ , ] , and increasing on [ , ∞) . Hence (c ) If 푇1 < , ≤ 푇2 ≤ and 푇3 < then 푃푅 1−훼푃푅 1−훼푃푅 푃푅 푃푅 1−훼푃푅 1−훼푃푅 푀 푇푉퐶(푇∗)= min [푇푉퐶 ( 푇∗), 푇푉퐶 (푇∗)] and 푇∗ = 푇∗ or 푇푉퐶(푇∗)= min [푇푉퐶 ( 푇∗), 푇푉퐶 ( 0 )] and 푇∗ = 푇∗ 1 1 2 2 1 1 1 3 1−훼푃푅 1 ∗ 푀 푇2 (associated with the least cost). or 0 (associated with the least cost). 1−훼푃푅 푊 푊 푀 푀 (d) If 푇∗ < , ≤ 푇∗≤ 0 and 푇∗ ≥ 0 then 1 푃푅 푃푅 2 1−훼푃푅 3 1−훼푃푅 (c) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 2퐶퐼 훼푃2푅 − 푆퐼 > 0, ∗ ∗ ∗ ∗ 푝 푝 푒 푇푉퐶(푇 )= min [푇푉퐶1( 푇1 ), 푇푉퐶2(푇2 ), 푇푉퐶3(푇3 )] and ( ) ∗ ∗ ∗ ∗ 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0, 퐻 2푃 − 1 − 푇 = 푇1 or 푇2 or 푇3 (associated with the least cost). ∗ 푊 ∗ 푀0 2푆퐼푒훼푃푅 + 푆퐼푒 ≤ 0, 푇1 ≥ and 푇3 ≥ then 푊 푀 푀 푃푅 1−훼푃푅 (e) If 푇∗ < , 푇∗ > 0 and 푇∗ < 0 then 1 푃푅 2 1−훼푃푅 3 1−훼푃푅 Equations (12) , (22) and(17) imply that 푇푉퐶(푇) is 푀 ∗ ∗ 푇푉퐶(푇∗)= min [푇푉퐶 ( 푇∗), 푇푉퐶 ( 0 )] and 푇∗ = 푇∗ decreasing on (0, 푇3 ] and increasing on [푇3 , ∞) . Hence 1 1 3 1−훼푃푅 1 ∗ ∗ ∗ ∗ 푇푉퐶(푇 )= 푇푉퐶3( 푇3 ) and 푇 = 푇3 . or 푀0 (associated with the least cost). 1−훼푃푅 2 (d) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, 푊 푀 푀 (f) If 푇∗ < , 푇∗ > 0 and 푇∗ ≥ 0 then 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 > 0 , 퐻(2푃 − 1) − 1 푃푅 2 1−훼푃푅 3 1−훼푃푅 푝 푒 ∗ ∗ ∗ ∗ ∗ ∗ 푊 ∗ 푀0 푇푉퐶(푇 )= min [푇푉퐶1( 푇1 ), 푇푉퐶3(푇3 )] and 푇 = 푇1 or 2푆퐼푒훼푃푅 + 푆퐼푒 ≤ 0, 푇1 ≥ and 푇3 < then ∗ 푃푅 1−훼푃푅 푇3 (associated with the least cost). Equations (12) , (22) and(25) imply that 푇푉퐶(푇) is 푀0 푀0 ∗ 푊 ∗ 푊 ∗ 푀0 ∗ decreasing on (0, ] and increasing on [ , ∞). (g) If 푇 ≥ , 푇 < and 푇 < then 푇푉퐶(푇 )= 1−훼푃푅 1−훼푃푅 1 푃푅 2 푃푅 3 1−훼푃푅 ∗ 푀0 ∗ 푀0 푊 ∗ 푊 Hence 푇푉퐶(푇 )= 푇푉퐶 ( ) and 푇 = . 푇푉퐶 ( ) and 푇 = . 3 1−훼푃푅 1−훼푃푅 2 푃푅 푃푅


2 (ii)(a) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒> 0, 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − ∗ 푊 ∗ 푀0 ∗ 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇 < , 푇 > and 푇 ≥ 푒 1 푃푅 2 1−훼푃푅 3 ∗ 푊 ∗ 푊 ∗ 푀0 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇 < , 푇 < and 푇 < 푀0 푒 1 푃푅 2 푃푅 3 1−훼푃푅 then Equations (17)and (24) imply that 푇푉퐶(푇) is 1−훼푃푅 then Equations (17),(23) and (25) imply that 푇푉퐶(푇) is ∗ ∗ 푊 ∗ ∗ decreasing on (0, 푇1 ] , increasing on [푇1 , ) , decreasing decreasing on (0, 푇1 ] and increasing on [푇1 , ∞) . Since 푃푅 푊 푊 ∗ ∗ ∗ lim 푇푉퐶 (푇) > 푇푉퐶 ( ) , so we conclude that then on [ , 푇3 ] and increasing on [푇3 , ∞). Hence 푇푉퐶(푇 )= 푊 1 2 푃푅 푃푅 푇→ ∗ ∗ ∗ ∗ ∗ 푃푅 min [ 푇푉퐶1( 푇1 ) , 푇푉퐶3(푇3 ) ] and 푇 = 푇1 or 푇3 푊 T푉퐶(푇∗)= min [푇푉퐶 ( 푇∗), 푇푉퐶 ( )] and 푇∗ = 푇∗ or (associated with the least cost). 1 1 2 푃푅 1 푊 (associated with the least cost). 2 푃푅 (g) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 > 0, 퐻(2푃 − 1) − ( ) 2 푝 푒 (b) If 퐻 2푃 − 1 + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, ∗ 푊 ∗ 푊 ∗ 2푆퐼푒훼푃푅 + 푆퐼푒 > 0 and 푇1 ≥ , 푇2 < and 푇3 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − 푃푅 푃푅 푀0 ∗ 푊 ∗ 푊 ∗ < then Equations (17), (23) and (25) imply that 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇 < , 푇 < and 푇 ≥ 1−훼푃푅 푒 1 푃푅 2 푃푅 3 푀 푊 푊 0 then Equations (17) and (23) imply that 푇푉퐶(푇) is 푇푉퐶(푇) is decreasing on (0, ) and increasing on [ , ∞). 1−훼푃푅 푃푅 푃푅 푀 ∗ 푊 ∗ 푊 decreasing on (0, 푇∗ ] , increasing on [ 푇∗ , 0 ], Hence 푇푉퐶(푇 )= 푇푉퐶2( ) and 푇 = . 1 1 1−훼푃푅 푃푅 푃푅 푀 decreasing on [ 0 , 푇∗ ] and increasing on [푇∗ , ∞). 2 1−훼푃푅 3 3 (h) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, ∗ ∗ ∗ ∗ Hence 푇푉퐶(푇 )= min [푇푉퐶1( 푇1 ), 푇푉퐶3( 푇3 )] and 푇 = 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − ∗ ∗ 푊 푊 푇1 or 푇3 (associated with the least cost). 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇∗ ≥ , 푇∗ < and 푇∗ ≥ 푒 1 푃푅 2 푃푅 3 2 푀0 (c) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, then Equations (22), (23)and (17) imply that 1−훼푃푅 ( ) 푊 푊 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻 2푃 − 1 − 푇푉퐶(푇) is decreasing on (0, ) , increasing on [ , ∗ 푊 푊 ∗ 푀0 푃푅 푃푅 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇 < , ≤ 푇 ≤ and 푀 푀 푒 1 푃푅 푃푅 2 1−훼푃푅 0 0 ∗ ∗ ], decreasing on[ , 푇3 ] and increasing on [푇3 , ∗ 푀0 1−훼푃푅 1−훼푃푅 푇 < then Equations (17) and (25) imply that 푊 3 1−훼푃푅 ∗ ∗ ∞) . Hence 푇푉퐶(푇 )= min [푇푉퐶2( ), 푇푉퐶3( 푇3 )] and ∗ ∗ 푊 푃푅 푇푉퐶(푇) is decreasing on (0, 푇 ] , increasing on [푇 , ) , 푊 1 1 푃푅 푇∗ = or 푇∗ (associated with the least cost). 푊 3 decreasing on [ , 푇∗] and increasing on [푇∗, ∞). Hence 푃푅 푃푅 2 2 ∗ ∗ ∗ ∗ ∗ (i) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 2퐶퐼 훼푃2푅 − 푆퐼 > 0, 푇푉퐶(푇 )= min [푇푉퐶1( 푇1 ), 푇푉퐶2(푇2 )] and 푇 = 푇1 or 푝 푝 푒 ∗ 푇2 (associated with the least cost). 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − 푊 푊 푀 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇∗ ≥ , ≤ 푇∗≤ 0 and (d) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 2퐶퐼 훼푃2푅 − 푆퐼 > 0, 푒 1 푃푅 푃푅 2 1−훼푃푅 푝 푝 푒 푀 푇∗ < 0 then Equations (22), (17) and (25) imply that 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − 3 1−훼푃푅 ∗ 푊 푊 ∗ 푀0 푇푉퐶(푇) is decreasing on (0, 푇∗ ] and increasing on [푇∗, 2푆퐼푒훼푃푅 + 푆퐼푒 > 0 and 푇1 < , ≤ 푇2 ≤ and 2 2 푃푅 푃푅 1−훼푃푅 ∗ ∗ ∗ ∗ 푀 ∞). Hence 푇푉퐶(푇 )= 푇푉퐶 (푇 ) and 푇 = 푇 . 푇∗ ≥ 0 then Equation (17) implies that 푇푉퐶(푇) is 2 2 2 3 1−훼푃푅 2 ∗ ∗ 푊 (j) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, decreasing on (0, 푇1 ] , increasing on [푇1 , ) , decreasing 푃푅 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 > 0 , 퐻(2푃 − 1) − 푊 ∗ ∗ 푀0 푝 푒 on [ , 푇 ] , increasing on [푇 , ] , decreasing on 푊 푊 푀 푃푅 2 2 1−훼푃푅 ∗ ∗ 0 2푆퐼푒훼푃푅 + 푆퐼푒 > 0 and 푇1 ≥ , ≤ 푇2 ≤ and 푀0 ∗ ∗ ∗ 푃푅 푃푅 1−훼푃푅 [ , 푇 ] and increasing on [푇 , ∞). 
Hence 푇푉퐶(푇 )= 푀 1−훼푃푅 3 3 푇∗ ≥ 0 then Equations (22)and (17) imply that min [푇푉퐶 ( 푇∗), 푇푉퐶 (푇∗), 푇푉퐶 (푇∗)] and 푇∗ = 푇∗ or 3 1−훼푃푅 1 1 2 2 3 3 1 푇푉퐶(푇) is decreasing on (0, 푇∗ ] , increasing on [푇∗ , 푇∗ or 푇∗ (associated with the least cost). 2 2 2 3 푀0 푀0 ∗ ∗ ] , decreasing on [ , 푇3 ] and increasing on [푇3 , 2 1−훼푃푅 1−훼푃푅 (e) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, ∗ ∗ ∗ ∗ ∞) . Hence 푇푉퐶(푇 )= min [푇푉퐶2(푇2 ), 푇푉퐶3(푇3 )] and 푇 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 > 0 , 퐻(2푃 − 1) − ∗ ∗ 푝 푒 = 푇2 or 푇3 (associated with the least cost). 푊 푀 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇∗ < , 푇∗> 0 and 푇∗ < 푒 1 푃푅 2 1−훼푃푅 3 2 (k) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, 푀0 then Equations (17), (24) and (25) imply that 1−훼푃푅 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 , 퐻(2푃 − 1) − 푊 ∗ ∗ ∗ 푊 ∗ 푀0 ∗ 푇푉퐶(푇) is decreasing on (0, 푇1 ] , increasing on [푇1 , ) , 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇 ≥ , 푇 > and 푇 < 푃푅 푒 1 푃푅 2 1−훼푃푅 3 푊 푀 푀 decreasing on [ , 0 ] and increasing on [ 0 , ∞) . 푀0 then Equations (22), (24)and (25) imply that 푃푅 1−훼푃푅 1−훼푃푅 1−훼푃푅 ∗ ∗ 푀0 ∗ 푀0 Hence 푇푉퐶(푇 )= min [푇푉퐶1( 푇1 ), 푇푉퐶3( )] and 푇 푇푉퐶(푇) is decreasing on (0, ] and increasing on 1−훼푃푅 1−훼푃푅 푀 ∗ 0 푀0 ∗ 푀0 ∗ = 푇1 or (associated with the least cost). [ , ∞). Hence 푇푉퐶(푇 )=푇푉퐶 ( ) and 푇 = 1−훼푃푅 1−훼푃푅 2 1−훼푃푅 푀0 2 . (f) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, 1−훼푃푅


2 (l) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 > 0, Equation (14) and (26) imply that 푇푉퐶(푇) is decreasing 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 > 0 , 퐻(2푃 − 1) − on (0, ∞). Again lim 푇푉퐶(푇) =∞ and lim 푇푉퐶(푇) = 푝 푒 푇→0+ 푇→∞ 푊 푀 2푆퐼 훼푃푅 + 푆퐼푒 > 0 and 푇∗ ≥ , 푇∗> 0 and 푇∗ ≥ −퐶퐼 푃푅푀 . Consequently 푇∗ = ∞ and 푇푉퐶(푇∗) 푒 1 푃푅 2 1−훼푃푅 3 푝 0 푀0 then Equations (22), (24)and (17) imply that =−퐶퐼푝푃푅푀0. 1−훼푃푅 ∗ ∗ 2 푇푉퐶(푇) is decreasing on (0, 푇3 ] and increasing on [푇3 , (b) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 = 0, ∞) . Hence 푇푉퐶(푇∗)= 푇푉퐶 (푇∗) and 푇∗ = 푇∗ . 푊 3 3 3 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 = 0 and 푇∗ < then 푝 푒 1 푃푅 Equation (14) and (17) imply that 푇푉퐶(푇) is decreasing 푊 on (0, 푇∗] and increasing on [푇∗, ) and decreasing on 5. Decision Rule of the Optimal Cycle 1 1 푃푅 푊 푾 [ , ∞) . Consequently 푇∗ = 푇∗ or ∞ (linked with the 푃푅 1 Time When > 푴 = 푴ퟎ + ∗ ∗ 푷푹 smallest cost) and 푇푉퐶(푇 ) = min [ 푇푉퐶1( 푇1 ) , 휶푷푹푻. −퐶퐼푝푃푅푀0].

In this case Equations (10) and (14) yield (C) Suppose that 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼 훼푃2푅 − 푆퐼 > 0 then clearly 퐻(2푃 − 1) + 푊 푊 푝 푒 푇∗ ≥ implies 푇푉퐶′( ) ≤ 0 and hence 푇푉퐶 (푇) is 1 푃푅 1 푃푅 1 2퐶퐼푝푃 − 푆퐼푒 > 0 and 푊 decreasing on (0, ) (26) 푊 푊 푃푅 (i) If 푇∗ < and 푇∗ ≥ then 푇∗ =푇∗ or푇∗ (linked 1 푃푅 3 푃푅 1 3 ∗ 푊 ′ 푊 ∗ ∗ 푇3 < implies 푇푉퐶3 ( ) > 0 and hence 푇푉퐶3(푇) is with the smallest cost) and 푇푉퐶(푇 ) = min [푇푉퐶1( 푇1 ), 푃푅 푃푅 ∗ 푇푉퐶3( 푇3 )]. increasing on [ 푊 , ∞) (27) 푃푅 푊 푊 (ii) If 푇∗ ≥ and 푇∗ ≥ then 푇∗ = 푇∗ and Furthermore, the result follows. 1 푃푅 3 푃푅 3 ∗ ∗ 푇푉퐶(푇 ) =푇푉퐶3( 푇3 ). 푊 푊 Theorem 3. (iii) If 푇∗ < and 푇∗ < then 푇∗ = 푇∗ and 1 푃푅 3 푃푅 1 푇푉퐶(푇∗) = 푇푉퐶 ( 푇∗). (A) Suppose that 퐻(2푃 − 1) + 2퐶퐼푝푃 − 1 1 2 ∗ ∗ 2퐶퐼 훼푃 푅 − 푆퐼 < 0 then 푇 = ∞ and 푇푉퐶(푇 ) = -∞ ie., 푊 푊 푊 푝 푒 (iv) If 푇∗ ≥ and 푇∗ < then 푇∗ = and the retailer will try to continue his cycle as much as 1 푃푅 3 푃푅 푃푅 푊 푇푉퐶(푇∗) = 푇푉퐶 ( ). possible. 3 푃푅 Proof: See Theorem 2-(A) Proof: (B) Suppose that 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푊 푊 2 (i) If 푇∗ < and 푇∗ ≥ then Equation (17) implies 2퐶퐼푝훼푃 푅 − 푆퐼푒 = 0 then 1 푃푅 3 푃푅 ∗ ∗ that 푇푉퐶(푇) is decreasing on (0, 푇1 ] , increasing on [푇1 , (i) If 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 = 0 then 푇∗ = ∞ and 푊 푊 푝 푒 ) , decreasing on [ , 푇∗]and increasing on [푇∗, ∞). ∗ 푃푅 푃푅 3 3 푇푉퐶(푇 ) =−퐶퐼푝푃푅푀0. ∗ ∗ ∗ Consequently 푇 = 푇1 or 푇3 (linked with the smallest cost) and 푇푉퐶(푇∗) = min [푇푉퐶 ( 푇∗), 푇푉퐶 ( 푇∗)]. (ii) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 > 0 then 1 1 3 3 ∗ 푊 ∗ 푊 ∗ 푊 ∗ ∗ (ii) If 푇1 ≥ and 푇3 ≥ then Equations (26) and (a) If 푇1 ≥ then 푇 = ∞ and 푇푉퐶(푇 ) 푃푅 푃푅 푃푅 ∗ (17) imply that 푇푉퐶(푇) is decreasing on (0, 푇3 ] and =−퐶퐼푝푃푅푀0. ∗ ∗ ∗ ∗ increasing on [ 푇3 , ∞). So 푇 = 푇3 and 푇푉퐶(푇 ) ∗ 푊 =푇푉퐶 ( 푇 ). (b) If 푇∗ < then 푇∗ = 푇∗ or ∞ and 푇푉퐶(푇∗) = 3 3 1 푃푅 1 ∗ ∗ 푊 ∗ 푊 min [푇푉퐶1( 푇1 ), −퐶퐼푝푃푅푀0]. (iii) If 푇 < and 푇 < then Equations (17) and 1 푃푅 3 푃푅 ∗ 2 (25) imply that 푇푉퐶(푇) is decreasing on (0, 푇1 ] and Proof: (i) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 ∗ ∗ ∗ ∗ increasing on [ 푇1 , ∞) . So 푇 = 푇1 and 푇푉퐶(푇 ) = = 0 and 퐻(2푃 − 1) + 2퐶퐼푝푃 − 푆퐼푒 = 0 then Equation ∗ 푇푉퐶1( 푇1 ). (10) and (14) imply that 푇푉퐶(푇) is decreasing on (0, ∞). ∗ 푊 ∗ 푊 Again lim 푇푉퐶(푇)=∞ and lim 푇푉퐶(푇) = −퐶퐼푝푃푅푀0 . (iv) If 푇 ≥ and 푇 < then Equations (26) and 푇→0+ 푇→∞ 1 푃푅 3 푃푅 ∗ ∗ 푊 Consequently 푇 = ∞ and 푇푉퐶(푇 ) =−퐶퐼푝푃푅푀0. (17) imply that 푇푉퐶(푇) is decreasing on (0, ) and 푃푅 푊 푊 2 increasing on [ , ∞) . So 푇∗ = and 푇푉퐶(푇∗) = (ii)(a) If 퐻(2푃 − 1) + 2퐶퐼푝푃 − 2퐶퐼푝훼푃 푅 − 푆퐼푒 = 0, 푃푅 푃푅 푊 푊 퐻(2푃 − 1) + 2퐶퐼 푃 − 푆퐼 = 0 and 푇∗ ≥ then 푇푉퐶 ( ). 푝 푒 1 푃푅 3 푃푅


6. Algorithm

Step 1. If αPR ≥ 1, go to Step 5.
Step 2. Find T* from Theorem 2.
Step 3. If W/(PR) ≤ M₀ + αPRT*, then T₀* = T*.
Step 4. Go to Step 7.
Step 5. Find T* from Theorem 1.
Step 6. If W/(PR) ≤ M₀ + αPRT*, then T₀* = T*.
Step 7. Find T* from Theorem 3.
Step 8. If W/(PR) > M₀ + αPRT*, then T₀₀* = T*.
Step 9. If only T₀* exists and T₀₀* does not exist, then T₀* is the optimal cycle time.
Step 10. If only T₀₀* exists and T₀* does not exist, then T₀₀* is the optimal cycle time.
Step 11. If both T₀* and T₀₀* exist, then calculate TVC(T₀*) and TVC(T₀₀*).
Step 12. If TVC(T₀*) ≥ TVC(T₀₀*), then the optimal cycle time is T₀₀*; otherwise T₀* is the optimal cycle time.

7. Numerical Example

Let us study the inventory structure with the following parameters in suitable units.

(i) Let the probability density of the demand x kg of the item throughout the period T month be uniform on a(T) = 10T ≤ x ≤ b(T) = 60T, i.e., f(x|T) = 1/(b(T) − a(T)) for a(T) ≤ x ≤ b(T), and 0 otherwise. Therefore we get µ(T) = 35T, P = 1.7, R = µ(T)/T = 35. The other parameters are A = $50 per cycle, H = $0.5 per kg per month, C = $10 per kg, S = $12 per kg, α = 0.5, M₀ = 2 months, Iₑ = $0.025 per $ per month, Iₚ = $0.05 per $ per month, W = 30 (here R ≥ 1). Using Theorem 1, we get T* = ∞, TVC(T*) = −∞, and W/(PR) ≤ M₀ + αPRT* is satisfied. Again, using Theorem 3, we get T* = ∞, TVC(T*) = −∞, and W/(PR) > M₀ + αPRT* is not satisfied. Hence the optimal cycle time is infinity and the optimal cost is minus infinity (i.e., the retailer will try to continue the production cycle as long as possible).

(ii) Let the probability density of the demand x kg of the item throughout the period T month be uniform on a(T) = 0 ≤ x ≤ b(T) = 20T, i.e., f(x|T) = 1/(b(T) − a(T)) for a(T) ≤ x ≤ b(T), and 0 otherwise. Therefore we get µ(T) = 10T, P = 2, R = µ(T)/T = 10. The other parameters are A = $50 per cycle, H = $0.5 per kg per month, C = $10 per kg, S = $12 per kg, α = 0.5, M₀ = 2 months, Iₑ = $0.005 per $ per month, Iₚ = $0.05 per $ per month, W = 30 (here R ≥ 1). Using Theorem 1, we get T* = 5.2705, TVC(T*) = 17.77, and W/(PR) ≤ M₀ + αPRT* is satisfied. Again, using Theorem 3, we get T* = ∞, TVC(T*) = −∞, and W/(PR) > M₀ + αPRT* is not satisfied. Hence the optimal cycle time is 5.2705 months and the optimal cost is $17.77.

(iii) Let the probability density of the demand x kg of the item throughout the period T month be normal with mean 9T and standard deviation 3T in a(T) = 0 ≤ x ≤ b(T) = 18T, i.e., f(x|T) = (1/(√(2π) σ(T))) exp(−(x − µ(T))²/(2σ(T)²)) for a(T) ≤ x ≤ b(T), and 0 otherwise. Therefore we get µ(T) = 9T, P = 2, R = µ(T)/T = 9. The other parameters are A = $50 per cycle, H = $0.5 per kg per month, C = $10 per kg, S = $12 per kg, α = 0.5, M₀ = 2 months, Iₑ = $0.005 per $ per month, Iₚ = $0.05 per $ per month, W = 30 (here R < 1). Using Theorem 2, we get T* = 2.9695, TVC(T*) = 28.6548, and W/(PR) ≤ M₀ + αPRT* is satisfied. Again, using Theorem 3, we get T* = 2.8172, TVC(T*) = 17.4965, and W/(PR) > M₀ + αPRT* is not satisfied. Hence the optimal cycle time is 2.9695 months and the optimal cost is $28.6548.

8. Sensitivity Analysis

We now study two instances and discuss the sensitivity of all the parameters in each case.

(I) In the first problem, let the probability density of the demand x kg of the item throughout the period T month be uniform on a(T) = 0 ≤ x ≤ b(T) = 18T, i.e., f(x|T) = 1/(b(T) − a(T)) for a(T) ≤ x ≤ b(T), and 0 otherwise. Therefore we get µ(T) = 9T, P = 2, R = µ(T)/T = 9. The other parameters are A = $50 per cycle, H = $0.6 per kg per month, C = $10 per kg, S = $13 per kg, α = 0.4, M₀ = 2 months, Iₑ = $0.004 per $ per month, Iₚ = $0.05 per $ per month, W = 30. Solving the problem, we obtain an optimal cycle time of 3.1735 months and an optimal cost of $30.5739.

(II) In the second problem, let the probability density of the demand x kg of the item throughout the period T month be normal with mean 10T and standard deviation 0.85T in a(T) = 7.5T ≤ x ≤ b(T) = 12.5T, i.e.,


f(x|T) = (1/(√(2π) σ(T))) exp(−(x − µ(T))²/(2σ(T)²)) for a(T) ≤ x ≤ b(T), and 0 otherwise. Therefore we get µ(T) = 10T, P = 1.25, R = µ(T)/T = 10. The other parameters are A = $50 per cycle, H = $1.5 per kg per month, C = $10 per kg, S = $13 per kg, α = 0.05, M₀ = 2 months, Iₑ = $0.025 per $ per month, Iₚ = $0.05 per $ per month, W = 35. The optimal cycle time is 1.7747 months and the optimal cost is $56.3471.
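Before turning to the sensitivity results in Table 1, the decision procedure of Section 6 can be summarised in code. The sketch below is illustrative only: the helpers solve_theorem1, solve_theorem2 and solve_theorem3 are hypothetical stand-ins for the case analysis of Theorems 1-3 (each returning a candidate cycle time, possibly infinite or None, and its total cost); only the branching of Steps 1-12 is shown.

```python
# Sketch of the Section 6 decision procedure (Steps 1-12).  The three "solve_theorem*"
# callables are hypothetical: each returns (T_candidate, cost) or (None, None).

def optimal_cycle_time(alpha, P, R, W, M0, solve_theorem1, solve_theorem2, solve_theorem3):
    # Steps 1-6: candidate from Theorem 1 (alpha*P*R >= 1) or Theorem 2 (alpha*P*R < 1),
    # kept only when W/(P*R) <= M0 + alpha*P*R*T holds at the candidate.
    solver = solve_theorem1 if alpha * P * R >= 1 else solve_theorem2
    T, cost = solver()
    T0 = (T, cost) if T is not None and W / (P * R) <= M0 + alpha * P * R * T else None

    # Steps 7-8: candidate from Theorem 3, kept only when W/(P*R) > M0 + alpha*P*R*T.
    T, cost = solve_theorem3()
    T00 = (T, cost) if T is not None and W / (P * R) > M0 + alpha * P * R * T else None

    # Steps 9-12: choose whichever admissible candidate gives the smaller total cost.
    if T0 is None:
        return T00
    if T00 is None:
        return T0
    return T00 if T0[1] >= T00[1] else T0
```

The same branching reproduces the choices made in examples (i)-(iii) above once the theorem-specific candidates are supplied.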

Table 1. Sensitivity analysis of different parameters

Parameter   Change   Cycle time (I)      Total cost (I)      Cycle time (II)     Total cost (II)

A           -20%     2.8385 (-10.6%)     27.2474 (-10.9%)    1.5873 (-10.6%)     50.3984 (-10.6%)
A           -10%     3.0107 (-05.1%)     28.9570 (-05.3%)    1.6836 (-05.1%)     53.4555 (-05.1%)
A           +10%     3.3284 (+04.9%)     32.1119 (+05.0%)    1.8613 (+04.8%)     59.0973 (+04.9%)
A           +20%     3.4765 (+09.5%)     33.5814 (+09.8%)    1.9441 (+09.5%)     61.7251 (+09.5%)

H           -20%     3.8665 (+20.0%)     24.9267 (-18.5%)    1.9157 (+07.9%)     52.2015 (-07.4%)
H           -10%     3.4692 (+09.3%)     27.8889 (-08.8%)    1.8411 (+03.7%)     54.3139 (-03.6%)
H           +10%     2.9426 (-07.3%)     33.0475 (+08.1%)    1.7149 (-03.4%)     58.3095 (+03.5%)
H           +20%     2.7556 (-13.2%)     35.3528 (+15.6%)    1.6609 (-06.4%)     60.2079 (+06.9%)

R           -20%     3.3293 (+04.9%)     29.2866 (-04.2%)    1.9841 (+11.8%)     50.3984 (-10.6%)
R           -10%     3.2372 (+02.0%)     30.0484 (-01.7%)    1.8707 (+05.4%)     53.4555 (-05.1%)
R           +10%     3.1341 (-01.2%)     30.8777 (+01.0%)    1.6927 (-04.6%)     59.0973 (+04.9%)
R           +20%     3.1163 (-01.8%)     30.9659 (+01.3%)    1.6200 (-08.7%)     61.7251 (+09.5%)

α           -20%     2.9779 (-06.2%)     32.6447 (+06.8%)    1.7747 (+00.0%)     56.3471 (+00.0%)
α           -10%     3.0710 (-03.2%)     31.6258 (+03.8%)    1.7747 (+00.0%)     56.3471 (+00.0%)
α           +10%     3.2871 (+03.6%)     29.4858 (-03.5%)    1.7747 (+00.0%)     56.3471 (+00.0%)
α           +20%     3.4137 (+07.6%)     28.3572 (-07.3%)    1.7747 (+00.0%)     56.3471 (+00.0%)

P           -20%     3.7914 (+19.5%)     25.4394 (-16.8%)    2.1442 (+20.9%)     46.6368 (-17.2%)
P           -10%     3.4415 (+08.4%)     28.1203 (-08.0%)    1.9334 (+08.9%)     51.7204 (-08.2%)
P           +10%     2.9598 (-06.7%)     32.8499 (+07.4%)    1.6495 (-07.1%)     60.6217 (+07.6%)
P           +20%     2.7841 (-12.3%)     34.9819 (+14.4%)    1.5476 (-12.8%)     64.6142 (+14.7%)

Iₑ          -20%     2.9903 (-05.8%)     32.6922 (+06.9%)    1.7568 (-01.8%)     56.9201 (+01.0%)
Iₑ          -10%     3.0778 (-03.8%)     31.6474 (+03.5%)    1.7656 (-00.5%)     56.6347 (+00.5%)
Iₑ          +10%     3.2788 (+03.3%)     29.4690 (-03.6%)    1.7838 (+00.5%)     56.0580 (-00.5%)
Iₑ          +20%     3.3952 (+06.9%)     28.3294 (-07.3%)    1.7931 (+01.0%)     55.7673 (-01.0%)

Iₚ          -20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.8490 (+04.2%)     54.0833 (-04.0%)
Iₚ          -10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.8107 (+02.0%)     55.2268 (-02.0%)
Iₚ          +10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7407 (-01.9%)     57.4456 (+01.9%)
Iₚ          +20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7087 (-03.7%)     58.5234 (+03.9%)

C           -20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.8490 (+04.2%)     54.0832 (-04.0%)
C           -10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.8107 (+02.0%)     55.2268 (-02.0%)
C           +10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7407 (-01.9%)     57.4456 (+01.9%)
C           +20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7087 (-03.7%)     58.5235 (+03.9%)

S           -20%     2.9903 (-05.8%)     32.6922 (+06.9%)    1.7568 (-01.8%)     56.9209 (+01.0%)
S           -10%     3.0078 (-03.0%)     31.6474 (+03.5%)    1.7656 (-00.5%)     56.6347 (+00.5%)
S           +10%     3.2788 (+03.3%)     29.4690 (-03.6%)    1.7838 (+00.5%)     56.0585 (-00.5%)
S           +20%     3.3952 (+06.9%)     28.3294 (-07.3%)    1.7931 (+01.0%)     55.7673 (-01.0%)


Table 1 continued

Parameter   Change   Cycle time (I)      Total cost (I)      Cycle time (II)     Total cost (II)

W           -100%    3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
W           -50%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
W           -20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
W           -10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
W           +10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
W           +20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
W           +50%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
W           +100%    1.7218 (-45.0%)     58.0792 (+89.9%)    1.7747 (+00.0%)     56.3471 (+00.0%)

M₀          -75%     3.1735 (+00.0%)     30.5739 (+00.0%)    2.0439 (+15.2%)     45.8009 (-18.7%)
M₀          -50%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
M₀          -20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
M₀          -10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
M₀          +10%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
M₀          +20%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
M₀          +50%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
M₀          +75%     3.1735 (+00.0%)     30.5739 (+00.0%)    1.7747 (+00.0%)     56.3471 (+00.0%)
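The perturbation scheme behind Table 1 is simple to automate. The sketch below assumes a hypothetical function solve(params), standing in for the optimisation procedure of Sections 4-6, that returns the optimal cycle time and total cost for a given parameter set; the percentage changes are those used in Table 1.

```python
# Sketch of the one-at-a-time sensitivity analysis reported in Table 1.  Each
# parameter is perturbed by a fixed percentage while the others stay at their base
# values, and the relative change of the optimal cycle time and total cost is recorded.
# `solve` is a hypothetical stand-in for the model's optimisation routine.

def sensitivity_table(base_params, solve, changes=(-0.20, -0.10, 0.10, 0.20)):
    T_base, cost_base = solve(base_params)
    rows = []
    for name in base_params:
        for delta in changes:
            perturbed = dict(base_params)
            perturbed[name] = base_params[name] * (1 + delta)
            T, cost = solve(perturbed)
            rows.append((name, f"{delta:+.0%}",
                         T, (T - T_base) / T_base,
                         cost, (cost - cost_base) / cost_base))
    return rows
```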

Table 1 represents the sensitivity of the decision variable (cycle time) and of the total cost to changes in each of the 11 parameters in both problems. We observe that cycle time and total cost are moderately sensitive to changes in the parameters A, H, R, P, Iₑ and S; that is, even a small change in the values of those parameters produces a significant change in the decision variable and the total cost. We also note that in problem (I) a change in C or Iₚ does not change the values of cycle time and total cost, whereas in problem (II) cycle time and total cost undergo significant changes when C or Iₚ changes. Again, in problem (II) a change in α does not change the values of cycle time and total cost, whereas in problem (I) cycle time and total cost change significantly when α is changed. From the sensitivity of problems (I) and (II) we conclude that the sensitivity of the parameters C, Iₚ and α depends entirely on the parameter values of the particular problem, so no general conclusion can be drawn about the sensitivity of these parameters. Cycle time and total cost are not sensitive at all to changes in W and M₀, although very large variations in their values may still produce noticeable changes. Finally, the effects of the newly defined parameters can be clearly seen from the above table: as α increases, the cycle time increases and the total cost decreases (not strictly in every case). This indicates just how significant the variable trade credit is for the optimal solution.

9. Special Case

When P = 1 and α = 0 (so that H = h, S = s, C = c), let M = M₀, D = R and

TVC₄(T) = DTh/2 + cIₚDT − DTsIₑ/2                      (28)
TVC₅(T) = DTh/2 − DsIₑ[M − T/2]                        (29)
TVC₆(T) = DTh/2 + cIₚD(T − M) − DTsIₑ/2                (30)
T₄* = T₆* = √( 2A / (D(h + 2cIₚ − sIₑ)) )              (31)
T₅* = √( 2A / (D(h + sIₑ)) )                            (32)

Then Equations (28), (29), (30), (31) and (32) are consistent with Equations (2), (3), (4), (12) and (13) in Chung et al.'s model [6], respectively. Again, H(2P − 1) + 2CIₚP − 2CIₚαP²R − SIₑ = H(2P − 1) + 2CIₚP − SIₑ, H(2P − 1) − 2SIₑαPR + SIₑ = h + sIₑ > 0 and αPR = 0 < 1. So Theorem 1, Theorem 2 (B(i), B(iii), B(iv), C(i)) and Theorem 3 (B(ii), C(i), C(iv)) will not be required; the other theorems are consistent with Chung et al.'s [6] model. Thus Chung et al.'s [6] model is a special case of this model.
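As a quick check of this reduction, the closed-form cycle times (31) and (32) of the special case can be evaluated directly. The short sketch below only evaluates those two formulas; the symbol names follow the special case above and the numerical values passed to it are the user's choice.

```python
from math import sqrt

# Special-case optimal cycle times (31)-(32), obtained when P = 1 and alpha = 0
# (so H = h, S = s, C = c, M = M0 and D = R), matching Chung et al.'s model [6].

def special_case_cycle_times(A, D, h, c, s, Ie, Ip):
    T4 = T6 = sqrt(2 * A / (D * (h + 2 * c * Ip - s * Ie)))   # equation (31)
    T5 = sqrt(2 * A / (D * (h + s * Ie)))                     # equation (32)
    return T4, T5, T6
```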


10. Conclusions

This paper deals with a probabilistic economic order quantity inventory model under conditions of permissible delay in payments that take the order quantity into account. To reflect realistic commercial circumstances, it is supposed that the trade credit period is not only linked to the order quantity but also varies with the ordering quantity: if the order quantity Q < W, delay in payments is not allowed; otherwise, a flexible trade credit period M = M₀ + αQ is permitted. It is also supposed that the demand rate follows a probability density function. Under these conventions the model is settled. It is shown that, if W/(PR) > M, one can swiftly determine the optimal ordering quantity by using Theorem 3; otherwise, if W/(PR) ≤ M, the optimal ordering strategy can be found from Theorem 1 and Theorem 2. We develop an algorithm which helps one determine the optimal T* efficiently. Numerical examples are provided for illustration. To check the fluctuations in the decision variables for changes in different parameters, a sensitivity analysis is also carried out. Lastly, we have shown that Chung et al.'s model [6] is a special case of our model.

REFERENCES

[1] Aggarwal SP, Jaggi CK. Ordering policies of deteriorating items under permissible delay in payments. Journal of the Operational Research Society. Vol. 46, 652-662, 1995.

[2] Chang CT, Ouyang LY, Teng JT. An EOQ model for deteriorating items under supplier credits linked to ordering quantity. Applied Mathematical Modelling. Vol. 27, No. 12, 983-996, 2003.

[3] Chang HC, Ho CH, Ouyang LY. The optimal pricing and ordering policy for an integrated inventory model when trade credit linked to order quantity. Applied Mathematical Modelling. Vol. 33, 2978-2991, 2009.

[4] Chen SC, Barron LEC, Teng JT. Retailer's economic order quantity model when the supplier offers conditionally permissible delay in payments linked to order quantity. International Journal of Production Economics. Vol. 155, 284-291, 2014.

[5] Chung KJ, Goyal SK, Huang YF. The optimal inventory policies under permissible delay in payments depending on the ordering quantity. International Journal of Production Economics. Vol. 95, No. 2, 203-213, 2005.

[6] Chung KJ, Liao JJ. Lot-sizing decisions under trade credit depending on the ordering quantity. Computers and Operations Research. Vol. 31, No. 6, 909-928, 2004.

[7] Chung KJ, Liao JJ. The optimal ordering policy of the EOQ model under trade credit depending on the ordering quantity from the DCF approach. European Journal of Operational Research. Vol. 196, No. 2, 563-568, 2009.

[8] Chung KJ, Hung CH, Dye CY. An inventory model for deteriorating items with linear trend demand under the condition of permissible delay in payments. Production Planning and Control. Vol. 12, 274-282, 2001.

[9] Chung KJ, Chang SL, Yang WD. The optimal cycle time for exponential deteriorating products under trade credit financing. The Engineering Economist. Vol. 46, 232-242, 2001.

[10] De LN, Goswami A. Probabilistic EOQ model for deteriorating items under trade credit financing. International Journal of Systems Science. Vol. 40, No. 4, 335-346, 2009.

[11] Goyal SK. Economic order quantity under condition of permissible delay in payments. Journal of the Operational Research Society. Vol. 36, 335-338, 1985.

[12] Huang YF. Optimal retailer's ordering policies in the EOQ model under trade credit financing. Journal of the Operational Research Society. Vol. 54, 1011-1015, 2003.

[13] Hwang H, Shinn SW. Retailer's pricing and lot sizing policy for exponentially deteriorating products under the condition of permissible delay in payments. Computers and Operations Research. Vol. 24, 539-547, 1997.

[14] Jamal AMM, Sarker BR, Wang S. An ordering policy for deteriorating items with allowable shortages and permissible delay in payment. Journal of the Operational Research Society. Vol. 48, 826-833, 1997.

[15] Jiang W, Skouri K, Teng JT, Ouyang LY. A note on "Replenishment policies for non-instantaneous deteriorating items with price and stock sensitive demand under permissible delay in payments". International Journal of Production Economics. Vol. 155, 324-329, 2014.

[16] Li R, Teng JT, Zheng Y. Optimal credit term, order quantity and selling price for perishable product when demand depends on selling price, expiration date and credit period. Annals of Operations Research. Vol. 280, 377-405, 2019.

[17] Mahato GC. An EPQ-based model for exponentially deteriorating items under retailer partial trade credit policy in supply chain. Expert Systems with Applications. Vol. 39, No. 3, 3537-3550, 2012.

[18] Musa A, Sani B. Inventory ordering policies of delayed deteriorating items under permissible delay in payments. International Journal of Production Economics. Vol. 136, No. 1, 75-83, 2012.

[19] Pramanick P, Maity MK. A note on "Replenishment policies for non-instantaneous deteriorating items with price and stock sensitive demand under permissible delay in payments". Engineering Applications of Artificial Intelligence. Vol. 85, 194-207, 2019.

[20] Sarker BR, Jamal AMM, Wang S. Optimal payment time under permissible delay in payment for products with deterioration. Production Planning and Control. Vol. 11, 380-390, 2001.

[21] Shah NH, Shah YK. A lot size model for exponentially deteriorating inventory when delay in payments is permissible. Cahiers du CERO, Belgium. Vol. 35, 1-9, 1993.

[22] Shah NH. Probabilistic time scheduling model for exponentially decaying inventory when delay in payments is permissible. International Journal of Production Economics. Vol. 32, 77-82, 1993.

[23] Shah NH, Shah YK. A discrete-in-time probabilistic inventory model for deteriorating items under conditions of permissible delay in payments. International Journal of Systems Science. Vol. 29, 121-125, 1998.

[24] Shah VR, Patel NC, Shah DK. Economic ordering quantity when delay in payments of order and shortages are permitted. Gujarat Statistical Review. Vol. 15, No. 2, 51-56, 1998.

[25] Teng JT, Yang HL. An inventory model for increasing demand under two levels of trade credit linked to order quantity. Applied Mathematical Modelling. Vol. 37, 7624-7632, 2013.

[26] Tiwari S, Barron LEC, Shaikh AA, Choh M. Retailer's optimal ordering policy for deteriorating items under order size dependent trade credit and complete backlogging. Computers and Industrial Engineering. Vol. 139, Article 105559, 2020.

Mathematics and Statistics 8(5): 610-619, 2020 http://www.hrpub.org DOI: 10.13189/ms.2020.080516

Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion

Chatarina Enny Murwaningtyas1,2,∗, Sri Haryatmi Kartiko1, Gunardi1, Herry Pribawanto Suryawan3

1Department of Mathematics, Universitas Gadjah Mada, Yogyakarta, Indonesia 2Department of Mathematics Education, Universitas Sanata Dharma, Yogyakarta, Indonesia 3Department of Mathematics, Universitas Sanata Dharma, Yogyakarta, Indonesia

Received June 8, 2020; Revised August 10, 2020; Accepted August 25, 2020

Cite This Paper in the following Citation Styles (a): [1] Chatarina Enny Murwaningtyas, Sri Haryatmi Kartiko, Gunardi, Herry Pribawanto Suryawan , ”Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion,” Mathematics and Statistics, Vol. 8, No. 5, pp. 100-109, 2020. DOI: 10.13189/ms.2020.080516. (b): Chatarina Enny Murwaningtyas, Sri Haryatmi Kartiko, Gunardi, Herry Pribawanto Suryawan , (2020). Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion. Mathematics and Statistics, 8(5), 100-109. DOI: 10.13189/ms.2020.080516.

Copyright ©2020 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License.

Abstract  This paper deals with Indonesian option pricing using a mixed fractional Brownian motion to model the underlying stock price. There has been research on Indonesian option pricing using Brownian motion, and other work states that logarithmic returns of the Jakarta Composite Index have long-range dependence. Motivated by the fact that there is long-range dependence in the logarithmic returns of Indonesian stock prices, we use a mixed fractional Brownian motion to model the logarithmic returns of stock prices. The Indonesian option differs from other options in terms of its exercise time. The option can be exercised at maturity or at any time before maturity with profit less than ten percent of the strike price. Also, the option will be exercised automatically if the stock price hits a barrier price. Therefore, the mathematical model is unique, and we apply the method of the partial differential equation to study it. An implicit finite difference scheme has been developed to solve the partial differential equation that is used to obtain Indonesian option prices. We study the stability and convergence of the implicit finite difference scheme. We also present several examples of numerical solutions. Based on theoretical analysis and the numerical solutions, the scheme proposed in this paper is efficient and reliable.

Keywords  Indonesian Option Pricing, Mixed Fractional Brownian Motion, Finite Difference

1 Introduction

The Jakarta Stock Exchange, currently called the Indonesia Stock Exchange after merging with the Surabaya Stock Exchange, launched an option on October 6, 2004. The option traded in Indonesia is different from the usual options. An Indonesian option [1] is an American option that is given a barrier, but the Indonesian option only has a maximum gain of 10% of the strike price. The option price depends on the weighted moving average (WMA) price of the underlying stock. The WMA price is the ratio of the total value of all transactions to the total volume of the stock traded in the last 30 minutes. Calculating the Indonesian option by using the WMA price is not easy due to model complexity. If the WMA price is calculated over the last 30 minutes, then the WMA price and the stock price do not differ in terms of value. This study assumed the WMA price is equal to the stock price.

In Indonesian options, if the stock price hits the barrier value, then the option will be exercised automatically with a gain of 10% of the strike price. On the contrary, if the stock price does not hit the barrier, then the option can be exercised at any time before or at the maturity date. When the stock price does not hit the barrier, option buyers tend to wait until maturity. This is due to the fact that the barrier value is close enough to the strike price and the maximum duration of the contract is only 3 months. Therefore, we are interested in studying the pricing of Indonesian options that can be exercised at maturity or when the stock price hits the barrier.

Gunardi et al. [2] introduced pricing of Indonesian options. The pricing of Indonesian options in [2, 3, 4] used Black-

Scholes and variance gamma models. The Black-Scholes an MFBM defined in Definition A.2 (Appendix A). The stock model used geometric Brownian motion to model logarithmic price satisfies returns of stock prices. This model assumes that logarithmic ˆ ˆH returns of stock prices ware normally and independent identi- dSt = µStdt + ασStdBt + βσStdBt ,S0 > 0, cally distributed (iid). However, empirical studies have shown where St denotes a stock price at time t, t ∈ [0,T ], with an that logarithmic returns of stock prices usually exhibit proper- expected return µ and a volatility σ, Bˆt is a Brownian motion, ties of self-similarity, heavy tails, and long-range dependence ˆH Bt is an independent FBM of Hurst index H with respect to a [5, 6, 7]. Even Cajueiro [5] and Fakhriyana [7] stated that re- probability measure PˆH . turns of the Jakarta Composite Index have long-range depen- According to the fractional Girsanov theorem [21], it is dence properties. In this situation, it is suitable to model the known that there is a risk-neutral measure PH , so that if stock price using a fractional Brownian motion (FBM). ˆ ˆH H ασBt + βσBt = ασBt + βσBt − µ + r is To use a FBM in option pricing, we must define a risk- H neutral measure and the Itoˆ formula, with analog in Brownian dSt = rStdt + ασStdBt + βσStdBt ,S0 > 0. (1) motion. Hu and Øksendal [8] contributed to finding the Itoˆ Lemma 1. The stochastic differential equation (1) admits a formula that can be used in the FBM model. However, the solution determination of option prices still had an arbitrage opportu- 1 2 1 2 2H H  nity. Cheridito [9] proposed a mixed fractional Brownian mo- St =S0 exp rt− 2 (ασ) t− 2 (βσ) t +ασBt +βσBt . tion (MFBM) to reduce an arbitrage opportunity. In this paper, (2) we employ the MFBM on the Indonesian option pricing to re- In mathematical finance, the Black-Scholes equation is a duce the arbitrage opportunity. partial differential equation (PDE) which is used to determine In the stock market, there are many types of options traded. the price of an option based on the Black-Scholes model. The European and American options are standard or vanilla op- Black-Scholes type differential equation based on an MFBM is tions. European options can be exercised at maturity, whereas constructed in the following theorem. American options can be exercised at any time during the con- tract. Pricing of European options using MFBM has been stud- Theorem 2. Let V (t, S) be an option value that depends on ied in [10, 11]. Chen et al. [12] investigated numerically pric- a time t and a stock price S. Then, under an MFBM model, ing of American options under the generalization of MFBM. V (t, S) satisfies Options that have more complicated rules than vanilla options ∂V ∂V 1 ∂2V are called exotic options. Examples of exotic options are Asian 2 + rS + (ασS) 2 options, rainbow options, currency options, barrier options, ∂t ∂S 2 ∂S ∂2V and also Indonesian options. Rao [13] and Zang et al. [14] + (βσS)2Ht2H−1 − rV = 0. (3) discussed the pricing of Asian power options under MFBM. ∂S2 Wang [15] explored the pricing of Asian rainbow options un- der FBM. Currency options pricing under FBM and MFBM 3 A Finite Difference Method for In- has been studied in [16, 17, 18]. Numerical solution of barrier donesian option pricing options pricing under MFBM have been evaluated by Ballestra et al. [19]. An Indonesian option is an option that can be exercised at Indonesian option is one type of barrier options. 
Because maturity or at any time before maturity but the profit does not analytic solutions for barrier options are not easy to find [19], exceed 10 percent of the strike price. The option will be exer- we determine Indonesian options using numerical solutions. cised automatically if the stock price hits a barrier price. The One numerical solution that can be used is the finite difference barrier price in an Indonesian option is 110% of the strike price method discussed in [20]. The purpose of this paper is to deter- for a call option and 90% of the strike price for a put option. mine Indonesian option prices under the MFBM model using Because the benefits of an Indonesian option is very small, the finite difference method. In this article, we also show that more option contract holders often choose to exercise their con- the resulting finite difference scheme is stable and convergent. tracts at maturity. In other words, an Indonesian option is an option that can be exercised at maturity or when the stock hits 2 An option pricing model by using the barrier price. Let L is a barrier of an Indonesian option and tL is the first MFBM time of the stock price hitting the barrier;

A mixed fractional Black Scholes market is a model con- tL = min {t| t ∈ [0,T ],St ≥ L} . (4) sisting of two assets, one riskless asset (bank account) and one An Indonesian call option with a strike price K can be exer- risky asset (stock). A bank account satisfies cised at maturity T or until the stock price of St hits the barrier at L = 1.1K. The payoff function at time T of the call option dA = rA dt, A = 1, t t 0 can be expressed as follows :  where At denotes a bank account at time t, t ∈ [0,T ], with an S − K if t > T , f(S ) = T L T r(T −tL) (5) interest rate r. Meanwhile, a stock price is modeled by using (L − K)e if tL ≤ T. 612 Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion
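Before the finite difference treatment, the explicit solution (2) and the barrier rule (4)-(5) can be illustrated by simulation. The sketch below is not part of the paper's method: it simulates the fractional Brownian motion exactly on a time grid via a Cholesky factorisation of its covariance, builds one path of (2) under the risk-neutral drift, and applies the call payoff (5). All numerical parameter values are arbitrary illustrations.

```python
import numpy as np

# One Monte Carlo path of the risk-neutral stock price (2) and the Indonesian call
# payoff (5), with barrier L = 1.1 K.  Illustrative only; not the finite difference
# method of Section 3.

def simulate_payoff(S0=1000.0, K=1000.0, r=0.05, sigma=0.1, alpha=1.0, beta=1.0,
                    H=0.7, T=0.25, n_steps=250, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n_steps + 1)[1:]          # grid without t = 0
    L = 1.1 * K                                       # barrier of the call option

    # Exact FBM on the grid: covariance 0.5*(t_i^2H + t_j^2H - |t_i - t_j|^2H).
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    BH = np.linalg.cholesky(cov) @ rng.standard_normal(n_steps)

    # Independent standard Brownian motion on the same grid.
    dt = T / n_steps
    B = np.cumsum(rng.standard_normal(n_steps)) * np.sqrt(dt)

    # Solution (2) of the mixed fractional SDE (1).
    S = S0 * np.exp(r * t - 0.5 * (alpha * sigma) ** 2 * t
                    - 0.5 * (beta * sigma) ** 2 * t ** (2 * H)
                    + alpha * sigma * B + beta * sigma * BH)

    hit = np.nonzero(S >= L)[0]
    if hit.size:                                      # barrier hit before T: second branch of (5)
        tL = t[hit[0]]
        return (L - K) * np.exp(r * (T - tL))
    return S[-1] - K                                  # first branch of (5), exercised at maturity
```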

Similarly, the payoff function at time T of an Indonesian L = 0.9K put option with barrier price can be expressed as  1 2 2 2H−1 1  follows : cj = − 2 (ασj) − (βσj) H(T − k∆τ) − 2 rj, ∆τ. (15)  K − S if t > T , f(S ) = T L Using (4) and (5), we can write an initial condition of the T r(T −tL) (6) (K − L)e if tL ≤ T. Indonesian call option as follows:

The partial differential equation used in the Indonesian op-  j∆S − K if L > j∆S, V 0 = (16) tion pricing is a PDE with a final time condition. Because fi- j L − K if L ≤ j∆S, nite difference methods usually use an initial time condition, we make changes on variable τ i.e. τ = T − t. Under this and boundary conditions of the call option as follows: transformation, PDE (3) becomes, k k −rk∆τ V0 = 0 and VM = (L − K)e . (17) ∂V ∂V 1 ∂2V − rS − (ασS)2 In another case, using (4) and (6), we get an initial condition ∂τ ∂S 2 ∂S2 and boundary conditions of the Indonesian put option shown ∂2V below respectively: − (βσS)2H(T − τ)2H−1 + rV = 0. (7) ∂S2  K − j∆S if L < j∆S, V 0 = We must set up a discrete grid in this case with respect to j K − L if L ≥ j∆S, stock prices and time to solve the PDE by finite difference and methods. Suppose Smax is a suitably large stock price and in V k = 0 and V k = (K − L)e−rk∆τ . this case Smax = L. We need Smax since the domain for the 0 M PDE is unbounded with respect to stock prices, but we must We analyze the stability and convergence of the implicit bound it in some ways for computing purposes. The grid con- finite difference scheme using Fourier analysis. Firstly, we sists of points (τk,Sj) such that Sj = j∆S and τk = k∆τ discuss the stability of the implicit finite difference scheme. k k with j = 0, 1,...,M and k = 0, 1,...,N. Let Vj be difference solution of (12) and Uj be another ap- Using Taylor series expansion, we have k proximate solution of (12), we define a roundoff error εj = k k Vj − Uj . Next, we obtain a following roundoff error equation V k − V k−1 ∂V j j = + O(∆τ), (8) ∆τ ∂τ k−1 k k k εj = ajεj−1 + bjεj + cjεj+1. (18) V k − V k ∂V   Furthermore, we define a grid function as follows: j+1 j−1 = + O (∆S)2 , (9) 2∆S ∂S  εk if S − ∆S < S ≤ S + ∆S , j = 1, ..., M −1, εk(S) = j j 2 j 2 and ∆S ∆S 0 if 0 ≤ S ≤ 2 or Smax − 2 < S ≤ Smax. k k k 2 Vj+1 − 2Vj + Vj−1 ∂ V  2 The grid function can be expanded in a Fourier series below: 2 = 2 + O (∆S) . (10) (∆S) ∂S ∞ X   εk(S) = ξk(l) exp i2πlS , k = 1, 2,...,N, Smax Substitution of (8), (9) and (10) in (7) yields l=−∞

V k−V k−1 V k −V k 2 V k −2V k+V k j j −rj∆S j+1 j−1 −(ασ) (j∆S)2 j+1 j j−1 where ∆τ 2∆S 2 (∆S)2 Smax V k −2V k+V k Z −(βσ)2(j∆S)2H(T −k∆τ)2H−1 j+1 j j−1 k 1 k  −i2πlS  (∆S)2 ξ (l) = ε (S) exp dS. Smax Smax k + rVj = 0, (11) 0   Moreover, we let where the local truncation error is O ∆τ + (∆S)2 . Rewrit- k  k k k T ing (11), we get an implicit scheme as follows ε = ε1 , ε2 , . . . , εN−1 .

k−1 k k k And we introduce a norm, Vj = ajVj−1 + bjVj + cjVj+1, (12) 1 1    S  2 where M−1 2 Zmax k X k 2 k 2 ε 2 =  εj ∆S =  ε (S) dS .  1 2 2 2H−1 1  j=1 aj = − 2 (ασj) − (βσj) H(T − k∆τ) + 2 rj ∆τ, 0 (13) Further, by using Parseval equality,

Smax ∞   2 2 2H−1   Z b = 1 + (ασj) + 2(βσj) H(T − k∆τ) + r ∆τ , k 2 X k 2 j ε (S) dS = ξ (l) , (14) 0 l=−∞ Mathematics and Statistics 8(5): 610-619, 2020 613 we obtain From (12), (13), (14), (15) and definition Rk in (25), we have ∞ j k 2 X k 2 ε = ξ (l) . 2 V (τk−1,Sj) = ajV (τk,Sj−1) + bjV (τk,Sj) l=−∞ + c V (τ ,S ) − ∆τRk. (27) At the , we assume that the solution of equation (18) j k j+1 j has the following form By subtracting (12) from (27), we obtain k k iωj∆S εj = ξ e , (19) k−1 k k k k j = ajj−1 + bjj + cjj+1 − ∆τRj , (28) 2πl √ where ω = and i = −1. Substituting (19) into (18), k k Smax where an error  = V (τ ,S ) − V . The error equation satis- we obtain j k j j fies a boundary conditions, k−1 iωj∆S k iω(j−1)∆S k iωj∆S k iω(j+1)∆S ξ e =ajξ e +bjξ e +cjξ e k k 0 = M = 0, k = 1, 2, ..., N, k iωj∆S −iω∆S iω∆S =ξ e aje +bj +cje . (20) and an initial condition, Equation (20) can be rewritten as follows, 0 = 0, j = 1, 2, ..., M. (29) k−1 k −iω∆S iω∆S j ξ = ξ aje + bj + cje , (21) k−1 k Next, we define the following grid functions, ξ = ξ ϑj, (22)  k if S − ∆S < S ≤ S + ∆S , j = 1, ..., M −1, where k(S) = j j 2 j 2 −iω∆S iω∆S ∆S ∆S ϑj = aje + bj + cje . (23) 0 if 0 ≤ S ≤ 2 or Smax − 2 < S ≤ Smax, By substituting (13), (14) and (15) into (23), we obtain and
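For readers who want to reproduce the scheme, the time stepping implied by (12)-(17) can be sketched as follows. At each time level the tridiagonal system with coefficients (13)-(15) is solved for the interior nodes, with the boundary values (17) imposed. The coefficient signs follow my reading of the extracted equations and should be checked against the original article; a dense linear solve stands in for a dedicated tridiagonal (Thomas) solver, and this Python sketch is illustrative rather than the authors' implementation (the paper reports using Matlab).

```python
import numpy as np

# Sketch of the implicit scheme (12)-(17) for the Indonesian call with barrier L = 1.1 K.

def indonesian_call_fd(S0=1000.0, K=1000.0, r=0.05, sigma=0.1, alpha=1.0, beta=1.0,
                       H=0.7, T=0.25, M=110, N=2500):
    L = 1.1 * K                               # S_max is taken equal to the barrier L
    dS, dtau = L / M, T / N
    j = np.arange(1, M)                       # interior nodes j = 1, ..., M-1
    S = j * dS
    V = np.where(S < L, S - K, L - K)         # initial condition (16), used as stated

    for k in range(1, N + 1):
        w = 0.5 * (alpha * sigma * j) ** 2 \
            + (beta * sigma * j) ** 2 * H * (T - k * dtau) ** (2 * H - 1)
        a = (-w + 0.5 * r * j) * dtau         # coefficient (13)
        b = 1.0 + (2.0 * w + r) * dtau        # coefficient (14)
        c = (-w - 0.5 * r * j) * dtau         # coefficient (15)

        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        rhs = V.copy()
        rhs[-1] -= c[-1] * (L - K) * np.exp(-r * k * dtau)   # boundary (17) at S = L
        V = np.linalg.solve(A, rhs)           # V_0^k = 0 needs no correction
    return float(np.interp(S0, S, V))
```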

2 2 2H−1  k ∆S ∆S ϑj = −(ασj) −2(βσj) H(T −k∆τ) ∆τ cos(ω∆S) k Rj if Sj − 2 < S ≤ Sj + 2 , j = 1, ..., M −1, R (S) = ∆S ∆S 2 2 2H−1 0 0 ≤ S ≤ S − < S ≤ S . +(ασj) +2(βσj) H(T −k∆τ) +r ∆τ if 2 or max 2 max − rji∆τ sin(ω∆S) + 1. (24) The grid functions can be expanded in a Fourier series respec- tively as follows Proposition 3. If ξk, k ∈ N, is a solution of (21), then |ξk| ≤ 0 ∞ |ξ |. X   k(S) = %k(l) exp i2πlS , k = 1, 2,...,N, Smax Hence by (19) and Proposition 3, we have the following the- l=−∞ orem. and Theorem 4. The difference scheme (12) is unconditionally sta- ∞ ble. X   Rk(S) = ρk(l) exp i2πlS , k = 1, 2,...,N, Smax Now we analyze the convergence of implicit finite differ- l=−∞ ence scheme. Let V (τ ,S ) is exact solution of (7) at a point k j where (τk,Sj) and Smax V (τ ,S )−V (τ ,S ) V (τ ,S )−V (τ ,S ) Z k k j k−1 j k j+1 k j−1 k 1 k  −i2πlS  Rj = ∆τ −rj∆S 2∆S % (l) =  (S) exp dS, Smax Smax 2 V (τ ,S )−2V (τ ,S )+V (τ ,S ) − 1 (ασ)2(j∆S) k j+1 k j k j−1 0 2 (∆S)2 − (βσ)2(j∆S)2H(T − k∆τ)2H−1 and
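The stability claim of Theorem 4 rests on the amplification factor in (24) having modulus at least one, so that the Fourier coefficient of the roundoff error cannot grow. A small numerical check of this fact over a grid of nodes, time levels and frequencies can be sketched as follows; the parameter values are arbitrary test values, not taken from the paper.

```python
import numpy as np

# Numerical check of the von Neumann argument behind Theorem 4: the amplification
# factor theta_j defined in (24) satisfies |theta_j| >= 1 for every node j, time level k
# and frequency omega, so the implicit scheme (12) is unconditionally stable.

def min_amplification(r=0.05, sigma=0.1, alpha=1.0, beta=1.0, H=0.7,
                      T=0.25, M=110, N=200, dS=10.0):
    dtau = T / N
    j = np.arange(1, M)[:, None, None]                       # node index
    k = np.arange(1, N + 1)[None, :, None]                   # time level
    w = np.linspace(0.0, 2 * np.pi / dS, 64)[None, None, :]  # Fourier frequency

    d = (alpha * sigma * j) ** 2 \
        + 2 * (beta * sigma * j) ** 2 * H * (T - k * dtau) ** (2 * H - 1)
    theta = (-d * dtau * np.cos(w * dS) + (d + r) * dtau
             - 1j * r * j * dtau * np.sin(w * dS) + 1.0)
    return float(np.abs(theta).min())                        # >= 1 confirms Theorem 4
```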

V (τk,Sj+1)−2V (τk,Sj )+V (τk,Sj−1) × Smax (∆S)2 Z   ρk(l) = 1 Rk(S) exp −i2πlS dS. + rV (τk,Sj), (25) Smax Smax 0 where k = 1, 2,...,N and j = 1, 2,...,M −1. Consequently, there is a positive constant Ck,j, so as Thus, we let 1 k  k k k T  = 1 , 2 , . . . , N−1 k k,j 2 Rj ≤ C1 ∆τ + (∆S) , and k  k k k T then, we have R = R1 ,R2 ,...,RN−1 ,

k 2 and we define their corresponding norms Rj ≤ C1 ∆τ + (∆S) , (26) 1 1    S  2 where M−1 2 max X 2 Z 2 k = k ∆S = k(S) dS , n k,j o 2  j    C1 = max C k = 1, 2,...,N; j = 1, 2,...,M − 1 . 1 j=1 0 614 Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion and

1 1 (a) Maturity Times (T) = 1/12 M−1  2  Smax  2 2 Z 2 k X k k 100 R 2 =  Rj ∆S =  R (S) dS , j=1 0 (30) 50 respectively. By using Parseval equality, we get Option Price 0 0.9 0.5 Smax 0.8 0.4 ∞ 0.3 Z 0.7 0.2 2 X 2 0.6 0.1 k k 0.5 0  (S) dS = % (l) Hurst index (H) Volatility of Stock Price ( σ) 0 l=−∞ (b) Maturity Times (T) = 2/12 and 100 Smax ∞ Z 2 X 2 Rk(S) dS = ρk(l) , 50 0 l=−∞ Option Price respectively. As a consequence, we can show that 0 0.9 0.5 0.8 0.4 0.3 ∞ 0.7 0.2 0.6 0.1 k 2 X k 2 0.5 0  = % (l) (31) Hurst index (H) 2 Volatility of Stock Price ( σ) l=−∞ (c) Maturity Times (T) = 3/12 and ∞ 100 k 2 X k 2 R 2 = ρ (l) . (32) l=−∞ 50 Option Price Further, we assume that the solution of (28) has the following 0 form 0.9 0.5 0.8 0.4 k k iωj∆S 0.3 0.7 0.2 j = % e (33) 0.6 0.1 0.5 0 Hurst index (H) and Volatility of Stock Price ( σ) k k iωj∆S Rj = ρ e . (34) Substituting (33) and (34) into (28), we obtain 1 2 Figure 1. Indonesian option prices on H and σ for values T = 12 , T = 12 k−1 iωj∆S iωj∆S k −iω∆S iω∆S k 3 % e =e % aje +bj +cje −∆τρ . and T = 12 . (35) Equation (35) can be simply rewritten as follows 4 Numerical examples and discussions %k−1 = %k a e−iω∆S + b + c eiω∆S − ∆τρk. (36) j j j An Indonesian option pricing based on an MFBM has been By using equations (13), (14), (15) and (36), we obtain studied. An implicit difference scheme of (7) is given in (12) and initial and boundary conditions of an Indonesian call op- k−1  2 2 2H−1 % = −(ασj) −2(βσj) H(T −k∆τ) ∆τ cos(ω∆S) tion is given in (16) and (17), respectively. We provide several +(ασj)2 +2(βσj)2H(T −k∆τ)2H−1 +r ∆τ numerical results that illustrate the stability and convergence of k k the finite difference method in calculating an Indonesian call −rji∆τ sin (ω∆S) + 1] % − ∆τρ . (37) option price using Matlab in this section. In Examples 1, 2 and Equation (37) can be effectively expressed as follows 3, we show that the scheme is stable. We also show that the scheme is convergent in Example 4. Furthermore, Example 5 %k = 1 %k−1 + 1 ∆τρk, (38) compares the option price generated by the scheme with the ϑj ϑj 1 exact solution in [2] when α = 0 , β = 1, H = 2 . where ϑj is defined in (24). Example 1. An Indonesian call option pricing model is based Proposition 5. Assuming that %k(k = 1, 2,...,N) is a solu- on (12) where α = β = 1, an initial condition (16) and bound- tion of (37), then there exist a positive constant C2, so that ary conditions (17) under the following parameters,

k 1 ∆S = 1, ∆τ = 0.0001, r = 0.05,S0 = 1000,K = 1000, |% | ≤ C2k∆τ|ρ |. and various values of parameters, The following theorem gives convergence of the different  1 2 3 scheme (12). H ∈ (0.5, 1), σ ∈ (0, 0.5),T ∈ 12 , 12 , 12

Theorem 6. The difference scheme (12) is L2-convergent, and Figure 1 exhibits the price surface of an Indonesian call op- the convergence order is O(∆τ + (∆S)2). tion with a change of the Hurst index (H) and a change of Mathematics and Statistics 8(5): 610-619, 2020 615


Figure 2. Indonesian option prices on H and K for values σ = 0.01, σ = 0.05 and σ = 0.1. [Price surfaces omitted; panels (a)-(c) correspond to σ = 0.01, 0.05, 0.1, with axes Hurst index (H), strike price (K) and option price.]

Figure 3. Indonesian option prices with H and T for values of σ = 0.01, σ = 0.05 and σ = 0.1. [Price surfaces omitted; panels (a)-(c) correspond to σ = 0.01, 0.05, 0.1, with axes Hurst index (H), maturity time (T) and option price.]

Figure 2 shows the price surface of an Indonesian call option r =0.05, σ =0.1,T =0.25,S =1000,K =1000,H =0.7. with a change of Hurst index (H) and a change of strike price 0 (K) for various volatility values of the stock price (σ). As the This example will show the convergence of the scheme (12). stock price volatility increases, the Hurst index and strike price The convergence is demonstrated by the difference between decrease, we see that the price of Indonesian options increase. consecutive approximation processes in Table 1. The numer- ical results from Table 1 confirm the results of the theoretical Example 3. Consider an Indonesian call option pricing prob- analysis (B.8) in Theorem 6. lem (12), (16) and (17) with α = β = 1 and parameters,

∆S = 1, ∆t = 0.0001, r = 0.05,S0 = 1000,K = 1000, Example 5. Let Indonesian call option pricing at (12), (16) 616 Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion

Table 1. Convergence results of the scheme (12)

(a) Indonesian option prices on different σ ∆S ∆τ Value Difference Ratio 100 10.00000 0.001000000 30.7251 Numerical Solution 80 5.00000 0.000500000 30.8103 0.0852 Exact Solution 2.50000 0.000250000 30.8352 0.0249 3.4217 60 1.25000 0.000125000 30.8433 0.0081 3.0741 40 0.62500 0.000062500 30.8463 0.0030 2.7000

Option Price 0.31250 0.000031250 30.8475 0.0012 2.5000 20 0.15625 0.000015625 30.8480 0.0005 2.4000

Example 5. Consider an Indonesian call option pricing problem (12), (16) and (17) with α = 0, β = 1, H = 1/2 and parameters

∆S = 1, ∆τ = 0.0001, r = 0.05, σ = 0.1, T = 2/12, S_0 = 1000, K = 1000.

Equation (7) with α = 0, β = 1 and H = 1/2 is a stock price model under a Brownian motion. Figure 4 shows the comparison of the numerical and exact solutions of Indonesian option prices for stock prices modeled by a Brownian motion. The exact solution for determining Indonesian option prices is obtained by a formula in [2], whereas the numerical solution is obtained by the implicit finite difference method (12) with α = 0, β = 1 and H = 1/2. Moreover, if we set α = 1 and β = 0 in (12), then we get a similar trend of option prices as shown in Figure 4. As can be seen, both solutions overlap each other; in other words, the numerical solution is similar to the analytical solution.

[Figure 4. The prices of Indonesian options from the exact and the numerical solution for H = 1/2. Three panels: (a) option prices for different σ, (b) option prices for different T, (c) option prices for different K; each panel overlays the numerical and the exact solution, with the option price on the vertical axis.]

In Examples 1, 2, 3 and 4, we choose small ∆S and ∆τ values. The implicit finite difference scheme can still produce Indonesian option prices using these values; in other words, even though the chosen values are very small, the scheme still produces option prices. We need to mention here that the calculation process takes a longer time. In addition, we can see that the trends and visible shapes of the option price solutions of the proposed scheme are similar to the option price solutions in [2] (Example 5). Therefore, it can be concluded that the implicit finite difference scheme used to determine Indonesian option prices is stable and convergent.

5 Conclusions

In this paper, we apply an implicit finite difference method to solve Indonesian option pricing problems. Given that the Jakarta Composite Index is long-range dependent, an MFBM is used to model the stock returns. The implicit finite difference scheme has been developed to solve the partial differential equation that is used to determine Indonesian option prices. We study the stability and convergence of the implicit finite difference scheme for Indonesian option pricing. We also present several examples of numerical solutions for Indonesian option pricing. Based on the theoretical analysis and the numerical solutions, the scheme proposed in this paper is efficient and reliable.

Acknowledgements

The authors gratefully acknowledge that this research was supported by Universitas Sanata Dharma and Universitas Gadjah Mada. The authors also thank the referees for their comments and suggestions, which improved the paper significantly.

Appendix

A Review of a mixed fractional Brownian motion

In Appendix A, we recall several definitions and a lemma which are used in this paper.

Definition A.1. [21] Let H ∈ (0, 1) be given. A fractional Brownian motion B^H = (B^H_t)_{t≥0} of Hurst index H is a continuous and centered Gaussian process with covariance function

$$ \mathbb{E}\big[B^H_t B^H_u\big] = \tfrac{1}{2}\left(|t|^{2H} + |u|^{2H} - |t-u|^{2H}\right), $$

for all t, u > 0.

A FBM is a generalization of the standard Brownian motion; to see this, take H = 1/2 in Definition A.1.
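Carrying out this substitution explicitly (a routine check, added here for completeness), for t, u ≥ 0,

$$ \mathbb{E}\big[B^{1/2}_t B^{1/2}_u\big] = \tfrac{1}{2}\big(|t| + |u| - |t-u|\big) = \min(t, u), $$

which is exactly the covariance function of a standard Brownian motion.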

Standard Brownian motion has been employed to model stock prices in the Black-Scholes model. However, it cannot model time series with long-range dependence (long memory). It is known that a FBM is able to model time series with long-range dependence for 1/2 < H < 1.

One main problem of using a FBM in financial models is that it exhibits arbitrage, which is usually excluded in the modeling. To avoid the possibility of arbitrage, Cheridito [22] introduced an MFBM.

Definition A.2. [22, 23] A mixed fractional Brownian motion of parameters α, β and H is a process M^H = (M^{H,α,β}_t)_{t≥0}, defined on a probability space (Ω, F, P_H), by

$$ M^{H,\alpha,\beta}_t = \alpha B_t + \beta B^H_t, \qquad t \ge 0, $$

where (B_t)_{t≥0} is a Brownian motion and (B^H_t)_{t≥0} is an independent FBM of Hurst index H.
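To make Definitions A.1 and A.2 concrete, the following sketch samples a discretised MFBM path: the FBM part is generated from the Cholesky factor of the covariance matrix of Definition A.1 and is then mixed with an independent Brownian motion as in Definition A.2. The Cholesky construction, the time grid and all function names are illustrative choices for this sketch and are not taken from the paper.

import numpy as np

def fbm_path(H, times, rng):
    """Sample a fractional Brownian motion on a strictly positive time grid
    from the Cholesky factor of the covariance of Definition A.1:
        E[B^H_t B^H_u] = (|t|^{2H} + |u|^{2H} - |t - u|^{2H}) / 2."""
    t = np.asarray(times, dtype=float)
    cov = 0.5 * (np.abs(t[:, None]) ** (2 * H)
                 + np.abs(t[None, :]) ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))   # tiny jitter
    return L @ rng.standard_normal(len(t))

def mfbm_path(alpha, beta, H, times, rng):
    """Mixed fractional Brownian motion of Definition A.2:
    M_t = alpha * B_t + beta * B^H_t with independent components."""
    t = np.asarray(times, dtype=float)
    dB = rng.standard_normal(len(t)) * np.sqrt(np.diff(t, prepend=0.0))
    B = np.cumsum(dB)                  # standard Brownian motion
    BH = fbm_path(H, t, rng)           # independent FBM of Hurst index H
    return alpha * B + beta * BH

rng = np.random.default_rng(0)
times = np.linspace(0.01, 1.0, 100)
M = mfbm_path(alpha=1.0, beta=1.0, H=0.7, times=times, rng=rng)
print(M[:5])

For H = 1/2 and α = 0, β = 1 the sample reduces in distribution to a standard Brownian motion, which is the setting used in Example 5.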

We restate the following lemma, which is derived from the Itô formula [21, 24] and properties of an MFBM. The lemma will be used later for option pricing based on a stock price modeled by an MFBM.

Lemma A.3. [25] Let f = f(t, S_t) be a differentiable function and let (S_t)_{t≥0} be a stochastic process given by

$$ dS_t = \mu S_t\,dt + \sigma_1 S_t\,dB_t + \sigma_2 S_t\,dB^H_t, $$

where B_t is a Brownian motion and B^H_t is a FBM, and assume that B_t and B^H_t are independent. Then we have

$$ df = \left(\frac{\partial f}{\partial t} + \mu S_t\frac{\partial f}{\partial S_t} + \frac{\sigma_1^2 S_t^2}{2}\frac{\partial^2 f}{\partial S_t^2} + H\sigma_2^2 S_t^2 t^{2H-1}\frac{\partial^2 f}{\partial S_t^2}\right)dt + \sigma_1 S_t\frac{\partial f}{\partial S_t}\,dB_t + \sigma_2 S_t\frac{\partial f}{\partial S_t}\,dB^H_t. $$

B Proofs

Proof of Lemma 1

Proof. Using Lemma A.3 with µ = r, σ_1 = ασ and σ_2 = βσ, and taking f(S_t) = ln(S_t), we obtain

$$ d\ln(S_t) = \left(r - \tfrac{1}{2}(\alpha\sigma)^2 - (\beta\sigma)^2 H t^{2H-1}\right)dt + \alpha\sigma\,dB_t + \beta\sigma\,dB^H_t, $$

and hence,

$$ \ln\!\left(\frac{S_t}{S_0}\right) = rt - \tfrac{1}{2}(\alpha\sigma)^2 t - \tfrac{1}{2}(\beta\sigma)^2 t^{2H} + \alpha\sigma B_t + \beta\sigma B^H_t, $$

which is equivalent to (2).
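The closed-form expression just derived (equation (2) of the paper) admits a simple Monte Carlo sanity check at a fixed date t: because B_t ~ N(0, t) and B^H_t ~ N(0, t^{2H}) are independent, the two correction terms −½(ασ)²t and −½(βσ)²t^{2H} give E[S_t] = S_0 e^{rt}. The snippet below, with illustrative parameter values, performs this check; it is only a sketch of the verification, not part of the paper.

import numpy as np

rng = np.random.default_rng(1)
S0, r, sigma = 1000.0, 0.05, 0.1
alpha, beta, H, t = 1.0, 1.0, 0.7, 0.25
n = 1_000_000

B = rng.standard_normal(n) * np.sqrt(t)                 # B_t ~ N(0, t)
BH = rng.standard_normal(n) * np.sqrt(t ** (2 * H))     # B^H_t ~ N(0, t^{2H})

# Closed-form solution of Lemma 1 (equation (2)).
St = S0 * np.exp(r * t
                 - 0.5 * (alpha * sigma) ** 2 * t
                 - 0.5 * (beta * sigma) ** 2 * t ** (2 * H)
                 + alpha * sigma * B + beta * sigma * BH)

print(St.mean(), S0 * np.exp(r * t))   # the two numbers should nearly agree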

Proof of Theorem 2

Proof. To prove the statement, a portfolio consisting of an option V(t, S) and a quantity q of the stock is first set up, i.e.

$$ \Pi = V(t, S) - qS. \quad (B.1) $$

Thus, the change in the portfolio value over a short time can be written as

$$ d\Pi = dV(t, S) - q\,dS. \quad (B.2) $$

Now, applying Lemma A.3 with f(t, S_t) = V(t, S), we obtain

$$ dV = \left(\frac{\partial V}{\partial t} + rS\frac{\partial V}{\partial S} + \frac{1}{2}(\alpha\sigma S)^2\frac{\partial^2 V}{\partial S^2} + (\beta\sigma S)^2 H t^{2H-1}\frac{\partial^2 V}{\partial S^2}\right)dt + \alpha\sigma S\frac{\partial V}{\partial S}\,dB_t + \beta\sigma S\frac{\partial V}{\partial S}\,dB^H_t. \quad (B.3) $$

Substituting (B.3) and (1) into (B.2), we have

$$ d\Pi = \left(\frac{\partial V}{\partial t} + rS\Big(\frac{\partial V}{\partial S} - q\Big) + \frac{1}{2}(\alpha\sigma S)^2\frac{\partial^2 V}{\partial S^2} + (\beta\sigma S)^2 H t^{2H-1}\frac{\partial^2 V}{\partial S^2}\right)dt + \alpha\sigma S\Big(\frac{\partial V}{\partial S} - q\Big)dB_t + \beta\sigma S\Big(\frac{\partial V}{\partial S} - q\Big)dB^H_t. $$

Further, we choose q = ∂V/∂S to eliminate the random noise. Then we get

$$ d\Pi = \left(\frac{\partial V}{\partial t} + \frac{1}{2}(\alpha\sigma S)^2\frac{\partial^2 V}{\partial S^2} + (\beta\sigma S)^2 H t^{2H-1}\frac{\partial^2 V}{\partial S^2}\right)dt. \quad (B.4) $$

On the other hand, the portfolio becomes riskless if the portfolio yield is determined only by the risk-free interest rate r, which satisfies dΠ = rΠ dt. From (B.1), we have

$$ r\Pi\,dt = r(V - qS)\,dt = \left(rV - rS\frac{\partial V}{\partial S}\right)dt, \quad (B.5) $$

and also, from (B.4) and (B.5), we get

$$ \left(\frac{\partial V}{\partial t} + \frac{1}{2}(\alpha\sigma S)^2\frac{\partial^2 V}{\partial S^2} + (\beta\sigma S)^2 H t^{2H-1}\frac{\partial^2 V}{\partial S^2}\right)dt = \left(rV - rS\frac{\partial V}{\partial S}\right)dt, $$

which yields (3).

Proof of Proposition 3

Proof. Since |ϑ_j| ≥ 1, using (22) for k = 1 we have

$$ |\xi^1| = \frac{1}{|\vartheta_j|}\,|\xi^0| \le |\xi^0|. $$

If |ξ^{k−1}| ≤ |ξ^0|, then using (22) we obtain

$$ |\xi^k| = \frac{1}{|\vartheta_j|}\,|\xi^{k-1}| \le \frac{1}{|\vartheta_j|}\,|\xi^0| \le |\xi^0|. $$

This completes the proof.

Proof of Theorem 4

Proof. Using Proposition 3 and (19), we obtain

$$ \|\varepsilon^k\| \le \|\varepsilon^0\|, \qquad k = 1, 2, \ldots, N, $$

which means that the difference scheme (12) is unconditionally stable.

Proof of Proposition 5

Proof. From (26) and (30), we have

$$ \|R^k\|_2 \le \left(\sum_{j=1}^{M-1} C_1^2\big(\Delta\tau + (\Delta S)^2\big)^2\,\Delta S\right)^{1/2} \le C_1\big(\Delta\tau + (\Delta S)^2\big)\sqrt{M\Delta S} \le C_1\sqrt{S_{\max}}\,\big(\Delta\tau + (\Delta S)^2\big), \quad (B.6) $$

where k = 1, 2, ..., N. If the series on the right-hand side of (32) is convergent, then there is a positive constant C_2^k such that

$$ |\rho^k| \equiv |\rho^k(l)| \le C_2^k\,|\rho^1| \equiv C_2^k\,|\rho^1(l)|. $$

Then we have

$$ |\rho^k| \le C_2\,|\rho^1|, \quad (B.7) $$

where C_2 = max{C_2^k | k = 1, 2, ..., N}. By using (29) and (31), we have ϱ^0 = 0. For k = 1, from (38) and (B.7) we get

$$ |\varrho^1| = \Delta\tau\,|\rho^1| \le C_2\,\Delta\tau\,|\rho^1|. $$

Suppose now that |ϱ^n| ≤ C_2 n ∆τ |ρ^1| for n = 1, 2, ..., k − 1; then, by using (38) and (B.7), we obtain

$$ |\varrho^k| \le \frac{1}{|\vartheta_j|}\,C_2(k-1)\Delta\tau\,|\rho^1| + \frac{1}{|\vartheta_j|}\,C_2\Delta\tau\,|\rho^1| \le \left(\frac{k-1}{k|\vartheta_j|} + \frac{1}{k|\vartheta_j|}\right)C_2 k\Delta\tau\,|\rho^1| \le C_2 k\Delta\tau\,|\rho^1|. $$

This completes the proof.

Proof of Theorem 6

Proof. By using Proposition 5 and (31), (32) and (B.6), we obtain

$$ \|e^k\|_2 \le C_2 k\Delta\tau\,\|R^1\|_2 \le C_1 C_2 k\Delta\tau\,\sqrt{S_{\max}}\,\big(\Delta\tau + (\Delta S)^2\big). $$

Because k∆τ ≤ T, we have

$$ \|e^k\|_2 \le C\big(\Delta\tau + (\Delta S)^2\big), \quad (B.8) $$

where C = C_1 C_2 T √S_max.

REFERENCES

[1] Keputusan Direksi PT Bursa Efek Jakarta Nomor Kep-310/BEJ/09-2004 tentang peraturan nomor II-D tentang perdagangan opsi saham [Decision of the Board of Directors of PT Jakarta Stock Exchange No. Kep-310/BEJ/09-2004 concerning Rule No. II-D on stock option trading]. Online available from https://idx.co.id/media/1344/3.pdf. [Accessed: 22-Jan-2020].

[2] Gunardi, J.H.M. Anderluh, J.A.M. van der Weide, Subanar, and S. Haryatmi, Indonesian options, Rep. 06-11, Delft Univ. Technol., Netherlands, 2006.

[3] Gunardi, J.A.M. van der Weide, Subanar, and S. Haryatmi, P(I)DE approach for Indonesian options pricing, Journal of the Indonesian Mathematical Society, vol. 14, no. 1, pp. 37-45, 2008.

[4] Gunardi, The Greeks of Indonesian call option, Far East Journal of Mathematical Sciences, vol. 101, no. 10, pp. 2111-2120, 2017.

[5] D.O. Cajueiro and B.M. Tabak, Testing for long-range dependence in world stock markets, Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 918-927, 2008.

[6] C. Necula and A.N. Radu, Long memory in Eastern European financial markets returns, Economic Research - Ekonomska Istraživanja, vol. 25, no. 2, pp. 316-377, Jan. 2012.

[7] D. Fakhriyana, Irhamah, and K. Fithriasari, Modeling Jakarta composite index with long memory and asymmetric volatility approach, AIP Conference Proceedings, vol. 2194, no. 020025, 2019.

[8] Y. Hu and B. Øksendal, Fractional white noise calculus and applications to finance, Infinite Dimensional Analysis, Quantum Probability and Related Topics, vol. 06, no. 01, pp. 1-32, 2003.

[9] P. Cheridito, Arbitrage in fractional Brownian motion models, Finance and Stochastics, vol. 7, no. 4, pp. 533-553, 2003.

[10] C.E. Murwaningtyas, S.H. Kartiko, Gunardi, and H.P. Suryawan, European option pricing by using a mixed fractional Brownian motion, Journal of Physics: Conference Series, vol. 1097, no. 1, 2018.

[11] C.E. Murwaningtyas, S.H. Kartiko, Gunardi, and H.P. Suryawan, Option pricing by using a mixed fractional Brownian motion with jumps, Journal of Physics: Conference Series, vol. 1180, no. 1, p. 012081, Sep. 2019.

[12] W. Chen, B. Yan, G. Lian, and Y. Zhang, Numerically pricing American options under the generalized mixed fractional Brownian motion model, Physica A, vol. 451, pp. 180-189, 2016.

[13] B.L.S. Prakasa Rao, Pricing geometric Asian power options under mixed fractional Brownian motion environment, Physica A, vol. 446, pp. 92-99, 2016.

[14] W.G. Zhang, Z. Li, and Y.J. Liu, Analytical pricing of geometric Asian power options on an underlying driven by a mixed fractional Brownian motion, Physica A, vol. 490, pp. 402-418, 2018.

[15] L. Wang, R. Zhang, L. Yang, Y. Su, and F. Ma, Pricing geometric Asian rainbow options under fractional Brownian motion, Physica A, vol. 494, pp. 8-16, 2018.

[16] L. Sun, Pricing currency options in the mixed fractional Brownian motion, Physica A, vol. 392, no. 16, pp. 3441-3458, 2013.

[17] K.H. Kim, S. Yun, N.U. Kim, and J.H. Ri, Pricing formula for European currency option and exchange option in a generalized jump mixed fractional Brownian motion with time-varying coefficients, Physica A, vol. 522, pp. 215-231, 2019.

[18] K.H. Kim, N.U. Kim, D.C. Ju, and J.H. Ri, Efficient hedging currency options in fractional Brownian motion model with jumps, Physica A, vol. 539, p. 122868, 2020.

[19] L.V. Ballestra, G. Pacelli, and D. Radi, A very efficient approach for pricing barrier options on an underlying described by the mixed fractional Brownian motion, Chaos, Solitons and Fractals, vol. 87, pp. 240-248, 2016.

[20] L. Song and W. Wang, Solution of the fractional Black-Scholes option pricing model by finite difference method, Abstract and Applied Analysis, vol. 2013, 2013.

[21] F. Biagini, Y. Hu, B. Øksendal, and T. Zhang, Stochastic Calculus for Fractional Brownian Motion and Applications, Springer, 2008.

[22] P. Cheridito, Mixed fractional Brownian motion, Bernoulli, vol. 7, no. 6, pp. 913-934, 2001.

[23] M. Zili, On the mixed fractional Brownian motion, Journal of Applied Mathematics and Stochastic Analysis, vol. 2006, pp. 1-9, 2006.

[24] T.E. Duncan, Y. Hu, and B. Pasik-Duncan, Stochastic calculus for fractional Brownian motion I. Theory, SIAM Journal on Control and Optimization, vol. 38, no. 2, pp. 582-612, 2000.

[25] Z. Yang, Efficient valuation and exercise boundary of American fractional lookback option in a mixed jump-diffusion model, International Journal of Financial Engineering, vol. 04, no. 02n03, p. 1750033, 2017.
