On Three-Level A-Optimal Designs for Test-Control Discrete Choice Experiments
Statistics and Applications {ISSN 2454-7395 (online)}
Volume 19, No. 1, 2021 (New Series), pp 199–208

Rakhi Singh1, Ashish Das2 and Feng-Shun Chai3
1 University of North Carolina at Greensboro, Greensboro, NC, USA
2 Indian Institute of Technology Bombay, Mumbai, India
3 Academia Sinica, Taipei, Taiwan

Received: 21 December 2020; Revised: 24 January 2021; Accepted: 27 January 2021

Abstract

Choice experiments are conducted to study the influence of different factors on the perceived utility of choice options. We study the optimality of discrete choice experiments under a newly introduced inference problem for test-control discrete choice experiments; it is akin to the test-control inference problem in factorial experiments. For each factor, there is one control level, and this control level is compared with all the test levels of the same factor. For three-level choice designs with multiple factors, we first obtain a lower bound on the A-values for estimating the two test-control contrasts for each factor. We then provide some A-optimal designs for a small number of factors, obtained through a complete search. For practical use with a somewhat larger number of factors, we also provide some highly efficient designs.

Key words: Choice set; Test-control contrast effects; Hadamard matrix; Multinomial logit model; Linear paired comparison model.

AMS Subject Classifications: 62K05, 05B05

1. Introduction

Discrete choice experiments are used for quantifying the influence of the attributes which characterize the choice options. They are useful in many applied sciences, for example, psychology and marketing research, where options (or products) have to be judged with respect to a subjective criterion like preference or taste. For a recent application, see Ong et al. (2020).
In choice experiments, respondents are shown a collection of choice sets, each consisting of several options. Respondents are then asked to select one preferred option from each choice set. We consider choice experiments with N choice sets, each having two options (referred to as choice pairs hereafter); thus, N choice pairs are shown to respondents, who are asked to pick the option they prefer from each of the N pairs. Each option is described by the same k factors, each factor having two or more levels. We consider each factor at three levels. A choice design d is then a collection of these N choice pairs. Excellent reviews of choice designs are provided in Street and Burgess (2012) and Großmann and Schwabe (2015), and a recent paper (Das and Singh, 2020) provides a unified theory of optimal choice experiments connecting different approaches to choice experiments.

(Corresponding Author: Rakhi Singh, Email: r [email protected])

Discrete choice experiments (DCEs) have usually been studied under the multinomial logit model (El-Helbawy and Bradley, 1978; Street and Burgess, 2007). Under the multinomial logit model, D-optimal designs have been studied for several situations, where orthogonal contrasts of main effects and two-factor interactions for the k factors are usually of interest. The multinomial logit model is non-linear; hence, the information matrix is a function of unknown parameters. Locally optimal designs are therefore obtained, and under the indifference assumption (that all treatment combinations have equal utility) of choice experiments, these locally optimal designs have simply been termed optimal designs. DCEs with only two options in each choice set can be equivalently studied under the traditional linear paired comparison model (McFadden, 1974; Huber and Zwerina, 1996; Großmann and Schwabe, 2015).
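For concreteness, a paired choice design of the kind described above can be represented as follows (a minimal sketch; the pairs shown are arbitrary illustrations, not an optimal design from this paper):

```python
import numpy as np

# Hypothetical paired choice design: k = 2 three-level factors and N = 3
# choice pairs. Each row holds one choice pair; each option is a k-tuple
# of factor levels from {0, 1, 2}.
design = np.array([
    [[0, 1], [2, 2]],   # pair 1: option 01 vs option 22
    [[1, 2], [0, 0]],   # pair 2: option 12 vs option 00
    [[2, 0], [1, 1]],   # pair 3: option 20 vs option 11
])
N, _, k = design.shape  # N = 3 pairs, k = 2 factors
```

A respondent shown this design would make one choice in each of the N rows.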
The relationship between the two approaches (MNL models and linear paired comparison models) to studying DCEs has been studied in Das and Singh (2020). They also obtained the information matrices under different inference problems, and briefly introduced the test-control inference problem in DCEs. So far, the inference problems studied in choice experiments focus on comparing all levels of each factor with equal importance (Street and Burgess, 2007; Großmann and Schwabe, 2015; Chai et al., 2017).

We focus on the test-control inference problem for paired choice DCEs with each factor at three levels. The same setup with the traditional inference problem (of equal focus on all pairwise comparisons) was studied in Chai et al. (2017). The difference between the current paper and Chai et al. (2017) lies only in the inference problem studied, which ultimately leads to different optimal designs. We defer most of the technical details to the next section. The primary goal in a test-control inference problem is to compare the test levels to a (pre-specified) control level. Here, we are not interested in making all pairwise comparisons; we are interested only in a subset of those comparisons. To the best of our knowledge, no one has worked on finding optimal choice designs when the interest lies in making test-control comparisons. We are also not aware of any practical choice experiment conducted with this intention; however, it is not hard to imagine that such an inference problem will find its use with practitioners. It is useful when manufacturers, service providers or policymakers want to study the effect of new test levels against existing control levels. The test-control inference problem has been studied by several authors; see, for example, Hedayat et al. (1988) and Majumdar (1996) for block designs, and Gupta (1995) and Gupta (1998) for multiple factors.
D-optimality is invariant to reparameterizations, and thus D-optimal designs remain optimal even when the inference problem is changed (Großmann and Schwabe, 2015). On the other hand, A-optimal designs change with the inference problem, which is one of the reasons we study A-optimal designs under the inference problem of test-control experiments. For linear models, it has been shown that when the inference problem is test-control, one often benefits by using A-optimal designs designed specifically for this problem (see Banerjee and Mukerjee, 2008, for example). By studying optimal designs for the test-control inference problem for DCEs, we intend to do the same for DCEs (results in Table 2 and the final paragraph). A-optimal designs are the designs that minimize the sum of variances of the treatment contrasts of interest. For example, if the information matrix for the treatment contrasts of interest is M_d, then the design d* which minimizes trace(M_d^{-1}) among all designs is called an A-optimal design. We provide constructions of A-optimal and A-efficient designs for estimating the test-control contrasts under the indifference assumption of the multinomial logit model. We also provide designs having high A-efficiencies.

2. Background

We present here only the details relevant to the current problem; for more details, the reader is referred to Das and Singh (2020). With each of the k factors at three levels, there are a total of 3^k options. Let the systematic component of the utility for the options be denoted by a 3^k × 1 vector τ. Without loss of generality, let the options be arranged lexicographically. For example, for k = 2, the systematic component of the utility vector is τ = (τ_00, τ_01, τ_02, τ_10, τ_11, τ_12, τ_20, τ_21, τ_22). The coding that we use is more commonly known as effects coding; see Großmann and Schwabe (2015), for example.
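The lexicographic arrangement of the 3^k options referred to above can be generated directly (a minimal sketch for k = 2, matching the ordering of the entries of τ):

```python
from itertools import product

k = 2
# All 3^k options in lexicographic order; the i-th string here indexes the
# i-th entry of the utility vector tau.
options = [''.join(map(str, o)) for o in product(range(3), repeat=k)]
print(options)
# ['00', '01', '02', '10', '11', '12', '20', '21', '22']
```

The same enumeration fixes the column ordering of the matrix B introduced below.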
In effects coding, for one factor at three levels, level 0 is coded as (1, 0), level 1 is coded as (0, 1) and level 2 is coded as (−1, −1); here, level 2 is the control level, and levels 0 and 1 are the test levels. The nth choice pair is denoted by T_n = (t_{(n1)}, t_{(n2)}), where t_{(nj)} is the jth option in the nth choice pair, n = 1, ..., N, j = 1, 2. Corresponding to the jth option in the N choice sets, A_j = (t_{(1j)}^T t_{(2j)}^T ··· t_{(Nj)}^T)^T is an N × k matrix representing the levels of the k attributes. Let the N × 2k matrix X_j denote the effects-coded matrix corresponding to A_j, meaning that 0, 1 and 2 in A_j are replaced by the vectors (1, 0), (0, 1) and (−1, −1), respectively, in X_j. Then, the effects-coded difference matrix for the first and second options is

X = X_1 − X_2.  (1)

Let B be a 2k × 3^k matrix such that the ith column of B corresponds to the effects coding of the ith option, i = 1, ..., 3^k. It is assumed that the 3^k options are arranged lexicographically. For example, if k = 3, the 3rd column of B corresponds to the effects coding of the option (002), which is (1 0 1 0 −1 −1), and the 7th column corresponds to the option (020), that is, (1 0 −1 −1 1 0). The matrix B was called B_E in Das and Singh (2020); for simplicity, we drop the subscript E in the current work. This B should not be confused with the B used in Street and Burgess (2007), since that matrix has traditionally corresponded to the orthonormal coding. The inference problem studied in the current paper is Bτ, which corresponds to situations where the primary interest lies in making test-control comparisons, that is, some new levels (called test levels) of the factors are compared with an existing control level of the same factor. From Das and Singh (2020), the average information matrix for the inference problem Bτ is I(Bτ) = (1/(4N)) M_d, where

M_d = (BB^T)^{-1} X^T X (BB^T)^{-1}.
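The construction of X, B and M_d above can be sketched numerically as follows (a minimal illustration under the stated effects coding; the k = 2, N = 4 design below is a hypothetical example chosen so that M_d is invertible, not one of the optimal designs of this paper):

```python
import numpy as np
from itertools import product

EFFECTS = {0: (1, 0), 1: (0, 1), 2: (-1, -1)}  # level 2 is the control level

def effects_code(options):
    """Map rows of levels in {0, 1, 2} to effects-coded rows (k levels -> 2k entries)."""
    return np.array([[c for level in row for c in EFFECTS[level]]
                     for row in options])

# Hypothetical design: k = 2 factors, N = 4 choice pairs (first vs second option).
first = [(0, 0), (0, 1), (0, 0), (0, 2)]
second = [(1, 1), (1, 0), (2, 2), (2, 0)]
X = effects_code(first) - effects_code(second)  # X = X1 - X2, as in (1)

k = 2
# B: 2k x 3^k matrix whose i-th column is the effects coding of the i-th
# option in lexicographic order.
B = effects_code(list(product(range(3), repeat=k))).T

BBt_inv = np.linalg.inv(B @ B.T)
M_d = BBt_inv @ X.T @ X @ BBt_inv               # M_d = (BB^T)^{-1} X^T X (BB^T)^{-1}
A_value = np.trace(np.linalg.inv(M_d))          # the A-criterion to be minimized
```

A complete search over candidate designs would compare these A-values and retain a design attaining the smallest one.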