Statistical Inference in Two-Stage Online Controlled Experiments with Treatment Selection and Validation

Alex Deng
Microsoft
One Microsoft Way
Redmond, WA 98052
[email protected]

Tianxi Li
Department of Statistics
University of Michigan
Ann Arbor, MI 48109
[email protected]

Yu Guo
Microsoft
One Microsoft Way
Redmond, WA 98052
[email protected]

ABSTRACT

Online controlled experiments, also called A/B testing, have been established as the mantra for data-driven decision making in many web-facing companies. A/B testing supports decision making by directly comparing two variants at a time. It can be used for comparison between (1) two candidate treatments and (2) a candidate treatment and an established control. In practice, one typically runs an experiment with multiple treatments together with a control to make decisions for both purposes simultaneously. This is known to have two issues. First, having multiple treatments increases false positives due to multiple comparisons. Second, the selection process causes an upward bias in the estimated effect size of the best observed treatment. To overcome these two issues, a two-stage process is recommended, in which we select the best treatment from the first screening stage and then run the same experiment with only the selected best treatment and the control in the validation stage. Traditional applications of this two-stage design often focus only on results from the second stage. In this paper, we propose a general methodology for combining the first screening stage data together with the validation stage data for more sensitive hypothesis testing and more accurate point estimation of the treatment effect. Our method is widely applicable to existing online controlled experimentation systems.

Categories and Subject Descriptors

G.3 [Probability and Statistics]: Experiment Design

General Terms

Measurement, Experimentation, Design, Theory

Keywords

Controlled experiment, A/B testing, Search quality evaluation, Meta analysis, Bias correction, Empirical Bayes

Copyright is held by the International World Wide Web Conference Committee (IW3C2). IW3C2 reserves the right to provide a hyperlink to the author's site if the Material is used in electronic media. WWW'14, April 7–11, 2014, Seoul, Korea. ACM 978-1-4503-2744-2/14/04. http://dx.doi.org/10.1145/2566486.2568028.

1. INTRODUCTION

Controlled experiments have been used in many scientific fields as the gold standard for causal inference, even though only in the recent decade have they been introduced to online services development (Christian 2012; Kohavi et al. 2009; Manzi 2012), where they gained the name A/B testing. At Microsoft Bing, we test everything via controlled experiments, including UX design, backend ranking, site performance and monetization. Hundreds of experiments are running on a daily basis. At any time, a visitor to Bing participates in fifteen or more concurrent experiments, and can be assigned to one of billions of possible treatment combinations (Kohavi et al. 2013).

Surprisingly, the majority of the current A/B testing literature focuses on single-stage A/B testing, where one experiment (with one or more treatments) is conducted and analyzed at a time. A likely cause could be that A/B testing is traditionally conducted at the end of a feature development cycle, to make the final feature ship decision and measure feature impact on key metrics. As A/B testing gains more recognition as one of the most effective sources of data-driven decision making, and with the scaling of our experimentation platform, A/B testing is now employed earlier in feature development cycles. Thus comes the need for multiple stages of experimentation, in which the results of an earlier screening stage can inform the design of a later validation stage. For example, when there are multiple shipping candidates, we design the screening stage experiment to select the most promising one. If such a best candidate exists, we conduct a second validation run to make the final ship decision and measure the treatment effect. This type of two-stage experiment with treatment selection and validation is commonly used in practice. The number of treatment candidates typically ranges from 2 to 5, or even 10, in the screening stage. When the candidate number exceeds 10, we can aggressively sift candidates via offline measurement or "paired tests" such as interleaving (Radlinski and Craswell 2013) to boost statistical power in the data analysis.

In this paper, we focus on statistical inference in this two-stage design with treatment selection and validation. The validation stage involves only the winner from the screening stage and a control. It is analyzed in the traditional A/B testing framework and is well understood (Section 2.1). The first screening stage, however, includes simultaneous analysis of multiple treatments. We need to adjust the hypothesis testing procedure to control for inflated false positives in multiple comparisons (Section 2.2). We improve on traditional adjustments such as Bonferroni and Holm's method (Holm 1979), as they are typically too conservative. In Section 3 we propose a sharp adjustment method that is exact in the sense that it attains the claimed Type I error. Point estimation is also nontrivial, as the treatment selection introduces an upward bias (Lemma 4). One might wonder why this is important, since in A/B testing people are generally only interested in finding the best candidate to ship. We found that in a data-driven organization it is equally crucial to keep accurate records of the impact made by each individual feature. These records help us understand the return on investment, and prioritize development to benefit users/customers. In Section 4 we propose several methods to correct the bias and investigate more efficient estimators that combine data from both stages. In Section 2.3, we show an insightful theoretical result ensuring that we can almost always treat the treatment effect estimates from the two stages as independent, given treatment assignment procedures that assign independently, despite the overlap of experiment subjects in the two stages. With this result, we propose our complete recipe for hypothesis testing in Section 3 and several options for point estimation in Section 4.

To the knowledge of the authors, this is the first paper in the context of online controlled experiments that studies statistical inference for two-stage experiments with treatment selection and validation. This framework is widely usable in existing online A/B testing systems. Key contributions of our work include:

  • A theoretical proof showing negligible correlation between treatment effect estimates from the two stages, given that the treatment assignment procedures in the two stages are independent, which is generally useful in theoretical development for all multi-staged experiments.
  • A more sensitive hypothesis testing procedure that correctly adjusts for multiple comparisons and utilizes data from both stages.
  • Several novel bias correction methods to correct the upward bias from the treatment selection, and a comparison of them in terms of the bias and variance trade-off.
  • Empirical evidence demonstrating that we get a more sensitive hypothesis test and more accurate point estimates by combining data from both stages.

2. BACKGROUND AND WEAK DEPENDENCE BETWEEN ESTIMATES IN TWO STAGES

Before diving into the two-stage model, we first briefly cover the analysis of a one-stage test. For the control group,

    X_i^(c) = µ + α_i + ǫ_i,    (1)

where µ is the mean of X^(c), α_i represents the user random effect and ǫ_i is random noise. The random user effect α has mean 0 and variance Var(α), and the residual satisfies E[ǫ|α] = 0. The random pairs (α_i, ǫ_i) are i.i.d. across users. However, we don't assume independence of ǫ_i and α_i, as the distribution of ǫ might depend on α; e.g., users who click more might also have larger random variation in their clicks. Nevertheless, ǫ_i and α_i are uncorrelated by construction, since E(ǫα) = E[α E(ǫ|α)] = 0.

For the treatment group,

    X_i^(t) = µ + θ_i + α_i + ǫ_i,

where the fixed treatment effect θ can vary from user to user, but θ is uncorrelated with the noise ǫ. The average treatment effect (ATE) is defined as the expectation of θ. The null hypothesis is that δ := E(θ) = 0, and the alternative for a two-sided test is that it is not 0. For a one-sided test, the alternative is δ > 0 (looking for a positive change) or δ < 0 (looking for a negative change). The t-test is based on the t-statistic

    (X̄^(t) − X̄^(c)) / sqrt(Var(X̄^(t) − X̄^(c))),    (2)

where the observed difference between treatment and control, ∆ = X̄^(t) − X̄^(c), is an unbiased estimator for the shift of the mean, and the t-statistic is a normalized version of that estimator. In the traditional t-test (Student 1908), one needs to assume equal variance and normality of X^(t) and X^(c). In practice, the equal variance assumption can be relaxed by using Welch's t-test (Welch 1947). For online experiments with sample sizes for both control and treatment at least in the thousands, even the normality assumption on X is usually unnecessary. To see that, note that by the central limit theorem, X̄^(t) and X̄^(c) are both asymptotically normal as the sample sizes m and n for treatment and control increase. ∆ is therefore approximately normal with variance

    Var(∆) = Var(X̄^(t) − X̄^(c)) = Var(X̄^(t)) + Var(X̄^(c)).

The t-statistic (2) is approximately standard normal, so the t-test in the large-sample case is equivalent to a z-test. The central limit theorem only assumes finite variance, which almost always applies in online experimentation. The speed of convergence to normality can be quantified by using the Berry–Esseen theorem (DasGupta 2008). We have verified that most metrics we tested in Bing are well approximated by normal distributions.
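In the large-sample setting described above, the t-statistic (2) with the Welch-style variance Var(∆) = Var(X̄^(t)) + Var(X̄^(c)) is treated as standard normal. A minimal self-contained sketch of this z-test; the function name and the toy data are ours for illustration, not from the paper:

```python
import math
from statistics import NormalDist

def z_test(treatment, control):
    """Large-sample two-sided z-test for the difference in means (Delta).

    Computes the t-statistic in (2) with the Welch-style variance
    Var(Delta) = Var(Xbar_t) + Var(Xbar_c); by the central limit theorem
    the statistic is treated as standard normal, so no normality
    assumption on the individual observations X is needed.
    """
    m, n = len(treatment), len(control)
    mean_t = sum(treatment) / m
    mean_c = sum(control) / n
    # Unbiased sample variances of the individual observations.
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (m - 1)
    var_c = sum((x - mean_c) ** 2 for x in control) / (n - 1)
    delta = mean_t - mean_c                  # unbiased estimate of the ATE
    se = math.sqrt(var_t / m + var_c / n)    # sqrt(Var(Delta))
    z = delta / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return delta, z, p
```

With thousands of users per group, as is typical online, this is equivalent to Welch's t-test, since the t distribution is indistinguishable from the normal at such degrees of freedom.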
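The upward selection bias that motivates the two-stage design (the bias of the best observed treatment noted in the introduction and formalized in the paper's Lemma 4) is easy to see by simulation: even when all candidates share the same true effect, the maximum of their noisy estimates overshoots it. A hedged sketch; all names and parameter values here are illustrative assumptions, not the paper's:

```python
import random
import statistics

def selection_bias_demo(k=5, n=2000, true_effect=0.1, sigma=1.0,
                        runs=3000, seed=7):
    """Monte Carlo illustration of the selection (winner's-curse) bias.

    k treatments all have the same true effect. In each simulated
    screening stage we keep the treatment with the largest observed
    effect estimate; averaging the winning estimates over many runs
    reveals an upward bias relative to the true effect.
    """
    rng = random.Random(seed)
    # Std. error of each effect estimate (two groups of n users each).
    se = sigma * (2 / n) ** 0.5
    winners = []
    for _ in range(runs):
        estimates = [rng.gauss(true_effect, se) for _ in range(k)]
        winners.append(max(estimates))  # estimate of the selected "best"
    return statistics.mean(winners)
```

With these settings the average winning estimate lands noticeably above the true effect of 0.1, which is why the paper treats bias correction, and not just a fresh validation run, as part of the point-estimation problem.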
