Why Experimenters Should Not Randomize, and What They Should Do Instead

Maximilian Kasy, Department of Economics, Harvard University

Introduction: Project STAR

Covariate means within school for the actual (D) and for the optimal (D*) treatment assignment:

School 16     D = 0     D = 1     D* = 0    D* = 1
girl          0.42      0.54      0.46      0.41
black         1.00      1.00      1.00      1.00
birth date    1980.18   1980.48   1980.24   1980.27
free lunch    0.98      1.00      0.98      1.00
n             123       37        123       37

School 38     D = 0     D = 1     D* = 0    D* = 1
girl          0.45      0.60      0.49      0.47
black         0.00      0.00      0.00      0.00
birth date    1980.15   1980.30   1980.19   1980.17
free lunch    0.86      0.33      0.73      0.73
n             49        15        49        15

Introduction: Some intuitions

- "Compare apples with apples" ⇒ balance the covariate distribution, not just the balance of means!
- We don't add random noise to estimators -- so why add random noise to experimental designs?
- Optimal design for STAR: a 19% reduction in mean squared error relative to the actual assignment, equivalent to 9% of the sample size, or 773 students.

Introduction: Some context -- a very brief history of experiments

How do we ensure that we compare apples with apples?

1. Physics (Galileo, ...): controlled experiments, not much heterogeneity, no self-selection ⇒ no randomization necessary.
2. Modern RCTs (Fisher, Neyman, ...):
   observationally homogeneous units with unobserved heterogeneity ⇒ randomized controlled trials (the setup for most of the experimental design literature).
3. Medicine, economics: lots of unobserved and observed heterogeneity ⇒ the topic of this talk.

Introduction: The setup

1. Sampling: a random sample of $n$ units; a baseline survey yields a vector of covariates $X_i$.
2. Treatment assignment: a binary treatment is assigned by $D_i = d_i(X, U)$, where $X$ is the matrix of covariates and $U$ is a randomization device.
3. Realization of outcomes: $Y_i = D_i Y_i^1 + (1 - D_i) Y_i^0$.
4. Estimation: an estimator $\hat\beta$ of the (conditional) average treatment effect, $\beta = \frac{1}{n} \sum_i E[Y_i^1 - Y_i^0 \mid X_i, \theta]$.

Introduction: Questions

- How should we assign treatment? In particular, if $X$ has continuous or many discrete components?
- How should we estimate $\beta$?
- What is the role of prior information?

Introduction: Framework proposed in this talk

1. Decision theoretic: $d$ and $\hat\beta$ minimize risk $R(d, \hat\beta \mid X)$ (e.g., expected squared error).
2. Nonparametric: no functional form assumptions.
3. Bayesian: $R(d, \hat\beta \mid X)$ averages expected loss over a prior. Prior: a distribution over the functions $x \to E[Y_i^d \mid X_i = x, \theta]$.
4. Non-informative: the limit of risk functions under priors such that $\mathrm{Var}(\beta) \to \infty$.

Introduction: Main results

1. The unique optimal treatment assignment does not involve randomization.
2. Identification using conditional independence is still guaranteed without randomization.
3. Tractable nonparametric priors.
4. Explicit expressions for risk as a function of the treatment assignment ⇒ choose $d$ to minimize these.
5. MATLAB code to find the optimal treatment assignment.
6. Magnitude of gains: between 5% and 20% reduction in MSE relative to randomization, for realistic parameter values in simulations. For project STAR: a 19% gain relative to the actual assignment.

Introduction: Roadmap

1. Motivating examples
2. Formal decision problem and the optimality of non-randomized designs
3. Nonparametric Bayesian estimators and risk
4. Choice of prior parameters
5. Discrete optimization, and how to use my MATLAB code
6. Simulation results and application to project STAR
7. Outlook: optimal policy and statistical decisions

Introduction: Notation

- Random variables: $X_i, D_i, Y_i$
- Values of the corresponding variables: $x, d, y$
- Matrices/vectors for observations $i = 1, \ldots, n$: $X, D, Y$
- Vector of values: $d$
- Shorthand for the data generating process: $\theta$
- "Frequentist" probabilities and expectations: conditional on $\theta$
- "Bayesian" probabilities and expectations: unconditional

Introduction: Example 1 -- no covariates

$n_d := \sum_i 1(D_i = d)$, $\sigma_d^2 = \mathrm{Var}(Y_i^d \mid \theta)$

$\hat\beta := \sum_i \left[ \frac{D_i}{n_1} Y_i - \frac{1 - D_i}{n - n_1} Y_i \right]$

Two alternative designs:
1. Randomization conditional on $n_1$
2. Complete randomization: $D_i$ i.i.d., $P(D_i = 1) = p$

Corresponding estimator variances:
1. $n_1$ fixed ⇒ $\frac{\sigma_1^2}{n_1} + \frac{\sigma_0^2}{n - n_1}$
2. $n_1$ random ⇒ $E\left[ \frac{\sigma_1^2}{n_1} + \frac{\sigma_0^2}{n - n_1} \right]$

Choosing the (unique) minimizing $n_1$ is optimal. We are indifferent as to which of the observationally equivalent units get treatment.
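The minimizing $n_1$ in Example 1 is the classic Neyman allocation, $n_1/n \approx \sigma_1/(\sigma_1 + \sigma_0)$, up to integer rounding. A minimal Python sketch (the function name `optimal_n1` is illustrative, not from the paper), assuming $\sigma_1$ and $\sigma_0$ are known:

```python
# Choose the number of treated units n1 minimizing the fixed-n1 variance
# sigma1^2 / n1 + sigma0^2 / (n - n1), by direct search over n1 = 1..n-1.
def optimal_n1(n, sigma1, sigma0):
    def variance(n1):
        return sigma1**2 / n1 + sigma0**2 / (n - n1)
    return min(range(1, n), key=variance)

# With equal variances the optimal split is balanced.
print(optimal_n1(100, 1.0, 1.0))  # prints 50
# With sigma1 = 2 * sigma0 the Neyman allocation treats about 2/3 of units.
print(optimal_n1(90, 2.0, 1.0))   # prints 60
```

Direct search makes the uniqueness claim concrete: for any $\sigma_1 \neq 0 \neq \sigma_0$ the variance is strictly convex in $n_1$, so at most two adjacent integers can tie.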
Introduction: Example 2 -- discrete covariate

$X_i \in \{0, \ldots, k\}$, $n_x := \sum_i 1(X_i = x)$, $n_{d,x} := \sum_i 1(X_i = x, D_i = d)$, $\sigma_{d,x}^2 = \mathrm{Var}(Y_i^d \mid X_i = x, \theta)$

$\hat\beta := \sum_x \frac{n_x}{n} \sum_i 1(X_i = x) \left[ \frac{D_i}{n_{1,x}} Y_i - \frac{1 - D_i}{n_x - n_{1,x}} Y_i \right]$

Three alternative designs:
1. Stratified randomization, conditional on $n_{d,x}$
2. Randomization conditional on $n_d = \sum_i 1(D_i = d)$
3. Complete randomization

Corresponding estimator variances:
1. Stratified, $n_{d,x}$ fixed ⇒ $V(\{n_{d,x}\}) := \sum_x \left( \frac{n_x}{n} \right)^2 \left[ \frac{\sigma_{1,x}^2}{n_{1,x}} + \frac{\sigma_{0,x}^2}{n_x - n_{1,x}} \right]$
2. $n_{d,x}$ random, but $n_d = \sum_x n_{d,x}$ fixed ⇒ $E\left[ V(\{n_{d,x}\}) \,\middle|\, \sum_x n_{1,x} = n_1 \right]$
3. $n_{d,x}$ and $n_d$ random ⇒ $E[V(\{n_{d,x}\})]$

⇒ Choosing the unique minimizing $\{n_{d,x}\}$ is optimal.

Introduction: Example 3 -- continuous covariate

$X_i \in \mathbb{R}$ continuously distributed ⇒ no two observations have the same $X_i$!

Alternative designs:
1. Complete randomization
2. Randomization conditional on $n_d$
3. Discretize and stratify: choose bins $[x_j, x_{j+1}]$, set $\tilde X_i = \sum_j j \cdot 1(X_i \in [x_j, x_{j+1}])$, and stratify based on $\tilde X_i$
4. Special case: pairwise randomization
5. "Fully stratify" -- but what does that mean???

Introduction: Some references

- Optimal design of experiments: Smith (1918), Kiefer and Wolfowitz (1959), Cox and Reid (2000), Shah and Sinha (1989)
- Nonparametric estimation of treatment effects: Imbens (2004)
- Gaussian process priors: Wahba (1990) (splines); Matheron (1973), Yakowitz and Szidarovszky (1985) ("kriging" in geostatistics); Williams and Rasmussen (2006) (machine learning)
- Bayesian statistics and design: Robert (2007), O'Hagan and Kingman (1978), Berry (2006)
- Simulated annealing: Kirkpatrick et al.
(1983)

Decision problem: A formal decision problem

The risk function of a treatment assignment $d(X, U)$ and an estimator $\hat\beta$, under loss $L$ and data generating process $\theta$:

$R(d, \hat\beta \mid X, U, \theta) := E[L(\hat\beta, \beta) \mid X, U, \theta] \quad (1)$

($d$ affects the distribution of $\hat\beta$.)

The (conditional) Bayesian risk:

$R^B(d, \hat\beta \mid X, U) := \int R(d, \hat\beta \mid X, U, \theta) \, dP(\theta) \quad (2)$

$R^B(d, \hat\beta \mid X) := \int R^B(d, \hat\beta \mid X, U) \, dP(U) \quad (3)$

$R^B(d, \hat\beta) := \int R^B(d, \hat\beta \mid X, U) \, dP(X) \, dP(U) \quad (4)$

The conditional minimax risk:

$R^{mm}(d, \hat\beta \mid X, U) := \max_\theta R(d, \hat\beta \mid X, U, \theta) \quad (5)$

Objective: minimize $R^B$ or minimize $R^{mm}$.

Decision problem: Optimality of deterministic designs

Theorem. Given $\hat\beta(Y, X, D)$:
1. $d^*(X) \in \mathrm{argmin}_{d(X) \in \{0,1\}^n} R^B(d, \hat\beta \mid X) \quad (6)$ minimizes $R^B(d, \hat\beta)$ among all $d(X, U)$ (random or not).
2. Suppose $R^B(d^1, \hat\beta \mid X) - R^B(d^2, \hat\beta \mid X)$ is continuously distributed for all $d^1 \neq d^2$. Then $d^*(X)$ is the unique minimizer in (6).
3. Similar claims hold for $R^{mm}(d, \hat\beta \mid X, U)$, if the latter is finite.

Intuition: similar to why estimators should not randomize. $R^B(d, \hat\beta \mid X, U)$ does not depend on $U$ ⇒ neither do its minimizers $d^*, \hat\beta^*$.

Decision problem: Conditional independence

Theorem. Assume i.i.d. sampling, stable unit treatment values, and $D = d(X, U)$ for $U \perp (Y^0, Y^1, X) \mid \theta$. Then conditional independence holds:

$P(Y_i^{d_i} \mid X_i, D_i = d_i, \theta) = P(Y_i^{d_i} \mid X_i, \theta).$

This is true in particular for deterministic treatment assignment rules $D = d(X)$.

Intuition: under i.i.d. sampling, $P(Y_i^{d_i} \mid X, \theta) = P(Y_i^{d_i} \mid X_i, \theta)$.

Nonparametric Bayes

Let $f(X_i, D_i) = E[Y_i \mid X_i, D_i, \theta]$.
Assumption (Prior moments):
$E[f(x, d)] = \mu(x, d)$
$\mathrm{Cov}(f(x_1, d_1), f(x_2, d_2)) = C((x_1, d_1), (x_2, d_2))$

Assumption (Mean squared error objective): loss $L(\hat\beta, \beta) = (\hat\beta - \beta)^2$, Bayes risk $R^B(d, \hat\beta \mid X) = E[(\hat\beta - \beta)^2 \mid X]$.

Assumption (Linear estimators): $\hat\beta = w_0 + \sum_i w_i Y_i$, where the $w_i$ may depend on $X$ and on $D$, but not on $Y$.

Nonparametric Bayes: Best linear predictor, posterior variance

Notation for (prior) moments:
$\mu_i = E[Y_i \mid X, D]$, $\mu_\beta = E[\beta \mid X, D]$,
$\Sigma = \mathrm{Var}(Y \mid X, D, \theta)$, $C_{i,j} = C((X_i, D_i), (X_j, D_j))$, and $\bar C_i = \mathrm{Cov}(Y_i, \beta \mid X, D)$.

Theorem. Under these assumptions, the optimal estimator equals

$\hat\beta = \mu_\beta + \bar C' \cdot (C + \Sigma)^{-1} \cdot (Y - \mu),$

and the corresponding expected loss (risk) equals

$R^B(d, \hat\beta \mid X) = \mathrm{Var}(\beta \mid X) - \bar C' \cdot (C + \Sigma)^{-1} \cdot \bar C.$

Nonparametric Bayes: More explicit formulas

Assumption (Homoskedasticity): $\mathrm{Var}(Y_i^d \mid X_i, \theta) = \sigma^2$.

Assumption (Restricting prior moments):
1. $E[f] = 0$.
2. The functions $f(\cdot, 0)$ and $f(\cdot, 1)$ are uncorrelated.
3. The prior moments of $f(\cdot, 0)$ and $f(\cdot, 1)$ are the same.

Nonparametric Bayes: Submatrix notation

$Y^d = (Y_i : D_i = d)$
$V^d = \mathrm{Var}(Y^d \mid X, D) = (C_{i,j} : D_i = d, D_j = d) + \mathrm{diag}(\sigma^2 : D_i = d)$
$\bar C^d = \mathrm{Cov}(Y^d, \beta \mid X, D) = (\bar C_i : D_i = d)$

Theorem (Explicit estimator and risk function). Under these additional assumptions,

$\hat\beta = \bar C^{1\prime} \cdot (V^1)^{-1} \cdot Y^1 + \bar C^{0\prime} \cdot (V^0)^{-1} \cdot Y^0$

and

$R^B(d, \hat\beta \mid X) = \mathrm{Var}(\beta \mid X) - \bar C^{1\prime} \cdot (V^1)^{-1} \cdot \bar C^1 - \bar C^{0\prime} \cdot (V^0)^{-1} \cdot \bar C^0.$

Nonparametric Bayes: Insisting on the comparison-of-means estimator

Assumption (Simple estimator):

$\hat\beta = \frac{1}{n_1} \sum_i D_i Y_i - \frac{1}{n_0} \sum_i (1 - D_i) Y_i,$

where $n_d = \sum_i 1(D_i = d)$.
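The risk formula above can be evaluated for any candidate assignment vector $d$, reducing the design problem to discrete optimization over $d \in \{0,1\}^n$ (the talk mentions MATLAB code and simulated annealing for this step). The following Python/NumPy sketch is not the paper's code: it assumes a squared-exponential prior covariance for $f$, homoskedastic noise, and $\beta$ equal to the conditional average treatment effect, and it uses a simple greedy single-unit-flip search rather than simulated annealing. Under the restrictions above (zero prior mean, uncorrelated arms sharing one kernel $k$), one can derive $\bar C_i = (2D_i - 1) \frac{1}{n} \sum_j k(X_i, X_j)$ and $\mathrm{Var}(\beta \mid X) = \frac{2}{n^2} \sum_{j,l} k(X_j, X_l)$, which the code uses:

```python
import numpy as np

def sq_exp_kernel(X, length_scale=1.0):
    # Assumed prior covariance k(x, x') = exp(-(x - x')^2 / (2 l^2)).
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-d2 / (2 * length_scale**2))

def bayes_risk(d, K, sigma2):
    # R^B(d) = Var(beta | X) - Cbar' (C + Sigma)^{-1} Cbar, where
    # C_{ij} = 1(D_i = D_j) k(X_i, X_j)  (arms uncorrelated under the prior),
    # Cbar_i = (2 D_i - 1) * mean_j k(X_i, X_j),
    # Var(beta | X) = (2 / n^2) * sum_{j,l} k(X_j, X_l).
    n = len(d)
    C = np.where(d[:, None] == d[None, :], K, 0.0)
    Cbar = (2 * d - 1) * K.mean(axis=1)
    var_beta = 2.0 * K.sum() / n**2
    V = C + sigma2 * np.eye(n)
    return var_beta - Cbar @ np.linalg.solve(V, Cbar)

def greedy_design(X, sigma2=1.0, n_passes=5, seed=0):
    # Deterministic local search over d in {0,1}^n: start from an alternating
    # assignment and flip single units whenever that lowers the Bayes risk.
    rng = np.random.default_rng(seed)
    K = sq_exp_kernel(X)
    n = len(X)
    d = np.arange(n) % 2
    best = bayes_risk(d, K, sigma2)
    for _ in range(n_passes):
        improved = False
        for i in rng.permutation(n):
            d[i] = 1 - d[i]
            r = bayes_risk(d, K, sigma2)
            if r < best:
                best, improved = r, True
            else:
                d[i] = 1 - d[i]  # undo the flip
        if not improved:
            break
    return d, best
```

Simulated annealing, as referenced above, would replace the `if r < best` rule by accepting risk-increasing flips with a temperature-dependent probability, to escape local minima of $R^B$ over the discrete set $\{0,1\}^n$.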
