Exercise B: How to Do Random Assignment Using MS Excel
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Appendix B. Random Number Tables
Appendix B. Random Number Tables. Reproduced from Million Random Digits, used with permission of the Rand Corporation, Copyright 1955, The Free Press. The publication is available for free on the Internet at http://www.rand.org/publications/classics/randomdigits.

All of the sampling plans presented in this handbook are based on the assumption that the packages constituting the sample are chosen at random from the inspection lot. Randomness in this instance means that every package in the lot has an equal chance of being selected as part of the sample. It does not matter what other packages have already been chosen, what the package net contents are, or where the package is located in the lot.

To obtain a random sample, two steps are necessary. First, it is necessary to identify each package in the lot with a specific number, whether on the shelf, in the warehouse, or coming off the packaging line. Then it is necessary to obtain a series of random numbers. These random numbers indicate exactly which packages in the lot shall be taken for the sample.

The Random Number Table. The random number tables in Appendix B are composed of the digits from 0 through 9, with approximately equal frequency of occurrence. This appendix consists of 8 pages. On each page, digits are printed in blocks of five columns and blocks of five rows. The printing of the table in blocks is intended only to make it easier to locate specific columns and rows.

Random Starting Place. Starting Page: the random digit pages are numbered B-2 through B-8.
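The same two-step procedure translates directly into code. Below is a minimal sketch, assuming a hypothetical lot of 250 numbered packages; random.sample plays the role of the random number table, giving every package an equal chance of selection:

```python
import random

# Step 1: identify each package in the lot with a specific number.
lot_size = 250  # hypothetical lot size
packages = list(range(1, lot_size + 1))

# Step 2: obtain a series of random numbers; these indicate exactly
# which packages shall be taken for the sample.
random.seed(42)  # fixed seed so the drawing can be reproduced
sample = random.sample(packages, 12)  # every package equally likely
print(sorted(sample))
```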
Effects of a Prescribed Fire on Oak Woodland Stand Structure
Effects of a Prescribed Fire on Oak Woodland Stand Structure. Danny L. Fry.

Abstract. Fire damage and tree characteristics of mixed deciduous oak woodlands were recorded after a prescription burn in the summer of 1999 on the Mt. Hamilton Range, Santa Clara County, California. Trees were tagged and monitored to determine the effects of fire intensity on damage, recovery, and survivorship. Fire-caused mortality was low; a 2-year post-burn survey indicates that only three oaks have died from the low-intensity ground fire. Using ANOVA, there was an overall significant difference in percent tree crown scorched and bole char height between plots, but not between tree-size classes. Using logistic regression, tree diameter and aspect predicted crown resprouting. Crown damage was also a significant predictor of resprouting, with the likelihood increasing with percent scorched. Both valley and blue oaks produced crown resprouts on trees with 100 percent of their crown scorched. Although overall tree damage was low, crown resprouts developed on 80 percent of the trees and were found as soon as two weeks after the fire. Stand structural characteristics have not been altered substantially by the event. Long-term monitoring of fire effects will provide information on what changes fire causes to stand structure, its possible usefulness as a management tool, and how it should be applied to the landscape to achieve management objectives.

Introduction. Numerous studies have focused on the effects of human land use practices on oak woodland stand structure and regeneration. Studies examining stand structure in oak woodlands have shown either persistence or strong recruitment following fire (McClaran and Bartolome 1989, Mensing 1992).
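As a hedged sketch of the kind of logistic regression the abstract describes, here is a fit on synthetic stand data; the variable names and values are invented for illustration, not the study's:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand data standing in for the study's measurements.
rng = np.random.default_rng(0)
n = 80
diameter = rng.uniform(10, 60, n)   # tree diameter, cm (hypothetical)
scorched = rng.uniform(0, 100, n)   # percent crown scorched (hypothetical)
logit_p = -2.0 + 0.03 * scorched + 0.01 * diameter
resprout = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression of resprouting on diameter and crown scorch.
X = sm.add_constant(np.column_stack([diameter, scorched]))
fit = sm.Logit(resprout, X).fit(disp=False)
print(fit.params)  # coefficients: const, diameter, pct_scorched
```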
Call Numbers
Call numbers: It is our recommendation that libraries NOT put J, +, E, Ref, etc. in the call number field in front of the Dewey or other call number. Use the Home Location field to indicate the collection for the item. It is difficult, if not impossible, to sort lists if the call number fields aren't entered systematically.

Dewey Call Numbers for Non-Fiction. Each library follows its own practice for how long a Dewey number it uses and what letters are used for the author's name. Some libraries use a number (a Cutter number from a table) after the letters for the author's name. Others just use letters for the author's name.

Call Numbers for Fiction. For fiction, the call number is usually the author's Last Name, First Name. (Use a comma between last and first name.) It is usually already in the call number field when you barcode.

Call Numbers for Paperbacks. Each library follows its own practice. Just be consistent for easier handling of the weeding lists. WCTS libraries should follow the format used by WCTS for the items processed by them for your library. Most call numbers are either the author's name or just the first letters of the author's last name. Please DO catalog your paperbacks so they can be shared with other libraries.

Call Numbers for Magazines. To get the call numbers to display in the correct order by date, the call number needs to begin with the date of the issue in a number format, followed by the issue in alphanumeric format.
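A small sketch of why the date-first convention works: numeric year-month prefixes sort chronologically even under a plain string sort. The sample call numbers below are hypothetical:

```python
# Hypothetical magazine call numbers: date in number format first,
# then the issue in alphanumeric format.
call_numbers = [
    "2023-11 v.47 no.11",
    "2023-02 v.47 no.2",
    "2024-01 v.48 no.1",
]
# A plain lexicographic sort now matches chronological order.
for cn in sorted(call_numbers):
    print(cn)
```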
Survey Experiments
IU Workshop in Methods, 2019. Survey Experiments: Testing Causality in Diverse Samples. Trenton D. Mize, Department of Sociology & Advanced Methodologies (AMAP), Purdue University.

Contents:
Introduction
- Overview
- What is a survey experiment?
- What is an experiment?
- Independent and dependent variables
- Experimental Conditions
Why Conduct a Survey Experiment?
- Internal, external, and construct validity
Girls' Elite 2020-21 Season: By the Numbers
Girls' Elite 2020-21 Season: By the Numbers. Comparing a normal season to 2020-21.

Season Length: Normal 6.5 months (Dec-Jun); 2020-21 6.5 months, split season. The 2020-21 season will be split into two segments running from mid-September through mid-February, taking a break for the IHSA season, and then returning May through mid-June. The season length is virtually the exact same amount of time as previous years.

Training Program: Normal 25 weeks, 157 hours; 2020-21 25 weeks, 156 hours. The training hours for the 2020-21 season are nearly identical to last season's plan. The training hours do not include 16 additional in-house scrimmage hours on the weekends Sep-Dec. Courtney DeBolt-Slinko returns as our Technical Director. 4 new courts this season.

Strength Program: Normal 3 days/week, 72 hours; 2020-21 3 days/week, 76 hours. Similar to the training time, the 2020-21 schedule will actually allow for 4 additional hours at Oak Strength in our Sparta Science Strength & Conditioning program. These hours are in addition to the volleyball-specific training time. Oak Strength is expanding by 8,800 sq. ft.

Recruiting Support: Normal full season; 2020-21 enhanced full season. In response to the recruiting challenges created by the pandemic, we are ADDING livestreaming/recording of scrimmages and scheduled in-person visits from Lauren, Mikaela or Peter. This is in addition to our normal support services throughout the season.

Tournament Dates: Normal 24-28 dates, 10-12 events; 2020-21 TBD dates, TBD events. We are preparing for 15 dates/6 events Dec-Feb.
Analysis of Variance and Design of Experiments-I
Analysis of Variance and Design of Experiments-I. Module IV, Lecture 19: Experimental Designs and Their Analysis. Dr. Shalabh, Department of Mathematics and Statistics, Indian Institute of Technology Kanpur.

Design of experiment means how to design an experiment in the sense of how the observations or measurements should be obtained to answer a query in a valid, efficient, and economical way. The designing of the experiment and the analysis of the obtained data are inseparable. If the experiment is designed properly, keeping in mind the question, then the data generated are valid and proper analysis of the data provides valid statistical inferences. If the experiment is not well designed, the validity of the statistical inferences is questionable and may be invalid. It is important to understand first the basic terminologies used in experimental design.

Experimental unit: For conducting an experiment, the experimental material is divided into smaller parts and each part is referred to as an experimental unit. The experimental unit is randomly assigned to a treatment. The phrase "randomly assigned" is very important in this definition.

Experiment: A way of getting an answer to a question which the experimenter wants to know.

Treatment: Different objects or procedures which are to be compared in an experiment are called treatments.

Sampling unit: The object that is measured in an experiment is called the sampling unit. This may be different from the experimental unit.

Factor: A factor is a variable defining a categorization. A factor can be fixed or random in nature. A factor is termed a fixed factor if all the levels of interest are included in the experiment.
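The phrase "randomly assigned" is easy to demonstrate in code. A minimal sketch, with hypothetical unit and treatment names of our choosing, that allocates experimental units to treatments in balanced fashion:

```python
import random

# 12 hypothetical experimental units and 3 hypothetical treatments.
units = [f"unit_{i}" for i in range(1, 13)]
treatments = ["A", "B", "C"]

random.seed(7)          # fixed seed for a reproducible allocation
random.shuffle(units)   # random order removes any systematic pattern
# Deal the shuffled units out evenly, 4 per treatment.
assignment = {u: treatments[i % len(treatments)] for i, u in enumerate(units)}
for unit in sorted(assignment):
    print(unit, "->", assignment[unit])
```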
The Politics of Random Assignment: Implementing Studies and Impacting Policy
The Politics of Random Assignment: Implementing Studies and Impacting Policy. Judith M. Gueron, Manpower Demonstration Research Corporation (MDRC).

As the only nonacademic presenting a paper at this conference, I see it as my charge to focus on the challenge of implementing random assignment in the field. I will not spend time arguing for the methodological strengths of social experiments or advocating for more such field trials. Others have done so eloquently. But I will make my biases clear. For 25 years, I and many of my MDRC colleagues have fought to implement random assignment in diverse arenas and to show that this approach is feasible, ethical, uniquely convincing, and superior for answering certain questions. Our organization is widely credited with being one of the pioneers of this approach, and through its use producing results that are trusted across the political spectrum and that have made a powerful difference in social policy and research practice.

So, I am a believer, but not, I hope, a blind one. I do not think that random assignment is a panacea or that it can address all the critical policy questions, or substitute for other types of analysis, or is always appropriate. But I do believe that it offers unique power in answering the "Does it make a difference?" question. With random assignment, you can know something with much greater certainty and, as a result, can more confidently separate fact from advocacy.

This paper focuses on implementing experiments. In laying out the ingredients of success, I argue that creative and flexible research design skills are essential, but that just as important are operational and political skills, applied both to marketing the experiment in the first place and to helping interpret and promote its findings down the line.
Key Items to Get Right When Conducting Randomized Controlled Trials of Social Programs
Key Items to Get Right When Conducting Randomized Controlled Trials of Social Programs. February 2016.

This publication was produced by the Evidence-Based Policy team of the Laura and John Arnold Foundation (now Arnold Ventures). This publication is in the public domain. Authorization to reproduce it in whole or in part for educational purposes is granted. We welcome comments and suggestions on this document ([email protected]).

Purpose. This is a checklist of key items to get right when conducting a randomized controlled trial (RCT) to evaluate a social program or practice. The checklist is designed to be a practical resource for researchers and sponsors of research. It describes items that are critical to the success of an RCT in producing valid findings about a social program's effectiveness. This document is limited to key items, and does not address all contingencies that may affect a study's success.

Items in this checklist are categorized according to the following phases of an RCT:
1. Planning the study;
2. Carrying out random assignment;
3. Measuring outcomes for the study sample; and
4. Analyzing the study results.

1. Key items to get right in planning the study. Choose (i) the program to be evaluated, (ii) the target population for the study, and (iii) the key outcomes to be measured. These should include, wherever possible, ultimate outcomes of policy importance. As illustrative examples:
- An RCT of a pregnancy prevention program preferably should measure outcomes such as pregnancies or births, and not just intermediate outcomes such as condom use.
- An RCT of a remedial reading program preferably should measure outcomes such as reading comprehension, and not just participants' ability to sound out words.
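For phase 4, the core analysis is often a comparison of mean outcomes across the randomized groups. A hedged sketch on simulated data; the effect size and sample sizes are invented for illustration:

```python
import numpy as np
from scipy import stats

# Simulated outcomes for a two-arm RCT (hypothetical numbers).
rng = np.random.default_rng(1)
treatment = rng.normal(0.6, 1.0, 200)   # outcome in the treatment group
control = rng.normal(0.4, 1.0, 200)     # outcome in the control group

# Difference in means is the basic impact estimate under randomization.
effect = treatment.mean() - control.mean()
t, p = stats.ttest_ind(treatment, control)
print(f"estimated effect = {effect:.3f}, t = {t:.2f}, p = {p:.3f}")
```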
Fracking by the Numbers
Fracking by the Numbers: The Damage to Our Water, Land and Climate from a Decade of Dirty Drilling. Written by Elizabeth Ridlington and Kim Norman, Frontier Group, and Rachel Richardson, Environment America Research & Policy Center. April 2016.

Acknowledgments. Environment America Research & Policy Center sincerely thanks Amy Mall, Senior Policy Analyst, Land & Wildlife Program, Natural Resources Defense Council; Courtney Bernhardt, Senior Research Analyst, Environmental Integrity Project; and Professor Anthony Ingraffea of Cornell University for their review of drafts of this document, as well as their insights and suggestions. Frontier Group interns Dana Bradley and Danielle Elefritz provided valuable research assistance. Our appreciation goes to Jeff Inglis for data assistance. Thanks also to Tony Dutzik and Gideon Weissman of Frontier Group for editorial help. We also are grateful to the many state agency staff who answered our numerous questions and requests for data. Many of them are listed by name in the methodology. The authors bear responsibility for any factual errors. The recommendations are those of Environment America Research & Policy Center. The views expressed in this report are those of the authors and do not necessarily reflect the views of our funders or those who provided review.

© 2016 Environment America Research & Policy Center. Some Rights Reserved. This work is licensed under a Creative Commons Attribution Non-Commercial No Derivatives 3.0 Unported License. To view the terms of this license, visit creativecommons.org/licenses/by-nc-nd/3.0. Environment America Research & Policy Center is a 501(c)(3) organization.
How to Randomize?
Translating Research into Action: How to Randomize? Roland Rathelot, J-PAL.

Course Overview:
1. What is Evaluation?
2. Outcomes, Impact, and Indicators
3. Why Randomize and Common Critiques
4. How to Randomize
5. Sampling and Sample Size
6. Threats and Analysis
7. Project from Start to Finish
8. Cost-Effectiveness Analysis and Scaling Up

Lecture Overview:
- Unit and method of randomization
- Real-world constraints
- Revisiting unit and method
- Variations on simple treatment-control

Unit of Randomization: Options. 1. Randomizing at the individual level, or 2. Randomizing at the group level (a "cluster randomized trial"). Which level to randomize?

Unit of Randomization: Considerations. What unit does the program target for treatment? What is the unit of analysis? (Slide sequence of candidate units: individual? clusters? class? school?)

How to Choose the Level. Nature of the treatment: How is the intervention administered? What is the catchment area of each "unit of intervention"? How wide is the potential impact? Also consider the aggregation level of available data and power requirements. Generally, it is best to randomize at the level at which the treatment is administered.

Suppose an intervention targets health outcomes of children through info on hand-washing. What is the appropriate level of randomization? A. Child level; B. Household level; C. Classroom level; D. School level; E. Village level; F. Don't know. (Audience poll percentages were garbled in extraction and are omitted.)
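A brief sketch of the group-level option the slides call a cluster randomized trial, using hypothetical school names. Every child in a school shares its school's assignment, which fits an intervention like hand-washing info that spreads within a school:

```python
import random

# 10 hypothetical schools to be randomized as whole clusters.
schools = [f"school_{i:02d}" for i in range(1, 11)]

random.seed(3)
random.shuffle(schools)
half = len(schools) // 2
assignment = {s: ("treatment" if i < half else "control")
              for i, s in enumerate(schools)}
for school in sorted(assignment):
    print(school, assignment[school])
```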
Chapter 5 Experiments, Good And
Chapter 5: Experiments, Good and Bad.

The point of both observational studies and designed experiments is to identify a variable or set of variables, called explanatory variables, which are thought to predict an outcome or response variable. Confounding between explanatory variables occurs when two or more explanatory variables are not separated, so it is not clear how much each explanatory variable contributes to the prediction of the response variable. A lurking variable is an explanatory variable not considered in the study but confounded with one or more explanatory variables in the study.

Confounding with lurking variables is effectively reduced in randomized comparative experiments, where subjects are assigned to treatments at random. Confounding with a lurking variable (only one at a time) is reduced in observational studies by controlling for it, that is, by comparing matched groups. Consequently, experiments are much more effective than observational studies at detecting which explanatory variables cause differences in response. In both cases, statistically significant observed differences in average responses imply the differences are "real" and did not occur by chance alone.

Exercise 5.1 (Experiments, Good and Bad)
1. Randomized comparative experiment: effect of temperature on mice rate of oxygen consumption. For example, mice rate of oxygen consumption is 10.3 mL/sec when subjected to 10 °F.

temperature (°F):    0    10    20    30
ROC (mL/sec):      9.7  10.3  11.2  14.0

(a) The explanatory variable considered in the study is (choose one)
i. temperature
ii. rate of oxygen consumption
iii. mice
iv. mouse weight

(b) The response is (choose one)
i. temperature
ii. rate of oxygen consumption
iii.
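The exercise's data make a quick worked example. A sketch, added here for illustration, that fits a least-squares line of oxygen consumption on temperature:

```python
import numpy as np

# Data from the exercise: explanatory variable is temperature,
# response is rate of oxygen consumption (ROC).
temp = np.array([0.0, 10.0, 20.0, 30.0])    # deg F
roc = np.array([9.7, 10.3, 11.2, 14.0])     # mL/sec

slope, intercept = np.polyfit(temp, roc, 1)  # least-squares fit
print(f"ROC ~= {intercept:.2f} + {slope:.3f} * temperature")
```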
Randomized Experiments
Impact Evaluation: Randomized Experiments.

Randomized Trials. How do researchers learn about counterfactual states of the world in practice? In many fields, and especially in medical research, evidence about counterfactuals is generated by randomized trials. In principle, randomized trials ensure that outcomes in the control group really do capture the counterfactual for a treatment group.

Randomization. To answer causal questions, statisticians recommend a formal two-stage statistical model. In the first stage, a random sample of participants is selected from a defined population. In the second stage, this sample of participants is randomly assigned to treatment and comparison (control) conditions.

[Diagram: Population -> (randomization) -> Sample -> (randomization) -> Treatment Group / Control Group]

External & Internal Validity. The purpose of the first stage is to ensure that the results in the sample will represent the results in the population within a defined level of sampling error (external validity). The purpose of the second stage is to ensure that the observed effect on the dependent variable is due to some aspect of the treatment rather than other confounding factors (internal validity).

[Diagram: Population -> Non-target group / Target group -> (randomization) -> Treatment group / Comparison group]

Two-Stage Randomized Trials. In large samples, two-stage randomized trials ensure that:

E[Y1 | D = 1] = E[Y1 | D = 0]   and   E[Y0 | D = 1] = E[Y0 | D = 0]

Thus, the estimator

δ̂ = Ê[Y1 | D = 1] − Ê[Y0 | D = 0]

consistently estimates ATE.

One-Stage Randomized Trials. Instead, if randomization takes place on a selected subpopulation (e.g., a list of volunteers), it only ensures:

E[Y0 | D = 1] = E[Y0 | D = 0]

and hence the estimator

δ̂ = Ê[Y1 | D = 1] − Ê[Y0 | D = 0]

only consistently estimates TOT.

Randomized Trials. Furthermore, even in idealized randomized designs, 1.
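A simulation sketch, not from the slides, that checks the ATE claim numerically: with a known treatment effect built in, the difference in group means under random assignment lands on the true ATE:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y0 = rng.normal(1.0, 1.0, n)   # potential outcome without treatment
y1 = y0 + 2.0                  # potential outcome with treatment; true ATE = 2
d = rng.random(n) < 0.5        # second-stage random assignment

# Difference in observed group means: E^[Y1|D=1] - E^[Y0|D=0].
estimate = y1[d].mean() - y0[~d].mean()
print(f"difference in means = {estimate:.3f} (true ATE = 2.0)")
```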