Practical Meta-Analysis -- Lipsey & Wilson
Effect Size Overheads

• The effect size (ES) makes meta-analysis possible.
• The ES encodes the selected research findings on a numeric scale.
• There are many different types of ES measures, each suited to different research situations.
• Each ES type may also have multiple methods of computation.

Examples of Different Types of Effect Sizes: The Major Leagues

• Standardized Mean Difference
  – group contrast research
    • treatment groups
    • naturally occurring groups
  – inherently continuous construct
• Odds-Ratio
  – group contrast research
    • treatment groups
    • naturally occurring groups
  – inherently dichotomous construct
• Correlation Coefficient
  – association between variables research


Examples of Different Types of Effect Sizes: Two from the Minor Leagues

• Proportion
  – central tendency research
    • HIV/AIDS prevalence rates
    • Proportion of homeless persons found to be alcohol abusers
• Standardized Gain Score
  – gain or change between two measurement points on the same variable
    • reading speed before and after a reading improvement class

What Makes Something an Effect Size for Meta-Analytic Purposes

• The type of ES must be comparable across the collection of studies of interest.
• This is generally accomplished through standardization.
• Must be able to calculate a standard error for that type of ES
  – the standard error is needed to calculate the ES weights, called inverse variance weights (more on this later)
  – all meta-analytic analyses are weighted
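Since all meta-analytic analyses are weighted by the inverse of each effect size's squared standard error, the mechanics can be sketched in Python (a minimal illustration assuming the usual large-sample SE formula for a standardized mean difference; the function names are ours):

```python
import math

def se_smd(d, n1, n2):
    """Large-sample standard error of a standardized mean difference."""
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

def inverse_variance_weight(d, n1, n2):
    """w = 1 / SE^2: more precise (typically larger) studies get more weight."""
    return 1.0 / se_smd(d, n1, n2) ** 2

# A study with 100 cases per group counts far more than one with 10 per group.
w_small = inverse_variance_weight(0.5, 10, 10)
w_large = inverse_variance_weight(0.5, 100, 100)
```

The weight grows roughly in proportion to the total sample size, which is why weighting matters in every meta-analytic analysis.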


The Standardized Mean Difference

ES = \frac{\bar{X}_{G1} - \bar{X}_{G2}}{s_{pooled}}, \quad s_{pooled} = \sqrt{\frac{s_1^2 (n_1 - 1) + s_2^2 (n_2 - 1)}{n_1 + n_2 - 2}}

• Represents a standardized group contrast on an inherently continuous measure.
• Uses the pooled standard deviation (some situations use the control group standard deviation).
• Commonly called “d” or occasionally “g”.

The Correlation Coefficient

ES = r

• Represents the strength of association between two inherently continuous measures.
• Generally reported directly as “r” (the Pearson product-moment correlation coefficient).
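The standardized mean difference formula above can be sketched directly (a minimal Python illustration; function names are ours):

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """s_pooled = sqrt((s1^2(n1-1) + s2^2(n2-1)) / (n1 + n2 - 2))."""
    return math.sqrt((s1 ** 2 * (n1 - 1) + s2 ** 2 * (n2 - 1)) / (n1 + n2 - 2))

def smd(mean1, s1, n1, mean2, s2, n2):
    """Standardized mean difference ("d"): group contrast in SD units."""
    return (mean1 - mean2) / pooled_sd(s1, n1, s2, n2)

# Equal-n, equal-SD case: the pooled SD equals the common SD.
d = smd(105.0, 10.0, 50, 100.0, 10.0, 50)  # -> 0.5
```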


Methods of Calculating the Standardized Mean Difference

• The standardized mean difference probably has more methods of calculation than any other effect size type.

The Odds-Ratio

• The Odds-Ratio is based on a 2 by 2 contingency table, such as the one below.

                      Frequencies
                   Success   Failure
  Treatment Group     a         b
  Control Group       c         d

ES = \frac{ad}{bc}

• The Odds-Ratio is the odds of success in the treatment group relative to the odds of success in the control group.
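A minimal sketch of the odds-ratio computation from the 2 by 2 table (Python; the function name is ours):

```python
def odds_ratio(a, b, c, d):
    """OR = (a*d) / (b*c) from a 2x2 table:
    treatment group: a successes, b failures;
    control group:   c successes, d failures."""
    return (a * d) / (b * c)

# Treatment: 30 successes / 20 failures (odds 1.5)
# Control:   20 successes / 30 failures (odds 0.667)
es = odds_ratio(30, 20, 20, 30)  # -> 2.25
```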


Methods of Calculating the Standardized Mean Difference

The different formulas represent degrees of approximation to the ES value that would be obtained based on the means and standard deviations:

Great:
– direct calculation based on means and standard deviations
– algebraically equivalent formulas (t-test)
– exact probability value for a t-test

Good:
– approximations based on continuous data (correlation coefficient)
– estimates of the mean difference (adjusted means, regression B weight, gain score means)
– estimates of the pooled standard deviation (gain score standard deviation, one-way ANOVA with 3 or more groups, ANCOVA)

Poor:
– approximations based on dichotomous data

Direct Calculation Method

ES = \frac{\bar{X}_1 - \bar{X}_2}{s_{pooled}} = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{s_1^2 (n_1 - 1) + s_2^2 (n_2 - 1)}{n_1 + n_2 - 2}}}

Methods of Calculating the Standardized Mean Difference

Algebraically Equivalent Formulas:

ES = t \sqrt{\frac{n_1 + n_2}{n_1 n_2}}    (independent t-test)

ES = \sqrt{\frac{F (n_1 + n_2)}{n_1 n_2}}    (two-group one-way ANOVA)

Exact p-values from a t-test or F-ratio can be converted into a t-value and the above formula applied.

A study may report a grouped frequency distribution from which you can calculate means and standard deviations and apply the direct calculation method.
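The two algebraically equivalent formulas can be sketched as follows (Python; function names are ours):

```python
import math

def smd_from_t(t, n1, n2):
    """ES = t * sqrt((n1 + n2) / (n1 * n2)) for an independent t-test."""
    return t * math.sqrt((n1 + n2) / (n1 * n2))

def smd_from_f(F, n1, n2):
    """ES = sqrt(F * (n1 + n2) / (n1 * n2)) for a two-group one-way ANOVA.
    The sign must be recovered from the reported direction of the difference."""
    return math.sqrt(F * (n1 + n2) / (n1 * n2))

# With exactly two groups, F = t^2, so both routes agree (up to sign).
d_t = smd_from_t(2.5, 40, 40)
d_f = smd_from_f(2.5 ** 2, 40, 40)
```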


Methods of Calculating the Standardized Mean Difference

Close Approximation Based on Continuous Data -- the Point-Biserial Correlation. For example, the correlation between treatment/no treatment and an outcome measured on a continuous scale.

ES = \frac{2r}{\sqrt{1 - r^2}}

Estimates of the Numerator of ES -- The Mean Difference:
– difference between gain scores
– difference between adjusted means
– unstandardized regression coefficient for group membership
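The point-biserial conversion above can be sketched as (Python; function name ours; the formula is exact only for equal group sizes):

```python
import math

def smd_from_pointbiserial(r):
    """ES = 2r / sqrt(1 - r^2): converts a point-biserial correlation
    (treatment/control vs. a continuous outcome) into a standardized
    mean difference. Assumes roughly equal group sizes."""
    return 2 * r / math.sqrt(1 - r ** 2)

es = smd_from_pointbiserial(0.6)  # -> 1.5
```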


Methods of Calculating the Standardized Mean Difference

Estimates of the Denominator of ES -- Pooled Standard Deviation:

s_{pooled} = \sqrt{\frac{MS_{between}}{F}}    (one-way ANOVA with more than 2 groups)

MS_{between} = \frac{\sum n_j \bar{X}_j^2 - \frac{(\sum n_j \bar{X}_j)^2}{\sum n_j}}{k - 1}

s_{pooled} = se \sqrt{n - 1}    (se is the standard error of the mean)


Methods of Calculating the Standardized Mean Difference

Estimates of the Denominator of ES -- Pooled Standard Deviation (continued):

s_{pooled} = \frac{s_{gain}}{\sqrt{2(1 - r)}}    (standard deviation of gain scores, where r is the correlation between pretest and posttest scores)

s_{pooled} = \sqrt{\frac{MS_{error}}{1 - r^2}} \cdot \frac{df_{error} - 1}{df_{error} - 2}    (ANCOVA, where r is the correlation between the covariate and the DV)
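The gain-score conversion above can be sketched as (Python; function name ours):

```python
import math

def pooled_sd_from_gain(s_gain, r):
    """s_pooled = s_gain / sqrt(2 * (1 - r)): recovers the raw-score SD
    from a gain-score SD, where r is the pretest-posttest correlation."""
    return s_gain / math.sqrt(2 * (1 - r))

# When r = 0.5, the gain-score SD equals the raw-score SD.
s = pooled_sd_from_gain(10.0, 0.5)  # -> 10.0
```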


Methods of Calculating the Standardized Mean Difference

Estimates of the Denominator of ES -- Pooled Standard Deviation (continued):

s_{pooled} = \sqrt{\frac{SS_B + SS_{AB} + SS_W}{df_B + df_{AB} + df_W}}    (two-way factorial ANOVA, where B is the irrelevant factor and AB is the interaction between the irrelevant factor and group membership, factor A)

Approximations Based on Dichotomous Data:

ES = probit(p_{group1}) - probit(p_{group2})

the difference between the probit transformations of the proportion successful in each group; the probit transformation converts a proportion into a z-value
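The probit approach can be sketched with the standard library's inverse normal CDF (Python 3.8+; function name ours):

```python
from statistics import NormalDist

def probit_es(p1, p2):
    """ES = probit(p1) - probit(p2): each proportion of 'successes' is
    converted into a z-value via the inverse normal CDF."""
    inv = NormalDist().inv_cdf
    return inv(p1) - inv(p2)

# A 69% vs. 50% success split corresponds to roughly d = 0.5.
es = probit_es(0.69, 0.50)
```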


Methods of Calculating the Standardized Mean Difference

Approximations Based on Dichotomous Data (continued):

ES = 2\sqrt{\frac{\chi^2}{N - \chi^2}}    (the chi-square must be based on a 2 by 2 contingency table, i.e., have only 1 df)

ES = \frac{2r}{\sqrt{1 - r^2}}

Data to Code Along with the ES

• The Effect Size
  – may want to code the data from which the ES is calculated
  – confidence in the ES calculation
  – method of calculation
  – any additional data needed for calculation of the inverse variance weight
• Sample Size
• ES specific attrition
• Construct measured
• Point in time when variable measured
• Reliability of measure
• Type of statistical test used
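The chi-square approximation above can be sketched as (Python; function name ours):

```python
import math

def smd_from_chisq(chisq, n):
    """ES = 2 * sqrt(chi2 / (N - chi2)); the chi-square must come from a
    2x2 table (1 df). The sign must be added from the reported direction."""
    return 2 * math.sqrt(chisq / (n - chisq))

es = smd_from_chisq(4.0, 104.0)  # -> 0.4
```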

Interpreting Effect Size Results

• Cohen’s “Rules-of-Thumb”
  – standardized mean difference effect size
    • small = 0.20
    • medium = 0.50
    • large = 0.80
  – correlation coefficient
    • small = 0.10
    • medium = 0.25
    • large = 0.40
  – odds-ratio
    • small = 1.50
    • medium = 2.50
    • large = 4.30

• Rules-of-Thumb do not take into account the context of the intervention
  – a “small” effect may be highly meaningful for an intervention that requires few resources and imposes little on the participants
  – small effects may be more meaningful for serious and fairly intractable problems
• Cohen’s Rules-of-Thumb do, however, correspond to the distribution of effects across meta-analyses found by Lipsey and Wilson (1993)


Translation of Effect Sizes

• Original metric
• Success Rates (Rosenthal and Rubin’s BESD)
  – Proportion of “successes” in the treatment and comparison groups assuming an overall success rate of 50%
  – Can be adapted to alternative overall success rates
• Example using the sex offender data
  – Assuming a comparison group recidivism rate of 15%, the effect size of 0.45 for the cognitive-behavioral treatments translates into a recidivism rate for the treatment group of 7%

Methodological Adequacy of the Research Base

• Findings must be interpreted within the bounds of the methodological quality of the research base synthesized.
• Studies often cannot simply be grouped into “good” and “bad” studies.
• Some methodological weaknesses may bias the overall findings, others may merely add “noise” to the distribution.
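One plausible way to reproduce the 15% → 7% example above is to shift the normal quantile of the comparison group's rate by the effect size (a sketch of the idea, not necessarily the exact method behind the slide's figures; function name ours):

```python
from statistics import NormalDist

def treatment_rate(comparison_rate, d):
    """Translate a standardized mean difference into a treatment-group
    rate given the comparison-group rate, by shifting the normal
    quantile: p_t = Phi(Phi^-1(p_c) - d)."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(comparison_rate) - d)

# Comparison recidivism 15%, d = 0.45 -> roughly 7% for treatment.
rate = treatment_rate(0.15, 0.45)
```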


Confounding of Study Features

• Relative comparisons of effect sizes across studies are inherently correlational!
• Important study features are often confounded, obscuring the interpretive meaning of observed differences
• If the confounding is not severe and you have a sufficient number of studies, you can model “out” the influence of method features to clarify substantive differences

Concluding Comments

• Meta-analysis is a replicable and defensible method of synthesizing findings across studies
• Meta-analysis often points out gaps in the research literature, providing a solid foundation for the next generation of research on that topic
• Meta-analysis illustrates the importance of replication
• Meta-analysis facilitates generalization of the knowledge gained through individual evaluations
