Chapter 25 Solutions and Mini-Project Notes

CHAPTER 25 META-ANALYSIS: RESOLVING INCONSISTENCIES ACROSS STUDIES EXERCISE SOLUTIONS

25.1 Whether or not studies are statistically significant depends very much on the sample size used. Vote-counting does not take that into account.
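
As a quick illustration of this point (the rates and sample sizes below are made up, not from the text), the Python sketch applies a pooled two-proportion z-test to the same observed effect, a 55% success rate versus 50%, at two different sample sizes. The effect is far from significant with 100 subjects per group but clearly significant with 2,000 per group.

    from math import sqrt
    from scipy.stats import norm

    def two_prop_pvalue(p1, p2, n_per_group):
        # pooled two-proportion z-test with equal group sizes
        p_pool = (p1 + p2) / 2
        se = sqrt(p_pool * (1 - p_pool) * (2 / n_per_group))
        z = (p1 - p2) / se
        return 2 * (1 - norm.cdf(abs(z)))

    for n in (100, 2000):
        print(n, round(two_prop_pvalue(0.55, 0.50, n), 4))
    # n = 100  -> p about 0.48  (not significant)
    # n = 2000 -> p about 0.002 (significant)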

25.2 Studies should be included or excluded on the basis of quality and methodology, not on the basis of the results. Knowing the results might bias the selection; the temptation would be to include only successful ones.

25.3 a. They would like to make causal conclusions. b. Yes. They provide just as much information about the true size of the effect as the studies that did find significance. Excluding them would bias the results in favor of a strong effect.

25.4 Benefits 1, 2 and 4 are most relevant. The combined results may find useful treatments that were missed because of low power and may also detect patterns across studies that were not evident in individual studies.

25.5 If results are combined across studies, Simpson's Paradox could result. Confounding variables shouldn't be a problem because observational studies are excluded. Subtle differences in treatments carrying the same name could be a problem, such as the strength of a dose of chemotherapy, where a larger dose could be too toxic. The file drawer problem could exist, making the treatments appear more effective than they really are. Flawed original studies could produce erroneous summary results. A statistically significant treatment effect might not be strong enough to outweigh the cost or side effects, and so might have little practical use. False findings of "no difference" may lead doctors to rule out useful treatments, thinking that the finding is conclusive.
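
To make the Simpson's Paradox point concrete, here is a small numerical sketch with made-up counts (none of these numbers come from the exercise): the treatment has the higher success rate within each study, yet the lower rate when the raw counts are pooled, because the two studies enrolled very different mixes of patients.

    studies = {
        # study: (treatment successes, treatment n, control successes, control n)
        "Study 1 (mild cases)":   (81, 87, 234, 270),
        "Study 2 (severe cases)": (192, 263, 55, 80),
    }

    totals = [0, 0, 0, 0]
    for name, counts in studies.items():
        ts, tn, cs, cn = counts
        print(f"{name}: treatment {ts/tn:.0%} vs control {cs/cn:.0%}")
        totals = [t + c for t, c in zip(totals, counts)]

    print(f"Pooled: treatment {totals[0]/totals[1]:.0%} "
          f"vs control {totals[2]/totals[3]:.0%}")
    # Treatment wins 93% to 87% and 73% to 69% within the studies,
    # but loses 78% to 83% in the pooled table.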

25.6 a. The first benefit listed, "detecting small or moderate relationships," appears to be most applicable in this meta-analysis, given the news article's quote that meta-analysis "can enable researchers to draw statistically significant conclusions from studies that individually are too small." As for the criticisms, a number of them could apply, all of which address the same point: that the patients in these studies were not a representative sample. They tended to be sicker and to use more drugs than typical patients. b. Statistical significance versus practical importance is definitely not a problem, since if the adverse drug reactions really are that common, knowledge of this problem is of great practical importance.

25.7 An advantage is that the interval will be narrower, thus giving a more precise estimate. A disadvantage is that if the studies were done on different populations, used subtly different treatments, or varied in some similar way, then the interval would not represent the truth for any group or treatment and would be misleading. This could be akin to Simpson's Paradox.
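
A minimal sketch of how combining sharpens the estimate, using invented study results and a fixed-effect, inverse-variance weighting (a standard textbook approach; the numbers are not from the exercise):

    from math import sqrt

    # (estimate, standard error) for three hypothetical studies of one effect
    studies = [(2.0, 1.5), (3.5, 2.0), (1.0, 1.8)]

    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))

    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled estimate {pooled:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
    # Each study's own 95% CI is roughly 6 to 8 units wide; the pooled CI
    # is about 4 units wide, a noticeably more precise estimate.

Of course, as the solution notes, this sharper interval is only meaningful if the studies really are estimating the same quantity.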

25.8 1,000 researchers; 50 could easily be contacted.

25.9 Some studies used outdated technology; some are of such poor quality that they would just muddy the results; researchers might want to use experiments only and not include observational studies.

25.10 The total sample was 100,000, but we don't know how many were in the subsample used for this calculation. Further, the studies would all be observational, so confounding factors could be a problem. Finally, the interval covers 1.0, but it is wide and predominantly above 1.0, so a larger sample may actually result in an interval entirely above 1.0, indicating a problem. Their conclusion is akin to accepting a null hypothesis.

25.11 The confidence interval for the difference just barely covers zero, going from -0.8 to 6.4, so the conclusion that the cognitive interventions "are not superior" is a bit misleading. The overall sample size was relatively small, and a larger sample would probably have resulted in a confidence interval entirely above zero. A better way to word the conclusion would be that the cognitive interventions were not statistically significantly better than the sham techniques, but that the evidence was in the direction of a positive difference.
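
A rough sketch of why a larger sample would likely have settled this (the original pooled sample size below is a guess used only for illustration): a confidence interval's width shrinks roughly in proportion to 1/sqrt(n), so if the point estimate stayed near the midpoint of (-0.8, 6.4), a modestly larger study would produce an interval entirely above zero.

    from math import sqrt

    estimate = 2.8      # midpoint of the reported interval (-0.8, 6.4)
    half_width = 3.6    # half of the reported interval's width
    n_original = 50     # hypothetical size of the original pooled sample

    for n in (50, 200, 800):
        hw = half_width * sqrt(n_original / n)
        print(f"n = {n:3d}: CI roughly ({estimate - hw:.1f}, {estimate + hw:.1f})")
    # n =  50 -> (-0.8, 6.4); n = 200 -> (1.0, 4.6); n = 800 -> (1.9, 3.7)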

25.12 The number of individuals included in any given study has been too small to be able to make a precise conclusion about radon risk. A meta-analysis, which combines data from many studies, will yield much more data and will thus produce more precise results from which conclusions can be drawn.

25.13 The second approach is more credible. The studies were probably all too small to have enough power to detect differences individually, so vote-counting would not find differences either. Meta-analysis would be able to pool the results to get more power to detect differences.

25.14 Vote-counting could not detect a relationship but meta-analysis could, by increasing the overall power. Vote-counting gives weight only to statistically significant studies, and in this case there were none.
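
The following simulation sketch (illustrative only; the effect size, study size, and number of studies are invented) shows the contrast: with a true success rate of 60% under treatment versus 50% under control, studies of 40 subjects per group individually have low power, so vote-counting sees mostly "not significant" verdicts, yet pooling the raw data from all 20 studies detects the difference decisively.

    import numpy as np
    from math import sqrt
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def pvalue(x1, n1, x2, n2):
        # pooled two-proportion z-test
        p1, p2, pp = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
        se = sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))
        return 2 * (1 - norm.cdf(abs(p1 - p2) / se))

    n_per_group, n_studies = 40, 20
    results = [(rng.binomial(n_per_group, 0.60), rng.binomial(n_per_group, 0.50))
               for _ in range(n_studies)]

    wins = sum(pvalue(x1, n_per_group, x2, n_per_group) < 0.05
               for x1, x2 in results)
    x1_all, x2_all = (sum(r[i] for r in results) for i in (0, 1))
    n_all = n_per_group * n_studies

    print(f"individually significant: {wins} of {n_studies} studies")
    print(f"p-value for pooled data:  {pvalue(x1_all, n_all, x2_all, n_all):.5f}")
    # Typically only a few of the 20 small studies reach significance,
    # while the pooled p-value is far below 0.05.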

25.15 a. He analyzed 96 studies done between 1979 and 1996, including “those that were made public in the medical literature… and those that were not.” b. He combined the studies to find that “in 52 percent of them, the effect of the antidepressant could not be distinguished from that of the placebo.” c. The statement does appear to be using vote-counting. However, we would need to know if the meaning of “could not be distinguished” is that the difference was not statistically significant. If that’s what it means, then this is an instance of vote-counting. d. If the statement is an example of vote-counting, then it could be quite misleading. Depending on the size of a study, a small difference could be statistically significant, or, more importantly in this example, a real difference in the population could go undetected in a study if the sample size isn’t large enough.


NOTES ABOUT THE MINI-PROJECTS FOR CHAPTER 25

Mini-Project 25.1 There is no way to summarize what to expect for this project. Simply make sure all parts were addressed.

Mini-Project 25.2 News reports typically leave out important information. Some possibilities are the criteria for selecting studies, quality issues, the file drawer problem, and the number of individuals used in the studies. There may be others.
