Death by Scientific Method: Estimated Mortality Associated with the Failure to Conduct Routine Prospective Cumulative Systematic Reviews in Medicine and Public Health

Robert A. Hahn,1 Steven M. Teutsch2

1 Corresponding author: Department of Anthropology, Emory University, Atlanta, Georgia. Email: [email protected]
2 Adjunct Professor, UCLA Fielding School of Public Health; Senior Fellow, Leonard D. Schaeffer Center for Health Policy and Economics, University of Southern California, Los Angeles, California

medRxiv preprint doi: https://doi.org/10.1101/2020.10.20.20216242; this version posted October 22, 2020. NOTE: This preprint reports new research that has not been certified by peer review and should not be used to guide clinical practice.

Abstract

Failure to routinely assess the state of knowledge as new studies accumulate results in 1) non-use of effective interventions, 2) continued use of ineffective or harmful interventions, and 3) unnecessary research. We use a published cumulative meta-analysis of interventions to reduce the harms of acute myocardial infarctions (1966-1992) and apply population attributable risk to assess the mortality consequences of the failure to cumulatively assess the state of knowledge. Failure to use knowledge that would have been available with cumulative meta-analysis may have resulted in an estimated annual mortality of 41,000 deaths from non-use of intravenous dilators, 35,000 deaths from non-use of aspirin, and 37,000 deaths from non-use of β-blockers. Continued use of Class 1 anti-arrhythmic drugs, which would have been found to be harmful in 1981, resulted in 39,000 deaths annually. Failure to routinely update the state of knowledge can have large health consequences. The process of building knowledge and practice in medicine and public health needs fundamental revision.

Introduction

Practitioners of medicine and public health presumably seek to act on the basis of the best available knowledge of whatever condition they are treating. They may resort to established practice or to the findings of a recent or widely cited publication.(1) Both of these approaches can be problematic. Theorists of science agree(2, 3) that the best available knowledge is ascertained by means of systematic reviews—often meta-analyses—which gather available studies of given questions (meeting specified standards), evaluate these studies (using explicit criteria), and synthesize the resulting body of evidence (with a standardized process) to determine the state of collective knowledge on the question.
Systematic review can find that the evidence available is insufficient to draw a conclusion, or that available evidence of various strengths supports or negates a conclusion. The systematic review indicates the weight of available evidence to date on given questions—a critical component in the decision of a course of action. While use of systematic reviews for "evidence-based practice" has grown in recent decades, it is not routine, and the practice of prospective cumulative systematic reviews, i.e., adding each new study of a topic to an ongoing—thus cumulative—systematic review, is rare. Nor is it clear that, when such systematic evidence is available, it is readily accessible and used by practitioners.(4)

In this paper we use available cumulative systematic reviews to estimate the mortality consequences of failure to routinely conduct cumulative systematic reviews and continually translate their findings into practice. Available cumulative systematic reviews examine what might have happened had ongoing systematic reviews been conducted and the findings deployed in practice. We conclude by recommending routine prospective cumulative systematic reviews to promote those benefits. Recommendations for this practice have been made for more than 25 years,(5, 6) without substantial uptake by the research community in either medicine or public health.

Methods

Our analysis centers on the classic study by Antman and colleagues,(5, 7) which conducted retrospective cumulative meta-analyses of 15 interventions to reduce mortality among patients who had suffered acute myocardial infarctions; the study search period ran from 1966 to 1991. We assessed mortality associated with 3 interventions in the Antman study indicating benefit: intravenous vasodilators (nitroglycerin and nitroprusside) administered during hospitalization, and β-blockers and aspirin administered both during hospitalization and for secondary prevention after hospital discharge. We also assessed the consequences of one intervention shown by cumulative meta-analysis to be harmful for routine use in the treatment of acute myocardial infarctions—Class 1 anti-arrhythmic drugs; because this harm was unrecognized, the harmful practice continued.(5)

Antman's study provides three critical pieces of information:

1. the year in which the ongoing, cumulative meta-analysis first indicated benefit (or harm) at a chosen level of statistical significance;
2. the effect size determined—commonly an odds ratio indicating the proportion of mortality reduced by the intervention compared with placebo or no intervention; and
3. the year in which the intervention became routine in practice (or was eliminated from practice because it is harmful or ineffective).
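The cumulative procedure at the heart of such an analysis can be illustrated with a minimal sketch: trials are pooled in chronological order, and the pooled estimate and its confidence interval are recomputed each time a new trial is added, so the year in which the evidence first reaches significance is identified as soon as it occurs. The sketch below uses simple fixed-effect (inverse-variance) pooling of log odds ratios and entirely hypothetical trial counts; it is not the pooling method or the data of the original Antman or Lau analyses.

```python
import math

# Hypothetical per-trial counts: (deaths_treated, n_treated, deaths_control, n_control).
# Illustrative values only; not taken from any published trial.
studies = [
    (12, 100, 20, 100),
    (30, 250, 45, 250),
    (8, 120, 15, 118),
]

def log_odds_ratio(deaths_t, n_t, deaths_c, n_c):
    """Log odds ratio of death (treated vs. control) and its variance,
    with a 0.5 continuity correction to guard against zero cells."""
    a, b = deaths_t + 0.5, (n_t - deaths_t) + 0.5
    c, d = deaths_c + 0.5, (n_c - deaths_c) + 0.5
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

# Fixed-effect (inverse-variance) pooling, recomputed as each trial is added
# in chronological order -- the "cumulative" step.
sum_w = sum_wy = 0.0
for i, study in enumerate(studies, start=1):
    y, v = log_odds_ratio(*study)
    w = 1.0 / v
    sum_w += w
    sum_wy += w * y
    pooled, se = sum_wy / sum_w, math.sqrt(1.0 / sum_w)
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"After study {i}: pooled OR {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

A prospective cumulative review would simply rerun this update whenever a new trial appears, rather than reconstructing the sequence retrospectively.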
Routine practice was assessed by examination of samples of published review papers and texts on treatment for myocardial infarction in each year over recent decades. Review papers and textbook statements were classified as: a) recommending routine use, b) recommending use in specific circumstances, c) recommending rare use or nonuse, d) experimental, or e) use not mentioned.

We used information about mortality from acute myocardial infarction before, during, and after hospitalization to estimate mortality associated with failure to use information on benefits (or harms) of interventions that were not used (or were used) because cumulative systematic reviews had not been conducted, so that the requisite knowledge was not available. Deaths from myocardial infarction occurring prior to hospitalization were excluded from the analysis because they would not have been subject to the interventions under study. Deaths that occurred after hospital admission and after hospital discharge were included because they might have been addressed by the interventions for either acute or longer-term treatment. While rates and numbers of deaths associated with myocardial infarction have changed over the study period, we use the number of deaths at the approximate midpoint of the study period, i.e., 1980, to estimate attributable mortality.

We use the epidemiological method of population attributable risk (PAR) to estimate the number of deaths that might have been averted had the intervention discovered retrospectively by cumulative meta-analysis been known:

PAR = Pe(RR - 1) / (Pe(RR - 1) + 1),

where Pe is the prevalence of nonuse of the practice we are assessing, and RR is the relative risk of the outcome (death, in this case) associated with nonuse of the practice compared with use. Applied to the annual number of deaths, the PAR yields the number of deaths that might have been averted with use of the preventive measure in question. We assess the consequences of 100% prevalence of nonuse of the preventive measure. Thus Pe drops from the equation, and the equation becomes simply PAR = (RR - 1) / RR. In sensitivity analyses, we assess the benefits of partial adoption, i.e., Pe < 100%, or of changing other parameters. The RR is the inverse of the odds ratio associated with the benefit of each intervention explored and reported in Lau.(7)
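To make the attributable-mortality arithmetic concrete, the following is a minimal sketch of the calculation described above. The function name and the example odds ratio and death count are hypothetical; the paper's own estimates use the odds ratios reported in Lau(7) and 1980 mortality figures.

```python
def attributable_deaths(odds_ratio_benefit, annual_deaths, prevalence_nonuse=1.0):
    """Estimate deaths attributable to nonuse of a beneficial intervention.

    odds_ratio_benefit -- pooled odds ratio of death with the intervention
                          versus without (values < 1 indicate benefit)
    annual_deaths      -- in-hospital plus post-discharge MI deaths in the
                          reference year
    prevalence_nonuse  -- Pe, the proportion of eligible patients not
                          receiving the intervention (1.0 in the main analysis)
    """
    rr = 1.0 / odds_ratio_benefit  # RR of death with nonuse, relative to use
    par = (prevalence_nonuse * (rr - 1)) / (prevalence_nonuse * (rr - 1) + 1)
    return par * annual_deaths

# Illustrative call with hypothetical inputs (not the paper's figures):
# an OR of 0.80 gives RR = 1.25 and PAR = 0.20, i.e., 20% of 200,000 deaths.
print(round(attributable_deaths(odds_ratio_benefit=0.80, annual_deaths=200_000)))
```

Setting prevalence_nonuse below 1.0 reproduces the sensitivity analyses for partial adoption, since Pe then remains in the PAR expression.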
Results and Discussion

Mortality associated