FROM THE JAMES LIND LIBRARY

Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO

Kay Dickersin1 • Iain Chalmers2

1 Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205, USA
2 The James Lind Initiative, Oxford, UK

Correspondence to: Kay Dickersin. Email: [email protected]

J R Soc Med 2011;104:532–538. DOI 10.1258/jrsm.2011.11k042. The online version of this article can be found at http://jrs.sagepub.com/content/104/12/532

DECLARATIONS

Competing interests: None declared
Funding: None
Ethical approval: Not applicable
Guarantor: KD
Contributorship: Both authors contributed equally
Acknowledgements: The authors are grateful to Doug Altman and Mike Clarke for drawing their attention to relevant historical material; and to Doug Altman, An-Wen Chan and Sally Hopewell for commenting on earlier drafts of this brief history of reporting biases. Additional material for this article is available from the James Lind Library website (www.jameslindlibrary.org), where it was originally published.

Why is incomplete reporting of research a problem?

Under-reporting of the results of research in any field of scientific enquiry is scientific misconduct because it delays discovery and understanding. In the field of clinical research, incomplete and biased reporting has resulted in patients suffering and dying unnecessarily.1 Reliance on an incomplete evidence base for decision-making can lead to imprecise or incorrect conclusions about an intervention's effects. Biased reporting of clinical research can result in overestimates of beneficial effects2 and suppression of harmful effects of treatments. Furthermore, planners of new research are unable to benefit from relevant past research.

Failure to publish is also unethical. Participants in clinical research are usually assured that their involvement will contribute to knowledge; but this does not happen if the research is not reported publicly and accessibly. Moreover, failure to publish is simply a waste of precious research and other resources.3 Every year an estimated 12,000 clinical trials which should have been fully reported are not, wasting just under a million tonnes of carbon dioxide annually – the carbon emission equivalent of about 800,000 round-trip flights between London and New York.4

In brief, failure to report research findings is not only unscientific but also unethical.5–8 How did this problem come to be recognized and investigated, and what steps are being taken today to deal with it?

Evidence of biased reporting of studies

'Reporting bias' occurs when the nature and direction of the results of research influences their dissemination. Research results that are not statistically significant ('negative') tend to be under-reported,9 while results that are regarded as exciting or statistically significant ('positive') tend to be over-reported.10–12 The nature and direction of research results can influence whether or not research is reported at all,9,13 and if so, in which forms.14 They can also influence the speed at which results are reported,15–17 the language in which they are published,18,19 and the likelihood that the research will be cited.20–25

Failure to publish research findings is pervasive.26,27 Studies demonstrating failure to publish have included research conducted in many countries, including Australia, France, Germany, Spain, Switzerland, the United Kingdom and the United States. For example, an analysis of follow-up studies based on 29,729 reports of research made available only in abstract form found that fewer than half of the studies went on to full publication, and that positive results were positively associated with full publication, regardless of whether 'positive' results had been defined as any 'statistically significant' result or as 'a result favoring the experimental treatment'.14

Recognition and investigation of biased reporting of research

The problem of reporting bias has been recognized for hundreds of years. In the 17th century, Francis Bacon noted that 'The human intellect … is more moved by affirmatives than by negatives';28 and Robert Boyle, the chemist, lamented the common tendency among scientists not to publish their results until they had a 'system' worked out, with the result that 'many excellent notions or experiments are, by sober and modest men, suppressed'.29 Other scientists, across many fields, have also recognized the problem over the years.30–35

For example, the bronze statue of Albert Einstein outside the National Academy of Sciences in Washington, DC is inscribed with a quotation from a letter that he wrote on 3 March 1954, for a conference of the Emergency Civil Liberties Committee:

'Academic freedom as I understand it means having the right to seek the truth and to publish and teach what is believed to be true. Naturally this right comes together with the duty not to withhold a part of what is believed to be true. It is clear that any restriction on academic freedom hinders the dissemination of knowledge in the population and therefore restrains rational judgement and action.'36

In 1959, the father of medical statistics in Britain, Austin Bradford Hill, wrote:

'A negative result may be dull but often it is no less important than the positive; and in view of that importance it must, surely, be established by adequate publication of the evidence.'33

And in the same year, Seymour Kety, an American psychiatrist, wrote:

'A positive result is exciting and interesting and gets published quickly. A negative result, or one which is inconsistent with current opinion, is either unexciting or attributed to some error and is not published. So that at first in the case of a new therapy there is a clustering toward positive results with fewer negative results being published. Then some brave or naïve or nonconformist soul, like the little child who said that the emperor had no clothes, comes up with a negative result which he dares to publish. That starts the pendulum swinging in the other direction, and now negative results become popular and important.'37

Although the importance of reporting biases had been recognized for centuries, it was not until the second half of the 20th century that researchers began to investigate them – first social scientists, then health researchers.38–40 Unsurprisingly, researchers who have exposed reporting biases are often those who have also been involved in the application of methods for research synthesis.

Investigations of biased reporting of research began with surveys of journal articles, which revealed improbably high proportions of published studies showing statistically significant differences.41–43 Subsequent surveys of authors and peer reviewers showed that research that had yielded 'negative' results was less likely than other research to be submitted or recommended for publication.44–47 These findings were reinforced by the results of experimental studies, which showed that studies with no reported statistically significant differences were less likely to be accepted for publication.48–50

The most direct evidence of publication bias in the medical field has come from following up cohorts of studies identified at the time of funding,51 ethics approval,52,53 submission for drug licences,54–56 or when they were reported in summary form, for example in conference abstracts.14,57 Systematic reviews of this body of evidence have shown that 'positive findings' are the principal factor associated with subsequent publication: a systematic review of data from five cohort studies following research projects from inception found that, overall, the odds of publication for studies with 'positive' findings was about two and a half times greater than the odds of publication of studies with 'negative' or 'null' results, and that study results were the principal factor explaining these differences in reporting.9,13,27,58

Even when studies are eventually reported in substantive publications, 'negative' findings take longer to appear in print:15,17,59,60 on average, clinical trials with 'positive results' are published about a year sooner than trials with 'null or negative results'. There is also evidence that, compared to negative or null results, statistically significant results tend to be published in journals with higher impact factors,52 and that publication in the mainstream ('non-grey') literature is associated with an overall 9% larger estimate of treatment effects compared to reports in the grey literature.
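To make concrete what 'odds of publication about two and a half times greater' (cited above) means, a purely hypothetical worked example may help; the percentages below are illustrative assumptions, not figures taken from the cited reviews. If, say, 73% of studies with 'positive' findings and 52% of studies with 'negative' or 'null' findings in a cohort reached full publication, the odds ratio for publication would be

% illustrative calculation only; p_+ and p_- are assumed publication proportions
\mathrm{OR} = \frac{p_{+}/(1-p_{+})}{p_{-}/(1-p_{-})} = \frac{0.73/0.27}{0.52/0.48} \approx \frac{2.70}{1.08} \approx 2.5

whereas the corresponding risk ratio, 0.73/0.52 ≈ 1.4, is considerably smaller: an odds ratio of 2.5 does not mean that studies with 'positive' findings are 2.5 times as likely to be published.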