Tools to Assess the Risk of Bias Due to Inaccessible Data in Evidence Syntheses: Protocol for a Systematic Review

Correspondence to: Dr. Matthew Page, School of Public Health and Preventive Medicine, Monash University, 553 St Kilda Road, Melbourne, Victoria, 3004, Australia. Email: [email protected]

BACKGROUND

The credibility of evidence syntheses can be compromised when some outcome data are inaccessible to users because of the nature of the results (e.g. statistical significance, magnitude or direction of effect estimates) (1). Examples include when a study with null findings or results favouring the comparator is not published ("publication bias" (2)), or when outcomes that are statistically non-significant are not reported, or are only partially reported, in a journal article ("outcome reporting bias" (3)). Such biased reporting of research is common. Syntheses of cohorts of trials followed from protocol/registration to publication suggest that half of all trials are not published (2, 4), that trials with statistically significant results are twice as likely to be published as those without (4), and that 13% to 50% of pre-specified outcomes are not reported, or are only partially reported, in trial publications (5).

Several approaches to address the risk of such bias due to inaccessible data have been advocated. Review authors are encouraged to perform comprehensive and sensitive searches of various sources (e.g. bibliographic databases such as MEDLINE®, and "grey literature" databases such as OpenSIGLE). Trial registries such as ClinicalTrials.gov can be used to identify completed but unpublished trials, pre-specified but non-reported outcomes, and aggregated data posted prior to a trial's publication (1). Clinical study reports prepared by drug/device manufacturers for regulators (such as the European Medicines Agency) provide more extensive data from a trial than those presented in journal articles (6). Funnel plots and tests for funnel plot asymmetry may be used in some circumstances to infer publication bias, by examining whether the effect estimates of smaller trials are distributed asymmetrically around the meta-analytic estimate or around the estimates of the larger trials (7). In addition, the Cochrane risk of bias tool for randomized trials ("selective reporting" domain) (8) or the "outcome reporting bias in trials" (ORBIT) classification system (3) can be used to record non-reported or partially reported outcomes in trials.

Despite the availability of several approaches, most systematic reviewers do not adequately address the risk of bias due to inaccessible data (9-11). In a cross-sectional study of 300 systematic reviews indexed in MEDLINE® in February 2014 (9), 56% of authors considered an assessment of publication bias infeasible, nearly always because the small number of included studies or the inability to perform a meta-analysis precluded the use of funnel plots. Funnel plots and associated statistical tests were used in 31% of reviews; however, 43% of these reviews included fewer than 10 studies of varying size, meaning the plots were difficult to interpret and the tests had low statistical power (7). Only 19% of reviews included a search of a trial registry, and only 7% included a search of another source of data disseminated outside of journal articles. The risk of outcome reporting bias in the included studies was assessed in only 24% of reviews (9).
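To make the funnel-plot-based approach described above concrete, the following is a minimal sketch (not part of the protocol) of Egger's regression test, one widely used test for funnel plot asymmetry. The trial effect estimates and standard errors are invented for illustration.

```python
# Minimal sketch of Egger's regression test for funnel plot asymmetry.
# All data below are invented; this is illustrative, not the protocol's method.
import numpy as np
from scipy import stats

def eggers_test(effects, standard_errors):
    """Regress standardized effects on precision; an intercept far from zero
    suggests funnel plot asymmetry (possible small-study effects)."""
    se = np.asarray(standard_errors, dtype=float)
    z = np.asarray(effects, dtype=float) / se  # standardized effect estimates
    precision = 1.0 / se
    result = stats.linregress(precision, z)
    t_stat = result.intercept / result.intercept_stderr
    df = len(se) - 2
    p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided test on the intercept
    return result.intercept, p_value

# Hypothetical log odds ratios and standard errors from ten trials
log_or = [-0.30, -0.10, 0.20, -0.50, 0.10, -0.40, -0.20, 0.00, -0.60, -0.15]
se = [0.10, 0.15, 0.30, 0.25, 0.12, 0.35, 0.20, 0.18, 0.40, 0.14]
intercept, p = eggers_test(log_or, se)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```

Consistent with the caveat above, such tests have low statistical power when a meta-analysis includes fewer than about 10 studies (7).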
Another study showed that even when outcome reporting bias was detected in trials, few authors acknowledged that the synthesis was missing data that were not reported or only partially reported (12). One possible explanation for this inadequate handling is that the factors that must be considered are fragmented across separate methods and guidance. A similar problem occurred a decade ago with the assessment of risk of bias in randomized trials: some systematic reviewers assessed problems with randomization, while others assessed problems with blinding or attrition (13). It was not until all the important bias domains were brought together into a single tool, the Cochrane risk of bias tool for randomized trials (8), that systematic reviewers started to assess risk of bias in trials comprehensively (14). Linking all the components needed to judge the risk of bias due to inaccessible data into a single, novel tool may have a similar impact on the conduct of evidence syntheses, allowing systematic reviewers to reach conclusions that are more trustworthy for decision makers.

When developing a new measurement tool, its structure and content should be informed by a review of existing tools (15). Several reviews have examined the properties of tools designed to assess the methodological quality/risk of bias in randomized trials (16, 17), non-randomized studies of interventions (18), diagnostic test accuracy studies (19), and systematic reviews (20, 21). However, to our knowledge, no prior review has focused on tools to assess risk of bias due to inaccessible data. Therefore, the aim of this systematic review is to summarise the properties of existing tools designed to assess risk of bias due to inaccessible data in evidence syntheses.

METHODS

Eligibility criteria

We will include any paper reporting a tool designed to assess the risk of bias due to unpublished studies ("publication bias"), the risk of bias due to non-reporting or partial reporting of outcomes ("outcome reporting bias"), or both sources of bias. By "tool", we mean a structured approach that requires users to identify potential problems in the studies and to make a judgement about the corresponding risk of bias in the results. Eligible tools can take any form, including scales, checklists, or domain-based tools. To be considered a scale, each item must have a numeric score attached to it, and an overall summary score must be calculable (16). Checklists include multiple questions, but the developers' intention is neither to attach a numerical score to each response nor to calculate an overall score (17). Domain-based tools require users to judge quality/risk of bias within specific bias domains and to record the information on which each judgement is based (8).

We will only include tools that were designed for completion by authors conducting a synthesis of evidence (e.g. meta-analysis, narrative synthesis). Tools with a broad scope (e.g. to assess the overall quality/risk of bias in the evidence) will be eligible if one of their mandatory components focuses on risk of bias due to inaccessible data. We will include multi-dimensional tools with a statistical component (e.g. those that require users to answer questions as well as perform statistical tests for funnel plot asymmetry). We will also include any studies that have evaluated the properties of existing tools (e.g. construct validity, inter-rater reliability, time taken to complete assessments).
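As context for such evaluations, inter-rater reliability of tool assessments is commonly quantified with Cohen's kappa coefficient (cited in the data items listed below). The following is a minimal sketch assuming two raters assign categorical judgements (e.g. "low"/"unclear"/"high") to the same set of items; the ratings are invented.

```python
# Minimal sketch of Cohen's kappa for inter-rater agreement.
# Ratings are invented placeholders, not data from any included study.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["low", "high", "unclear", "low", "low", "high", "unclear", "low"]
b = ["low", "high", "low", "low", "unclear", "high", "unclear", "low"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints kappa = 0.60
```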
Papers will be eligible regardless of the date, language, or format of publication. We will exclude papers describing guidelines to address bias due to inaccessible data (e.g. the Cochrane Handbook chapter on reporting bias (22)), and general lists of items to consider rather than structured tools, since such lists cannot be applied evaluatively. Tools will be ineligible if they were developed for one specific systematic review, since such tools are unlikely to have been developed systematically. We will exclude tools developed for users to appraise published systematic reviews, such as the ROBIS tool (23) or AMSTAR (24). We will also exclude papers that only describe the development or evaluation of statistical methods to assess or adjust for the risk of bias due to inaccessible data, as these have been reviewed extensively elsewhere (7, 25, 26).

Search methods

We will search Ovid MEDLINE (January 1946 to February 2017), Ovid EMBASE (January 1980 to February 2017), and Ovid PsycINFO (January 1806 to February 2017). We have developed search strategies in conjunction with an information specialist, using search terms adopted by Whiting et al. to identify quality assessment tools (21) (see the full Boolean search strategies in Appendix 1). To capture any tools posted on the Internet or not published by formal academic publishers (i.e. grey literature), we will search Google Scholar using the phrase "reporting bias tool". We will screen the titles of the first 300 results, as recommended by Haddaway et al. (27). To capture any papers that may have been missed by all searches, we will screen the references of included articles.

Study selection and data collection

One author will screen all titles and abstracts retrieved by the searches. The same author will screen any full-text articles retrieved, and a second author will independently verify all inclusion decisions. Disagreements at this stage will be resolved via discussion until consensus is reached, or by referral to a third author where necessary. One author will extract data from the included papers using a standardised data collection form, and another author will verify the accuracy of the extracted data. We will extract the following data:

• type of tool (scale, checklist, domain-based, or other);
• types of bias due to inaccessible data addressed by the tool (e.g. unpublished studies, non- or partially reported outcomes);
• level of assessment (i.e. whether users direct assessments at the meta-analysis or at the individual studies included in the meta-analysis);
• whether the tool is designed for general use (generic) or targets syntheses of specific study designs or topic areas (specific);
• items included in the tool;
• how items within the tool are rated;
• how items were selected for inclusion;
• methods used to develop the tool (e.g. Delphi study, expert consensus meeting);
• availability of guidance to assist with completion of the tool (e.g. a guidance manual);
• any psychometric properties recorded for the tool (e.g. inter-rater reliability via Cohen's kappa coefficient (28));
• time taken to complete the tool.

Data analysis

We will summarise the characteristics of the included tools in tables. We will sum the number of items per tool and calculate the median (interquartile range) number of items across all tools.
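As a small illustration of this planned summary, the following sketch computes the median and interquartile range of item counts; the counts are placeholders rather than extracted data.

```python
# Minimal sketch of the planned median (IQR) summary of items per tool.
# Item counts are hypothetical placeholders, not extracted data.
import numpy as np

item_counts = [5, 6, 7, 8, 9, 12, 15]  # hypothetical number of items per included tool

median = np.median(item_counts)
q1, q3 = np.percentile(item_counts, [25, 75])
print(f"Median number of items: {median:g} (IQR {q1:g} to {q3:g})")
```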
