
CLINICAL TRIAL WORKSHOP

Wilbroad Mutale, MD, MPhil, PhD
University of Zambia

STUDY DESIGNS: OBSERVATIONAL STUDIES

DIRECTION FOR STUDIES

STRENGTH ORDER OF STUDIES

• Descriptive
• Case-control
• Cohort
• RCT

OBSERVATION VS. EXPERIMENT

• Observations are readily obtained, but subject to bias, that is, systematic errors

• Some observational designs are less subject to bias than others

OBSERVATION VS. EXPERIMENT

• Experiments are hard to do well

• Experiments can answer narrow questions definitively

• Generalizability of results from experiments may be at issue, e.g. new drug testing that excludes women

EXPERIMENT:

• Identify subjects,

• Place in common context,

• Intervene, then

• Observe effects of intervention

OBSERVATIONAL STUDIES

• Case-control studies
• Cohort studies
• Population studies (e.g. ecological)

CASE REPORTS & SERIES

• Example: “Normal plasma cholesterol in an 88-year-old man who eats 25 eggs a day” (case report)

• Then followed by similar cases elsewhere (series)

ADVANTAGES OF CASE REPORTS/SERIES

• Excellent at identifying unusual situations

• Good for generating hypotheses amenable to rigorous testing

DISADVANTAGES

• Generally short-term

• Investigators self-select (bias!)

• Generally no controls

CASE-CONTROL STUDIES

• Cases compared with controls

• Tend to be retrospective

HOW ARE CASE-CONTROL STUDIES DONE?

• Identify cases with the condition of interest (e.g. coccidioidomycosis)

• Match to disease-free controls who are similar with respect to known risk factors for the condition

• Compare degree of exposure to a possible risk factor (e.g. exposure to a dust cloud)

THE LOGIC OF CASE-CONTROL STUDIES

• Cases differ from controls only in having the disease.

• If exposure does not predispose to having the disease, then exposure should be equally distributed between the cases and controls.

THE LOGIC OF CASE-CONTROL STUDIES

• The extent of greater previous exposure among the cases reflects the increased risk that exposure confers

MEASURES OF EFFECT IN CASE-CONTROL STUDIES

• Relative risk (ratio of probabilities of contracting disease given exposure), or

• Odds ratio (ratio of the odds of contracting disease given exposure)

• An OR of 1 indicates no effect of exposure (equal odds)

EXAMPLE: CASE-CONTROL REPORTING

• “Physically being in a dust cloud (OR 3.0; CI, 1.6-5.4; P<.001) significantly increased the risk for being diagnosed with coccidioidomycosis”
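An odds ratio like the one reported above can be reproduced in a few lines of Python. The 2x2 counts below are invented for illustration (they are not the actual dust-cloud study data), and the confidence interval uses the standard Woolf log-odds method:

```python
import math

# Hypothetical 2x2 counts (illustrative only; NOT the dust-cloud study's data):
#                 exposed   unexposed
a, b = 30, 10    # cases
c, d = 20, 40    # controls

# Odds ratio: odds of exposure among cases vs. odds of exposure among controls
odds_ratio = (a * d) / (b * c)

# 95% CI on the log scale (Woolf method)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

With these made-up counts the OR is 6.0; an OR of 1 would indicate no association between exposure and disease.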
RELATIVE COMPARISON
• The relative risk, or risk ratio, compares the risk in the exposed to the risk in the unexposed

• RR is a better measure of association in that it compares to the baseline

• It says something about how much the risk increases or decreases compared to baseline

COHORT STUDIES

• Prospective

• Controlled

• Can help determine causes of disease as well as identify risk factors

• Generally expensive and difficult to carry out

PROCEDURE FOR COHORT STUDY

• Identify groups of exposed subjects and controls

• Match for other risk factors
• Follow over time

• Record the fraction in each group who develop the condition of interest

• Compare these fractions using RR

LOGIC OF THE COHORT STUDY

• Differences in the rate at which exposed and control subjects contract a disease are due to the differences in exposure, since other known risk factors are equally present in the two groups

INCIDENCE RATE

• The numerator is the number of new events that occur in a defined time period and the denominator is the population at risk of experiencing the event during the period

• Each person in the study population contributes one person-year to the denominator per year

Incidence rate = number of cases/events in a given time period / total person-years (sum)

EXAMPLE: RR

• Risk in exposed = 49.6
• Risk in unexposed = 17.7
• RR = 49.6/17.7 = 2.8 (an almost three-fold increase in risk)
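The incidence-rate formula and the RR arithmetic above can be sketched in Python. The person-year totals below are invented to reproduce the slide's rates, and the rates are assumed to be per 1,000 person-years:

```python
# Incidence rate = new events in a period / total person-years at risk.
# Person-year totals are hypothetical, chosen to reproduce the slide's rates.
def incidence_rate(cases, person_years, per=1000):
    """New events per `per` person-years of follow-up."""
    return cases / person_years * per

rate_exposed = incidence_rate(cases=124, person_years=2500)     # -> 49.6
rate_unexposed = incidence_rate(cases=177, person_years=10000)  # -> 17.7

# Relative risk: rate in exposed divided by rate in unexposed
relative_risk = rate_exposed / rate_unexposed
print(f"RR = {relative_risk:.1f}")  # 49.6 / 17.7, roughly 2.8
```

Because RR divides by the baseline rate, it directly expresses how many times more (or less) likely the outcome is in the exposed group.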
EXPERIMENTAL STUDIES
• Clinical trials: A clinical trial is a comparative, prospective experiment conducted in human or animal subjects. (John will talk more)

INFERENCE:

• Controlled trials make it possible to ascribe differences in outcome to differences in treatment

NEW TECHNOLOGIES IN HEALTHCARE

• We are confronted by an ever-increasing array of new technologies that promise to revolutionize healthcare
• While some of these technologies hold true promise, others may not add the benefits we imagine
• Developing a systematic approach to health technology assessment is vital for modern health systems, yet capacity is lacking in many LMICs

GENERAL CONSIDERATIONS

• Although many different types of technologies (e.g., diagnostic tools, devices, information systems) are applicable to healthcare, our focus is diagnostic tests
• Diagnostic tests are initially evaluated in two important ways:
• Analytical performance studies
• Clinical performance studies

ANALYTICAL VS CLINICAL PERFORMANCE STUDIES

• The purpose of an analytical performance study is to establish the intrinsic capabilities of the test
• How well does the test detect a specific analyte or group of analytes?
• The purpose of a clinical performance study is to measure the performance of the test in clinical populations
• How well does the test detect the disease or outcome of interest?
• While complementary, these two types of studies are not equivalent

ANALYTICAL PERFORMANCE STUDIES

• Purpose: To determine how well the test detects a specific analyte or group of analytes
• What is evaluated: Specimen type(s), accuracy, limit of detection (analytical sensitivity), interference/cross-reactivity (analytical specificity), run time, stability
• Types of specimens: Commercial samples
• Setting: Laboratory

CLINICAL PERFORMANCE STUDIES

• Purpose: To determine how well the test detects the disease or outcome of interest
• What is evaluated: Clinical sensitivity, clinical specificity, positive and negative predictive values, positive and negative likelihood ratios
• Types of specimens: Stored specimens, prospectively collected specimens
• Setting: Clinical or laboratory

COMMON DESIGNS FOR CLINICAL PERFORMANCE STUDIES

• Clinical performance studies are commonly cross-sectional
• Since diagnostic tests are used for a variety of different clinical applications (screening, diagnostic confirmation, staging, monitoring, prognosis), the design should consider the test’s intended use
• Once performance is established, other designs are used in field/implementation studies

CHOOSING A REFERENCE STANDARD

• The reference standard is used to define “true positives” and “true negatives”
• Choosing the right reference (gold) standard helps to establish external validity in clinical performance studies
• Surrogate markers of the disease/outcome should be avoided

MEASURES OF CLINICAL PERFORMANCE

• Sensitivity = true positive rate (TPR)
• Specificity = true negative rate (TNR)
• Positive predictive value (PPV)
• Negative predictive value (NPV)

• Positive likelihood ratio (PLR)
• Negative likelihood ratio (NLR)

SENSITIVITY AND SPECIFICITY

                    Reference Standard
                    +                  -
Index test   +      True positives     False positives
             -      False negatives    True negatives

Sensitivity (TPR) = TP/(TP+FN)
Specificity (TNR) = TN/(TN+FP)

POSITIVE AND NEGATIVE PREDICTIVE VALUES

                    Reference Standard
                    +                  -
Index test   +      True positives     False positives     PPV = TP/(TP+FP)
             -      False negatives    True negatives      NPV = TN/(FN+TN)

Sensitivity (TPR) = TP/(TP+FN)
Specificity (TNR) = TN/(TN+FP)

EXAMPLE

                    Cervical cancer
                    +           -
HPV          +      TP = 90     FP = 40     PPV = 90/(90+40)
             -      FN = 10     TN = 60     NPV = 60/(10+60)

Sensitivity (TPR) = 90/(90+10)     FPR = 40/(40+60)     PLR = 0.9/0.4
Specificity (TNR) = 60/(60+40)     FNR = 10/(90+10)     NLR = 0.1/0.6
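The HPV example above can be worked through in a few lines of Python, with the counts taken directly from the table:

```python
# Diagnostic accuracy measures from the HPV / cervical cancer 2x2 table.
tp, fp, fn, tn = 90, 40, 10, 60

sensitivity = tp / (tp + fn)   # true positive rate: 90/100
specificity = tn / (tn + fp)   # true negative rate: 60/100
ppv = tp / (tp + fp)           # positive predictive value: 90/130
npv = tn / (fn + tn)           # negative predictive value: 60/70
plr = sensitivity / (1 - specificity)   # positive likelihood ratio: 0.9/0.4
nlr = (1 - sensitivity) / specificity   # negative likelihood ratio: 0.1/0.6

print(f"Sens={sensitivity:.2f} Spec={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} PLR={plr:.2f} NLR={nlr:.2f}")
```

Note that sensitivity and specificity are properties of the test, whereas PPV and NPV also depend on the prevalence of disease in the study population.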
REPORTING STANDARDS FOR CLINICAL PERFORMANCE STUDIES
• STARD reporting guidelines: https://www.equator-network.org/reporting-guidelines/stard/
• Methods section should include:
• Description of index test and reference standard
• Rationale for choosing reference standard
• Definition of test positivity, how indeterminate tests were handled
• How missing data on the reference standard and index test were handled
• Whether information on the reference standard and index test were available clinically
• Measures for estimating and comparing clinical performance
• Sample size determination

MINIMIZING BIAS IN CLINICAL PERFORMANCE STUDIES

• Spectrum bias can occur when the study population is not representative of the target population
• Work-up/verification bias can occur when there are differences in the testing strategy for different participants, resulting in inconsistent verification of the disease/outcome of interest

OTHER CONSIDERATIONS

• Once the clinical performance of a new test has been established, additional studies may be needed to guide decisions about whether to adopt the new test as part of routine practice. These include:
• Implementation studies to assess acceptability, feasibility, and/or readiness for change
• Randomized trials to assess clinical effectiveness and/or implementation strategies
• Decision analyses to assess cost-effectiveness and/or budget impact

THANK YOU FOR LISTENING!