
Implementation of Pharmacogenomics: Evidence Needs

Mary V. Relling and David L. Veenstra*

February 26, 2015

*The authors are participants in the activities of the IOM Roundtable on Translating Genomic- Based Research for Health.

The views expressed in this discussion paper are those of the authors and not necessarily of the authors’ organizations or of the Institute of Medicine. The paper is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of the Institute of Medicine and is not a report of the Institute of Medicine or of the National Research Council.

Copyright 2015 by the National Academy of Sciences. All rights reserved.

The types of evidence needed to support the use of genomic sequencing in the clinic vary by stakeholder and circumstance. In this IOM series, seven individually authored commentaries explore this important issue, discussing the challenges involved in and opportunities for moving clinical sequencing forward appropriately and effectively.

Implementation of Pharmacogenomics: Evidence Needs

Mary V. Relling, St. Jude Children’s Research Hospital; and David L. Veenstra, Pharmaceutical Outcomes Research and Policy Program, University of Washington1,2

BACKGROUND

There has been much attention to the gaps in evidence that preclude translating genomic testing into clinical use, particularly for disease risk (EGAPP, 2014). However, there are multiple examples of using genomic testing to inform treatment decisions (Bielinski et al., 2014; Gottesman et al., 2013; Hoffman et al., 2014; Johnson et al., 2013; O'Donnell et al., 2014; Pulley et al., 2012; Shuldiner et al., 2014), and in many instances, there is sufficient evidence to justify using genetic test results to inform the choice or dosage of medications. Prescribing decisions are routinely made on the basis of imperfect evidence and on extrapolations between solid evidence of mechanisms underlying interpatient variability in response and unstudied clinical scenarios. For example, it is well documented that acyclovir clearance depends on glomerular filtration; most serious adverse effects of acyclovir are dose related; and most data indicate that higher doses are more effective as antivirals than lower doses. In the clinic setting, one may encounter a patient with an abnormally low creatinine clearance of only 25 ml/min/1.73 m2 who has a viral infection that must be treated. It makes sense to reduce the acyclovir dose or dosing frequency. It also makes sense to have the extent of reduction mirror the degree of renal dysfunction, as is recommended by several groups (Gupta et al., 2005).
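
To make this kind of extrapolation concrete, the minimal Python sketch below shows how a renal dose adjustment of this sort might be encoded; the creatinine-clearance thresholds, doses, and adjustment factors are hypothetical placeholders for illustration only, not values from any dosing guideline.

```python
# Illustrative only: scale a dose regimen to the degree of renal dysfunction.
# Thresholds and factors below are hypothetical placeholders, not guideline values.

def adjust_regimen(standard_dose_mg: float, standard_interval_h: int,
                   crcl_ml_min_1_73m2: float) -> tuple[float, int]:
    """Return (dose_mg, interval_h) adjusted for creatinine clearance."""
    if crcl_ml_min_1_73m2 >= 50:        # placeholder: near-normal clearance
        return standard_dose_mg, standard_interval_h
    if crcl_ml_min_1_73m2 >= 25:        # placeholder: moderate impairment
        return standard_dose_mg, standard_interval_h * 2   # same dose, given less often
    if crcl_ml_min_1_73m2 >= 10:        # placeholder: severe impairment
        return standard_dose_mg * 0.5, standard_interval_h * 3
    return standard_dose_mg * 0.5, 24   # placeholder: minimal clearance

# A patient with a clearance of 25 ml/min/1.73 m2, as in the example above,
# would keep the standard dose but at a prolonged interval under this sketch.
print(adjust_regimen(standard_dose_mg=500, standard_interval_h=8,
                     crcl_ml_min_1_73m2=25))   # -> (500, 16)
```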

POLICY ISSUES/EVIDENCE GAPS

But are there randomized controlled clinical trials proving that antiviral outcomes are preserved and that toxicity is reduced by a 30 percent versus a 50 percent versus a 75 percent dosage reduction, or by prolonging the interval between doses from 8 to 12 or 24 hours? No, there are not. And to expend precious health care research dollars testing every permutation of renal function for every renally cleared drug would be a tremendous misallocation of those dollars. Prescribing recommendations can be based on extrapolations from imperfect data, and we make the assumption that such recommendations are better than the alternative of ignoring renal dysfunction. The analogy holds for the drug selection and dose refinements made every day by physicians and pharmacists on the basis of drug-drug interactions.

1 The authors are participants in the activities of the IOM Roundtable on Translating Genomic-Based Research for Health.
2 Suggested citation: Relling, M. V., and D. L. Veenstra. 2015. Implementation of pharmacogenomics: Evidence needs. Discussion Paper, Institute of Medicine, Washington, DC. http://nam.edu/wp-content/uploads/2015/06/PharmacogenomicsImplementation.

A better understanding of our evidence thresholds for nongenomic versus genomic-based interventions in pharmacotherapy is needed.

The Clinical Pharmacogenetics Implementation Consortium (CPIC), a joint effort of the National Institutes of Health’s Pharmacogenomics Research Network and the PharmGKB (Relling and Klein, 2011), was formed in recognition of the fact that there is sufficient evidence for analytic validity and clinical utility to implement some pharmacogenetic tests for prescribing decisions, but there have been insufficient resources to illustrate exactly how to use pharmacogenetic test information in prescribing. Largely adhering to the standards outlined in the IOM report Clinical Practice Guidelines We Can Trust (IOM, 2011), the consortium publishes evidence-based, freely available, peer-reviewed clinical guidelines that provide the resources needed to allow translation of clinical genetic test results into actionable prescribing decisions (Caudle et al., 2014). The process is arduous, but the number of examples for which genetic tests are actionable is relatively small. Nonetheless, this “final step” of defining logical implementation steps, after the evidence for a specific aspect of genomic medicine has been generated, itself requires resources (Chute and Kohane, 2013). Not only must the guidelines be created, but they must be constantly updated to capture new variants and new clinical data that will inform genomically based prescribing. Moreover, substantial effort is needed to make evidence-based recommendations for genes and drugs that are highly studied (e.g., CYP2D6 and metoprolol) but for which prescribing recommendations are not possible (e.g., there is not enough evidence to warrant changing the dose or drug choice even for those with high-risk CYP2D6 diplotypes). For genetic tests (e.g., MTHFR) that may be heavily marketed to clinicians and consumers but for which evidence does not support their clinical use for drug prescribing, reliable recommendations that the test results are not actionable are useful to clinicians (Hickey et al., 2013).

Gene/drug groups are prioritized for guideline development if there is substantial scientific evidence linking genomic variability with variability in drug effects, if the drugs involved have a narrow therapeutic range, if the underlying disease is serious, if the consequence of suboptimal prescribing is serious, if there are available alternatives with sound rationale, and if the genetic tests are available as laboratory tests approved under the Clinical Laboratory Improvement Amendments (Caudle et al., 2014; Relling and Klein, 2011). Examples of the types of evidence that may support genomically based prescribing include some or all of the following: randomized clinical studies of genetically based prescribing outcomes versus “standard of care,” preclinical and clinical studies linking pharmacologic effects or drug concentrations to genomic variation, case reports, in vivo pharmacokinetic studies in individuals with various genotypes, and in vitro functional studies (Caudle et al., 2014; Relling and Klein, 2011). For most gene-drug pairs, randomized controlled trials comparing clinical outcomes with genotype-guided dosing versus conventional dosing are not available.
Although considered a gold standard for informing some clinical decision making, randomized studies in pharmacogenomics can be problematic, given that the greatest benefit will be observed only in the (usually small) percentage of any population harboring the high-risk variants, and control groups often do not reflect usual clinical care. However, for many pharmacogenetic traits, the mechanisms are well understood, and randomized controlled trials are not necessary. Many actionable genetic variants affect drugs on a pharmacokinetic basis, analogous to the effects measured by using creatinine to assess renal function or bilirubin to assess hepatic function. Thus, many pharmacogenetic prescribing recommendations can be based on underlying pharmacokinetic evidence. These include many variants affecting drug metabolism and transport, such as thiopurine methyltransferase (TPMT) and thiopurines (Relling et al., 2013). In such cases, not only is there no need to generate evidence via randomized trials using reduced doses versus normal doses in the small percentage of patients with high-risk variants, but it would likely be unethical to do so. There are other pharmacogenetic associations, such as HLA-B and carbamazepine (Leckband et al., 2013), for which the evidence linking the genomic variants with drug effects has a less clear mechanistic basis; in such cases, there is understandably a higher bar for the evidence required to justify clinical action. Assuming that the costs of genomic testing continue to decline, evidence of cost-effectiveness becomes less important for the clinical decision-making process and guideline development, and generation of evidence can focus on how best to utilize this information in the clinic. Indeed, for many pharmacogenetic tests, the gaps to be filled are not necessarily a need to generate more evidence: the gaps lie in the disconnect between our knowledge of how medications should be prescribed and a health care system that is not designed to accommodate acting on that knowledge.

Genomic tests are “lifelong” tests that can have implications for all modes of patient care (inpatient, outpatient, community, and long-term care facilities) and disciplines (internal medicine, family medicine, specialty clinics, and so forth). But most health care systems do not capture genetic test data in machine-readable formats, the data are not available to all clinicians in contact with the patient, there is no system that follows the patient for life, and reimbursement for preemptive screening/testing is often problematic. The lack of truly comprehensive electronic health care records, the lack of interoperability among health care systems, the fact that medications are dispensed and that genetic data are generated at thousands of different independent sites, and the lack of coordination of health care on a per-patient basis are substantial impediments to moving the promise of pharmacogenomics into clinical use.
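
As a sketch of how such pharmacokinetically grounded recommendations can be operationalized, the Python fragment below maps a TPMT diplotype to a metabolizer phenotype and then to a starting-dose multiplier. The allele set, phenotype labels, and multipliers are simplified, hypothetical stand-ins for illustration; they do not reproduce the CPIC recommendations or any other published guideline.

```python
# Illustrative only: translate a pharmacogenetic result into a dose multiplier.
# The allele classifications and multipliers are hypothetical stand-ins and do
# not reproduce any published guideline.

NO_FUNCTION_ALLELES = {"*2", "*3A", "*3B", "*3C"}   # placeholder set of TPMT alleles

def tpmt_phenotype(diplotype: str) -> str:
    """Classify a TPMT diplotype such as '*1/*3A' by counting no-function alleles."""
    alleles = diplotype.split("/")
    lost = sum(a in NO_FUNCTION_ALLELES for a in alleles)
    return {0: "normal metabolizer",
            1: "intermediate metabolizer",
            2: "poor metabolizer"}[lost]

def thiopurine_dose_multiplier(phenotype: str) -> float:
    """Hypothetical starting-dose multipliers keyed to metabolizer status."""
    return {"normal metabolizer": 1.0,
            "intermediate metabolizer": 0.5,
            "poor metabolizer": 0.1}[phenotype]

result = tpmt_phenotype("*1/*3A")
print(result, thiopurine_dose_multiplier(result))   # intermediate metabolizer 0.5
```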

KEY RECOMMENDATIONS FOR EVIDENCE DEVELOPMENT

1. Develop national standards and conventions so that the results of genomic tests are machine readable and retrievable in medical records.
2. Encourage truly interoperable electronic health care records that are shared widely among health care providers, pharmacies, laboratories, and patients to facilitate the use of decision support tools linking pharmacogenetic tests with medication use and prescribing (a minimal sketch of such a link follows this list).
3. Identify mechanisms to encourage clinical implementation of guideline-based pharmacogenomic tests, without necessarily requiring research components.
4. Support a publicly funded and sustainable resource for cataloging genomic variation and developing guidelines that can be linked directly to electronic health records.
5. Improve consensus about the evidence required for clinical action in pharmacogenetics.
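
The sketch below, referenced in recommendation 2, illustrates one way a stored, machine-readable pharmacogenetic result could be linked to a prescribing alert at order entry; the record fields, the gene-drug map, and the alert wording are hypothetical and are not drawn from any particular EHR system or terminology standard.

```python
# Illustrative only: a machine-readable pharmacogenetic result checked at the
# point of prescribing. Field names, the gene-drug map, and the alert wording
# are hypothetical, not taken from any EHR vendor or standard.
from dataclasses import dataclass

@dataclass
class PgxResult:
    gene: str          # e.g., "TPMT"
    diplotype: str     # e.g., "*1/*3A"
    phenotype: str     # e.g., "intermediate metabolizer"

# Hypothetical map from a prescribed drug to the gene(s) whose results matter.
DRUG_TO_GENES = {"azathioprine": ["TPMT"], "carbamazepine": ["HLA-B"]}

def prescribing_alerts(drug: str, patient_results: list[PgxResult]) -> list[str]:
    """Return advisory messages for any stored result relevant to this drug."""
    relevant = DRUG_TO_GENES.get(drug.lower(), [])
    return [f"{r.gene} {r.diplotype} ({r.phenotype}) on file; "
            f"review dosing guidance before prescribing {drug}."
            for r in patient_results if r.gene in relevant]

results = [PgxResult("TPMT", "*1/*3A", "intermediate metabolizer")]
print(prescribing_alerts("azathioprine", results))
```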

REFERENCES

Bielinski, S. J., J. E. Olson, J. Pathak, R. M. Weinshilboum, L. Wang, K. J. Lyke, E. Ryu, P. V. Targonski, M. D. Van Norstrand, M. A. Hathcock, P. Y. Takahashi, J. B. McCormick, K. J. Johnson, K. J. Maschke, C. R. Rohrer Vitek, M. S. Ellingson, E. D. Wieben, G. Farrugia, J. A. Morrisette, K. J. Kruckeberg, J. K. Bruflat, L. M. Peterson, J. H. Blommel, J. M. Skierka, M. J. Ferber, J. L. Black, L. M. Baudhuin, E. W. Klee, J. L. Ross, T. L. Veldhuizen, C. G. Schultz, P. J. Caraballo, R. R. Freimuth, C. G. Chute, and I. J. Kullo. 2014. Preemptive genotyping for personalized medicine: Design of the right drug, right dose, right time-using genomic data to individualize treatment protocol. Mayo Clinic Proceedings 89(1):25–33.
Caudle, K. E., T. E. Klein, J. M. Hoffman, D. J. Muller, M. Whirl-Carrillo, L. Gong, E. M. McDonagh, K. Sangkuhl, C. F. Thorn, M. Schwab, J. A. Agundez, R. R. Freimuth, V. Huser, M. T. Lee, O. F. Iwuchukwu, K. R. Crews, S. A. Scott, M. Wadelius, J. J. Swen, R. F. Tyndale, C. M. Stein, D. Roden, M. V. Relling, M. S. Williams, and S. G. Johnson. 2014. Incorporation of pharmacogenomics into routine clinical practice: The Clinical Pharmacogenetics Implementation Consortium (CPIC) guideline development process. Current Drug Metabolism 15(2):209–217.
Chute, C. G., and I. S. Kohane. 2013. Genomic medicine, health information technology, and patient care. Journal of the American Medical Association 309(14):1467–1468.
EGAPP (Evaluation of Genomic Applications in Practice and Prevention) Working Group. 2014. The EGAPP initiative: Lessons learned. Genetics in Medicine 16(3):217–224.
Gottesman, O., S. A. Scott, S. B. Ellis, C. L. Overby, A. Ludtke, J. S. Hulot, J. Hall, K. Chatani, K. Myers, J. L. Kannry, and E. P. Bottinger. 2013. The CLIPMERGE PGx program: Clinical implementation of personalized medicine through electronic health records and genomics-pharmacogenomics. Clinical Pharmacology and Therapeutics 94(2):214–217.
Gupta, S. K., J. A. Eustace, J. A. Winston, I. I. Boydstun, T. S. Ahuja, R. A. Rodriguez, K. T. Tashima, M. Roland, N. Franceschini, F. J. Palella, J. L. Lennox, P. E. Klotman, S. A. Nachman, S. D. Hall, and L. A. Szczech. 2005. Guidelines for the management of chronic kidney disease in HIV-infected patients: Recommendations of the HIV Medicine Association of the Infectious Diseases Society of America. Clinical Infectious Diseases 40(11):1559–1585.
Hickey, S. E., C. J. Curry, and H. V. Toriello. 2013. ACMG practice guideline: Lack of evidence for MTHFR polymorphism testing. Genetics in Medicine 15(2):153–156.
Hoffman, J. M., C. E. Haidar, M. R. Wilkinson, K. R. Crews, D. K. Baker, N. M. Kornegay, W. Yang, C. H. Pui, U. M. Reiss, A. H. Gaur, S. C. Howard, W. E. Evans, U. Broeckel, and M. V. Relling. 2014. PG4KDS: A model for the clinical implementation of pre-emptive pharmacogenetics. American Journal of Medical Genetics Part C: Seminars in Medical Genetics 166C(1):45–55.
IOM (Institute of Medicine). 2011. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press.
Johnson, J. A., A. R. Elsey, M. J. Clare-Salzler, D. Nessl, M. Conlon, and D. R. Nelson. 2013. Institutional profile: University of Florida and Shands Hospital Personalized Medicine Program: Clinical implementation of pharmacogenetics. Pharmacogenomics 14(7):723–726.
Leckband, S. G., J. R. Kelsoe, H. M. Dunnenberger, A. L. George, Jr., E. Tran, R. Berger, D. J. Muller, M. Whirl-Carrillo, K. E. Caudle, and M. Pirmohamed. 2013. Clinical Pharmacogenetics Implementation Consortium guidelines for HLA-B genotype and carbamazepine dosing. Clinical Pharmacology and Therapeutics 94(3):324–328.
O'Donnell, P. H., K. Danahey, M. Jacobs, N. R. Wadhwa, S. Yuen, A. Bush, Y. Sacro, M. J. Sorrentino, M. Siegler, W. Harper, A. Warrick, S. Das, D. Saner, C. L. Corless, and M. J. Ratain. 2014. Adoption of a clinical pharmacogenomics implementation program during outpatient care: Initial results of the University of Chicago “1,200 Patients Project.” American Journal of Medical Genetics Part C: Seminars in Medical Genetics 166C(1):68–75.


Pulley, J. M., J. C. Denny, J. F. Peterson, G. R. Bernard, C. L. Vnencak-Jones, A. H. Ramirez, J. T. Delaney, E. Bowton, K. Brothers, K. Johnson, D. C. Crawford, J. Schildcrout, D. R. Masys, H. H. Dilks, R. A. Wilke, E. W. Clayton, E. Shultz, M. Laposata, J. McPherson, J. N. Jirjis, and D. M. Roden. 2012. Operational implementation of prospective genotyping for personalized medicine: The design of the Vanderbilt PREDICT project. Clinical Pharmacology and Therapeutics 92(1):87–95.
Relling, M. V., E. E. Gardner, W. J. Sandborn, K. Schmiegelow, C. H. Pui, S. W. Yee, C. M. Stein, M. Carrillo, W. E. Evans, J. K. Hicks, M. Schwab, and T. E. Klein. 2013. Clinical Pharmacogenetics Implementation Consortium guidelines for thiopurine methyltransferase genotype and thiopurine dosing: 2013 update. Clinical Pharmacology and Therapeutics 93(4):324–325.
Relling, M. V., and T. E. Klein. 2011. CPIC: Clinical Pharmacogenetics Implementation Consortium of the Pharmacogenomics Research Network. Clinical Pharmacology and Therapeutics 89(3):464–467.
Shuldiner, A. R., K. Palmer, R. E. Pakyz, T. D. Alestock, K. A. Maloney, C. O'Neill, S. Bhatty, J. Schub, C. L. Overby, R. B. Horenstein, T. I. Pollin, M. D. Kelemen, A. L. Beitelshees, S. W. Robinson, M. G. Blitzer, P. F. McArdle, L. Brown, L. J. Jeng, R. Y. Zhao, N. Ambulos, and M. R. Vesely. 2014. Implementation of pharmacogenetics: The University of Maryland Personalized Anti-Platelet Pharmacogenetics Program. American Journal of Medical Genetics Part C: Seminars in Medical Genetics 166C(1):76–84.
