Technical Appendix 1


This technical appendix includes a detailed account of model validation efforts, as specified by the Assessment of the Validation Status of Health-Economic decision models (AdViSHE) tool.

Responses to each of the items in the tool are provided below.

Part A. Validation of the conceptual model

A1/ Face validity testing (conceptual model): Have experts been asked to judge the appropriateness of the conceptual model?

Yes, both modeling (Dr. Milton C. Weinstein) and clinical (Dr. Gary H. Lyman) experts were involved in the development of the economic model structure and are included as authors on this manuscript. Dr. Milton C. Weinstein is the Henry J. Kaiser Professor of Health Policy and Management at the Harvard T.H. Chan School of Public Health, and is an author of four books, including "Decision Making in Health and Medicine: Integrating Evidence and Values" and "Cost-Effectiveness in Health and Medicine", the report of the Panel on Cost-Effectiveness in Health and Medicine. Dr. Weinstein has published more than 300 papers in peer-reviewed journals (medical, public health, and economics), and was awarded the Avedis Donabedian Lifetime Achievement Award from the International Society for Pharmacoeconomics and Outcomes Research. Dr. Gary H. Lyman is a medical oncologist specializing in the treatment of breast cancer. Dr. Lyman is a Professor at the School of Medicine in the Division of Medical Oncology and an Affiliate Professor at the School of Public Health at the University of Washington. Dr. Lyman is also a Medical Oncologist in the Breast Cancer Program at the Seattle Cancer Care Alliance and a member of the Clinical Research Division at the Fred Hutchinson Cancer Research Center.

A2/ Cross validity testing (conceptual model): Has this model been compared to other conceptual models found in the literature or clinical textbooks?

Yes, the model has been compared to other clinical models found in the literature. It is noted in section 2.1 that the model was adapted from previously published models. Further, the authors commented in the discussion section that similar findings were reported in previous analyses examining FN prophylaxis strategies in breast cancer in the European setting.

Part B: Input data validation

B1/ Face validity testing (input data): Have experts been asked to judge the appropriateness of the input data?

Yes, both clinical (Dr. Gary H. Lyman) and modeling (Dr. Milton C. Weinstein) experts were included in the estimation of the economic model parameters, and were involved in the selection of input parameter sources.

B2/ Model fit testing: When input parameters are based on regression models, have statistical tests been performed?

Input parameters were not based on regression models. Input parameters were obtained from peer-reviewed publications and other publicly available data sources.

Part C: Validation of the computerized model

C1/ External review: Has the computerized model been examined by modelling experts?

The modeling experts listed as authors on this publication, including Dr. Milton C. Weinstein, have examined the economic model. The model underwent an extensive internal quality assurance process by the authors, including examination of both the formulas in the Microsoft Excel® workbook and the Visual Basic for Applications code for programming errors. The conceptual model was translated appropriately to the Microsoft Excel® workbook and was tested extensively to ensure that changes in input parameters yielded expected results.

C2/ Extreme value testing: Has the model been run for specific, extreme sets of parameter values in order to detect any coding errors?

Yes, the model was run for specific, extreme sets of parameter values in order to detect any coding errors as part of the internal quality assurance process. The efficacy of each G-CSF was varied to be either extremely beneficial or detrimental. The hospitalization cost of FN was varied from zero to extremely high values, and the drug acquisition cost of each G-CSF was also set to zero or an extremely high value to ensure accuracy of model results. The input parameters for all G-CSFs were set equivalent to those of no prophylaxis to ensure that the same model outputs were generated. The utility values were set to one to ensure that life-years and quality-adjusted life-years were equivalent.
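The pattern of these checks can be sketched in a few lines. The one-cycle expected-cost function below is purely illustrative (it is not the published Excel model, and the parameter values are assumptions), but it shows the idea of forcing inputs to zero or implausibly high values and asserting the expected behavior:

```python
# Hypothetical one-cycle expected-cost function standing in for the
# published model; fn_risk, fn_hosp_cost, and drug_cost are illustrative.
def expected_cost(fn_risk, fn_hosp_cost, drug_cost):
    """Expected per-patient cost: drug acquisition plus risk-weighted FN cost."""
    return drug_cost + fn_risk * fn_hosp_cost

# Extreme-value checks: zero costs must yield zero spend; an extremely
# high hospitalization cost must dominate the total.
assert expected_cost(0.19, 0.0, 0.0) == 0.0
assert expected_cost(0.19, 1e9, 1500.0) > 1e8
```

In the published model the same checks were performed by overwriting input cells in the workbook and inspecting the outputs, but the invariants being tested are of this form.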

C3/ Testing of traces: Have patients been tracked through the model to determine whether its logic is correct?

Cohorts of patients were tracked throughout the model to ensure that the logic was correct. Patients were tracked in both the decision tree and the Markov portions to ensure that the entire cohort of patients was accounted for during each cycle of the model.
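The key invariant in such a trace check is that state occupancies sum to one after every cycle, so no fraction of the cohort is lost or double-counted. A minimal sketch, using an illustrative two-state transition matrix rather than the published model's states:

```python
def advance(cohort, transition):
    """One Markov cycle: redistribute the cohort across states."""
    states = list(cohort)
    return {
        s2: sum(cohort[s1] * transition[s1][s2] for s1 in states)
        for s2 in states
    }

# Illustrative two-state transition matrix (each row sums to 1).
transition = {
    "alive": {"alive": 0.95, "dead": 0.05},
    "dead":  {"alive": 0.0,  "dead": 1.0},
}

cohort = {"alive": 1.0, "dead": 0.0}
for _ in range(48):
    cohort = advance(cohort, transition)
    # Trace check: the entire cohort must be accounted for each cycle.
    assert abs(sum(cohort.values()) - 1.0) < 1e-9
```

The same conservation check applies to the decision-tree branches: the branch probabilities leaving each node must sum to one.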

C4/ Unit testing: Have individual sub-modules of the computerized model been tested?

Yes, individual sub-modules of the computerized model were tested according to a pre-specified protocol. This process included varying all costs to zero individually, each utility to zero and one, as well as the tests related to efficacy as described in C2. The results and sensitivity analyses were also tested extensively (by ensuring that any changes to input values were reflected in the output), and the results of the deterministic sensitivity analysis were also manually checked.
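One such sub-module check, sketched here with a hypothetical QALY accumulator and a two-cycle occupancy trace (the state names, values, and cycle length are assumptions, not the published model): setting every utility to one for time alive should make QALYs coincide with life-years, and setting every utility to zero should zero the QALY total.

```python
def total_qalys(trace, utility, cycle_len=1.0):
    """Sum utility-weighted person-time over a cohort trace."""
    return sum(occ * utility[s] * cycle_len
               for cycle in trace for s, occ in cycle.items())

def total_life_years(trace, cycle_len=1.0):
    """Sum person-time spent in the alive state over a cohort trace."""
    return sum(cycle["alive"] * cycle_len for cycle in trace)

# Illustrative two-cycle trace for a two-state cohort.
trace = [{"alive": 1.0, "dead": 0.0}, {"alive": 0.95, "dead": 0.05}]

# Unit checks: utility of one equates QALYs and life-years;
# utility of zero everywhere zeroes the QALY total.
assert abs(total_qalys(trace, {"alive": 1.0, "dead": 0.0})
           - total_life_years(trace)) < 1e-12
assert total_qalys(trace, {"alive": 0.0, "dead": 0.0}) == 0.0
```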

Part D: Operational validation

D1/ Face validity testing (model outcomes): Have experts been asked to judge the appropriateness of the model outcomes?

Yes, both clinical (Dr. Gary H. Lyman) and modeling (Dr. Milton C. Weinstein) experts were included in the selection of model outcomes, which include incremental cost per FN event avoided, incremental cost per life-year saved, and incremental cost per quality-adjusted life-year saved.

D2/ Cross validation testing (model outcomes): Have the model outcomes been compared to the outcomes of other models that address similar problems?

The authors note in the discussion section that similar findings were reported in previous analyses examining FN prophylaxis strategies in breast cancer in the European setting. While we did not specify the outcomes used by those models in the discussion section, Danova et al. (2009) used life-years gained and quality-adjusted life-years gained as the outcomes of interest [50]. Liu et al. (2009) [48] and Borget et al. (2009) [49] reported cost per FN event avoided in addition to life-years and quality-adjusted life-years gained. Whyte et al. (2011) used QALYs and net monetary benefit as the outcomes of interest [13].

D3/ Validation against outcomes using alternative input data: Have the model outcomes been compared to the outcomes obtained when using alternative input data?

An alternative baseline FN risk of 23.2% per Do et al. [47] was used in place of 19% baseline FN risk per Younis et al. [18]. Additionally, while the PSA was initially conducted using a beta distribution with a 95% confidence interval of 0.19 to 1.00 for the proportion of FN events requiring hospitalization, we performed an alternative PSA using a uniform distribution assuming lower and upper bounds of 0.19 and 1. These alternative analyses are described in sections 2.3.1 of the Methods and 3.2 of the Results.

A calibration process was performed to compare the model's estimated proportion of patients with at least one FN event in the no prophylaxis arm with results from the literature. For example, the baseline FN risk in Cycle 1 of the model was estimated by calibrating the model using Solver in Microsoft Excel®, minimizing the absolute difference between the model-predicted risk at the end of four cycles and the risk of FN over the course of the study by Younis et al. (for TC) [18].
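The same calibration can be sketched outside Excel. Assuming, purely for illustration, a constant per-cycle FN risk across the four cycles (the published model may vary risk by cycle), a simple bisection finds the per-cycle risk whose four-cycle cumulative risk matches the 19% reported by Younis et al.:

```python
def cumulative_risk(p_cycle, n_cycles=4):
    """Cumulative FN risk after n cycles, given a constant per-cycle risk."""
    return 1.0 - (1.0 - p_cycle) ** n_cycles

def calibrate(target, lo=0.0, hi=1.0, tol=1e-10):
    """Bisect on the per-cycle risk until the cumulative risk hits the target
    (cumulative_risk is monotone in p_cycle, so bisection converges)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cumulative_risk(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p = calibrate(0.19)  # per-cycle risk matching a 19% four-cycle risk; ~0.0513
```

Solver in Excel performs the analogous minimization of the absolute difference between the model-predicted and literature-reported risks.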

D4/ Validation against empirical data: Have the model outcomes been compared to empirical data?

D4.A/ Comparison against the data sources on which the model is based (dependent validation).

As stated in D3, the model-predicted risk of FN was verified against the rates reported in the literature.

D4.B/ Comparison against a data source that was not used to build the model

(independent validation).

This is not applicable in this case.

Part E: Other validation techniques

E1/ Other validation techniques: Have any other validation techniques been performed?

Optum performed structured “walk-throughs” with both clinical and modeling experts during the course of model development. The model was validated as described above.

Technical Appendix 2

Source references for data inputs [36-38] are included here.

36. Moeremans K, Caekelbergh K, Spaepen E, Annemans L. Economic aspects and drivers of febrile neutropenia in cancer: a multicentre retrospective analysis in Belgium. Presented at: ISPOR 8th Annual European Congress; November 6-8, 2005; Florence, Italy.

37. Somers L, Malfait M, Danel A. Cost-utility of granulocyte-colony stimulating factors for primary prophylaxis of chemotherapy induced febrile neutropenia in breast cancer patients in Belgium. Presented at: ISPOR 15th Annual European Congress; November 3-7, 2012; Berlin, Germany.

38. Verhoef G, Somers L, Bosly A. Number needed to treat (NNT) and cost-utility of granulocyte-colony stimulating factors (G-CSF) for primary prophylaxis (PP) of chemotherapy induced febrile neutropenia (FN) in non-Hodgkin's lymphoma (NHL) patients in Belgium. Presented at: Belgian Hematology Society General Annual Meeting; January 30-February 1, 2014; Ghent, Belgium.
