Optimal Adaptive Designs and Adaptive Randomization Techniques for Clinical Trials

Total Pages: 16

File Type: PDF, Size: 1020 KB

UPPSALA DISSERTATIONS IN MATHEMATICS 113. Optimal adaptive designs and adaptive randomization techniques for clinical trials. Yevgen Ryeznik, Department of Mathematics, Uppsala University, Uppsala 2019.

Dissertation presented at Uppsala University to be publicly examined in Ång 4101, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, Friday, 26 April 2019 at 09:00 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Associate Professor Franz König (Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna).

Abstract

Ryeznik, Y. 2019. Optimal adaptive designs and adaptive randomization techniques for clinical trials. Uppsala Dissertations in Mathematics 113. 94 pp. Uppsala: Department of Mathematics. ISBN 978-91-506-2748-0.

In this Ph.D. thesis, we investigate how to optimize the design of clinical trials by constructing optimal adaptive designs, and how to implement such designs through adaptive randomization. The results of the thesis are summarized in four research papers preceded by three chapters: an introduction, a short summary of the results obtained, and possible topics for future work.

In Paper I, we investigate the structure of a D-optimal design for dose-finding studies with censored time-to-event outcomes. We show that the D-optimal design can be much more efficient than a uniform allocation design for parameter estimation. The D-optimal design depends on the true parameters of the dose-response model, so it is a locally D-optimal design. We construct two-stage and multi-stage adaptive designs as approximations of the D-optimal design when prior information about the model parameters is not available. The adaptive designs provide very good approximations to the locally D-optimal design and can potentially reduce the total sample size in a study with a pre-specified stopping criterion.

In Paper II, we investigate the statistical properties of several restricted randomization procedures that target unequal allocation proportions in a multi-arm trial. We compare the procedures in terms of operational characteristics such as balance, randomness, type I error/power, and the allocation ratio preserving (ARP) property. We conclude that there is no single "best" randomization procedure for all target allocation proportions; instead, the choice of randomization procedure can be made through computer-intensive simulations for a particular target allocation.

In Paper III, we combine the results of Papers I and II to implement optimal designs in practice when the sample size is small. The simulation study in the paper shows that the choice of randomization procedure has an impact on the quality of dose-response estimation. An adaptive design with a small cohort size should be implemented with a procedure that ensures a "well-balanced" allocation according to the D-optimal design at each stage.

In Paper IV, we obtain an optimal design for a comparative study with unequal treatment costs and investigate its properties. We demonstrate that unequal allocation may decrease the total study cost while providing the same power as traditional equal allocation, although a larger sample size may be required. We suggest a strategy for choosing a randomization procedure that provides a good trade-off between balance and randomness when implementing the optimal allocation.
If there is a strong linear trend in the observations, the ARP property is important for maintaining the type I error rate and power at their nominal levels. Otherwise, randomization-based inference can be a good alternative for non-ARP procedures.

Keywords: optimal designs, optimal adaptive designs, randomization in clinical trials, restricted randomization, adaptive randomization, unequal cost, allocation ratio preserving randomization procedures, heterogeneous costs, multi-arm clinical trials, randomization-based inference, unequal allocation

Yevgen Ryeznik, Department of Mathematics, Box 480, Uppsala University, SE-75106 Uppsala, Sweden.

© Yevgen Ryeznik 2019. ISSN 1401-2049. ISBN 978-91-506-2748-0. urn:nbn:se:uu:diva-377552 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-377552)

Dedicated to my family and my teachers.

List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals:

I. Yevgen Ryeznik, Oleksandr Sverdlov, and Andrew C. Hooker. "Adaptive Optimal Designs for Dose-Finding Studies with Time-to-Event Outcomes." The AAPS Journal.
II. Yevgen Ryeznik and Oleksandr Sverdlov. "A Comparative Study of Restricted Randomization Procedures for Multi-arm Trials with Equal or Unequal Treatment Allocation Ratios." Statistics in Medicine.
III. Yevgen Ryeznik, Oleksandr Sverdlov, and Andrew C. Hooker. "Implementing Optimal Designs for Dose-Response Studies Through Adaptive Randomization for a Small Population Group." The AAPS Journal.
IV. Oleksandr Sverdlov and Yevgen Ryeznik. "Implementing Unequal Randomization in Clinical Trials with Heterogeneous Treatment Cost." Statistics in Medicine. Accepted.

Reprints were made with permission from the publishers.

The following papers were also published during the Ph.D. study period but are not included in the thesis:

- Oleksandr Sverdlov, Yevgen Ryeznik, and Sheng Wu. "Exact Bayesian Inference Comparing Binomial Proportions, With Application to Proof-of-Concept Clinical Trials." Therapeutic Innovation & Regulatory Science.
- Yevgen Ryeznik, Oleksandr Sverdlov, and Weng Kee Wong. "RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes." Journal of Statistical Software.

Contents: Introduction; Clinical Trials in Drug Development; Optimal Designs; Statistical Background; Designing a Dose-Response Trial with Time-to-Event Outcomes; Designing a Multi-arm Comparative Trial with Heterogeneous Treatment Costs; Adaptive Designs; Randomization; Statistical Inference; Parameter Estimation for a Dose-Finding Study; Population and Randomization-based Tests; Summary of the Results; Results of the Thesis; Paper I; Paper II; Paper III; Paper IV; Future Work; Extensions to global optimal adaptive designs; Comparison of the locally D-optimal design and the global D-optimal design; Incorporating covariates; Extensions to mixed-effects and exposure-response models; Statistical Inference; Sammanfattning på Svenska (Summary in Swedish); Acknowledgements; References.

Introduction

In this Ph.D. thesis we consider several strategies for optimizing the design of clinical trials with dose-finding objectives. Optimizing a trial design means that an experimenter can achieve higher quality results (e.g., more accurate estimates of the parameters of interest) for a given sample size, or reduce the sample size needed to reach the targeted level of quality. We develop novel statistical designs that facilitate efficient learning from the accumulating data in a trial and that adaptively allocate study subjects to treatment groups (dose levels) to maximize the amount of information that can be obtained under model uncertainty. We show through extensive simulations that our optimal adaptive designs can be more efficient than conventional fixed-allocation designs. We also investigate implementation issues of optimal designs through the incorporation of adaptive randomization. The use of randomization is an important practical aspect, as randomization helps protect the study from various experimental biases and ensures that trial results are statistically rigorous. Finally, we investigate statistical analysis issues following the proposed adaptive designs. We conclude that the optimization of a trial design, the choice of a randomization procedure to implement the optimal design, and the choice of proper statistical methodologies to analyze the data are all very important components of achieving valid and robust clinical trial results.

The introduction first describes clinical trials in the drug development process, with particular attention to dose-response trials, and gives the motivation for the current research. It then provides statistical background on optimal designs for dose-response studies with time-to-event outcomes and on optimal design for a comparative study with unequal treatment costs, describes the concept of adaptive multi-stage designs, discusses randomization and how it can be used to implement optimal designs in practice, and outlines methods for the statistical analysis of data following adaptive designs. Finally, it gives a high-level overview of the results that are described in detail in a later chapter.

Clinical Trials in Drug Development

By the term clinical trial we mean a research study conducted to assess the utility (e.g., safety, tolerability, efficacy, risk-benefit ratio) of an intervention (e.g., drugs, biologics, medical devices, screening methods, novel surgical procedures, novel therapies, etc.) in volunteers, either healthy subjects or patients with the disease of interest. In the contemporary world, continuous developments in biotechnology, genomics, and medicine lead to significant discoveries that can potentially become treatments for high unmet medical needs such as cancer, Alzheimer's disease, HIV/AIDS, etc. The development of a new drug is a long and complex process that can take years and even decades. Most commonly, drug development is carried out by pharmaceutical or biotechnology companies, and the process is split into several stages. The process starts with drug discovery and preclinical drug development, which involves identification of the right …
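Paper II's conclusion, that the choice of a randomization procedure for a particular target allocation is best made through computer-intensive simulation, can be illustrated with a small experiment. The sketch below is a minimal Python illustration and not the thesis code: it compares two procedures targeting a 2:1 allocation using simplified balance and predictability metrics, and the procedures, metrics, and parameter values are assumptions made for this example.

```python
# Minimal sketch (not the thesis code): compare two restricted randomization
# procedures that target a 2:1 allocation, using simplified balance and
# predictability metrics. Metric definitions here are illustrative only.
import numpy as np

rng = np.random.default_rng(2019)
N = 120
TARGET = np.array([2, 1]) / 3          # target allocation A:B = 2:1

def complete_randomization(n):
    """Each subject is assigned to arm A independently with probability 2/3."""
    return rng.choice([0, 1], size=n, p=TARGET)

def permuted_block(n):
    """Blocks of size 3 containing two A's and one B, shuffled within each block."""
    seq = []
    while len(seq) < n:
        b = [0, 0, 1]
        rng.shuffle(b)
        seq.extend(b)
    return np.array(seq[:n])

def evaluate(proc, n_sim=2000):
    imbalance, guessed = [], []
    for _ in range(n_sim):
        arms = proc(N)
        counts = np.zeros(2)
        imb, hits = 0.0, 0
        for j, a in enumerate(arms):
            # simple "convergence" guess: predict the arm most under-represented so far
            guess = np.argmax(TARGET * j - counts)
            hits += (guess == a)
            counts[a] += 1
            imb += abs(counts[0] - TARGET[0] * (j + 1))   # deviation from target count of A
        imbalance.append(imb / N)
        guessed.append(hits / N)
    return np.mean(imbalance), np.mean(guessed)

for name, proc in [("complete", complete_randomization), ("permuted block", permuted_block)]:
    bal, pred = evaluate(proc)
    print(f"{name:15s}  mean |imbalance| = {bal:.2f}   correct guesses = {pred:.2%}")
```

In this toy comparison the permuted-block procedure stays closer to the 2:1 target throughout the trial but is easier to guess, which is exactly the balance-versus-randomness trade-off the thesis evaluates.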
Recommended publications
  • Design of Experiments Application, Concepts, Examples: State of the Art
    Periodicals of Engineering and Natural Sciences, ISSN 2303-4521, Vol 5, No 3, December 2017, pp. 421-439. Available online at: http://pen.ius.edu.ba

    Design of Experiments Application, Concepts, Examples: State of the Art. Benjamin Durakovic, Industrial Engineering, International University of Sarajevo. Article history: received Aug 3, 2018; revised Oct 20, 2018; accepted Dec 1, 2018.

    ABSTRACT: Design of Experiments (DOE) is a statistical tool deployed in various types of system, process, and product design, development, and optimization. It is a multipurpose tool that can be used in various situations, such as design for comparisons, variable screening, transfer function identification, optimization, and robust design. This paper explores historical aspects of DOE, provides a state of the art of its application, guides researchers on how to conceptualize, plan, and conduct experiments, and shows how to analyze and interpret data, including examples. The paper also shows that over the past 20 years the application of DOE has been growing rapidly in manufacturing as well as non-manufacturing industries. It has been most popular in the scientific areas of medicine, engineering, biochemistry, physics, and computer science, which account for about 50% of its applications compared to all other scientific areas.

    Keywords: Design of Experiments; full factorial design; fractional factorial design; product design; quality improvement.

    Corresponding Author: Benjamin Durakovic, International University of Sarajevo, Hrasnicka cesta 15, 7100 Sarajevo, Bosnia. Email: [email protected]

    1. Introduction. Design of Experiments (DOE) is a mathematical methodology used for planning and conducting experiments as well as analyzing and interpreting the data obtained from them. It is a branch of applied statistics used for conducting scientific studies of a system, process, or product in which input variables (Xs) are manipulated to investigate their effects on a measured response variable (Y).
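As a small illustration of the most basic design type named in the keywords above, here is a minimal Python sketch (the factor names and the response model are invented for this example) that builds a two-level full factorial design and estimates main effects from simulated responses.

```python
# Minimal sketch of a two-level full factorial design, one of the DOE types the
# paper surveys. Factor names and the simulated response model are hypothetical.
import itertools
import numpy as np

factors = ["temperature", "pressure", "catalyst"]            # hypothetical factors
design = np.array(list(itertools.product([-1, +1], repeat=len(factors))))
print(design)                                                # 2^3 = 8 runs in coded units

# Simulated responses: y = 5 + 2*x1 - 1*x2 + noise (the third factor is inert)
rng = np.random.default_rng(1)
y = 5 + 2 * design[:, 0] - 1 * design[:, 1] + rng.normal(0, 0.1, len(design))

# Main effect of each factor = mean(y at +1) - mean(y at -1)
for i, name in enumerate(factors):
    effect = y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")
```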
  • How Many Stratification Factors Is "Too Many" to Use in a Randomization Plan?
    How Many Stratification Factors is "Too Many" to Use in a Randomization Plan? Terry M. Therneau, PhD.

    Abstract: The issue of stratification and its role in patient assignment has generated much discussion, mostly focused on its importance to a study [1,2] or lack thereof [3,4]. This report focuses on a much narrower problem: assuming that stratified assignment is desired, how many factors can be accommodated? This is investigated for two methods of balanced patient assignment: the first is based on the minimization method of Taves [5], and the second on the commonly used method of stratified assignment [6,7]. Simulation results show that the former method can accommodate a large number of factors (10-20) without difficulty, but that the latter begins to fail if the total number of distinct combinations of factor levels is greater than approximately n/2. The two methods are related to a linear discriminant model, which helps to explain the results.

    1. Introduction. This work arose out of my involvement with lung cancer studies within the North Central Cancer Treatment Group. These studies are small, typically 100 to 200 patients, and there are several important prognostic factors on which we wish to balance. Some of the factors, such as performance or Karnofsky score, are likely to contribute a larger survival impact than the treatment under study; hazard ratios of 3-4 are not uncommon. In this setting it becomes very difficult to interpret study results if any of the factors are significantly out of balance.
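For readers unfamiliar with the first of these two methods, here is a minimal sketch of Taves-style minimization for two arms (Python; this is not Therneau's code, and the factor names, level counts, and tie-breaking rule are illustrative assumptions): each new patient is assigned to the arm with the smaller sum, over the stratification factors, of patients already on that arm who share the patient's factor levels.

```python
# Minimal sketch (not the report's code) of Taves-style minimization for two arms.
# Factor names and level counts are hypothetical; ties are broken at random.
import numpy as np

rng = np.random.default_rng(7)
factors = {"performance_score": 3, "stage": 2, "age_group": 2}   # factor -> number of levels

# marginal counts[arm][factor][level] of patients already assigned
counts = [{f: np.zeros(k) for f, k in factors.items()} for _ in range(2)]

def assign(patient):
    """patient: dict mapping factor name -> observed level index."""
    # score of an arm = sum of its marginal counts matching this patient's levels
    scores = [sum(counts[arm][f][lvl] for f, lvl in patient.items()) for arm in range(2)]
    arm = int(rng.integers(2)) if scores[0] == scores[1] else int(np.argmin(scores))
    for f, lvl in patient.items():
        counts[arm][f][lvl] += 1
    return arm

# Simulate 200 enrollments with random factor levels and check marginal balance.
for _ in range(200):
    assign({f: int(rng.integers(k)) for f, k in factors.items()})

for f in factors:
    print(f, "arm A:", counts[0][f].astype(int).tolist(),
             "arm B:", counts[1][f].astype(int).tolist())
```

The printout shows that each factor level ends up nearly evenly split between the two arms, which is the marginal balance minimization is designed to achieve.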
  • Design of Experiments (DOE) Using JMP, Charles E. Shipp
    PharmaSUG 2014 - Paper PO08 Design of Experiments (DOE) Using JMP Charles E. Shipp, Consider Consulting, Los Angeles, CA ABSTRACT JMP has provided some of the best design of experiment software for years. The JMP team continues the tradition of providing state-of-the-art DOE support. In addition to the full range of classical and modern design of experiment approaches, JMP provides a template for Custom Design for specific requirements. The other choices include: Screening Design; Response Surface Design; Choice Design; Accelerated Life Test Design; Nonlinear Design; Space Filling Design; Full Factorial Design; Taguchi Arrays; Mixture Design; and Augmented Design. Further, sample size and power plots are available. We give an introduction to these methods followed by a few examples with factors. BRIEF HISTORY AND BACKGROUND From early times, there has been a desire to improve processes with controlled experimentation. A famous early experiment cured scurvy with seamen taking citrus fruit. Lives were saved. Thomas Edison invented the light bulb with many experiments. These experiments depended on trial and error with no software to guide, measure, and refine experimentation. Japan led the way with improving manufacturing—Genichi Taguchi wanted to create repeatable and robust quality in manufacturing. He had been taught management and statistical methods by William Edwards Deming. Deming taught “Plan, Do, Check, and Act”: PLAN (design) the experiment; DO the experiment by performing the steps; CHECK the results by testing information; and ACT on the decisions based on those results. In America, other pioneers brought us “Total Quality” and “Six Sigma” with “Design of Experiments” being an advanced methodology.
  • Harmonizing Fully Optimal Designs with Classic Randomization in Fixed Trial Experiments
    Harmonizing Fully Optimal Designs with Classic Randomization in Fixed Trial Experiments. Adam Kapelner (Department of Mathematics, Queens College, CUNY), Abba M. Krieger (Department of Statistics, The Wharton School of the University of Pennsylvania), Uri Shalit and David Azriel (Faculty of Industrial Engineering and Management, The Technion). October 22, 2018. arXiv:1810.08389v1 [stat.ME].

    Abstract: There is a movement in design of experiments away from the classic randomization put forward by Fisher, Cochran, and others to one based on optimization. In fixed-sample trials comparing two groups, measurements of subjects are known in advance and subjects can be divided optimally into two groups based on a criterion of homogeneity or "imbalance" between the two groups. These designs are far from random. This paper seeks to understand the benefits and the costs over classic randomization in the context of different performance criteria, such as Efron's worst-case analysis. Under the criterion that we motivate, randomization beats optimization; however, the optimal design is shown to lie between these two extremes. Much-needed further work will provide a procedure to find these optimal designs in different scenarios in practice. Until then, it is best to randomize.

    Keywords: randomization, experimental design, optimization, restricted randomization.

    1. Introduction. In this short survey, we wish to investigate performance differences between completely random experimental designs and non-random designs that optimize for observed covariate imbalance. We demonstrate that, depending on how we wish to evaluate our estimator, the optimal strategy will change. We motivate a performance criterion that, when applied, does not crown either as the better choice, but rather a design that is a harmony between the two of them.
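The contrast the paper draws between classic randomization and imbalance-optimized allocation can be sketched numerically. The code below is a rough Python stand-in, not the authors' method: the "optimized" split is simply the best of many random candidate splits under a Mahalanobis-type imbalance score, and the data, dimensions, and number of candidates are made up for illustration.

```python
# Minimal sketch contrasting one random split with a split chosen as the best of
# many random candidates (a crude stand-in for a fully optimized design), scored
# by a Mahalanobis-type imbalance between the two groups' covariate means.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4                                   # 2n subjects, p baseline covariates
X = rng.normal(size=(2 * n, p))                # covariates known before assignment
Sinv = np.linalg.inv(np.cov(X, rowvar=False) / n)   # scaling matrix for mean differences

def imbalance(assign):
    """Mahalanobis-type distance between the two groups' covariate mean vectors."""
    d = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
    return float(d @ Sinv @ d)

def random_split():
    a = np.array([0] * n + [1] * n)
    rng.shuffle(a)
    return a

single = imbalance(random_split())
best = min(imbalance(random_split()) for _ in range(10_000))
print(f"imbalance of one random split:     {single:.3f}")
print(f"best of 10,000 candidate splits:   {best:.3f}")
```

The optimized split has far smaller covariate imbalance, which is the benefit side of the trade-off; the paper's point is that the cost side (loss of randomness and of worst-case protection) must be weighed against it.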
  • Some Restricted Randomization Rules in Sequential Designs, Jose F. Soares and C.F. Jeff Wu
    Jose F. Soares (Federal University of Minas Gerais, Brazil) and C.F. Jeff Wu (University of Wisconsin-Madison and Mathematical Sciences Research Institute, Berkeley) (1983). "Some restricted randomization rules in sequential designs." Communications in Statistics - Theory and Methods, 12(17), 2017-2034. http://dx.doi.org/10.1080/03610928308828586. Version of record first published: 27 June 2007 (Taylor & Francis).
  • Optimal Design of Experiments Session 4: Some Theory
    Optimal design of experiments. Session 4: Some theory. Peter Goos.

    Optimal design theory distinguishes continuous (approximate) optimal designs, which implicitly assume that an infinitely large number of observations is available and which are mathematically convenient, from exact (discrete) designs, which use a finite number of observations and for which fewer theoretical results exist.

    A continuous design is written as $\xi = \{(x_1, w_1), (x_2, w_2), \ldots, (x_h, w_h)\}$, where $x_1, \ldots, x_h$ are the design points (support points), the weights satisfy $w_i \ge 0$ and $\sum_i w_i = 1$, and $h$ is the number of distinct points. An exact design replaces the weights by integer numbers of observations $n_1, \ldots, n_h$ at the points $x_1, \ldots, x_h$, with $\sum_i n_i = n$.

    All criteria used to select a design are based on the information matrix. With model matrix $X$ whose rows are $f^T(x_i)$, the total information matrix is
    $$M = \frac{1}{\sigma^2} X^T X = \frac{1}{\sigma^2} \sum_{i=1}^{n} f(x_i) f^T(x_i),$$
    and the per-observation information matrix is $f(x_i) f^T(x_i)/\sigma^2$. For an exact design, $M = \sum_{i=1}^{h} n_i\, f(x_i) f^T(x_i)$, where $n_i$ is the number of replications of point $i$; for a continuous design, $M = \sum_{i=1}^{h} w_i\, f(x_i) f^T(x_i)$. (The slides illustrate this with a $6 \times 6$ information matrix for an industrial example with regressors $1, x_1, x_2, x_1 x_2, x_1^2, x_2^2$.) The D-optimality criterion seeks designs that minimize the determinant of the variance-covariance matrix of $\hat{\beta}$, $\sigma^2 (X^T X)^{-1}$, or equivalently maximize the determinant of $M$.
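The information-matrix formulas above can be checked numerically. The following Python sketch builds the exact-design information matrix M = Σᵢ f(xᵢ)f(xᵢ)ᵀ for the two-factor quadratic model with interaction and compares two candidate nine-run designs by the D-criterion det(M); the candidate designs are illustrative and are not the slides' industrial example.

```python
# Minimal sketch of the information-matrix computations above: build
# M = sum_i f(x_i) f(x_i)^T for an exact design under the two-factor model
# f(x) = (1, x1, x2, x1*x2, x1^2, x2^2), and compare two candidate designs by
# the D-criterion det(M). The candidate designs are illustrative only.
import itertools
import numpy as np

def f(x1, x2):
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def info_matrix(points):
    return sum(np.outer(f(*x), f(*x)) for x in points)

# Candidate 1: the 3x3 factorial grid on {-1, 0, +1}^2 (9 runs)
grid = list(itertools.product([-1, 0, 1], repeat=2))

# Candidate 2: 9 runs drawn uniformly at random from [-1, 1]^2
rng = np.random.default_rng(3)
random_pts = [tuple(rng.uniform(-1, 1, 2)) for _ in range(9)]

for name, pts in [("3x3 factorial", grid), ("random design", random_pts)]:
    M = info_matrix(pts)
    print(f"{name:15s}  det(M) = {np.linalg.det(M):.3f}")
```

The factorial grid typically yields a much larger det(M) than an arbitrary nine-point design, which is what the D-criterion rewards.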
  • Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device Data
    Contains Nonbinding Recommendations. Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device Data in Premarket Notification (510(k)) Submissions. Guidance for Industry and FDA Staff. Document issued on January 22, 2020; originally issued on July 3, 2012. For questions regarding this guidance document, contact Nicholas Petrick at 301-796-2563, or by e-mail at [email protected]; or Robert Ochs at 301-796-6661, or by e-mail at [email protected]. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Devices and Radiological Health.

    Preface, Public Comment: You may submit electronic comments and suggestions at any time for Agency consideration to https://www.regulations.gov. Submit written comments to the Dockets Management Staff, Food and Drug Administration, 5630 Fishers Lane, Room 1061 (HFA-305), Rockville, MD 20852. Identify all comments with the docket number FDA-2009-D-0503. Comments may not be acted upon by the Agency until the document is next revised or updated.

    Additional Copies: Additional copies are available from the Internet. You may also send an e-mail request to CDRH-[email protected] to receive a copy of the guidance. Please use the document number (1698) to identify the guidance you are requesting.
  • FDA Guidance: “Design Considerations for Pivotal Clinical Investigations for Medical Devices”
    FDA Guidance: "Design Considerations for Pivotal Clinical Investigations for Medical Devices". Greg Campbell, Ph.D., Director, Division of Biostatistics, Center for Devices and Radiological Health, U.S. Food and Drug Administration ([email protected]). FDA Stakeholder Webinar, December 4, 2013.

    Design Considerations for Pivotal Clinical Investigations: http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm373750.htm. Draft issued Aug. 15, 2011; final issued Nov. 7, 2013. It is guidance for industry, clinical investigators, institutional review boards, and FDA staff. The guidance should help manufacturers select an appropriate trial design. Better trial design can improve the quality of data that may better support the safety and effectiveness of a device, and better quality data may lead to timelier FDA approval or clearance of premarket submissions and speed U.S. patient access to new devices.

    Outline: Regulatory Considerations; Principles for the Choice of Clinical Study Design; Clinical Outcome Studies (RCTs; Controls; Non-Randomized Controls; Observational Uncontrolled One-Arm Studies); OPCs and Performance Goals (Diagnostic Clinical Outcome Studies); Diagnostic Clinical Performance Studies; Sustaining the Level of Evidence of Studies; Protocol and Statistical Analysis Plan; Closing Remarks.

    Stages of Medical Device Clinical Studies: (1) Exploratory Stage: first-in-human and feasibility/pilot studies, iterative learning and product development. (2) Pivotal Stage: definitive study to support the safety and effectiveness evaluation of the medical device for its intended use. (3) Postmarket Stage: includes studies intended to better understand the long-term effectiveness and safety of the device, including rare adverse events. Not all products will go through all stages.
  • Concepts Underlying the Design of Experiments
    Concepts Underlying the Design of Experiments. March 2000. University of Reading, Statistical Services Centre, Biometrics Advisory and Support Service to DFID.

    Contents: 1. Introduction; 2. Specifying the objectives; 3. Selection of treatments (3.1 Terminology; 3.2 Control treatments; 3.3 Factorial treatment structure); 4. Choosing the sites; 5. Replication and levels of variation; 6. Choosing the blocks; 7. Size and shape of plots in field experiments; 8. Allocating treatment to units; 9. Taking measurements; 10. Data management and analysis; 11. Taking design seriously. © 2000 Statistical Services Centre, The University of Reading, UK.

    1. Introduction. This guide describes the main concepts that are involved in designing an experiment. Its direct use is to assist scientists involved in the design of on-station field trials, but many of the concepts also apply to the design of other research exercises. These range from simulation exercises and laboratory studies to on-farm experiments, participatory studies and surveys. What characterises the design of on-station experiments is that the researcher has control of the treatments to apply and the units (plots) to which they will be applied. The results therefore usually provide a detailed knowledge of the effects of the treatments within the experiment. The major limitation is that they provide this information within the artificial "environment" of small plots in a research station. A laboratory study should achieve more precision, but in a yet more artificial environment. Even more precise is a simulation model, partly because it is then very cheap to collect data.
  • The Theory of the Design of Experiments
    The Theory of the Design of Experiments. D.R. Cox, Honorary Fellow, Nuffield College, Oxford, UK, and N. Reid, Professor of Statistics, University of Toronto, Canada. Chapman & Hall/CRC, Boca Raton, London, New York, Washington, D.C.

    Library of Congress Cataloging-in-Publication Data: Cox, D. R. (David Roxbee). The theory of the design of experiments / D. R. Cox, N. Reid. p. cm. (Monographs on statistics and applied probability; 86). Includes bibliographical references and index. ISBN 1-58488-195-X (alk. paper). 1. Experimental design. I. Reid, N. II. Title. III. Series. QA279 .C73 2000 001.4'34—dc21 00-029529 CIP.

    This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W.
  • Argus: Interactive a Priori Power Analysis
    © 2020 IEEE. This is the author's version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: xx.xxxx/TVCG.201x.xxxxxxx/

    Argus: Interactive a priori Power Analysis. Xiaoyi Wang, Alexander Eiselmayer, Wendy E. Mackay, Kasper Hornbæk, Chat Wacharamanotham.

    Fig. 1: Argus interface: (A) Expected-averages view helps users estimate the means of the dependent variables through an interactive chart. (B) Confound sliders incorporate potential confounds, e.g., fatigue or practice effects. (C) Power trade-off view simulates data to calculate statistical power. (D) Pairwise-difference view displays confidence intervals for mean differences, animated as a dance of intervals. (E) History view displays an interactive power history tree so users can quickly compare statistical power with previously explored configurations.

    Abstract: A key challenge HCI researchers face when designing a controlled experiment is choosing the appropriate number of participants, or sample size. A priori power analysis examines the relationships among multiple parameters, including the complexity associated with human participants, e.g., order and fatigue effects, to calculate the statistical power of a given experiment design.
  • Advanced Power Analysis Workshop
    Advanced Power Analysis Workshop. www.nicebread.de. PD Dr. Felix Schönbrodt & Dr. Stella Bollmann, Ludwig-Maximilians-Universität München. www.researchtransparency.org. Twitter: @nicebread303.

    Part I: General concepts of power analysis. Part II: Hands-on: repeated measures ANOVA and multiple regression. Part III: Power analysis in multilevel models. Part IV: Tailored design analyses by simulations.

    Part I covers: What is "statistical power"? Why power is important. From power analysis to design analysis: planning for precision (and other stuff). How to determine the expected/minimally interesting effect size.

    What is statistical power? A 2x2 classification matrix crosses reality (effect present or absent) with the test result: a significant test when the effect is present is a true positive; a significant test when there is no effect is a false positive (type I error, rate α); a non-significant test when the effect is present is a false negative (type II error, rate β); and a non-significant test when there is no effect is a true negative. In an a priori power analysis we assume that the effect exists in reality, so power = 1 − β is the probability that the test yields p < .05; conventional values are α = 5% and β = 20%. (See https://effectsizefaq.files.wordpress.com/2010/05/type-i-and-type-ii-errors.jpg.)

    Calibrate your power feeling: total n required (all a priori power analyses with α = 5%, β = 20%, two-tailed):
    - Two-sample t test (between design), d = 0.5: n = 128 (64 per group)
    - One-sample t test (within design), d = 0.5: n = 34
    - Correlation, r = .21: n = 173
    - Difference between two correlations, r₁ = .15, r₂ = .40 (q = 0.273): n = 428
    - ANOVA, 2x2 design, interaction effect, f = 0.21: n = 180 (45 per group)

    The power of within-subject (within-SS) designs. May, K., & Hittner, J.
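One row of the table above is easy to verify by simulation. The following Python sketch (using NumPy and SciPy; not part of the workshop materials) simulates a two-sample t test with d = 0.5 and 64 subjects per group and should report power close to the advertised 80%.

```python
# Minimal sketch: check one row of the power table by simulation. A two-sample
# t test with d = 0.5 and 64 subjects per group should have roughly 80% power
# at a two-tailed alpha of 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
d, n_per_group, alpha, n_sim = 0.5, 64, 0.05, 20_000

hits = 0
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, n_per_group)      # control group
    b = rng.normal(d, 1.0, n_per_group)        # treatment group, shifted by d standard deviations
    if stats.ttest_ind(a, b).pvalue < alpha:
        hits += 1

print(f"simulated power: {hits / n_sim:.3f}   (expected roughly 0.80)")
```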