We need to talk about reliability: making better use of test-retest studies for study design and interpretation

Granville J. Matheson

Department of Clinical Neuroscience, Center for Psychiatry Research, Karolinska Institutet and Stockholm County Council, Stockholm, Sweden

Corresponding author: Granville J. Matheson, [email protected]

ABSTRACT

Neuroimaging, like many other fields of clinical research, is both time-consuming and expensive, and recruitable patients can be scarce. These constraints limit the possibility of large-sample experimental designs, and often lead to statistically underpowered studies. This problem is exacerbated by the use of outcome measures whose accuracy is sometimes insufficient to answer the scientific questions posed. Reliability is usually assessed in validation studies using healthy participants; however, these results are often not easily applicable to clinical studies examining different populations. I present a new method and tools for using summary statistics from previously published test-retest studies to approximate the reliability of outcomes in new samples. In this way, the feasibility of a new study can be assessed during the planning stages, before collecting any new data. An R package called relfeas accompanies this article for performing these calculations. In summary, these methods and tools will allow researchers to avoid performing costly studies which are, by virtue of their design, unlikely to yield informative conclusions.

Subjects: Neuroscience, Psychiatry and Psychology, Radiology and Medical Imaging, Statistics
Keywords: Reliability, Positron Emission Tomography, Neuroimaging, Study design, R package, Power analysis, Intraclass correlation coefficient

Submitted 27 September 2018
Accepted 7 April 2019
Published 24 May 2019

INTRODUCTION

In the assessment of individual differences, reliability is typically assessed using test-retest reliability, inter-rater reliability or internal consistency.