Sample size matters when estimating test-retest reliability of behaviour

Text - Accepted Version
· Restricted to Repository staff only


It is advisable to refer to the publisher's version if you intend to cite from this work.


Williams, B. (ORCID: https://orcid.org/0000-0003-3844-3117), Fitzgibbon, L. (ORCID: https://orcid.org/0000-0002-8563-391X), Brady, D. and Christakou, A. (ORCID: https://orcid.org/0000-0002-4267-3436) (2024) Sample size matters when estimating test-retest reliability of behaviour. Behavior Research Methods. ISSN 1554-3528 doi: 10.3758/s13428-025-02599-1 (In Press)

Abstract/Summary

Intraclass correlation coefficients (ICCs) are a commonly used metric in test-retest reliability research to assess a measure's ability to quantify systematic between-subject differences. However, estimates of between-subject differences are also influenced by other factors, including within-subject variability, random error, and measurement bias. Here, we use data collected from a large online sample (N = 150) to: (1) quantify the test-retest reliability of behavioural and computational measures of reversal learning using ICCs; and (2) use our dataset as the basis of a simulation study investigating the effects of sample size on variance component estimation, and the association between estimates of variance components and ICC measures. In line with previously published work, we find reliable behavioural and computational measures of reversal learning, a commonly used assay of behavioural flexibility. Reliable estimates of the between-subjects, within-subjects (across-session), and error variance components for behavioural and computational measures (with ±.05 precision and 80% confidence) required sample sizes ranging from 10 to over 300 (behavioural median N: between-subjects = 167, within-subjects = 34, error = 103; computational median N: between-subjects = 68, within-subjects = 20, error = 45). These sample sizes exceed those typically used in reliability studies (circa 30), suggesting that substantially larger samples are required to robustly estimate the reliability of task performance measures. Additionally, as might be expected, ICC estimates were strongly positively correlated with the between-subjects variance component and strongly negatively correlated with the error variance component, and these associations remained relatively stable across sample sizes. However, ICC estimates were weakly correlated or uncorrelated with the within-subjects variance component, underscoring the importance of variance decomposition in reliability studies.
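The abstract's link between ICCs and variance components can be illustrated numerically. The sketch below (not the authors' code; a minimal illustration assuming a two-way random-effects, absolute-agreement, single-measure ICC, often written ICC(2,1)) computes the ANOVA mean squares for subjects, sessions, and error from a subjects-by-sessions score matrix and combines them into the ICC. The function name `icc_2_1` is a hypothetical helper introduced here for illustration.

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC.

    Illustrative sketch only, not the paper's implementation.
    data: (n_subjects, k_sessions) array of scores.
    Returns (icc, dict of ANOVA mean squares).
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means

    # Mean squares from the two-way ANOVA decomposition
    ms_subj = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects
    ms_sess = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-sessions
    resid = data - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # error

    icc = (ms_subj - ms_err) / (
        ms_subj + (k - 1) * ms_err + k * (ms_sess - ms_err) / n
    )
    return icc, {"MS_subjects": ms_subj, "MS_sessions": ms_sess, "MS_error": ms_err}

# Perfectly consistent scores across two sessions -> ICC = 1
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
icc_perfect, _ = icc_2_1(perfect)

# A constant session offset (measurement bias) lowers absolute-agreement ICC
shifted = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
icc_shifted, _ = icc_2_1(shifted)
```

The second example shows why variance decomposition matters, as the abstract argues: subjects remain perfectly rank-ordered across sessions, yet the session (within-subject) variance introduced by the constant offset reduces the absolute-agreement ICC below 1.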


Item Type: Article
URI: https://reading-clone.eprints-hosting.org/id/eprint/120436
Identification Number/DOI: 10.3758/s13428-025-02599-1
Refereed: Yes
Divisions:
Interdisciplinary Research Centres (IDRCs) > Centre for Integrative Neuroscience and Neurodynamics (CINN)
Life Sciences > School of Psychology and Clinical Language Sciences > Department of Psychology
Life Sciences > School of Psychology and Clinical Language Sciences > Neuroscience
Life Sciences > School of Psychology and Clinical Language Sciences > Language and Cognition
Uncontrolled Keywords: reliability; test-retest; sample size; reinforcement learning; computational modelling; reversal learning; cognitive flexibility
Publisher: Springer

