Assessing risk of bias in prevalence studies: modification of an existing tool and evidence of interrater agreement

Damian Hoy, Peter G Brooks, Anthony Woolf, Fiona Blyth, Lyn March, Christopher Bain, Peter Baker, Emma Smith, Rachelle Buchbinder

Research output: Contribution to journal › Article › Research › peer-review

627 Citations (Scopus)


Objective: In the course of performing systematic reviews on the prevalence of low back and neck pain, we required a tool to assess the risk of study bias. Our objectives were to (1) modify an existing checklist and (2) test the final tool for interrater agreement.

Study Design and Setting: The final tool consists of 10 items addressing four domains of bias plus a summary risk of bias assessment. Two researchers tested the interrater agreement of the tool by independently assessing 54 randomly selected studies. Interrater agreement overall and for each individual item was assessed using the proportion of agreement and the Kappa statistic.

Results: Raters found the tool easy to use, and there was high interrater agreement: overall agreement was 91% and the Kappa statistic was 0.82 (95% confidence interval: 0.76, 0.86). Agreement was almost perfect for the individual items on the tool and moderate for the summary assessment.

Conclusion: We have addressed a research gap by modifying and testing a tool to assess risk of study bias. Further research may be useful for assessing the applicability of the tool across different conditions. © 2012 Elsevier Inc. All rights reserved.
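The abstract reports interrater agreement as both the raw proportion of agreement and Cohen's Kappa, which corrects observed agreement for the agreement expected by chance. As a minimal sketch of how these two measures relate (the ratings below are hypothetical, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two raters classifying 10 studies as low/high risk of bias.
a = ["low", "low", "high", "low", "high", "low", "low", "high", "low", "low"]
b = ["low", "low", "high", "low", "low",  "low", "low", "high", "low", "low"]
print(f"observed agreement: {sum(x == y for x, y in zip(a, b)) / len(a):.2f}")
print(f"kappa: {cohens_kappa(a, b):.2f}")
```

Because kappa discounts chance agreement, it is always at or below the raw proportion of agreement, which is why the paper can report 91% agreement alongside a kappa of 0.82.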
Original language: English
Pages (from-to): 934-939
Number of pages: 6
Journal: Journal of Clinical Epidemiology
Issue number: 9
Publication status: Published - 2012
