TY - JOUR
T1 - Evaluating the relationship between citation set size, team size and screening methods used in systematic reviews: a cross-sectional study
AU - O’Hearn, Katie
AU - MacDonald, Cameron
AU - Tsampalieros, Anne
AU - Kadota, Leo
AU - Sandarage, Ryan
AU - Jayawarden, Supun Kotteduwa
AU - Datko, Michele
AU - Reynolds, John M.
AU - Bui, Thanh
AU - Sultan, Shagufta
AU - Sampson, Margaret
AU - Pratt, Misty
AU - Barrowman, Nick
AU - Nama, Nassr
AU - Page, Matthew
AU - McNally, James Dayre
N1 - Funding Information:
This work was supported in part by a Summer Studentship (CM) from the CHEO Research Institute.
Publisher Copyright:
© 2021, The Author(s).
PY - 2021/7/8
Y1 - 2021/7/8
AB - Background: Standard practice for conducting systematic reviews (SRs) is time-consuming and involves the study team screening hundreds or thousands of citations. As the volume of medical literature grows, citation set sizes and the corresponding screening effort increase. While larger team sizes and alternative screening methods have the potential to reduce workload and decrease SR completion times, it is unknown whether investigators adapt team size or methods in response to citation set size. Using a cross-sectional design, we sought to understand how citation set size impacts (1) the total number of authors or individuals contributing to screening and (2) the screening methods used. Methods: MEDLINE was searched in April 2019 for SRs on any health topic. A total of 1,880 unique publications were identified and sorted (after deduplication) into five citation set size categories: <1,000, 1,001–2,500, 2,501–5,000, 5,001–10,000, and >10,000. A random sample of 259 SRs was selected (~50 per category) for data extraction and analysis. Results: With the exception of the pairwise t test comparing the <1,000 and >10,000 categories (median 5 vs. 6, p = 0.049), no statistically significant relationship was evident between the number of authors and citation set size. While visual inspection was suggestive, statistical testing did not consistently identify a relationship between citation set size and the number of screeners (title-abstract, full-text) or data extractors. However, logistic regression indicated that investigators were significantly more likely to deviate from gold-standard screening methods (i.e., independent duplicate screening) with larger citation sets. For every doubling of citation set size, the odds of using gold-standard screening decreased by 15% and 20% at title-abstract and full-text review, respectively. Finally, few SRs reported using crowdsourcing (n = 2) or computer-assisted screening (n = 1). Conclusions: Large citation set sizes present a challenge to SR teams, especially when faced with time-sensitive health policy questions. Our study suggests that with increasing citation set size, authors are less likely to adhere to gold-standard screening methods. Adjunct screening methods, such as crowdsourcing (large team) and computer-assisted technologies, may provide a viable solution for authors to complete their SRs in a timely manner.
KW - Crowdsourcing
KW - Machine learning
KW - Scoping reviews
KW - Systematic reviews
UR - http://www.scopus.com/inward/record.url?scp=85109698663&partnerID=8YFLogxK
U2 - 10.1186/s12874-021-01335-5
DO - 10.1186/s12874-021-01335-5
M3 - Article
C2 - 34238247
AN - SCOPUS:85109698663
SN - 1471-2288
VL - 21
JO - BMC Medical Research Methodology
JF - BMC Medical Research Methodology
IS - 1
M1 - 142
ER -