Epilepsyecosystem.org: crowd-sourcing reproducible seizure prediction with long-term human intracranial EEG

Levin Kuhlmann, Philippa Karoly, Dean R. Freestone, Benjamin H. Brinkmann, Andriy Temko, Alexandre Barachant, Feng Li, Gilberto Titericz, Brian W. Lang, Daniel Lavery, Kelly Roman, Derek Broadhead, Scott Dobson, Gareth Jones, Qingnan Tang, Irina Ivanenko, Oleg Panichev, Timothée Proix, Michal Náhlík, Daniel B. Grunberg, Chip Reuben, Gregory Worrell, Brian Litt, David T.J. Liley, David B. Grayden, Mark J. Cook

Research output: Contribution to journal › Article › Research › peer-review

38 Citations (Scopus)

Abstract

Accurate seizure prediction will transform epilepsy management by offering warnings to patients or triggering interventions. However, state-of-the-art algorithm design relies on access to adequate long-term data. Crowd-sourcing ecosystems leverage quality data to enable cost-effective, rapid development of predictive algorithms. A crowd-sourcing ecosystem for seizure prediction is presented involving an international competition, a follow-up held-out data evaluation, and an online platform, Epilepsyecosystem.org, for yielding further improvements in prediction performance. Crowd-sourced algorithms were obtained via the 'Melbourne-University AES-MathWorks-NIH Seizure Prediction Challenge' conducted at kaggle.com. Long-term continuous intracranial electroencephalography (iEEG) data (442 days of recordings and 211 lead seizures per patient) were analysed from prediction-resistant patients who had the lowest seizure prediction performances in the NeuroVista Seizure Advisory System clinical trial. Contestants (646 individuals in 478 teams) from around the world developed algorithms to distinguish between 10-min inter-seizure and pre-seizure data clips. Over 10 000 algorithms were submitted. The top algorithms, as ranked on the contest data, were evaluated on a much larger held-out dataset. The data and top algorithms are available online for further investigation and development. The top-performing contest entry achieved an area under the classification curve of 0.81. Performance decreased by only 6.7% on the held-out data. Many other teams also showed high prediction reproducibility. Pseudo-prospective evaluation demonstrated that many algorithms, when used alone or weighted by circadian information, performed better than the benchmarks, including an average increase in sensitivity of 1.9 times the original clinical trial sensitivity for matched time in warning.
These results indicate that clinically relevant seizure prediction is possible in a wider range of patients than previously thought. Moreover, different algorithms performed best for different patients, supporting the use of patient-specific algorithms and long-term monitoring. The crowd-sourcing ecosystem for seizure prediction will enable further worldwide community study of the data to yield greater improvements in prediction performance by way of competition, collaboration and synergism.
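The contest task described above reduces to binary classification of 10-min iEEG clips (pre-seizure vs inter-seizure), with entries ranked by area under the classification curve (AUC). As a minimal sketch of how such a ranking metric is computed, the snippet below evaluates AUC via its rank-statistic definition; the clip labels and classifier scores are invented for illustration and are not from the challenge data.

```python
def auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    (pre-seizure) clip is scored higher than a randomly chosen negative
    (inter-seizure) clip, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = pre-seizure clip, 0 = inter-seizure clip
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]  # invented classifier outputs
print(round(auc(labels, scores), 3))  # -> 0.917
```

An AUC of 0.5 corresponds to chance-level ranking, so the winning entry's 0.81 on the contest data, falling only 6.7% on held-out data, indicates substantially better-than-chance and reproducible discrimination.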

Original language: English
Pages (from-to): 2619-2630
Number of pages: 12
Journal: Brain
Volume: 141
Issue number: 9
DOIs
Publication status: Published - Sep 2018
Externally published: Yes
