There has been a considerable amount of research over the last few years devoted to studying what factors lead to student success in online courses, whether for-credit or open. However, there has been relatively little work on formally studying which findings replicate across courses. In this paper, we present an architecture to facilitate replication of this type of research, which can ingest data from an edX Massive Open Online Course (MOOC) and test whether a range of findings hold, either in their original form or in slightly modified forms discovered through an automated search process. We identify 21 findings from previously published studies on completion in MOOCs, render them into production rules within our architecture, and test them in the case of a single MOOC, using a post-hoc method to control for multiple comparisons. We find that nine of these previously published results replicate successfully in the current data set and that contradictory results are found in two cases. This work represents a step towards automated replication of correlational research findings at large scale.
Number of pages: 21
Journal: Technology, Instruction, Cognition and Learning
Publication status: Published - 2017