Curtailing marking variation and enhancing feedback in large scale undergraduate chemistry courses through reducing academic judgement: a case study

Stephen George-Williams, Mary-Rose Carroll, Angela Ziebell, Christopher Thompson, Tina Overton

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Variation in marks awarded, alongside the quality of feedback, is an issue whenever large-scale assessment is undertaken. In particular, variation between sessional teaching staff has been studied for decades, resulting in many recorded efforts to overcome this issue. Attempts to curtail variation range from moderation meetings and extended training programmes to electronic tools, automated feedback, and even audio/video feedback. Decreased marking variation has been observed whenever automated marking was used, potentially because markers exercise less academic judgement. This article focuses on a case study of three interventions undertaken at Monash University that were designed to address concerns around the variability of marking and feedback between sessional teaching staff employed in the chemistry teaching laboratories. The interventions comprised detailed marking criteria, Excel marking spreadsheets and automatically marked Moodle reports. Results indicated that more detailed marking criteria had no effect, whereas automated processes caused a consistent decrease in marking variation. This was attributed to a decrease in the academic judgement markers were expected to use. Only the Excel spreadsheet ensured the provision of consistent feedback to students. Sessional teaching staff commented that their marking loads were reduced and that the new methods were easy to use.

Original language: English
Pages (from-to): 881-893
Number of pages: 13
Journal: Assessment & Evaluation in Higher Education
Volume: 44
Issue number: 6
DOIs
Publication status: Published - 18 Aug 2019

Keywords

  • Electronic marking
  • large cohorts
  • marking criteria
  • sessional teaching staff
