Curtailing marking variation and enhancing feedback in large scale undergraduate chemistry courses through reducing academic judgement: a case study

Stephen George-Williams, Mary-Rose Carroll, Angela Ziebell, Christopher Thompson, Tina Overton

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Variation in marks awarded, alongside the quality of feedback, is an issue whenever large-scale assessment is undertaken. In particular, variation between sessional teaching staff has been studied for decades, resulting in many recorded efforts to overcome this issue. Attempts to curtail variation range from moderation meetings and extended training programmes to electronic tools, automated feedback and even audio/video feedback. Decreased marking variation was observed whenever automated marking was used, potentially because less academic judgement was required of the markers. This article focuses on a case study of three interventions undertaken at Monash University that were designed to address concerns about the variability of marking and feedback between sessional teaching staff employed in the chemistry teaching laboratories. The interventions included the use of detailed marking criteria, Excel marking spreadsheets and automatically marked Moodle reports. Results indicated that the more detailed marking criteria had no effect on marking variation, whilst the automated processes caused a consistent decrease. This was attributed to a reduction in the academic judgement markers were expected to exercise. Only the Excel spreadsheet ensured the provision of consistent feedback to students. Sessional teaching staff commented that their marking loads were reduced and that the new methods were easy to use.
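
As a rough illustration of the between-marker variation the abstract refers to, the sketch below quantifies it as the standard deviation of each marker's mean mark across comparable reports. This is a minimal sketch only; the marker labels and marks are invented for illustration and do not come from the study.

# Minimal illustrative sketch, not from the paper: quantifying
# between-marker variation as the spread of the mean marks that
# hypothetical markers award to comparable lab reports (out of 10).
from statistics import mean, stdev

marks_by_marker = {
    "marker_A": [7.5, 8.0, 7.0, 8.5, 7.5],  # invented data
    "marker_B": [6.0, 6.5, 5.5, 6.0, 6.5],
    "marker_C": [9.0, 8.5, 9.5, 9.0, 8.5],
}

# Mean mark each marker awards on comparable work.
marker_means = {name: mean(marks) for name, marks in marks_by_marker.items()}

# Between-marker variation: the spread of those means. A fully
# automated scheme applies the same deterministic rules for every
# submission, which is why it would drive this figure towards zero.
between_marker_sd = stdev(marker_means.values())

for name, avg in marker_means.items():
    print(f"{name}: mean mark = {avg:.2f}")
print(f"standard deviation of marker means = {between_marker_sd:.2f}")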

Original language: English
Pages (from-to): 881-893
Number of pages: 13
Journal: Assessment & Evaluation in Higher Education
Volume: 44
Issue number: 6
DOIs: 10.1080/02602938.2018.1545897
Publication status: Published - 18 Aug 2019

Keywords

  • Electronic marking
  • large cohorts
  • marking criteria
  • sessional teaching staff

Cite this

@article{bc5151a09d404636b64031d26fa0b41c,
title = "Curtailing marking variation and enhancing feedback in large scale undergraduate chemistry courses through reducing academic judgement: a case study",
abstract = "Variation in marks awarded, alongside the quality of feedback, is an issue whenever large-scale assessment is undertaken. In particular, variation between sessional teaching staff has been studied for decades, resulting in many recorded efforts to overcome this issue. Attempts to curtail variation range from moderation meetings and extended training programmes to electronic tools, automated feedback and even audio/video feedback. Decreased marking variation was observed whenever automated marking was used, potentially because less academic judgement was required of the markers. This article focuses on a case study of three interventions undertaken at Monash University that were designed to address concerns about the variability of marking and feedback between sessional teaching staff employed in the chemistry teaching laboratories. The interventions included the use of detailed marking criteria, Excel marking spreadsheets and automatically marked Moodle reports. Results indicated that the more detailed marking criteria had no effect on marking variation, whilst the automated processes caused a consistent decrease. This was attributed to a reduction in the academic judgement markers were expected to exercise. Only the Excel spreadsheet ensured the provision of consistent feedback to students. Sessional teaching staff commented that their marking loads were reduced and that the new methods were easy to use.",
keywords = "Electronic marking, large cohorts, marking criteria, sessional teaching staff",
author = "Stephen George-Williams and Mary-Rose Carroll and Angela Ziebell and Christopher Thompson and Tina Overton",
year = "2019",
month = "8",
day = "18",
doi = "10.1080/02602938.2018.1545897",
language = "English",
volume = "44",
pages = "881--893",
journal = "Assessment & Evaluation in Higher Education",
issn = "0260-2938",
publisher = "Taylor & Francis",
number = "6",
url = "http://www.scopus.com/inward/record.url?scp=85059320740&partnerID=8YFLogxK",
}
