Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations

Leah P. Macfadyen, Shane Dawson, Stewart Prest, Dragan Gašević

Research output: Contribution to journal › Article › Research › peer-review

12 Citations (Scopus)

Abstract

Student evaluation of teaching (SET) is now common practice across higher education, with the results used for both course improvement and quality assurance purposes. While much research has examined the validity of SETs for measuring teaching quality, few studies have investigated the factors that influence student participation in the SET process. This study aimed to address this deficit through the analysis of an SET respondent pool at a large Canadian research-intensive university. The findings were largely consistent with available research (showing influence of student gender, age, specialisation area and final grade on SET completion). However, the study also identified additional influential course-specific factors such as term of study, course year level and course type as statistically significant. Collectively, such findings point to substantively significant patterns of bias in the characteristics of the respondent pool. Further research is needed to specify and quantify the impact (if any) on SET scores. We conclude, however, that such bias does not invalidate SET implementation; rather, it should be acknowledged and reported within standard institutional practice, allowing better understanding of the feedback received and driving future efforts at recruiting student respondents.

Original language: English
Pages (from-to): 821-839
Number of pages: 19
Journal: Assessment & Evaluation in Higher Education
Volume: 41
Issue number: 6
DOIs
Publication status: Published - 2016
Externally published: Yes

Keywords

  • course evaluation
  • multilevel analysis
  • response bias
  • response rate
  • student evaluation of teaching
