Assessing the quality of automatic-generated short answers using GPT-4

Luiz Rodrigues, Filipe Dwan Pereira, Luciano Cabral, Dragan Gašević, Geber Ramalho, Rafael Ferreira Mello

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Open-ended assessments play a pivotal role in enabling instructors to evaluate student knowledge acquisition and provide constructive feedback. Integrating large language models (LLMs) such as GPT-4 in educational settings presents a transformative opportunity for assessment methodologies. However, existing literature on LLMs addressing open-ended questions lacks breadth, relying on limited data or overlooking question difficulty levels. This study evaluates GPT-4's proficiency in responding to open-ended questions spanning diverse topics and cognitive complexities in comparison to human responses. To facilitate this assessment, we generated a dataset of 738 open-ended questions across Biology, Earth Sciences, and Physics and systematically categorized it based on Bloom's Taxonomy. Each question included eight human-generated responses and two from GPT-4. The outcomes indicate GPT-4's superior performance over humans, encompassing both native and non-native speakers, irrespective of gender. Nevertheless, this advantage was not sustained in 'remembering' or 'creating' questions aligned with Bloom's Taxonomy. These results highlight GPT-4's potential for underpinning advanced question-answering systems, its promising role in supporting non-native speakers, and its capacity to augment teacher assistance in assessments. However, limitations in nuanced argumentation and creativity underscore areas necessitating refinement in these models, guiding future research toward bolstering pedagogical support.

Original language: English
Article number: 100248
Number of pages: 12
Journal: Computers and Education: Artificial Intelligence
Volume: 7
DOIs
Publication status: Published - Dec 2024

Keywords

  • Automatic answer generation
  • GPT-4
  • Large language models
  • Natural language processing
  • Question-answering
