Abstract
Recent advances in multimodal large language models (MLLMs) have made significant progress in integrating information across modalities, yet real-world applications in educational and scientific domains remain challenging. This paper introduces the Multimodal Scientific ASR (MS-ASR) task, which focuses on transcribing scientific conference videos by leveraging visual information from slides to enhance the accuracy of technical terminology. Recognizing that traditional metrics such as WER fall short of assessing performance accurately, we propose severity-aware WER (SWER), which considers the content type and severity of ASR errors. We also propose the Scientific Vision Augmented ASR (SciVASR) framework as a baseline method, enabling MLLMs to improve transcript quality through post-editing. Evaluations of state-of-the-art MLLMs, including GPT-4o, show a 45% improvement over speech-only baselines, highlighting the importance of multimodal information integration.
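For context on the metric the abstract builds on: standard WER is the word-level Levenshtein distance between reference and hypothesis, normalized by reference length. A minimal sketch follows; note this is plain WER, not the severity-aware SWER the paper proposes (SWER's content-type weighting is defined in the paper and not reproduced here).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution or match
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c d", "a x c d")` gives 0.25 (one substitution over four reference words); SWER would instead weight that substitution by how severely it distorts the content.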
Original language | English |
---|---|
Title of host publication | EMNLP 2024, The 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024 |
Editors | Yaser Al-Onaizan, Mohit Bansal, Yun-Nung (Vivian) Chen |
Place of Publication | Kerrville TX USA |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 13274–13288 |
Number of pages | 15 |
ISBN (Print) | 9798891761681 |
Publication status | Published - 2024 |
Event | Empirical Methods in Natural Language Processing 2024 - Hyatt Regency Miami Hotel, Miami, United States of America<br>Duration: 12 Nov 2024 → 16 Nov 2024<br>https://aclanthology.org/volumes/2024.emnlp-main/<br>https://2024.emnlp.org/<br>https://aclanthology.org/events/emnlp-2024/#2024emnlp-main |
Conference
Conference | Empirical Methods in Natural Language Processing 2024 |
---|---|
Abbreviated title | EMNLP 2024 |
Country/Territory | United States of America |
City | Miami |
Period | 12/11/24 → 16/11/24 |
Internet address | https://aclanthology.org/volumes/2024.emnlp-main/<br>https://2024.emnlp.org/<br>https://aclanthology.org/events/emnlp-2024/#2024emnlp-main |