Self-supervised vision-based detection of the active speaker as support for socially aware language acquisition

Kalin Stefanov, Jonas Beskow, Giampiero Salvi

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)

Abstract

This paper presents a self-supervised method for visual detection of the active speaker in a multiperson spoken interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement the acoustic detection of the active speaker, thus improving the system's robustness in noisy conditions. The method can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations, in line with the constraints of cognitive development. Instead, it uses information from the auditory modality to support learning in the visual domain. This paper reports an extensive evaluation of the proposed method on a large multiperson face-to-face interaction data set. The results show good performance in a speaker-dependent setting; in a speaker-independent setting, however, the method yields significantly lower performance. We believe that the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions.
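The core idea described above, using the auditory modality to supervise a purely visual detector, can be illustrated with a minimal sketch. Everything below is hypothetical and synthetic (feature dimensions, the energy threshold, and the logistic-regression classifier are illustrative stand-ins, not the paper's actual architecture): per-frame "visual" features are labeled by a simple audio-energy voice activity heuristic, a classifier is trained on those pseudo-labels, and at test time it operates on vision alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical): per-frame visual face features and
# audio energy for one tracked person. In the self-supervised setting, a
# voice activity signal derived from audio supplies the training labels;
# here a simple energy threshold plays that role.
n_frames, n_feat = 400, 8
speaking = rng.integers(0, 2, n_frames)            # ground truth (never used in training)
visual = rng.normal(0.0, 1.0, (n_frames, n_feat)) + speaking[:, None] * 1.5
audio_energy = speaking * 1.0 + rng.normal(0.0, 0.2, n_frames)

# Self-supervision: pseudo-labels come from the audio channel,
# not from manual annotation.
pseudo_labels = (audio_energy > 0.5).astype(float)

# Train a logistic-regression classifier on the visual features using the
# audio-derived pseudo-labels (plain gradient descent).
w = np.zeros(n_feat)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(visual @ w + b)))   # predicted speaking probability
    grad = p - pseudo_labels                       # gradient of the logistic loss
    w -= lr * (visual.T @ grad) / n_frames
    b -= lr * grad.mean()

# At test time the detector uses vision alone; audio is no longer needed.
pred = (1.0 / (1.0 + np.exp(-(visual @ w + b))) > 0.5).astype(int)
accuracy = (pred == speaking).mean()
print(f"visual-only accuracy vs. ground truth: {accuracy:.2f}")
```

The design choice this illustrates is that the audio signal is only a scaffold for training: once the visual classifier is learned, detection degrades gracefully in acoustically noisy conditions because inference depends on the face features alone.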

Original language: English
Pages (from-to): 250-259
Number of pages: 10
Journal: IEEE Transactions on Cognitive and Developmental Systems
Volume: 12
Issue number: 2
DOIs
Publication status: Published - Jun 2019
Externally published: Yes

Keywords

  • Active speaker detection and localization
  • Cognitive systems and development
  • Language acquisition through development
  • Transfer learning
