Abstract
This paper presents a self-supervised method for visual detection of the active speaker in a multi-person spoken interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement acoustic detection of the active speaker, thus improving the system's robustness in noisy conditions. The method can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations and is therefore consistent with cognitive development; instead, it uses information from the auditory modality to support learning in the visual domain. This paper reports an extensive evaluation of the proposed method on a large multi-person face-to-face interaction data set. The results show good performance in a speaker-dependent setting, whereas in a speaker-independent setting the method yields significantly lower performance. We believe that the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions.
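The core idea, that an acoustic active speaker detector can supply the training labels for a purely visual detector operating on face images, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; the energy-based labelling function, the small CNN, and the toy data shapes are assumptions introduced only to show the self-supervised, cross-modal training loop (PyTorch).

```python
# Minimal sketch: audio-derived labels supervise a visual speaking/not-speaking
# classifier on face crops. All names and shapes here are illustrative.
import torch
import torch.nn as nn

def energy_vad(frame_audio: torch.Tensor, threshold: float = 0.01) -> torch.Tensor:
    """Label a video frame as 'speaking' (1) when the aligned audio frame's
    short-time energy exceeds a threshold; this stands in for any acoustic
    active speaker detector providing the self-supervision signal."""
    energy = frame_audio.pow(2).mean(dim=-1)
    return (energy > threshold).long()

class FaceSpeakingClassifier(nn.Module):
    """Small CNN mapping a face crop to speaking/not-speaking logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(face))

# Toy training step: random face crops and audio frames stand in for
# synchronised video/audio of one participant.
faces = torch.rand(8, 3, 64, 64)      # 8 face crops
audio = torch.randn(8, 800) * 0.1     # 8 time-aligned audio frames
labels = energy_vad(audio)            # the auditory modality provides the labels

model = FaceSpeakingClassifier()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(faces), labels)
optim.zero_grad()
loss.backward()
optim.step()
```

At inference time only the visual branch is needed, which is what allows the detector to keep working when the acoustic signal is degraded by noise.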
| Original language | English |
|---|---|
| Pages (from-to) | 250-259 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Cognitive and Developmental Systems |
| Volume | 12 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Jun 2019 |
| Externally published | Yes |
Keywords
- Active speaker detection and localization
- Cognitive systems and development
- Language acquisition through development
- Transfer learning