Machine learning for multimodal mental health detection: A systematic review of passive sensing approaches

Lin Sze Khoo, Mei Kuan Lim, Chun Yong Chong, Roisin McNaney

Research output: Contribution to journal › Review Article › Research › peer-review

Abstract

As mental health (MH) disorders become increasingly prevalent, their multifaceted symptoms and comorbidities with other conditions introduce complexity to diagnosis, posing a risk of underdiagnosis. While machine learning (ML) has been explored to mitigate these challenges, we hypothesized that multiple data modalities support more comprehensive detection and that non-intrusive collection approaches better capture natural behaviors. To understand current trends, we systematically reviewed 184 studies to assess the feature extraction, feature fusion, and ML methodologies applied to detect MH disorders from passively sensed multimodal data, including audio and video recordings, social media, smartphones, and wearable devices. Our findings revealed varying correlations of modality-specific features in individualized contexts, potentially influenced by demographics and personality. We also observed the growing adoption of neural network architectures, both for model-level fusion and as ML algorithms in their own right, which have demonstrated promising efficacy in handling high-dimensional features while modeling within- and cross-modality relationships. This work provides future researchers with a clear taxonomy of methodological approaches to the multimodal detection of MH disorders, to inspire methodological advancements. The comprehensive analysis also guides future researchers in selecting an optimal data source that aligns with their specific use case, based on the MH disorder of interest.
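To make the fusion terminology concrete, the sketch below illustrates what model-level fusion of passively sensed modalities can look like: each modality is passed through its own encoder, and the learned representations (rather than the raw features) are concatenated before classification. All dimensions, feature names, and weights here are hypothetical placeholders, not taken from any reviewed study; a real system would learn the weights by training.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w, b):
    """One-layer modality encoder: linear projection followed by ReLU."""
    return np.maximum(x @ w + b, 0.0)

# Hypothetical setup: 10 samples, 128 audio features (e.g. prosodic/spectral)
# and 64 wearable features (e.g. heart rate, actigraphy summaries).
n = 10
audio = rng.normal(size=(n, 128))
wearable = rng.normal(size=(n, 64))

# Modality-specific encoders map each stream into a shared 16-d latent space.
w_a, b_a = rng.normal(size=(128, 16)) * 0.1, np.zeros(16)
w_w, b_w = rng.normal(size=(64, 16)) * 0.1, np.zeros(16)

z_audio = encode(audio, w_a, b_a)
z_wear = encode(wearable, w_w, b_w)

# Model-level fusion: concatenate the learned representations, then apply a
# final classifier head that can mix information across modalities.
z = np.concatenate([z_audio, z_wear], axis=1)   # shape (n, 32)
w_out, b_out = rng.normal(size=(32, 1)) * 0.1, 0.0
logits = z @ w_out + b_out
probs = 1.0 / (1.0 + np.exp(-logits))           # per-sample detection score
```

Early (feature-level) fusion would instead concatenate `audio` and `wearable` directly before any encoding; the review's observation is that fusing learned representations lets each encoder handle its modality's high dimensionality first, while the shared head models cross-modality relationships.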

Original language: English
Article number: 348
Number of pages: 65
Journal: Sensors
Volume: 24
Issue number: 2
DOIs
Publication status: Published - 6 Jan 2024

Keywords

  • machine learning
  • mental health
  • multimodal detection
  • passive sensing
  • systematic review
