Paper

Peer-reviewed
July 12, 2021

Multi-Modal Adaptive Fusion Transformer Network for the Estimation of Depression Level

Sensors (IF: 4.35)
  • Hao Sun
  • Jiaqing Liu
  • Shurong Chai
  • Zhaolin Qiu
  • Lanfen Lin
  • Xinyin Huang
  • Yenwei Chen

Volume
21
Issue
14
Start page
4764
End page
4764
Language
English
Publication type
Research paper (academic journal)
DOI
10.3390/s21144764
Publisher
MDPI AG

Depression is a severe psychological condition that affects millions of people worldwide. As depression has received increasing attention in recent years, developing automatic methods for detecting it has become imperative. Although numerous machine learning methods have been proposed for estimating depression levels via audio, visual, and audiovisual emotion sensing, several challenges remain. For example, it is difficult to extract long-term temporal context information from long sequences of audio and visual data, and it is also difficult to select and fuse useful multi-modal information or features effectively. In addition, incorporating auxiliary information or tasks to improve estimation accuracy is itself a challenge. In this study, we propose a multi-modal adaptive fusion transformer network for estimating depression levels. Because transformer-based models have achieved state-of-the-art performance in language understanding and sequence modeling, we use transformer-based networks to extract long-term temporal context information from uni-modal audio and visual data. This is the first transformer-based approach for depression detection. We also propose an adaptive fusion method for adaptively fusing useful multi-modal features. Furthermore, inspired by recent multi-task learning work, we incorporate an auxiliary task (depression classification) to enhance the main task of depression level regression (estimation). The effectiveness of the proposed method is validated on a public dataset (the AVEC 2019 Detecting Depression with AI Sub-challenge) in terms of PHQ-8 scores. Experimental results indicate that the proposed method outperforms current state-of-the-art methods, achieving a concordance correlation coefficient (CCC) of 0.733 on AVEC 2019, 6.2% higher than that of the previous state-of-the-art method (CCC = 0.696).
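To make the architecture described in the abstract concrete, here is a minimal PyTorch sketch of the three ideas it names: per-modality transformer encoders for long-term temporal context, a gated adaptive fusion of the audio and visual embeddings, and a multi-task head (PHQ-8 regression plus an auxiliary depression classifier), together with the CCC metric used for evaluation. This is an illustrative reconstruction under assumed module names, feature dimensions, and gating form, not the authors' released implementation.

```python
# Illustrative sketch only: module names, dimensions, and the softmax-gated
# fusion are assumptions, not the authors' released code.
import torch
import torch.nn as nn


class AdaptiveFusionTransformer(nn.Module):
    """Per-modality transformer encoders + adaptive fusion + two task heads."""

    def __init__(self, audio_dim=64, visual_dim=128, d_model=256,
                 n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # Project each modality into a shared model dimension.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.visual_proj = nn.Linear(visual_dim, d_model)

        def encoder():
            return nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                num_layers=n_layers)

        self.audio_enc = encoder()    # long-term temporal context, audio
        self.visual_enc = encoder()   # long-term temporal context, visual
        # Adaptive fusion: learn per-sample weights for the two modalities.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, 2), nn.Softmax(dim=-1))
        # Main task: PHQ-8 score regression; auxiliary task: classification.
        self.regressor = nn.Linear(d_model, 1)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, audio, visual):
        # audio: (B, T_a, audio_dim); visual: (B, T_v, visual_dim)
        a = self.audio_enc(self.audio_proj(audio)).mean(dim=1)    # (B, d_model)
        v = self.visual_enc(self.visual_proj(visual)).mean(dim=1)
        w = self.gate(torch.cat([a, v], dim=-1))                  # (B, 2)
        fused = w[:, :1] * a + w[:, 1:] * v                       # (B, d_model)
        return self.regressor(fused).squeeze(-1), self.classifier(fused)


def ccc(pred, target):
    """Concordance correlation coefficient (the AVEC evaluation metric):
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = pred.mean(), target.mean()
    vx, vy = pred.var(unbiased=False), target.var(unbiased=False)
    cov = ((pred - mx) * (target - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

A training loop would then minimize a weighted sum of a regression loss on the PHQ-8 head (e.g. MSE or 1 - CCC) and cross-entropy on the auxiliary classification head, the usual formulation for such multi-task setups.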

Link information
DOI
https://doi.org/10.3390/s21144764
URL
https://www.mdpi.com/1424-8220/21/14/4764/pdf
ID information
  • DOI : 10.3390/s21144764
  • eISSN : 1424-8220
