Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition
Conference paper, 2024


Abstract

Human communication is multi-modal: face-to-face interaction, for example, involves auditory signals (speech) and visual signals (face movements and hand gestures). It is therefore essential to exploit multiple modalities when designing machine learning-based facial expression recognition systems. Moreover, given the ever-growing quantities of video data capturing human facial expressions, such systems should utilize raw unlabeled videos without requiring expensive annotations. In this work, we employ a multi-task multi-modal self-supervised learning method for facial expression recognition from in-the-wild video data. Our model combines three self-supervised objective functions: first, a multi-modal contrastive loss that pulls the different data modalities of the same video together in the representation space; second, a multi-modal clustering loss that preserves the semantic structure of the input data in the representation space; and third, a multi-modal data reconstruction loss. We conduct a comprehensive study of this multi-modal multi-task self-supervised learning method on three facial expression recognition benchmarks, examining how different combinations of self-supervised tasks affect performance on the facial expression recognition downstream task. Our model, ConCluGen, outperforms several multi-modal self-supervised and fully supervised baselines on the CMU-MOSEI dataset. Our results show that multi-modal self-supervision tasks offer large performance gains on challenging tasks such as facial expression recognition, while also reducing the amount of manual annotation required. We release our pre-trained models as well as the source code publicly.
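To make the three objectives concrete, below is a minimal sketch of how such a combined multi-task loss could be assembled, assuming paired video/audio embeddings, soft cluster assignments per modality, and a feature reconstruction head. All function names, weightings, and tensor shapes here are illustrative assumptions, not the authors' released implementation (see their public source code for that).

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_video, z_audio, temperature=0.07):
    """InfoNCE-style loss pulling paired video/audio embeddings together.

    z_video, z_audio: (B, D) embeddings of the same B clips.
    """
    z_v = F.normalize(z_video, dim=-1)
    z_a = F.normalize(z_audio, dim=-1)
    logits = z_v @ z_a.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z_v.size(0), device=z_v.device)
    # Symmetric: each video clip matches its own audio track and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def clustering_loss(assign_video, assign_audio):
    """Encourage consistent soft cluster assignments across modalities.

    assign_video, assign_audio: (B, K) cluster-probability distributions.
    """
    return -(assign_audio * torch.log(assign_video + 1e-8)).sum(dim=-1).mean()

def reconstruction_loss(x, x_hat):
    """Plain MSE between input features and their reconstruction."""
    return F.mse_loss(x_hat, x)

def total_loss(z_v, z_a, p_v, p_a, x, x_hat, w=(1.0, 1.0, 1.0)):
    """Combined multi-task objective with hypothetical weighting coefficients."""
    return (w[0] * contrastive_loss(z_v, z_a)
            + w[1] * clustering_loss(p_v, p_a)
            + w[2] * reconstruction_loss(x, x_hat))
```

The weighted sum is one common way to combine self-supervised objectives; the actual loss formulation, weighting, and clustering mechanism used by ConCluGen are specified in the paper.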
Main file
2404.10904v2.pdf (863.65 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04778749, version 1 (12-11-2024)

Cite

Marah Halawa, Florian Blume, Pia Bideau, Martin Maier, Rasha Abdel Rahman, et al. Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition. CVPR 2024 - IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2024, Seattle, United States. pp. 1-12. ⟨10.1109/CVPRW63382.2024.00463⟩. ⟨hal-04778749⟩