ETRI-Knowledge Sharing Platform

Deep multimodal emotion recognition using modality-aware attention and proxy-based multimodal loss
Authors
Sungpil Woo, Muhammad Zubair, Sunhwan Lim, Daeyoung Kim
Issue Date
2025-05
Citation
INTERNET OF THINGS, v.31, pp.1-12
ISSN
2543-1536
Publisher
ELSEVIER
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1016/j.iot.2025.101562
Abstract
Emotion recognition based on physiological signals has garnered significant attention across various fields, including affective computing, health, virtual reality, robotics, and content rating. Recent advancements in technology have led to the development of multi-modal bio-sensing systems that enhance data collection efficiency by simultaneously recording and tracking multiple bio-signals. However, integrating multiple physiological signals for emotion recognition presents significant challenges due to the fusion of diverse data types. Differences in signal characteristics and noise levels significantly deteriorate the classification performance of a multi-modal system and therefore require effective feature extraction and fusion techniques that combine the most informative features from each modality without causing feature conflict. To this end, this study introduces a novel multi-modal emotion recognition method that addresses these challenges by leveraging electroencephalogram and electrocardiogram data to classify different levels of arousal and valence. The proposed deep multimodal architecture exploits a novel modality-aware attention mechanism to highlight mutually important and emotion-specific features. Additionally, a novel proxy-based multimodal loss function is employed for supervision during training to ensure the constructive contribution of each modality while preserving its unique characteristics. By addressing the critical issues of multi-modal signal fusion and emotion-specific feature extraction, the proposed multimodal architecture learns a constructive and complementary representation of multiple physiological signals and thus significantly improves the performance of emotion recognition systems. Through a series of experiments and visualizations conducted on the AMIGOS dataset, we demonstrate the efficacy of our proposed methodology for emotion classification.
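The abstract names two key components, a modality-aware attention mechanism and a proxy-based multimodal loss, without implementation details. The sketch below is only an illustration of how such components are commonly realized: the class names, the cross-attention formulation, and the proxy-NCA-style objective are assumptions for illustration, not the authors' actual architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityAwareAttention(nn.Module):
    """Illustrative cross-modal attention: one modality's features are
    re-weighted using queries conditioned on the other modality."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x_query_mod, x_context_mod):
        # x_*: (batch, tokens, dim) feature sequences from two modalities
        q = self.query(x_query_mod)
        k = self.key(x_context_mod)
        v = self.value(x_context_mod)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return x_query_mod + attn @ v  # residual fusion of attended context


class ProxyBasedLoss(nn.Module):
    """Proxy-NCA-style objective applied to each modality's embedding, pulling
    it toward a learnable class proxy while keeping its own embedding space
    (a stand-in for the paper's proxy-based multimodal loss)."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, emb, labels):
        emb = F.normalize(emb, dim=-1)
        proxies = F.normalize(self.proxies, dim=-1)
        logits = emb @ proxies.t()  # cosine similarity to each class proxy
        return F.cross_entropy(logits, labels)


# Toy usage with random stand-ins for EEG and ECG feature sequences
if __name__ == "__main__":
    batch, tokens, dim, num_classes = 8, 16, 64, 4  # e.g. arousal/valence quadrants
    eeg = torch.randn(batch, tokens, dim)
    ecg = torch.randn(batch, tokens, dim)
    labels = torch.randint(0, num_classes, (batch,))

    eeg_attn = ModalityAwareAttention(dim)
    ecg_attn = ModalityAwareAttention(dim)
    fused_eeg = eeg_attn(eeg, ecg).mean(dim=1)  # EEG attended by ECG context
    fused_ecg = ecg_attn(ecg, eeg).mean(dim=1)  # ECG attended by EEG context

    proxy_loss = ProxyBasedLoss(num_classes, dim)
    loss = proxy_loss(fused_eeg, labels) + proxy_loss(fused_ecg, labels)
    print(loss.item())
```

Supervising each modality's embedding separately against shared proxies, rather than only the fused output, is one way to encourage the constructive per-modality contribution the abstract describes; the paper's exact formulation may differ.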
KSP Keywords
Attention mechanism, Bio-sensing, Classification Performance, Collection efficiency, Critical issues, Data Collection, Data type, Feature extraction, Informative features, Modal signal, Multimodal emotion recognition
This work is distributed under the terms of the Creative Commons License (CCL) (CC BY-NC).