ETRI Knowledge Sharing Platform

Speech Emotion Recognition Model Based on Joint Modeling of Discrete and Dimensional Emotion Representation
Cited 0 times in Scopus; downloaded 34 times
Authors
John Lorenzo Bautista, Hyun Soon Shin
Issue Date
2025-01
Citation
APPLIED SCIENCES-BASEL, v.15, no.2, pp.1-20
ISSN
2076-3417
Publisher
MDPI
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.3390/app15020623
Abstract
This paper introduces a novel joint model architecture for Speech Emotion Recognition (SER) that integrates both discrete and dimensional emotional representations, allowing for the simultaneous training of classification and regression tasks to improve the comprehensiveness and interpretability of emotion recognition. By employing a joint loss function that combines categorical and regression losses, the model ensures balanced optimization across tasks, with experiments exploring various weighting schemes using a tunable parameter to adjust task importance. Two adaptive weight balancing schemes, Dynamic Weighting and Joint Weighting, further enhance performance by dynamically adjusting task weights based on optimization progress and ensuring balanced emotion representation during backpropagation. The architecture employs parallel feature extraction through independent encoders, designed to capture unique features from multiple modalities, including Mel-frequency Cepstral Coefficients (MFCC), Short-term Features (STF), Mel-spectrograms, and raw audio signals. Additionally, pre-trained models such as Wav2Vec 2.0 and HuBERT are integrated to leverage their robust latent features. The inclusion of self-attention and co-attention mechanisms allows the model to capture relationships between input modalities and interdependencies among features, further improving its interpretability and integration capabilities. Experiments conducted on the IEMOCAP dataset using a leave-one-subject-out approach demonstrate the model’s effectiveness, with results showing a 1–2% accuracy improvement over classification-only models. The optimal configuration, incorporating the joint architecture, dynamic weighting, and parallel processing of multimodal features, achieves a weighted accuracy of 72.66%, an unweighted accuracy of 73.22%, and a mean Concordance Correlation Coefficient (CCC) of 0.3717. These results validate the effectiveness of the proposed joint model architecture and adaptive balancing weight schemes in improving SER performance.
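The abstract describes a joint loss that combines a categorical loss for the discrete emotion classes with a regression loss for the dimensional emotion values, balanced by a tunable weighting parameter. As an illustration only, the sketch below shows one plausible way to implement such a joint objective in PyTorch, using cross-entropy for the discrete task and a CCC-based loss for the dimensional task; the function names, the alpha parameter, and the exact form of the losses are assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ccc_loss(pred, target, eps=1e-8):
    """1 - Concordance Correlation Coefficient, averaged over dimensions
    (e.g., valence/arousal/dominance). pred, target: (batch, dims).
    Assumed form of the regression loss; the paper reports mean CCC as a metric."""
    pred_mean = pred.mean(dim=0)
    target_mean = target.mean(dim=0)
    pred_var = pred.var(dim=0, unbiased=False)
    target_var = target.var(dim=0, unbiased=False)
    cov = ((pred - pred_mean) * (target - target_mean)).mean(dim=0)
    ccc = (2 * cov) / (pred_var + target_var + (pred_mean - target_mean) ** 2 + eps)
    return (1 - ccc).mean()

def joint_loss(class_logits, class_labels, dim_preds, dim_targets, alpha=0.5):
    """Weighted sum of a categorical loss (discrete emotions) and a regression
    loss (dimensional emotions); alpha is the tunable task-importance weight
    mentioned in the abstract (its exact role here is an assumption)."""
    l_cls = F.cross_entropy(class_logits, class_labels)
    l_reg = ccc_loss(dim_preds, dim_targets)
    return alpha * l_cls + (1 - alpha) * l_reg

# Example usage with random tensors (4 discrete classes, 3 dimensions).
logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
dim_pred = torch.rand(8, 3)
dim_true = torch.rand(8, 3)
loss = joint_loss(logits, labels, dim_pred, dim_true, alpha=0.7)
```

The abstract's Dynamic Weighting and Joint Weighting schemes would replace the fixed alpha above with weights updated during training; their update rules are not specified here, so they are omitted from the sketch.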
KSP Keywords
Adaptive weight, Attention mechanism, Audio signal, Correlation Coefficient, Dynamic weighting, Enhance performance, Feature extraction, Input modalities, Joint modeling, Mel-frequency Cepstral Coefficient (MFCC), Model architecture
This work is distributed under the terms of the Creative Commons License (CCL) (CC BY).