ETRI Knowledge Sharing Platform

End-to-End ASR semi-supervised training using adversarial self-training
Authors
Hoon Chung, Yoonhyung Kim, Byung-Ok Kang
Issue Date
2022-10
Citation
International Congress on Acoustics (ICA) 2022, pp.1-8
Language
English
Type
Conference Paper
Abstract
This paper proposes interleaved self-training with adversarial augmentation for semi-supervised end-to-end automatic speech recognition (ASR) model training. Consistency-regularized self-training is a common approach to exploiting unlabelled speech corpora for ASR model training: the model's highly confident predictions on weakly augmented data serve as target labels for strongly augmented versions of the same data. The augmentation and training strategies are therefore important, and in classification problems it is desirable to sample the augmented data near the decision boundary. Although various speech augmentation techniques exist, such as speed perturbation, noise addition, and channel distortion, these methods are not directly concerned with generating augmented data near decision boundaries. To address this issue, we propose adversarial augmentation, which generates examples that the model misclassifies, and we also investigate a batch-wise interleaved training strategy to prevent the ASR model from overfitting to the unlabelled data. The proposed approach was evaluated on the Wall Street Journal task domain. The experimental results show that the proposed method is effective, reducing the character error rate from 10.4% to 6.8%.
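The abstract describes two mechanisms: adversarial augmentation that perturbs unlabelled inputs toward the decision boundary, and batch-wise interleaving of labelled and unlabelled updates. Below is a minimal PyTorch sketch of how these two ideas could fit together in a CTC-based setup. It is an illustration under assumed names and shapes (TinyAsrModel, greedy_pseudo_labels, and interleaved_step are all hypothetical, and the confidence filtering mentioned in the abstract is omitted for brevity); it is not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAsrModel(nn.Module):
    # Stand-in encoder (hypothetical): maps log-mel frames (B, T, n_mels)
    # to CTC logits shaped (T, B, vocab).
    def __init__(self, n_mels=80, vocab=32):
        super().__init__()
        self.rnn = nn.GRU(n_mels, 128, batch_first=True)
        self.out = nn.Linear(128, vocab)

    def forward(self, feats):
        h, _ = self.rnn(feats)
        return self.out(h).transpose(0, 1)

def greedy_pseudo_labels(log_probs, blank=0):
    # Greedy CTC decode: collapse repeats, drop blanks; returns a list of label tensors.
    ids = log_probs.argmax(-1).transpose(0, 1)  # (B, T)
    out = []
    for seq in ids:
        seq = torch.unique_consecutive(seq)
        out.append(seq[seq != blank])
    return out

def adversarial_augment(model, feats, targets, feat_lens, target_lens, eps=0.3, blank=0):
    # FGSM-style perturbation: move the features in the direction that increases
    # the CTC loss against the model's own pseudo-labels, i.e. toward the
    # decision boundary, rather than in a random direction.
    feats = feats.clone().detach().requires_grad_(True)
    log_probs = model(feats).log_softmax(-1)
    loss = F.ctc_loss(log_probs, targets, feat_lens, target_lens,
                      blank=blank, zero_infinity=True)
    grad, = torch.autograd.grad(loss, feats)
    return (feats + eps * grad.sign()).detach()

def interleaved_step(model, opt, labelled, unlabelled, eps=0.3, blank=0):
    # One batch-wise interleaved update: a supervised step on a labelled batch,
    # then a consistency step on an adversarially augmented unlabelled batch.
    feats, feat_lens, targets, target_lens = labelled
    sup_loss = F.ctc_loss(model(feats).log_softmax(-1), targets,
                          feat_lens, target_lens, blank=blank, zero_infinity=True)
    opt.zero_grad(); sup_loss.backward(); opt.step()

    u_feats, u_lens = unlabelled
    with torch.no_grad():  # pseudo-labels come from the clean (unperturbed) view
        pl = greedy_pseudo_labels(model(u_feats).log_softmax(-1), blank)
    pl_lens = torch.tensor([len(p) for p in pl])
    if int(pl_lens.sum()) == 0:  # no usable pseudo-labels this batch
        return sup_loss.item(), 0.0
    pl_cat = torch.cat(pl)
    adv = adversarial_augment(model, u_feats, pl_cat, u_lens, pl_lens, eps, blank)
    unsup_loss = F.ctc_loss(model(adv).log_softmax(-1), pl_cat,
                            u_lens, pl_lens, blank=blank, zero_infinity=True)
    opt.zero_grad(); unsup_loss.backward(); opt.step()
    return sup_loss.item(), unsup_loss.item()

# Toy run with random tensors (shapes only; real inputs would be log-mel features):
model = TinyAsrModel(); opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lab = (torch.randn(4, 60, 80), torch.full((4,), 60),
       torch.randint(1, 32, (4, 8)), torch.full((4,), 8))
unlab = (torch.randn(4, 60, 80), torch.full((4,), 60))
print(interleaved_step(model, opt, lab, unlab))

Interleaving a labelled batch before each unlabelled batch keeps the supervised signal anchoring the pseudo-labels, which matches the abstract's stated motivation for preventing the model from overfitting to the unlabelled data.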
KSP Keywords
Classification problems, End to End(E2E), Noise addition, Speech corpora, Wall Street, automatic speech recognition(ASR), decision boundary, end-to-end ASR, error rate, self-training, semi-supervised training