ETRI Knowledge Sharing Platform


Dialogue State Tracking with Zero-Shot and Few-Shot Learning for Generalization: A Review
Cited 0 times in Scopus
Authors
Seungyeon Kim, Yejin Park, Junseong Bang
Issue Date
2022-08
Citation
International Conference on Platform Technology and Service (PlatCon) 2022, pp.75-79
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/PlatCon55845.2022.9932101
Abstract
Research on Dialogue State Tracking (DST) has achieved meaningful advances on benchmark datasets. However, the ability of DST models to generalize robustly to unseen data remains an open issue. Hence, this paper reviews recent studies on DST with zero-shot and few-shot learning. DST for task-oriented dialogue systems is explained by introducing its datasets and evaluation metrics. DST models can be categorized into four groups: DST based on a pre-trained model, DST using a description, DST using a prompt, and DST with cross-task learning. The characteristics of each model are described, and the performance of models evaluated under the same conditions is summarized.
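To make the DST setting concrete, the following is a minimal sketch (not from the paper) of a dialogue state represented as domain-slot/value pairs, together with joint goal accuracy, a metric commonly used in DST evaluation: a turn counts as correct only if the entire predicted state matches the gold state. The slot names below are hypothetical, MultiWOZ-style examples.

```python
def joint_goal_accuracy(predictions, golds):
    """Fraction of turns whose predicted state exactly matches the gold state."""
    correct = sum(1 for pred, gold in zip(predictions, golds) if pred == gold)
    return correct / len(golds)

# Hypothetical two-turn dialogue (domain-slot -> value pairs accumulate per turn).
gold_states = [
    {"restaurant-food": "italian", "restaurant-area": "centre"},
    {"restaurant-food": "italian", "restaurant-area": "centre",
     "restaurant-pricerange": "cheap"},
]
pred_states = [
    {"restaurant-food": "italian", "restaurant-area": "centre"},
    # Second turn gets one slot wrong, so the whole turn counts as incorrect.
    {"restaurant-food": "italian", "restaurant-area": "north",
     "restaurant-pricerange": "cheap"},
]

print(joint_goal_accuracy(pred_states, gold_states))  # 0.5
```

Because the metric requires an exact match over all slots, a single wrong value fails the turn, which is part of why generalization to unseen slots and domains is demanding.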
KSP Keywords
Benchmark datasets, Generalization ability, Pre-trained model, State tracking, Task-oriented, Zero-shot, dialogue system, evaluation metrics