ETRI Knowledge Sharing Platform

Improving Statistical Machine Translation using Shallow Linguistic Knowledge
Cited 9 times in Scopus
Authors
Young-Sook Hwang, Andrew Finch, Yutaka Sasaki
Issue Date
2007-04
Citation
Computer Speech and Language, v.21, no.2, pp.350-372
ISSN
0885-2308
Publisher
Elsevier
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1016/j.csl.2006.06.007
Abstract
We describe methods for improving the performance of statistical machine translation (SMT) between four linguistically different languages (Chinese, English, Japanese, and Korean) by using morphosyntactic knowledge. To reduce translation ambiguities and generate grammatically correct, fluent translation output, we address the use of shallow linguistic knowledge, namely: (1) enriching a word with its morphosyntactic features, (2) obtaining shallow linguistically-motivated phrase pairs, (3) iteratively refining word alignment using filtered phrase pairs, and (4) building a language model from morphosyntactically enriched words. Previous studies reported that introducing syntactic features into SMT models yielded only slight performance improvements despite heavy computational expense; this study, however, demonstrates the effectiveness of morphosyntactic features when reliable, discriminative features are used. Our experimental results show that word representations incorporating morphosyntactic features significantly improve the performance of the translation model and language model. Moreover, we show that refining the word alignment using fine-grained phrase pairs is effective in improving system performance. © 2006 Elsevier Ltd. All rights reserved.
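Technique (1) above, enriching a surface word with its morphosyntactic features, can be illustrated with a minimal sketch. The sketch assumes a word|TAG concatenation scheme and uses a toy lookup table in place of a real morphological analyzer; the function and tag names are hypothetical, not from the paper.

```python
# Hypothetical sketch of morphosyntactic word enrichment: each surface
# token is concatenated with a POS tag so that ambiguous words (e.g.
# "can" as modal vs. noun) become distinct units for the translation
# model and language model. TOY_TAGS stands in for a real tagger.

TOY_TAGS = {
    "the": "DT", "bank": "NN", "can": "MD", "open": "VB",
}

def enrich(tokens, tagger=TOY_TAGS.get):
    """Attach a morphosyntactic tag to each token: 'bank' -> 'bank|NN'."""
    return [f"{w}|{tagger(w, 'UNK')}" for w in tokens]

print(enrich(["the", "bank", "can", "open"]))
# ['the|DT', 'bank|NN', 'can|MD', 'open|VB']
```

Because the enriched tokens replace the plain words throughout, the same corpus can then feed word alignment and language-model training without further changes to the pipeline.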
KSP Keywords
Computational expense, Discriminative feature, Language model, Linguistic knowledge, Machine Translation(MT), Statistical Machine Translation, Syntactic features, System performance, Translation Model, Word Alignment, fine-grained