ETRI Knowledge Sharing Platform
Block-wise Word Embedding Compression Revisited: Better Weighting and Structuring
Cited 5 times in Scopus, downloaded 137 times
Authors
Jong-Ryul Lee, Yong-Ju Lee, Yong-Hyuk Moon
Issue Date
2021-11
Citation
Findings of the Association for Computational Linguistics: EMNLP 2021, pp.4379-4388
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.18653/v1/2021.findings-emnlp.372
Abstract
Word embedding is essential to neural network models for various natural language processing tasks. Because the embedding matrix is usually large, it must be compressed effectively before a model containing it can be deployed on edge devices. A prior study, GroupReduce, proposed a block-wise low-rank approximation method for word embedding. Although its structure is effective, the properties behind block-wise word embedding compression were not sufficiently explored. Motivated by this, we improve GroupReduce in terms of word weighting and structuring. For word weighting, we propose a simple yet effective method inspired by term frequency-inverse document frequency (TF-IDF), as well as a novel differentiable method. Building on these, we construct a discriminative word embedding compression algorithm. In the experiments, we demonstrate that the proposed algorithm finds word weights more effectively than competitors in most cases. In addition, we show that the proposed algorithm can act as a framework through successful cooperation with quantization.
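The abstract describes the approach only at a high level. As a rough illustration of the general idea of block-wise, weight-aware low-rank compression of an embedding matrix (not the paper's actual algorithm), the following NumPy sketch groups words into blocks by frequency and factorizes each block with frequency-derived row weights, so that more heavily weighted words incur less reconstruction error. The function names, the sqrt-frequency weighting, and the block/rank settings are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def weighted_blockwise_compress(emb, freqs, num_blocks=4, rank=16):
    """Sketch: block-wise, weight-aware low-rank embedding compression.

    emb   : (V, d) embedding matrix
    freqs : (V,) word frequencies, used both for grouping and as row weights
    Returns a list of (row_indices, U, V) factors, one per block.
    """
    order = np.argsort(-freqs)                  # most frequent words first
    blocks = np.array_split(order, num_blocks)  # frequency-based blocks
    factors = []
    for idx in blocks:
        E = emb[idx]                            # embedding rows of this block
        w = np.sqrt(freqs[idx] + 1e-8)          # sqrt weights -> weighted least squares
        U, s, Vt = np.linalg.svd(E * w[:, None], full_matrices=False)
        r = min(rank, len(s))
        U_r = (U[:, :r] * s[:r]) / w[:, None]   # undo the row scaling on the left factor
        factors.append((idx, U_r, Vt[:r]))      # per-block low-rank factors
    return factors

def reconstruct(factors, shape):
    """Rebuild a dense embedding from the per-block factors (to inspect error)."""
    out = np.zeros(shape)
    for idx, U_r, V_r in factors:
        out[idx] = U_r @ V_r
    return out

# Illustrative usage with random data.
rng = np.random.default_rng(0)
emb = rng.standard_normal((10000, 300))
freqs = rng.zipf(1.5, size=10000).astype(float)
approx = reconstruct(weighted_blockwise_compress(emb, freqs), emb.shape)
```

Scaling each row by the square root of its weight before the SVD makes the standard truncated SVD minimize a per-word weighted reconstruction error; the paper's contribution concerns how such weights are chosen (TF-IDF-inspired and differentiable variants), which this sketch does not reproduce.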
KSP Keywords
Approximation methods, Compression algorithm, Edge devices, Frequency method, Low-rank approximation, Natural Language Processing (NLP), Neural network model, Word embedding, Neural network (NN), Term frequency-inverse document frequency (TF-IDF)
This work is distributed under the terms of the Creative Commons License (CC BY).