ETRI Knowledge Sharing Platform


Details

Conference Paper
Block-wise Word Embedding Compression Revisited: Better Weighting and Structuring
Cited 2 times in Scopus · Downloaded 70 times
Authors
이종률, 이용주, 문용혁
Publication Date
November 2021
Source
Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 4379-4388
Project
21HS7200, Development of adaptive and lightweight edge-collaborative analysis technology capable of active immediate response and fast learning, 문용혁
Abstract
Word embeddings are essential components of neural network models for various natural language processing tasks. Because a word embedding matrix is usually large, it must be compressed effectively before a model containing it can be deployed on edge devices. A prior study proposed GroupReduce, a block-wise low-rank approximation method for word embeddings. Although its block-wise structure is effective, the properties underlying block-wise word embedding compression were not sufficiently explored. Motivated by this, we improve GroupReduce in terms of word weighting and structuring. For word weighting, we propose a simple yet effective method inspired by term frequency-inverse document frequency (TF-IDF) as well as a novel differentiable method. Building on these, we construct a discriminative word embedding compression algorithm. Our experiments demonstrate that the proposed algorithm finds word weights more effectively than competing methods in most cases. In addition, we show that the proposed algorithm can serve as a framework, cooperating successfully with quantization.
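To make the block-wise idea concrete, the sketch below groups a toy vocabulary by frequency, splits it into blocks, and compresses each block with a row-weighted truncated SVD so that heavily weighted words are reconstructed more faithfully. The block count, rank schedule, and simple frequency-based weights are illustrative assumptions, not the paper's GroupReduce settings or its learned weighting; the resulting factors could additionally be quantized, which the abstract notes composes well with the method.

```python
# A minimal sketch of block-wise weighted low-rank approximation for a word
# embedding matrix, in the spirit of GroupReduce. All hyperparameters here
# (block count, rank schedule, frequency-based weights) are illustrative
# assumptions, not the paper's exact algorithm.
import numpy as np

def weighted_low_rank(block, weights, rank):
    """Approximate `block` (n_words x dim) with a rank-`rank` factorization,
    giving rows with larger (strictly positive) `weights` more influence."""
    w = np.sqrt(weights)[:, None]          # row weights enter via sqrt scaling
    u, s, vt = np.linalg.svd(w * block, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]           # truncated factors of the scaled matrix
    return (u_r / w) @ vt[:rank]           # undo the row scaling

def block_wise_compress(embedding, freqs, n_blocks=4, base_rank=8):
    """Sort words by frequency, split into blocks, and compress each block.
    More frequent blocks get a higher rank, so important words lose less."""
    order = np.argsort(-freqs)             # most frequent words first
    blocks = np.array_split(order, n_blocks)
    approx = np.empty_like(embedding)
    for i, idx in enumerate(blocks):
        rank = base_rank * (n_blocks - i)  # frequent blocks keep more rank
        weights = freqs[idx] / freqs[idx].sum()   # TF-style row weights
        approx[idx] = weighted_low_rank(embedding[idx], weights, rank)
    return approx

# Toy usage: 1,000-word vocabulary, 64-dim embeddings, Zipf-like frequencies.
rng = np.random.default_rng(0)
E = rng.standard_normal((1000, 64))
f = 1.0 / np.arange(1, 1001)
E_hat = block_wise_compress(E, f)
print("relative reconstruction error:", np.linalg.norm(E - E_hat) / np.linalg.norm(E))
```

The row weights enter through a square-root scaling because the row-weighted Frobenius objective is minimized exactly by a truncated SVD of the scaled matrix, after which the scaling is undone.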
KSP Suggested Keywords
Approximation methods, Compression Algorithm, Edge devices, Frequency method, Low-rank approximation, Natural Language Processing, Word Embedding, neural network model, term frequency-inverse document frequency (TF-IDF)
This work may be used under the terms of the Creative Commons Attribution (CC BY) license.