ETRI Knowledge Sharing Platform


Selective Compression Learning of Latent Representations for Variable-rate Image Compression
Cited 7 times in Scopus
Authors
Jooyoung Lee, Seyoon Jeong, Munchurl Kim
Issue Date
2022-12
Citation
Conference on Neural Information Processing Systems (NeurIPS) 2022, pp.1-12
Language
English
Type
Conference Paper
Abstract
Recently, many neural network-based image compression methods have shown promising results, surpassing existing tool-based conventional codecs. However, most of them are trained as separate models for different target bit rates, which increases model complexity. Several studies have therefore explored learned compression that supports variable rates with a single model, but these approaches require additional network modules, layers, or inputs that often incur complexity overhead, or they do not provide sufficient coding efficiency. In this paper, we are the first to propose a selective compression method that partially encodes the latent representations in a fully generalized manner for deep learning-based variable-rate image compression. The proposed method adaptively determines the representation elements essential for compression at different target quality levels. To this end, we first generate a 3D importance map, reflecting the nature of the input content, to represent the underlying importance of the representation elements. The 3D importance map is then adjusted for different target quality levels using importance adjustment curves. The adjusted 3D importance map is finally converted into a 3D binary mask that determines the essential representation elements for compression. The proposed method can be easily integrated into existing compression models with a negligible amount of overhead. It also enables continuously variable-rate compression via simple interpolation of the importance adjustment curves between quality levels. Extensive experimental results show that the proposed method achieves compression efficiency comparable to that of separately trained reference compression models and reduces decoding time owing to the selective compression.
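The pipeline the abstract describes — importance map, per-quality adjustment curves, binary mask, and curve interpolation for continuous rates — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the importance map here is random (in the paper it is produced by a small network from the input content), and the power-law adjustment curve and 0.5 threshold are assumptions standing in for the learned adjustment curves.

```python
import numpy as np

def adjust_importance(imp_map, curve):
    # Apply an importance adjustment curve for a target quality level.
    # A simple power-law (gamma) curve is assumed here as a stand-in
    # for the learned curves described in the paper.
    return imp_map ** curve

def selective_mask(imp_map, curve, threshold=0.5):
    # Adjust the 3D importance map, then binarize it to select the
    # latent representation elements that will actually be encoded.
    adjusted = adjust_importance(imp_map, curve)
    return (adjusted > threshold).astype(np.float32)

def interpolate_curve(curve_lo, curve_hi, alpha):
    # Continuously variable rate: interpolate the adjustment curves
    # between two trained quality levels (alpha in [0, 1]).
    return (1.0 - alpha) * curve_lo + alpha * curve_hi

rng = np.random.default_rng(0)
latents = rng.normal(size=(192, 16, 16))    # latent representation y (C, H, W)
imp_map = rng.uniform(size=latents.shape)   # 3D importance map (from a small net in the paper)

mask_low_q = selective_mask(imp_map, curve=3.0)    # steep curve -> fewer elements, lower rate
mask_high_q = selective_mask(imp_map, curve=0.5)   # gentle curve -> more elements, higher rate
mask_mid_q = selective_mask(imp_map, interpolate_curve(0.5, 3.0, 0.5))

masked_latents = latents * mask_low_q       # only selected elements are entropy-coded

print(mask_low_q.mean() < mask_mid_q.mean() < mask_high_q.mean())
```

The fraction of ones in each mask is a proxy for the bit rate: a steeper adjustment curve suppresses more of the importance map below the threshold, so fewer latent elements survive and the rate drops, while elements zeroed by the mask can be skipped entirely at decoding time.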
KSP Keywords
Binary mask, Coding efficiency, Compression method, Compression model, Decoding time, Image Compression, Latent representations, Learning-based, Network modules, Quality level, compression efficiency