ETRI Knowledge Sharing Platform

Journal Article RRNet: Repetition-Reduction Network for Energy Efficient Depth Estimation
Cited 4 times in Scopus · Downloaded 142 times
Authors
Sangyun Oh, Hye-Jin S. Kim, Jongeun Lee, Junmo Kim
Issue Date
2020-06
Citation
IEEE Access, v.8, pp.106097-106108
ISSN
2169-3536
Publisher
IEEE
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1109/ACCESS.2020.3000773
Project Code
20PS1400, Development of cloud big data platform for the innovative manufacturing in ceramic industry, Suyoung Chi
Abstract
Lightweight neural networks that employ depthwise convolution have a significant computational advantage over those that use standard convolution because they involve fewer parameters; however, they also require more time, even with graphics processing units (GPUs). We propose a Repetition-Reduction Network (RRNet) in which the number of depthwise channels is large enough to reduce computation time while simultaneously being small enough to reduce GPU latency. RRNet also reduces power consumption and memory usage, not only in the encoder but also in the residual connections to the decoder. We apply RRNet to the problem of resource-constrained depth estimation, where it proves to be significantly more efficient than other methods in terms of energy consumption, memory usage, and computation. It has two key modules: the Repetition-Reduction (RR) block, which is a set of repeated lightweight convolutions that can be used for feature extraction in the encoder, and the Condensed Decoding Connection (CDC), which can replace the skip connection, delivering features to the decoder while significantly reducing the channel depth of the decoder layers. Experimental results on the KITTI dataset show that RRNet consumes 3.84× less energy and 3.06× less memory than conventional schemes, and that it is 2.21× faster on a commercial mobile GPU without increasing the demand on hardware resources relative to the baseline network. Furthermore, RRNet outperforms state-of-the-art lightweight models such as MobileNets, PyDNet, DiCENet, DABNet, and EfficientNet.
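The abstract's opening claim, that depthwise convolution involves far fewer parameters than standard convolution, can be illustrated with a short parameter-count sketch. The layer sizes below are hypothetical (the paper's exact configurations are not given on this page), and the formulas describe a generic depthwise-separable block rather than RRNet's specific RR block:

```python
# Parameter counts for a standard convolution vs. a depthwise-separable
# one (depthwise k x k followed by a 1x1 pointwise channel mixer).
# Channel and kernel sizes are illustrative, not taken from the paper.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Every output channel owns a full c_in x k x k kernel.
    return c_out * c_in * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise stage: one k x k kernel per input channel.
    # Pointwise stage: a 1x1 convolution mapping c_in -> c_out channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3          # hypothetical layer shape
std = standard_conv_params(c_in, c_out, k)
dws = depthwise_separable_params(c_in, c_out, k)
print(std, dws, round(std / dws, 2))  # -> 73728 8768 8.41
```

The roughly 8× parameter reduction at this layer size is why such blocks are attractive for resource-constrained depth estimation; the abstract's further point is that the parameter saving alone does not guarantee lower GPU latency, which is what RRNet's channel sizing addresses.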
KSP Keywords
Baseline network, Channel depth, Depth estimation, Feature extraction, Graphics Processing Unit (GPU), Hardware resources, Lightweight model, Mobile GPU, Neural networks, Power consumption, Resource-constrained
This work is distributed under the terms of the Creative Commons License (CC BY).