ETRI Knowledge Sharing Platform

Depth Attention Net
Authors
Hye-Jin S. Kim, Seung-Min Choi, Suyoung Chi
Issue Date
2019-10
Citation
International Conference on Information and Communication Technology Convergence (ICTC) 2019, pp.1110-1112
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICTC46691.2019.8939688
Abstract
Depth estimation has attracted much attention in the past five years for visual SLAM, AR, VR, autonomous vehicles, 3D object understanding, and so on. As in stereo matching, deep-learning-based depth estimation methods for stereo images have achieved surprisingly large performance gains, but it is still not well understood why a learned model can estimate depth in a new image, and their accuracy still needs improvement. Therefore, we apply an attention mechanism to depth learning. Attention mechanisms have brought great improvements in various applications such as visual identification of objects, speech recognition, reasoning, image captioning, summarization, segmentation, machine translation (and NLP in general), and image classification, but not yet in depth estimation. This is because depth estimation is considered a geometric problem. We apply an attention mechanism to the PSMNet [1] method; attention is usually applied as channel attention. We evaluate our method on two benchmark datasets: KITTI [2] and SceneFlow [3]. In our experiments, we found that the attention net can improve the quality of the estimated depth.
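
Illustrative sketch (not from the paper): the abstract describes adding channel attention to a stereo depth network such as PSMNet. The PyTorch snippet below is a minimal squeeze-and-excitation style channel attention module, assuming that form of channel attention; the module name, reduction ratio, feature shapes, and insertion point in the PSMNet pipeline are assumptions for illustration, not the authors' exact design.

# Minimal sketch of channel attention (SE-style); names and shapes are illustrative.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels using globally pooled statistics."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: two 1x1 convs
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(self.pool(x))                    # per-channel weights in (0, 1)
        return x * w                                 # rescale each channel

# Hypothetical usage: reweight feature maps before building the stereo cost volume.
feats = torch.randn(2, 320, 64, 128)                 # assumed left-image feature shape
attn = ChannelAttention(channels=320)
out = attn(feats)                                     # same shape, channels reweighted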
KSP Keywords
3D object understanding, Attention mechanism, Autonomous vehicle, Benchmark datasets, Depth estimation, Image Classification, Machine Translation (MT), Matching analysis, Visual Identification, Visual SLAM, accuracy improvement