ETRI-Knowledge Sharing Platform

Knowledge Distillation based Compact Model Learning Method for Object Detection
Cited 4 times in Scopus
Authors
Jong Gook Ko, Wonyoung Yoo
Issue Date
2020-10
Citation
International Conference on Information and Communication Technology Convergence (ICTC) 2020, pp.1276-1278
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICTC49870.2020.9289463
Abstract
Recently, video analysis technology based on deep learning has been advancing at a very rapid pace, with most development focused on improving recognition performance in server environments. However, beyond video analysis in the existing server environment, demand for object detection in visual image analysis has recently been increasing on low-specification embedded boards and in mobile environments such as smartphones, drones, and industrial boards. Despite significant improvements in the accuracy of existing object detectors, image processing for real-time applications often requires considerable runtime. Therefore, many studies are being conducted on lightweight object detection technology, and knowledge distillation is one of the solutions. Approaches such as model compression use fewer parameters, but they suffer from a significant drop in accuracy. In this paper, we propose a method to improve the performance of lightweight MobileNet-SSD models for object detection by using knowledge transfer. We evaluate the method on the PASCAL VOC dataset, and our results show improved detection accuracy in object detection.
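The abstract describes transferring knowledge from a larger teacher detector to a lightweight MobileNet-SSD student. As a rough illustration of that idea only (not the authors' exact formulation), the sketch below blends a soft-target distillation term over the detector's class scores with the standard detection loss; the temperature, weighting factor, and toy tensor shapes are assumptions for illustration.

```python
# Hedged sketch of soft-target knowledge distillation for a detector's
# classification head. The temperature, alpha weight, and shapes are
# illustrative assumptions, not the paper's reported configuration.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student class distributions.

    student_logits, teacher_logits: (num_boxes, num_classes) raw scores from
    the detection heads for the same set of default boxes.
    """
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so the gradient magnitude stays comparable to the hard loss.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)

def total_detection_loss(hard_loss, student_logits, teacher_logits, alpha=0.5):
    """Blend the usual SSD multibox loss (against ground truth) with the
    distillation term; alpha is an assumed weighting factor."""
    soft_loss = distillation_loss(student_logits, teacher_logits)
    return (1.0 - alpha) * hard_loss + alpha * soft_loss

if __name__ == "__main__":
    # Toy shapes: 8732 default boxes (SSD300) over 21 PASCAL VOC classes.
    student = torch.randn(8732, 21, requires_grad=True)
    teacher = torch.randn(8732, 21)
    hard = torch.tensor(1.25)  # placeholder for the standard multibox loss value
    loss = total_detection_loss(hard, student, teacher)
    loss.backward()
    print(float(loss))
```

In practice the teacher would be a higher-capacity detector run in inference mode, and the distillation term would typically be restricted to boxes the teacher scores confidently; those details are omitted here for brevity.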
KSP Keywords
Detection accuracy, Detection technology, Image analysis, Image processing(IP), Knowledge Distillation, Knowledge transfer, Learning methods, Model compression, Model learning, PASCAL VOC dataset, Real-time