ETRI Knowledge Sharing Platform



Details

Conference Paper: Knowledge Distillation based Compact Model Learning Method for Object Detection
Authors
고종국, 유원영
Publication Date
October 2020
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2020, pp.1276-1278
DOI
https://dx.doi.org/10.1109/ICTC49870.2020.9289463
Project
20HH2700, Development of Mobile AR Technology Supporting Object Extraction and Real-Virtual Registration, 고종국
Abstract
Recently, video analysis technology based on deep learning has been advancing at a very rapid pace, with most development focused on improving recognition performance in server environments. Beyond the server setting, however, demand for object detection in visual image analysis has recently been increasing on low-specification embedded boards and in mobile environments such as smartphones, drones, and industrial boards. Despite significant improvements in the accuracy of existing object detectors, image processing for real-time applications often requires substantial runtime. Many studies therefore target lightweight object detection technology, and knowledge distillation is one of the solutions. Approaches such as model compression use fewer parameters, but accuracy is significantly reduced. In this paper, we propose a method to improve the object detection performance of lightweight MobileNet-SSD models using knowledge transfer. We conduct an evaluation on the PASCAL VOC dataset, and our results show improved detection accuracy in object detection.
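The knowledge transfer described in the abstract, training a lightweight MobileNet-SSD student against a larger teacher detector, can be sketched for a single classification head using the standard Hinton-style distillation loss. This is a minimal illustrative sketch, not the paper's implementation: the temperature `T`, mixing weight `alpha`, and all function names are assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style knowledge distillation loss for one classification head.

    Mixes a hard cross-entropy term against ground-truth labels with a soft
    cross-entropy term against the teacher's temperature-softened outputs.
    T and alpha are illustrative values, not settings from the paper.
    """
    n = labels.shape[0]
    p_student = softmax(student_logits)
    # Hard loss: standard cross-entropy with the true class labels.
    hard = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()
    # Soft loss: cross-entropy between softened teacher and student outputs,
    # scaled by T**2 to keep its gradient magnitude comparable to the hard term.
    q_teacher = softmax(teacher_logits, T)
    log_q_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(q_teacher * log_q_student).sum(axis=-1).mean() * (T ** 2)
    return alpha * hard + (1.0 - alpha) * soft
```

In a full SSD-style detector this per-head loss would be summed over anchor boxes and combined with the localization loss; matching the student to softened teacher scores lets it learn inter-class similarity that hard labels alone do not carry.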
Keywords
knowledge distillation, lightweight deep learning model, object detection
KSP Suggested Keywords
Detection accuracy, Detection technology, Image Analysis, Image processing, Knowledge transfer, Learning methods, Learning model, Model compression, Model learning, Object detection, PASCAL VOC dataset