ETRI Knowledge Sharing Platform

Details

Type
Journal Article
Title
Target-Style-Aware Unsupervised Domain Adaptation for Object Detection
Cited 2 times in Scopus, downloaded 29 times
Authors
윤우한, 한병옥, 이재연, 김재홍, 김준모
Publication Date
April 2021
Source
IEEE Robotics and Automation Letters, v.6 no.2, pp.3825-3832
ISSN
2377-3766
Publisher
IEEE
DOI
https://dx.doi.org/10.1109/LRA.2021.3062333
Research Project
20HS6400, Development of Robot Intelligence Technology that Continuously and Locally Adapts to User Reactions in Real-World Service Environments, 장민수
Abstract
Vision modules running on mobility platforms, such as robots and cars, often face challenging situations such as domain shift, where the distributions of the training (source) data and the test (target) data differ. Domain shift arises from several variation factors, such as style, camera viewpoint, object appearance, object size, backgrounds, and scene layout. In this work, we propose an object detection training framework for unsupervised domain-style adaptation. The proposed framework transfers target-style information to source samples and simultaneously trains the detection network with these target-stylized source samples in an end-to-end manner, allowing the detection network to learn the target domain from them. The style is extracted from object areas obtained using pseudo-labels, so that it reflects the style of the objects rather than that of irrelevant backgrounds. We empirically verified that the proposed method improves detection accuracy in diverse domain-shift scenarios using the Cityscapes, FoggyCityscapes, Sim10k, BDD100k, PASCAL, and Watercolor datasets.
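The framework in the abstract combines two steps: extract style statistics from pseudo-labeled object regions of target images, then re-render source samples with that style before feeding them to the detector. Below is a minimal, illustrative sketch of that idea, assuming AdaIN-style per-channel mean/std matching on feature maps; the function names (region_stats, stylize_features, boxes_to_mask) are hypothetical and not from the authors' code.

import torch

# A toy sketch (PyTorch assumed): match per-channel feature statistics of
# source samples to those of pseudo-labeled object regions in target samples.

def region_stats(feat, mask, eps=1e-5):
    # feat: (C, H, W); mask: (1, H, W), 1 inside the region of interest.
    area = mask.sum().clamp(min=1.0)
    mean = (feat * mask).sum(dim=(1, 2)) / area
    var = (((feat - mean[:, None, None]) ** 2) * mask).sum(dim=(1, 2)) / area
    return mean, (var + eps).sqrt()

def boxes_to_mask(boxes, h, w):
    # Rasterize pseudo-label boxes (x1, y1, x2, y2) into a binary mask.
    mask = torch.zeros(1, h, w)
    for x1, y1, x2, y2 in boxes:
        mask[:, int(y1):int(y2), int(x1):int(x2)] = 1.0
    return mask

def stylize_features(src_feat, tgt_feat, tgt_mask):
    # Normalize source features, then apply target object-region statistics,
    # so the "style" comes from pseudo-labeled objects, not backgrounds.
    src_mean, src_std = region_stats(src_feat, torch.ones_like(src_feat[:1]))
    tgt_mean, tgt_std = region_stats(tgt_feat, tgt_mask)
    normalized = (src_feat - src_mean[:, None, None]) / src_std[:, None, None]
    return normalized * tgt_std[:, None, None] + tgt_mean[:, None, None]

# Toy usage: stylize one source feature map toward one target feature map.
C, H, W = 8, 32, 32
src_feat = torch.randn(C, H, W)
tgt_feat = torch.randn(C, H, W) * 2.0 + 1.0          # different "style" stats
pseudo_boxes = [(4, 4, 20, 20)]                      # e.g. confident detections
tgt_mask = boxes_to_mask(pseudo_boxes, H, W)
stylized = stylize_features(src_feat, tgt_feat, tgt_mask)
print(stylized.shape)  # torch.Size([8, 32, 32]); train the detector on these

In the actual pipeline described by the abstract, the stylized samples would feed the detection loss, with style transfer and detection trained jointly end-to-end.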
KSP Suggested Keywords
Detection accuracy, End-to-End (E2E), Object detection, Target domain, Unsupervised domain adaptation, Camera viewpoint, Scene layout, Style adaptation