ETRI Knowledge Sharing Platform





Conference Paper: What Do Pedestrians See?: Visualizing Pedestrian-View Intersection Classification
Cited 2 times in Scopus · Downloaded 1 time
Marsela, Muhammad, Jin-Ha Lee, Jae-Yeong Lee, Seung-Ik Lee
International Conference on Control, Automation and Systems (ICCAS) 2020, pp.769-773
20HS2600, Development of AI technology for guiding a mobile robot to its final destination in uncertain-map-based indoor and outdoor environments, Jae-Yeong Lee
Extensive research has been carried out on intersection classification to assist navigation in the autonomous maneuvering of aerial, road, and cave-mining vehicles. In contrast, our work tackles intersection classification at the pedestrian-view level to support the navigation of slower and smaller robots for which it is too dangerous to steer on a normal road alongside ordinary vehicles. In particular, we focus on investigating the kinds of features a network may exploit to classify intersections at pedestrian view. To this end, two sets of experiments were conducted using an ImageNet-pretrained ResNet-18 architecture fine-tuned on our image-level pedestrian-view intersection classification dataset. First, an ablation study on layer depth evaluates the importance of high-level features; using all of the layers proved superior, yielding 77.56% accuracy. Second, to further clarify the need for such high-level features, Class Activation Mapping (CAM) is applied to visualize the parts of an image that most affect a given prediction. The visualization justifies the high accuracy of the all-layers network.
KSP Suggested Keywords
High accuracy, High-level features, Layer depth