ETRI Knowledge Sharing Platform

Conference Paper
Fusion of 3D-2D Data for Drivable Area Detection and Road Condition Analysis in Assistive Quadruped Robots for the Visually Impaired
Cited 0 times in Scopus
Authors
Jiho Chang, Beomsu Seo
Issue Date
2024-11
Citation
International Conference on Consumer Electronics Asia (ICCE-Asia) 2024, pp. 909-912
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICCE-Asia63397.2024.10773942
Abstract
This paper presents a system designed for a quadruped robot that assists visually impaired individuals by analyzing road conditions and detecting drivable areas. The system integrates 3D point cloud data from Lidar with 2D semantic segmentation from RGB cameras, enabling a comprehensive understanding of the environment through a 3D-2D data fusion approach. By evaluating critical road properties such as flatness, material uniformity, slope, and vertical clearance, the system ensures safe and efficient navigation in complex terrains. To enhance depth perception, we incorporate depth completion techniques, allowing for real-time road condition assessment. The fusion of Lidar and RGB data creates accurate drivable area maps, enabling the robot to make informed decisions about path planning and obstacle avoidance. The proposed system is validated through real-world experiments, demonstrating its effectiveness in assisting visually impaired users by providing reliable navigation across diverse environments. The contributions of this work include the development of a robust road condition analysis module and the fusion of 3D-2D data for enhanced semantic segmentation, advancing the capabilities of assistive robotic systems for visually impaired navigation.
KSP Keywords
3D point cloud data, Complex terrains, Condition Assessment, Condition analysis, Data fusion, Depth completion, Fusion approach, Material uniformity, Obstacle Avoidance, Real-time, Real-world