ETRI Knowledge Sharing Platform


Adaptive Cross-Attention Gated Network for Radar-Camera Fusion in BEV Space
Cited 0 times in Scopus
Authors
Ji-Yong Lee, Jae-Hyeok Lee, Dong-oh Kang
Issue Date
2025-02
Citation
International Conference on Advanced Communications Technology (ICACT) 2025, pp.279-284
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.23919/ICACT63878.2025.10936296
Abstract
Fusing multimodal sensors for 3D object detection has been extensively researched in the field of autonomous driving. However, existing multimodal sensor fusion methods still struggle to provide reliable detection across different modalities under diverse environmental conditions. In particular, straightforward fusion operations such as summation or concatenation of radar and camera features can introduce spatial misalignment and fail to localize objects in complex scenes. To address this, we propose the Adaptive Cross-Attention Gated Network (ACAGN) to enhance radar-camera fusion in Bird's-Eye View (BEV) space. Our approach integrates deformable cross-attention with an adaptive gated network mechanism. The deformable cross-attention aligns radar and camera features in BEV space with greater spatial precision, effectively handling variations between the two feature maps. Meanwhile, the adaptive gated network dynamically filters and prioritizes the most relevant information from each sensor. This dual approach improves the stability and robustness of detection, as demonstrated through extensive evaluations on the nuScenes dataset.
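The adaptive gating idea described in the abstract can be sketched as a learned per-cell convex combination of the two BEV feature maps. The snippet below is a minimal NumPy illustration of that general pattern only; the tensor shapes, weight matrix, and simple sigmoid gate are assumptions for exposition and do not reproduce the paper's actual ACAGN implementation (which also includes deformable cross-attention).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_gated_fusion(cam_bev, radar_bev, w, b):
    """Fuse camera and radar BEV features with a per-cell learned gate.

    cam_bev, radar_bev: (H, W, C) BEV feature maps (hypothetical shapes)
    w: (2C, C) gate weights, b: (C,) gate bias
    """
    # Condition the gate on both modalities at each BEV cell.
    stacked = np.concatenate([cam_bev, radar_bev], axis=-1)   # (H, W, 2C)
    gate = sigmoid(stacked @ w + b)                           # values in (0, 1)
    # Convex combination: gate near 1 trusts the camera feature,
    # gate near 0 trusts the radar feature.
    return gate * cam_bev + (1.0 - gate) * radar_bev

# Toy example: a 4x4 BEV grid with 8 channels per cell.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
cam = rng.standard_normal((H, W, C))
radar = rng.standard_normal((H, W, C))
w = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)
fused = adaptive_gated_fusion(cam, radar, w, b)
print(fused.shape)  # (4, 4, 8)
```

Because the gate is a sigmoid, each fused value always lies between the corresponding camera and radar values, which is what lets the network softly suppress the less reliable sensor per BEV cell rather than hard-switching between modalities.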
KSP Keywords
3D object detection, Dual approach, Environmental conditions, Fusion method, Network mechanism, Spatial precision, Stability and robustness, autonomous driving, complex scenes, multimodal sensor fusion, relevant information