ETRI Knowledge Sharing Platform


LFCNet: Deep Learning-Based LiDAR-Fisheye Camera Online Automatic Targetless Extrinsic Calibration
Cited 0 times in Scopus; downloaded 48 times
Authors
Muhammad Rangga Aziz Nasution, Jaejun Yoo, Miftahul Khoir Shilahul Umam, Ida Bagus Krishna Yoga Utama, Muhammad Fairuz Mummtaz, Muhammad Alfi Aldolio, Su Mon Ko, Yeong Min Jang
Issue Date
2025-08
Citation
IEEE Access, v.13, pp.146305-146318
ISSN
2169-3536
Publisher
IEEE
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1109/ACCESS.2025.3599979
Abstract
This paper presents LFCNet, an end-to-end deep-learning framework for targetless extrinsic calibration between LiDAR and fisheye cameras. The method is designed for autonomous driving systems, where accurate alignment of sensor data is critical. LFCNet performs calibration without physical targets and operates online, making it practical for real-time applications involving fisheye cameras and LiDAR. Sensor fusion has become a key strategy for improving the perception capabilities of autonomous vehicles. Among the various sensors used, LiDAR and cameras are commonly employed due to their complementary strengths. Fisheye cameras, in particular, provide a wide field of view and capture more comprehensive visual information. As a result, precise calibration between these sensors is essential to align their outputs within a unified coordinate system and maintain a consistent understanding of the environment. Conventional calibration methods often rely on statistical techniques, which struggle with high-dimensional multivariate data. Deep learning-based approaches have recently shown promise, particularly for LiDAR-camera systems; however, calibration involving fisheye cameras remains underexplored. LFCNet addresses this gap with a lightweight model of 12.1 million parameters and a novel two-stage feature-extraction architecture that captures both local and global features while compensating for fisheye distortion. Experimental results demonstrate that LFCNet achieves high accuracy, with a mean calibration error of 0.509 cm in translation and 0.063° in rotation, setting a new benchmark for real-time, learning-based LiDAR-fisheye calibration. The code for LFCNet is available at https://github.com/rangganast/LFCNet
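To make the two quantities in the abstract concrete, the sketch below shows (a) how LiDAR points are mapped into a fisheye image once an extrinsic transform is known, and (b) how translation and rotation errors like "0.509 cm" and "0.063°" are typically computed between a predicted and a ground-truth extrinsic. This is a generic illustration, not LFCNet's implementation: the equidistant model `r = f·θ` and the function and parameter names (`project_lidar_to_fisheye`, `calibration_errors`, `f`, `cx`, `cy`) are assumptions for the sketch, and the paper's actual projection and distortion handling may differ.

```python
import numpy as np

def project_lidar_to_fisheye(points_lidar, T_cam_lidar, f, cx, cy):
    """Project 3-D LiDAR points into a fisheye image using the
    equidistant model (r = f * theta). Generic sketch only."""
    # Lift to homogeneous coordinates and apply the 4x4 extrinsic transform.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    x, y, z = pts_cam.T
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth in the image plane
    r = f * theta                           # equidistant fisheye mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

def calibration_errors(T_pred, T_gt):
    """Translation error (norm of residual offset, in the input units)
    and rotation error (geodesic angle, in degrees) between two 4x4
    extrinsic matrices."""
    dT = np.linalg.inv(T_gt) @ T_pred
    t_err = np.linalg.norm(dT[:3, 3])
    cos_a = (np.trace(dT[:3, :3]) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return t_err, rot_err
```

With identity extrinsics, a point on the optical axis projects to the principal point `(cx, cy)`, and a perfect prediction yields zero translation and rotation error; a learning-based calibrator such as LFCNet is trained to drive these residuals toward zero.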
KSP Keywords
Autonomous driving system, Autonomous vehicle, Calibration error, Calibration method, Camera system, End-to-End (E2E), Feature extraction, Field of View (FoV), High accuracy, High-dimensional, Learning framework
This work is distributed under the terms of the Creative Commons License (CC BY).