Targetless Camera-LiDAR Calibration in Unstructured Environments

Title: Targetless Camera-LiDAR Calibration in Unstructured Environments
Authors: Muñoz-Bañón, Miguel Á. | Candelas-Herías, Francisco A. | Torres, Fernando
Research Group/s: Automática, Robótica y Visión Artificial
Center, Department or Service: Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal
Keywords: Camera-LiDAR | Extrinsic calibration | Target-less calibration | Sensor fusion | Mobile robots
Knowledge Area: Ingeniería de Sistemas y Automática
Issue Date: 4-Aug-2020
Publisher: IEEE
Citation: IEEE Access. 2020, 8: 143692-143705. https://doi.org/10.1109/ACCESS.2020.3014121
Abstract: Camera-LiDAR sensor fusion plays an important role in autonomous navigation research, and the automatic calibration of these sensors remains a significant challenge in mobile robotics. In this article, we present a novel calibration method that accurately estimates the six-degree-of-freedom (6-DOF) rigid-body transformation (i.e., the extrinsic parameters) between the camera and the LiDAR sensor. The method consists of a novel co-registration approach that uses local edge features in arbitrary environments to obtain 3D-to-2D errors between the camera and LiDAR data. Given these 3D-to-2D errors, we estimate the relative transform, i.e., the extrinsic parameters, that minimizes them, using the perspective-three-point (P3P) algorithm to find the best solution. To refine the final calibration, we apply a Kalman filter, which gives the system high stability against noise disturbances. The presented method requires neither an artificial target nor a structured environment, and is therefore a target-less calibration. Furthermore, it does not require a dense point cloud, so no scan accumulation is needed. To test our approach, we use the well-known KITTI dataset, taking the calibration provided by the dataset as ground truth. In this way, we report accuracy results and demonstrate the robustness of the system against very noisy observations.
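The core quantity the method minimizes is the 3D-to-2D reprojection error between LiDAR points and their matched image features. A minimal numpy sketch of that error, assuming a standard pinhole camera model; the function names and the intrinsic/extrinsic values are illustrative, not taken from the paper:

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    """Project 3D points from the LiDAR frame into the image.

    R (3x3) and t (3,) are the candidate extrinsics (LiDAR -> camera),
    K (3x3) is the camera intrinsic matrix.
    """
    cam = (R @ points_lidar.T).T + t      # transform into the camera frame
    uv = (K @ cam.T).T                    # pinhole projection (homogeneous)
    return uv[:, :2] / uv[:, 2:3]         # perspective divide -> pixel coords

def reprojection_errors(points_lidar, pixels, R, t, K):
    """Per-correspondence 3D-to-2D error in pixels, the quantity the
    calibration drives toward zero over the extrinsic parameters."""
    projected = project_points(points_lidar, R, t, K)
    return np.linalg.norm(projected - pixels, axis=1)

# Illustrative intrinsics and a trivial ground-truth extrinsic.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R_true, t_true = np.eye(3), np.zeros(3)

# Two synthetic LiDAR edge points in front of the camera.
pts = np.array([[0.0, 0.0, 2.0],
                [1.0, 0.0, 4.0]])
obs = project_points(pts, R_true, t_true, K)

# At the true extrinsics the residuals vanish.
print(reprojection_errors(pts, obs, R_true, t_true, K))
```

In the paper's pipeline, these residuals over matched edge features feed a P3P solver to hypothesize the transform, and a Kalman filter then smooths the estimate against observation noise.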
Sponsor: This work was supported by the Regional Valencian Community Government and the European Regional Development Fund (ERDF) through the grants ACIF/2019/088 and AICO/2019/020.
URI: http://hdl.handle.net/10045/108689
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3014121
Language: eng
Type: info:eu-repo/semantics/article
Rights: This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Peer Review: yes
Publisher version: https://doi.org/10.1109/ACCESS.2020.3014121
Appears in Collections:INV - AUROVA - Artículos de Revistas

Files in This Item:
File: Munoz-Banon_etal_2020_IEEEAccess.pdf | Size: 1.87 MB | Format: Adobe PDF

