Online extrinsic calibration of multiple long baseline ranging sensors

With the rapid development of autonomous vehicles and vehicle-to-everything (V2X) applications, ranging sensors, typically RGBD sensors and LiDARs, are widely deployed as perception tools in smart cities. They are commonly used for object detection and tracking, trajectory estimation, collision avoidance...


Bibliographic Details
Main Author: Yan, Qiao
Other Authors: Wang Dan Wei
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/155836
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-155836
record_format dspace
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
spellingShingle Engineering::Electrical and electronic engineering
Yan, Qiao
Online extrinsic calibration of multiple long baseline ranging sensors
description With the rapid development of autonomous vehicles and vehicle-to-everything (V2X) applications, ranging sensors, typically RGBD sensors and LiDARs, are widely deployed as perception tools in smart cities. They are commonly used for object detection and tracking, trajectory estimation, and collision avoidance. A single RGBD sensor or LiDAR has a limited field of view and thus plenty of blind spots. To fully cover the surroundings and enhance perception capability, it is necessary to fuse the information from multiple ranging sensors, which in turn requires accurate and robust extrinsic calibration. Extrinsic calibration finds the extrinsic parameters that describe the relative pose between two sensors: a rotation matrix and a translation vector relating different sensor frames. Current calibration solutions mainly focus on short-baseline scenarios; most are offline methods that require a calibration target and are usually conducted indoors. However, in intelligent transportation systems, RGBD sensors and LiDARs are typically deployed far from each other with varying perspectives, so most of these methods are impractical and inconvenient for V2X, with its large baseline and viewpoint differences. To address these problems, two extrinsic calibration methods are proposed for multiple RGBD sensors and LiDARs: LB-RGBD2RGBD-Calib and Object4Calib. LB-RGBD2RGBD-Calib (Long Baseline RGBD to RGBD Calibration) achieves online extrinsic calibration between multiple long-baseline RGBD sensors, fully utilizing the advantages of both color images and depth information.
In this method, first, image features are extracted from the texture information and matched to build initial correspondences between the RGBD sensors, with outliers removed by geometric constraints. Second, the depth information is refined by accumulating multiple frames to smooth the noise; the refined depth is then used in a PnP RANSAC step to remove further outliers and obtain coarse extrinsic parameters. Third, to refine the extrinsic parameters, Iterative Closest Point (ICP) and edge alignment are carried out. Both quantitative and qualitative experiments demonstrate that LB-RGBD2RGBD-Calib is accurate: the reprojection error is around 2.01 pixels, and the point cloud RMSE is 0.35 m and 0.38 m for ICP and edge alignment, respectively. Object4Calib solves the extrinsic calibration of multiple LiDARs with long baselines and large viewpoint differences. Its main novelty is the use of easy-to-obtain objects in the traffic scene, the vehicles, for calibration; the key insight is that the 3D bounding box centers of the vehicles are viewpoint-invariant, making the object center a good common feature for multiple LiDARs in V2X. Object4Calib consists of two steps: exhaustive searching and global ICP refinement. In the coarse step, an exhaustive searching strategy finds the optimal correspondence of the center points between different LiDARs. In the refinement step, ICP is carried out with the coarse parameters as the initial value. Both quantitative and qualitative experiments in different simulation and real environments demonstrate that Object4Calib is robust and accurate: even for a setup where the translation and rotation between two LiDARs exceed 30 m and 90°, Object4Calib achieves successful calibration.
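The depth-accumulation and back-projection steps of LB-RGBD2RGBD-Calib can be sketched as follows. This is a minimal numpy illustration, not the thesis's exact procedure: the pinhole intrinsics `fx, fy, cx, cy` are assumed, and a temporal median is used here as one plausible way to "accumulate multiple frames to smooth the noise".

```python
import numpy as np

def refine_depth(depth_frames):
    """Smooth per-pixel depth noise by accumulating several frames and
    taking the temporal median (invalid readings encoded as NaN are ignored)."""
    return np.nanmedian(np.stack(depth_frames, axis=0), axis=0)

def backproject(pixels, depth, fx, fy, cx, cy):
    """Lift pixel coordinates (u, v) with metric depth z to 3D camera-frame
    points via the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy."""
    u = pixels[:, 0].astype(float)
    v = pixels[:, 1].astype(float)
    z = np.asarray(depth, dtype=float)
    return np.column_stack([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

The 3D points produced this way for matched features are what a PnP RANSAC stage (e.g. OpenCV's `solvePnPRansac`) would consume to obtain the coarse extrinsic parameters.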
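Object4Calib's coarse step, matching viewpoint-invariant vehicle centers between two LiDARs by exhaustive search, can be sketched in a few lines of numpy. This is a hedged illustration under simplifying assumptions (equal numbers of detected centers in both LiDARs, no missed detections); the rigid transform for each candidate correspondence is solved with the standard Kabsch/SVD method, and the assignment with the smallest alignment RMSE is kept, after which the thesis's global ICP refinement would start from this coarse estimate.

```python
import itertools
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def coarse_search(centers_a, centers_b):
    """Exhaustively try orderings of LiDAR B's vehicle centers against
    LiDAR A's, and keep the correspondence with the smallest RMSE."""
    best = (np.inf, None, None)
    n = len(centers_a)
    for perm in itertools.permutations(range(len(centers_b)), n):
        matched = centers_b[list(perm)]
        R, t = kabsch(centers_a, matched)
        residual = centers_a @ R.T + t - matched
        rmse = np.sqrt(np.mean(np.sum(residual ** 2, axis=1)))
        if rmse < best[0]:
            best = (rmse, R, t)
    return best
```

With a handful of vehicles the permutation search is cheap; a real deployment would prune candidates (e.g. by inter-center distances) before searching.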
author2 Wang Dan Wei
author_facet Wang Dan Wei
Yan, Qiao
format Thesis-Master by Research
author Yan, Qiao
author_sort Yan, Qiao
title Online extrinsic calibration of multiple long baseline ranging sensors
title_short Online extrinsic calibration of multiple long baseline ranging sensors
title_full Online extrinsic calibration of multiple long baseline ranging sensors
title_fullStr Online extrinsic calibration of multiple long baseline ranging sensors
title_full_unstemmed Online extrinsic calibration of multiple long baseline ranging sensors
title_sort online extrinsic calibration of multiple long baseline ranging sensors
publisher Nanyang Technological University
publishDate 2022
url https://hdl.handle.net/10356/155836
_version_ 1772826839210786816
spelling sg-ntu-dr.10356-155836 2023-07-04T17:49:30Z Online extrinsic calibration of multiple long baseline ranging sensors Yan, Qiao Wang Dan Wei School of Electrical and Electronic Engineering EDWWANG@ntu.edu.sg Engineering::Electrical and electronic engineering Master of Engineering 2022-03-23T08:42:21Z 2022-03-23T08:42:21Z 2021 Thesis-Master by Research Yan, Q. (2021). Online extrinsic calibration of multiple long baseline ranging sensors. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/155836 https://hdl.handle.net/10356/155836 10.32657/10356/155836 en This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). application/pdf Nanyang Technological University