Multi-sensor calibration and multi-modal perception for intelligent systems
Main Author:
Other Authors:
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/161102
Institution: Nanyang Technological University
Summary: Nowadays, various intelligent systems rely on multiple heterogeneous sensors for environmental perception. For example, different sensors are mounted on unmanned vehicles, autonomous mobile robots, and roadside infrastructure. By fusing heterogeneous sensors, the perception system becomes more robust in complex real-world environments. To achieve that, accurate extrinsic calibration is indispensable, through which the transformation (a rotation matrix and a translation vector) between the sensor frames can be obtained. However, as new sensors enter the market and new applications arise, new problems need to be solved. In this thesis, a series of solutions is proposed for different scenarios.
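The extrinsic calibration described above can be illustrated with a minimal sketch (not taken from the thesis): once the rotation matrix R and translation vector t between two sensor frames are known, a point observed by one sensor can be expressed in the other sensor's frame. The rotation angle and offset below are toy values chosen for the example.

```python
import numpy as np

def transform_point(R, t, p_a):
    """Map a 3D point from sensor A's frame into sensor B's frame
    using the extrinsics (rotation R, translation t)."""
    return R @ p_a + t

# Toy extrinsics: a 90-degree rotation about z and a 0.5 m offset along x.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 0.0])

p_a = np.array([1.0, 0.0, 0.0])   # a point seen by sensor A
p_b = transform_point(R, t, p_a)  # -> approximately [0.5, 1.0, 0.0]
```

Fusing heterogeneous sensors amounts to applying such transforms so that all measurements live in one common frame.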
Firstly, the extrinsic calibration between a sparse 3D LiDAR and a thermal camera is important for low-cost, day-and-night operation, but few solutions exist. To solve the problem, a two-step method is proposed in which a visual camera is introduced as a bridge sensor. The effectiveness and advantages of the method are evaluated through experiments and three multi-modal perception applications.
Secondly, building on the two-step method, a one-step method named SLAT-Calib is proposed. The main novelty is the observation that circular holes can be detected by both sensors. Thus, a specially designed calibration board (a rectangular board with four circular holes) is introduced for common feature extraction. Meanwhile, a homography matrix is used to extract the 3D circle centers from the thermal camera. Experiments demonstrate that SLAT-Calib outperforms the two-step method.
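The homography step can be sketched as follows (an illustration of planar homographies in general, not the thesis code): a 3x3 matrix H maps points on the calibration-board plane to image pixels, u ~ H [X, Y, 1]^T. The matrix values below are a toy example.

```python
import numpy as np

def apply_homography(H, xy):
    """Project a board-plane point (X, Y) into the image via homography H."""
    p = H @ np.array([xy[0], xy[1], 1.0])
    return p[:2] / p[2]  # dehomogenize

# Toy homography: a fronto-parallel board, uniform scale plus principal point.
H = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

center_board = (0.1, 0.1)                 # a circle center on the board plane (m)
u, v = apply_homography(H, center_board)  # -> (370.0, 290.0)
```

Because the circle centers lie on a known plane, relating their board-plane coordinates to image observations in this way constrains the board pose, and hence the 3D centers.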
Thirdly, limited work exists on the extrinsic calibration between a non-repetitive scanning 3D LiDAR and a thermal camera. Thus, ThermalVoxCalib is proposed, together with an algorithm that automatically detects the calibration board in the raw point cloud. Experiments demonstrate that the automatic calibration board detection is reliable; the rotation and translation errors reach 0.172 degrees and 0.01 m, and the 2D re-projection error reaches 0.58 pixels.
Fourthly, the extrinsic calibration between a 4D mmWave radar (4D: x, y, z, velocity) and a thermal camera is important for robust perception in harsh weather, but few solutions exist. Thus, 4DRadar2ThermalCalib is proposed, introducing a novel calibration target: a spherical trihedral, whose sphere center is used as the common feature. The re-projection error of the method reaches 1.88 pixels.
Fifthly, the extrinsic calibration between multiple long-baseline 3D LiDARs is important in V2X (Vehicle-to-Everything). Thus, LB-L2L-Calib is proposed. A sphere is used as the target, based on the observation that the sphere center is a viewpoint-invariant feature. A sphere detection and sphere-center estimation method is then introduced to extract the center from a cluttered point cloud. Experiments demonstrate that LB-L2L-Calib is highly accurate and robust: the translation and rotation errors are less than 0.01 m and 0.01 degrees, respectively.
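Sphere-center estimation from a point cloud can be sketched with a standard algebraic least-squares sphere fit (a common technique; the thesis may use a different estimator). Expanding |p - c|^2 = r^2 gives the linear system 2 c·p + (r^2 - c·c) = p·p in the unknowns c and k = r^2 - c·c.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. points: (N, 3) array.
    Solves 2*c . p + k = |p|^2 for c and k = r^2 - |c|^2."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

# Synthetic check: noiseless points on a sphere of radius 0.3 m at (1, 2, 0.5).
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 0.5]) + 0.3 * d
c, r = fit_sphere(pts)  # c close to [1, 2, 0.5], r close to 0.3
```

Because the recovered center does not depend on which side of the sphere each LiDAR sees, it serves as the viewpoint-invariant common feature described above.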
Sixthly, following LB-L2L-Calib, a novel online extrinsic calibration method named Object4Calib is proposed. The main novelty is the use of readily available objects in traffic (cars, trucks, buses, etc.) for calibration, based on the observation that the 3D bounding-box centers of the vehicles are viewpoint-invariant. Moreover, an exhaustive search strategy is proposed to find the optimal correspondence between the centers seen by different LiDARs. Experiments demonstrate that Object4Calib is robust and accurate.
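An exhaustive correspondence search of this kind can be sketched as follows (an illustrative implementation, not the thesis code; Object4Calib's actual search may differ). For each ordering of the second LiDAR's centers, a rigid transform is fitted with the standard Kabsch algorithm, and the ordering with the smallest residual is kept.

```python
import numpy as np
from itertools import permutations

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing sum ||R @ P_i + t - Q_i||, via SVD."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def best_correspondence(centers_a, centers_b):
    """Try every ordering of centers_b against centers_a; return the
    lowest mean residual and the corresponding permutation."""
    best = (np.inf, None)
    for perm in permutations(range(len(centers_b))):
        Q = centers_b[list(perm)]
        R, t = kabsch(centers_a, Q)
        err = np.linalg.norm(centers_a @ R.T + t - Q, axis=1).mean()
        if err < best[0]:
            best = (err, perm)
    return best
```

The factorial cost is acceptable here because only a handful of vehicle centers are matched at a time; with more objects, a descriptor-based or RANSAC-style matcher would be preferable.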