Sensor fusion for long-range object detection
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/167508
Institution: Nanyang Technological University
Summary: A safe and reliable autonomous vehicle requires an accurate and fast perception module. This module, often regarded as the "eye" of a self-driving car, must be capable of performing 3D object detection in both short-range and long-range scenarios. Long-range detection is crucial: without it, an autonomous vehicle cannot react early enough to potential hazards to avoid collisions. However, most existing LiDAR-based 3D object detectors struggle to detect objects at long range (50 meters and beyond) because the LiDAR point cloud becomes very sparse at those distances. To address this problem, we propose a 3D object detection model that fuses the LiDAR point cloud with the RGB image. Sensor fusion is a promising solution because the two sensors complement each other: LiDAR provides accurate depth but thins out with distance, while the camera image stays dense but lacks direct depth.
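As a concrete illustration of how the two modalities are related, the sketch below projects LiDAR points into the camera image using a rigid LiDAR-to-camera transform and the camera intrinsics; this geometric alignment underlies both fusion strategies discussed next. It is a minimal sketch under assumed conventions: the function name, argument layout, and the near-plane threshold are illustrative and not taken from the project.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project LiDAR points (N, 3) into pixel coordinates.

    points_lidar     : (N, 3) XYZ points in the LiDAR frame.
    T_cam_from_lidar : (4, 4) rigid transform from the LiDAR to the camera frame.
    K                : (3, 3) camera intrinsic matrix.
    Returns the (M, 2) pixel coordinates of points in front of the camera
    and the boolean mask selecting those points.
    """
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera (assumed 0.1 m near plane).
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]

    # Pinhole projection: apply intrinsics, then divide by depth.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, in_front
```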
In this project, we explore two methods to improve the long-range performance of state-of-the-art detectors: feature-level fusion and decision-level fusion. In addition, we propose a low-cost way to generate more training data for long-range object detection by using a simulated dataset.
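The following is a minimal sketch of what decision-level (late) fusion can look like: LiDAR 3D detections are projected into the image and matched against camera 2D detections by IoU, and a match between the two sensors raises the detection's confidence. The dictionary layout, IoU threshold, and score boost are assumptions for illustration, not the detectors or fusion rule used in the project.

```python
import numpy as np

def box_iou_2d(a, b):
    """IoU of two axis-aligned 2D boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def late_fuse(lidar_dets, camera_dets, iou_thresh=0.5, boost=0.2):
    """Decision-level fusion sketch.

    lidar_dets  : list of dicts {"box2d": (x1, y1, x2, y2), "score": float, ...}
                  where box2d is the 3D box projected into the image plane.
    camera_dets : list of dicts {"box2d": (x1, y1, x2, y2), "score": float}
    A LiDAR detection whose projected box overlaps a camera detection gets
    its confidence boosted; unmatched detections keep their original score.
    """
    fused = []
    for det in lidar_dets:
        best_iou = max(
            (box_iou_2d(det["box2d"], c["box2d"]) for c in camera_dets),
            default=0.0,
        )
        score = det["score"]
        if best_iou >= iou_thresh:
            score = min(1.0, score + boost)  # agreement between sensors raises confidence
        fused.append({**det, "score": score})
    return fused
```

Late fusion of this kind leaves each detector untouched, which is one reason it pairs naturally with off-the-shelf state-of-the-art detectors, whereas feature-level fusion combines the two modalities earlier, inside the network.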