DEVELOPMENT OF A SENSOR FUSION METHOD USING SHAPE REGISTRATION WITH HD MAP FOR LOCALIZATION SYSTEM IN AUTONOMOUS VEHICLES
Main Author: | Dhany Ashedananta, Muhammad
---|---
Format: | Theses
Language: | Indonesia
Online Access: | https://digilib.itb.ac.id/gdl/view/84365
Institution: | Institut Teknologi Bandung
id | id-itb.:84365
---|---
institution | Institut Teknologi Bandung
building | Institut Teknologi Bandung Library
continent | Asia
country | Indonesia
content_provider | Institut Teknologi Bandung
collection | Digital ITB
language | Indonesia

description
Autonomous vehicles are a technology being developed for automation-based transportation systems. One of the core problems in autonomous vehicles is the localization system, that is, determining the position of the vehicle within an area. Each type of sensor used in the localization process has its own shortcomings. To overcome the shortcomings of individual sensors, sensor fusion is applied: a method of combining data from multiple sensors to determine the position of an autonomous vehicle accurately. One of the emerging tools in the development of autonomous vehicles is the High Definition Map (HD Map), a map with a very high level of accuracy. The features of an HD Map are designed specifically so that autonomous vehicles can "read" the map much as humans read conventional maps.
Therefore, this thesis develops a sensor fusion method that combines physical sensors with an HD Map. Two main approaches are combined through sensor fusion. The first approach is dead reckoning using Inertial Measurement Unit (IMU) and GPS data: the vehicle position is predicted from the linear acceleration and angular velocity provided by the IMU, while GPS data is used as a correction because the IMU is prone to drift (accumulated error). Since there is far less GPS data than IMU data, this correction is applied at every 19th IMU sample, in keeping with the ratio between the two data streams. Both the prediction and correction steps are performed with a Kalman filter. The second approach uses shape registration, processing HD Map point cloud data obtained with a 2D LiDAR sensor. The point cloud is sliced to obtain the data belonging to each point in time; a single-point coordinate representation of each slice is then reassembled into a complete map and processed with the Winsorization (Winsor) method to remove outliers. The position estimates from the two approaches are combined using a Kalman filter with four state variables.
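As an illustration of the first approach only, a minimal sketch of the GPS-corrected dead reckoning Kalman filter is given below. It assumes a planar state [px, py, vx, vy], world-frame accelerations as the control input, and illustrative noise covariances; these assumptions are not taken from the thesis, and the thesis additionally uses the IMU angular velocity for orientation, which this sketch omits.

```python
"""Minimal sketch of GPS-corrected IMU dead reckoning with a Kalman filter.

Assumptions (not from the thesis): planar state [px, py, vx, vy], world-frame
acceleration as control input, and illustrative noise covariances Q and R.
"""
import numpy as np


def make_matrices(dt: float):
    # Constant-acceleration motion model driven by the IMU acceleration input.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    B = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], dtype=float)
    H = np.array([[1, 0, 0, 0],   # GPS measures position only
                  [0, 1, 0, 0]], dtype=float)
    return F, B, H


def dead_reckoning(imu_accel, imu_dt, gps_xy, correct_every=19,
                   q_var=0.5, r_var=2.0):
    """imu_accel: (N, 2) world-frame accelerations; imu_dt: (N,) sample periods;
    gps_xy: (M, 2) GPS fixes. A GPS correction is applied every `correct_every`
    IMU samples, mirroring the roughly 19:1 IMU-to-GPS rate ratio."""
    x = np.zeros(4)               # state [px, py, vx, vy]
    P = np.eye(4)                 # state covariance
    R = r_var * np.eye(2)         # GPS measurement noise (illustrative)
    gps_idx, track = 0, []
    for k, (a, dt) in enumerate(zip(imu_accel, imu_dt)):
        F, B, H = make_matrices(dt)
        Q = q_var * B @ B.T       # process noise shaped by the input model
        x = F @ x + B @ a         # prediction step with IMU acceleration
        P = F @ P @ F.T + Q
        if (k + 1) % correct_every == 0 and gps_idx < len(gps_xy):
            z = gps_xy[gps_idx]   # correction step with the next GPS fix
            gps_idx += 1
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.asarray(track)
```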
This research uses the Complex Urban Dataset provided by the Korea Advanced Institute of Science and Technology (KAIST), which contains vehicle sensor recordings in urban areas. For the purposes of this thesis, a subset of the dataset is used, namely urban38-pankyo, which covers urban street areas; the data used consist of IMU data, GPS data, and 2D LiDAR data. There are 216,225 IMU data points, 11,138 GPS data points, and an HD Map consisting of 60,088,154 point cloud points from the 2D LiDAR readings. The dataset also provides vehicle position references that are used as ground truth for the system.
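The sample counts above are also what fix the correction cadence of the dead reckoning approach; a quick arithmetic check using only the figures quoted in this abstract:

```python
# IMU-to-GPS sample ratio for urban38-pankyo, using the counts from this
# abstract; the rounded ratio matches the "every 19th IMU sample" correction
# cadence described above.
n_imu, n_gps = 216_225, 11_138
ratio = n_imu / n_gps                  # about 19.4 IMU samples per GPS fix
print(f"ratio = {ratio:.2f}, correct every {round(ratio)} IMU samples")
```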
The positional error of the designed system relative to the ground truth on the x-axis and y-axis is used as the indicator of system performance. The dead reckoning method yields an average error of 28.378 meters on the x-axis and 28.408 meters on the y-axis. Better results are achieved with the HD Map method, which yields errors of 6.802 meters and 5.764 meters on the x-axis and y-axis, respectively. After the HD Map results are further processed to remove outliers, the errors are 3.895 meters on the x-axis and 3.783 meters on the y-axis. The sensor fusion of the two approaches yields an error of 3.726 meters on the x-axis and 4.421 meters on the y-axis. Although these error values are larger than those of the HD Map method, the sensor fusion output contains a significantly larger number of data points, namely 495,981 points over the same timestamp range as the HD Map method. This means the sensor fusion output has a shorter time interval between consecutive data points, so the localization of the autonomous vehicle can be updated more quickly.
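One plausible reading of these figures is a per-axis mean absolute error against the ground truth, with Winsorization as the outlier-removal step. A hedged sketch follows; the percentile limits and the placeholder names hdmap_xy / gt_xy are illustrative assumptions, not values or identifiers taken from the thesis.

```python
"""Sketch of the evaluation and outlier-removal steps described above:
Winsorization of the estimated positions and per-axis mean absolute error
against the ground truth. Percentile limits are illustrative assumptions."""
import numpy as np


def winsorize(values: np.ndarray, lower_pct=5.0, upper_pct=95.0) -> np.ndarray:
    """Clip each column of `values` to its [lower_pct, upper_pct] percentiles."""
    lo = np.percentile(values, lower_pct, axis=0)
    hi = np.percentile(values, upper_pct, axis=0)
    return np.clip(values, lo, hi)


def per_axis_mean_error(estimate: np.ndarray, ground_truth: np.ndarray):
    """Mean absolute error on the x and y axes; both inputs are (N, 2) arrays
    aligned on the same timestamps."""
    err = np.abs(estimate - ground_truth)
    return err[:, 0].mean(), err[:, 1].mean()


# Example usage with hypothetical position arrays (placeholder names):
# ex, ey = per_axis_mean_error(hdmap_xy, gt_xy)               # raw HD Map result
# ex_w, ey_w = per_axis_mean_error(winsorize(hdmap_xy), gt_xy)  # after outlier removal
```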
Keywords: HD Map, localization system, autonomous vehicle, monocular camera, IMU, GPS, dead reckoning, shape registration, Kalman Filter