Obstacles mapping based on 3-D perception for mobile robot navigation
Saved in:
Main Author:
Format: Thesis
Language: English
Published: 2020
Subjects:
Online Access: http://umpir.ump.edu.my/id/eprint/30397/1/Obstacles%20mapping%20based%20on%203-D%20perception%20for%20mobile.pdf
http://umpir.ump.edu.my/id/eprint/30397/
Institution: Universiti Malaysia Pahang
Summary: Many previous researchers have offered two-dimensional mapping for robotic navigation. However, because two-dimensional mapping can detect barriers only in a planar field, researchers are seeking better ways to discover obstacles in three-dimensional space. The main disadvantage of two-dimensional mapping for robot navigation is that it cannot detect barriers at different elevations. This research proposes several steps for building a three-dimensional map. The first step is to develop a mobile robot as a test-bed platform. The robot detects obstacles by measuring distance with a depth camera, obtaining obstacle geometry as a point cloud that gives the position of landmarks in X, Y, and Z coordinates. The second step is a method for accurately estimating robot translation and rotation using a sensor-fusion technique that combines wheel odometry, visual odometry, and inertial odometry. Wheel odometry estimates the robot's position from wheel rotation speed and is unaffected by light, magnetism, or the gravity vector, but it suffers from error accumulation. Visual odometry estimates motion from camera images by combining Features from Accelerated Segment Test (FAST) feature detection with singular value decomposition (SVD); however, it depends strongly on illumination and object texture, and the less light and texture there is, the larger the position-estimation error. Inertial odometry uses Magnetic, Angular Rate, and Gravity (MARG) measurements, combining the three through the Madgwick filter to produce accurate orientation estimates; however, it can estimate only rotational motion.
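The SVD step in visual odometry, as described above, aligns matched 3-D feature points between two frames to recover the camera's rotation and translation. A minimal sketch of that idea is the classic SVD-based (Kabsch) rigid-transform estimate below; this is an illustrative reconstruction assuming noise-free, already-matched correspondences, not the thesis's actual pipeline:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst_i ≈ R @ src_i + t,
    using the SVD-based (Kabsch) method on matched 3-D point pairs."""
    src_c = src.mean(axis=0)                 # centroid of source points
    dst_c = dst.mean(axis=0)                 # centroid of destination points
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate a small cloud 10 degrees about Z and shift it.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.1])
src = np.random.rand(20, 3)
dst = src @ R_true.T + t_true                # apply the true motion
R, t = estimate_rigid_transform(src, dst)
```

In a real visual-odometry front end, the matched pairs would come from FAST keypoints back-projected through the depth camera, and an outlier-rejection step (e.g. RANSAC) would precede the SVD solve.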
This study proposes a fusion method based on the Extended Kalman Filter (EKF) to produce a new estimate that compensates for the weaknesses of each individual estimator (wheel odometry, visual odometry, and inertial odometry). The third step is the registration of the three-dimensional map based on the robot pose estimate and the depth measurements. All of these issues are examined and investigated from an estimation-theoretic perspective through mathematical analysis, and the theory is validated through experimental investigation. A 120-second position-estimation test using EKF-based multi-sensor fusion in a 10 m × 10 m area shows an average X-axis translation error of 7.6 cm, a Y-axis translation error of 8.5 cm, a roll error of 0.678°, a pitch error of 0.491°, and a yaw error of 0.483°. Visually, the reconstructed 3-D map shows minimal fracturing or overlap and represents the real environment faithfully.
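The fusion idea above can be illustrated with a deliberately simplified one-dimensional Kalman step: wheel odometry drives the prediction (smooth but drifting), while visual odometry supplies the drift-free but noisy correction. This is a toy sketch of the principle, not the thesis's actual EKF (which fuses full 6-DOF pose); the noise values are assumed for illustration:

```python
import numpy as np

def kf_step(x, P, u, z, Q=0.05, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, P : current state estimate and its variance
    u    : wheel-odometry displacement since the last step (prediction input)
    z    : visual-odometry position measurement (correction input)
    Q, R : assumed process and measurement noise variances"""
    x_pred = x + u                    # predict: integrate wheel odometry
    P_pred = P + Q
    K = P_pred / (P_pred + R)         # Kalman gain
    x_new = x_pred + K * (z - x_pred) # update: correct toward measurement
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
true_x = 0.0
rng = np.random.default_rng(0)
for _ in range(50):
    true_x += 0.1
    u = 0.1 + 0.02                     # wheel odometry with a constant bias (drift)
    z = true_x + rng.normal(0.0, 0.1)  # noisy but unbiased visual measurement
    x, P = kf_step(x, P, u, z)
# Wheel odometry alone would have drifted by 50 * 0.02 = 1.0 m;
# the fused estimate stays close to the true position.
```

The same predict/correct structure generalizes to the EKF by linearizing a nonlinear motion model around the current estimate, which is how heterogeneous odometry sources are typically combined for pose estimation.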