Advanced vision-based localization and mapping
Main Author: 
Other Authors: 
Format: Final Year Project
Language: English
Published: 2019
Subjects: 
Online Access: http://hdl.handle.net/10356/77764
Institution: Nanyang Technological University
Summary: With the advent of powerful modern processors, the field of robotics is advancing rapidly across a wide range of applications. One such application is Simultaneous Localization and Mapping (SLAM), a technique that allows a robot to determine its movement from a sequence of images captured by its "eyes". Applications of SLAM range from self-driving cars to autonomous surveillance drones. Although SLAM for drones is approaching maturity, there is still considerable room for improvement: systems can lose track or deliver unreliable accuracy. We therefore implemented a more robust SLAM system by integrating the SLAM output with an additional low-cost Inertial Measurement Unit (IMU). The work in this study is twofold. First, various cutting-edge SLAM algorithms were tested to find the most suitable and most robust choice for drone applications; we opted for PL-SLAM as it showed the most reliable results. Second, to integrate the system with the IMU, we chose a loosely coupled approach using an Extended Kalman Filter (EKF) to enhance the system's accuracy. Overall, our SLAM system showed promising accuracy and no longer lost track while operating.
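For readers unfamiliar with the approach mentioned in the summary, a loosely coupled EKF fuses high-rate IMU propagation with lower-rate SLAM pose corrections: the filter predicts position and velocity from accelerometer data, then corrects the accumulated drift whenever the SLAM front end reports a pose. The Python sketch below illustrates that general scheme only; it is not the project's implementation, and the position-velocity state, the noise parameters, and the assumption of gravity-compensated, world-frame accelerations are ours. With this linear model the filter reduces to a standard Kalman filter; a full system would also estimate orientation, which is where the "extended" (linearized) machinery comes in.

```python
import numpy as np

# Minimal loosely-coupled SLAM/IMU fusion sketch (illustrative only).
# State x = [position (3), velocity (3)]. Assumes IMU accelerations are
# gravity-compensated and expressed in the world frame.

class LooselyCoupledFilter:
    def __init__(self, accel_noise=0.5, slam_noise=0.05):
        self.x = np.zeros(6)                 # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                   # state covariance
        self.q = accel_noise ** 2            # accel noise variance (assumed)
        self.R = np.eye(3) * slam_noise ** 2 # SLAM position noise (assumed)

    def predict(self, accel, dt):
        """Propagate the state with one gravity-compensated IMU sample."""
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt           # position integrates velocity
        # Constant-acceleration motion model over the interval dt.
        self.x[:3] += self.x[3:] * dt + 0.5 * accel * dt ** 2
        self.x[3:] += accel * dt
        # Process noise driven by acceleration uncertainty.
        G = np.vstack([0.5 * dt ** 2 * np.eye(3), dt * np.eye(3)])
        Q = G @ G.T * self.q
        self.P = F @ self.P @ F.T + Q

    def update(self, slam_position):
        """Correct drift with a SLAM position measurement."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measure position only
        y = slam_position - H @ self.x                # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)           # Kalman gain
        self.x += K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

# Usage: predict at the IMU rate, update whenever SLAM reports a pose.
ekf = LooselyCoupledFilter()
ekf.predict(accel=np.array([0.1, 0.0, 0.0]), dt=0.005)   # e.g. 200 Hz IMU
ekf.update(slam_position=np.array([0.001, 0.0, 0.0]))    # e.g. ~30 Hz SLAM
print(ekf.x[:3])  # fused position estimate
```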