Mobile robot ego motion estimation using RANSAC-based ceiling vision
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2013
Online Access: https://hdl.handle.net/10356/98916 ; http://hdl.handle.net/10220/12870
Institution: Nanyang Technological University
Summary: Visual odometry is a commonly used technique for recovering the motion and location of a robot. In this paper, we present a robust visual odometry estimation approach based on the ceiling view from a 3D camera (Kinect). We extract Speeded-Up Robust Features (SURF) from the monocular image frames retrieved from the camera. SURF features from two consecutive frames are matched by finding the nearest neighbor with a k-d tree, and the 3D position of each matched feature is retrieved from the camera's depth map. A 3D affine transformation model between the two frames is then estimated with the Random Sample Consensus (RANSAC) method. All inliers are subsequently used to re-estimate the relative transformation between the two frames by Singular Value Decomposition (SVD). From this, the global robot position and orientation can be calculated. Experimental results demonstrate the performance of the proposed algorithm in real environments.
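The RANSAC-plus-SVD pose step described in the summary can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes matched 3D feature points are already available, it uses a rigid (rotation plus translation) motion model in the RANSAC hypothesis step rather than the full 3D affine model mentioned in the abstract, and all function names, thresholds, and iteration counts (`estimate_rigid_transform`, `ransac_relative_pose`, `inlier_thresh`, `n_iters`) are illustrative assumptions.

```python
# Minimal sketch of RANSAC inlier selection followed by an SVD (Kabsch)
# re-estimation of the frame-to-frame transform, plus global pose composition.
# Not the paper's code; a rigid model stands in for the 3D affine model.
import numpy as np


def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t, via SVD."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t


def ransac_relative_pose(pts_prev, pts_curr, n_iters=500, inlier_thresh=0.02):
    """RANSAC over minimal 3-point samples of matched 3D feature points (metres)."""
    best_inliers = np.zeros(len(pts_prev), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        idx = rng.choice(len(pts_prev), size=3, replace=False)
        R, t = estimate_rigid_transform(pts_prev[idx], pts_curr[idx])
        residuals = np.linalg.norm(pts_curr - (pts_prev @ R.T + t), axis=1)
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Re-estimate the relative transform from all inliers, as in the summary.
    return estimate_rigid_transform(pts_prev[best_inliers], pts_curr[best_inliers])


if __name__ == "__main__":
    # Synthetic check: rotate/translate random 3D points and corrupt some matches.
    rng = np.random.default_rng(1)
    prev = rng.uniform(-1, 1, size=(60, 3))
    a = np.deg2rad(5.0)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.10, -0.05, 0.0])
    curr = prev @ R_true.T + t_true
    curr[:10] += rng.uniform(-0.5, 0.5, size=(10, 3))   # simulate bad matches
    R_est, t_est = ransac_relative_pose(prev, curr)

    # Compose with the running global pose (R_g, t_g): x_world = R_g @ x_cam + t_g.
    R_g, t_g = np.eye(3), np.zeros(3)
    R_g, t_g = R_g @ R_est, R_g @ t_est + t_g
    print("estimated translation:", np.round(t_est, 3))
```

Re-estimating from all inliers with the closed-form SVD solution mirrors the refinement step in the abstract; chaining each relative transform onto the running global pose gives the robot's position and orientation over time.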