Conditional Bayesian filtering for robot navigation and human tracking
| Main Author: | Liu, Jigang |
|---|---|
| Other Authors: | Deepu Rajan |
| Format: | Theses and Dissertations |
| Language: | English |
| Published: | 2015 |
| Subjects: | DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition |
| Online Access: | http://hdl.handle.net/10356/65039 |
| Institution: | Nanyang Technological University |

| id | sg-ntu-dr.10356-65039 |
|---|---|
| record_format | dspace |
| institution | Nanyang Technological University |
| building | NTU Library |
| continent | Asia |
| country | Singapore |
| content_provider | NTU Library |
| collection | DR-NTU |
| language | English |
| topic | DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition |

description
As a model for estimating the state of a dynamical system from noisy measurements, Bayesian filtering (BF) has become one of the fastest growing research fields of the last decade. Bayesian filtering techniques play an important role in a wide variety of engineering applications such as navigation, control engineering, finance and aerospace engineering. Despite the remarkable advances made by researchers, existing Bayesian filtering techniques, such as the Kalman filter, the extended Kalman filter (EKF) and particle filters, are still challenged by prediction accuracy.

To improve the prediction accuracy of the EKF, we present a novel model, called the conditional EKF, which introduces a condition with respect to image data. To predict the system state in the prediction step, an image-based dynamic model is built to replace the smooth motion model commonly used in the EKF. Based on the conditional EKF, a robust visual simultaneous localization and mapping (SLAM) system, named C-SLAM, is presented for both map-building and camera-localization tasks. SLAM is an effective method developed for autonomous robot applications to obtain navigation information. In traditional EKF-based SLAM systems, a predefined or learnt dynamic model is required to predict the camera state in the prediction step. However, these dynamic models do not necessarily describe complex real-world camera motion correctly. For example, when the camera experiences a sudden velocity change, or when low-frame-rate cameras are used, the states predicted by these dynamic models deviate significantly from the ground truth, leading to growing state uncertainty and even failure to track features. To solve this problem, in the prediction step C-SLAM employs optical flow and epipolar geometry techniques to predict the camera pose, instead of the smooth motion model widely used in existing SLAM systems (schematic sketches of both kinds of prediction step are given after this abstract).

The limitations of current particle filter approaches include sensitivity to discontinuous motion caused by a low frame rate or sudden velocity changes. To address these problems, an exemplar-based conditional particle filter (EC-PF) is proposed that includes a conditional term with respect to exemplars and image data. Using EC-PF, a 3D human motion tracking algorithm is proposed to recover 3D full-body motion. Within the method, an exemplar-based dynamic model is constructed to guide human motion prediction, so that particles evolve within a reasonable region close to the true state (see the particle-filter sketch after this abstract). Furthermore, by adopting shape-context-based exemplar matching, the proposed 3D human motion tracking approach can be realized effectively with a monocular camera setup.

Both SLAM and human motion tracking are important research topics in the area of robot navigation. In addition to the Bayesian filtering techniques, we also present a vision-based cuboid model for corridor robot navigation. The robot control system uses an omni-directional camera that observes the navigation area and provides the required visual information. Two distinct navigation methods are proposed based on the cuboid model. The first method relies on the X-shape and the vanishing point, two useful features that can be extracted from the captured images: the vanishing point is used to estimate the robot's orientation, while the X-shape helps the robot move along the middle of the corridor (see the vanishing-point sketch after this abstract). The second navigation method recovers corridor guidelines by tracking vertical lines. With the corridor guidelines, the robot can be localized effectively in corridor environments.
Thanks to the wide field of view of the omni-directional camera used in the proposed navigation algorithm, features such as the X-shape, the vanishing point and vertical lines can be extracted from the captured images even when obstacles are close to the robot. The proposed method was tested in different corridor environments, and the results of several experiments show the promising performance of the proposed navigation algorithm.
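To make the terminology above concrete, here is a minimal sketch, not taken from the thesis, of an EKF prediction (time-update) step using the kind of smooth, constant-velocity motion model that the abstract says the conditional EKF replaces. The 2D state layout, time step and noise level (`q_acc`) are illustrative assumptions; a sudden velocity change between frames is exactly the case in which this prediction drifts from the true state.

```python
import numpy as np

def ekf_predict(x, P, dt, q_acc=0.5):
    """EKF time update with a constant-velocity ("smooth motion") model.

    State x = [px, py, vx, vy]; P is its 4x4 covariance.
    q_acc is an assumed acceleration noise level (illustrative value).
    """
    # Linear constant-velocity transition: position += velocity * dt
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    # Piecewise-constant acceleration noise; an abrupt velocity change
    # violates this model, which is the failure case the abstract describes.
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]])
    Q = q_acc**2 * G @ G.T

    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# Example: one prediction step for a target moving 1 m/s along x.
x0 = np.array([0.0, 0.0, 1.0, 0.0])
P0 = np.eye(4) * 0.01
x1, P1 = ekf_predict(x0, P0, dt=0.1)
print(x1)   # -> [0.1, 0.0, 1.0, 0.0]
```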
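The abstract states that C-SLAM predicts the camera pose from optical flow and epipolar geometry instead of a smooth motion model. The thesis's own implementation is not reproduced here; the sketch below only illustrates that general idea with standard OpenCV routines: track corners between consecutive frames, estimate the essential matrix, and recover the relative rotation and up-to-scale translation to serve as an image-conditioned prediction. The calibration matrix `K` and the grayscale frames are assumed inputs.

```python
import cv2
import numpy as np

def predict_pose_from_images(prev_gray, curr_gray, K):
    """Predict relative camera motion between two frames from image data.

    Returns (R, t): rotation and unit-norm translation of the current frame
    with respect to the previous one, or (None, None) if tracking fails.
    """
    # 1. Detect corners in the previous frame and track them with LK optical flow.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return None, None
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good = status.ravel() == 1
    p0, p1 = p0[good], p1[good]
    if len(p0) < 8:                      # need at least 8 correspondences
        return None, None

    # 2. Epipolar geometry: essential matrix from the tracked correspondences.
    E, _ = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    if E is None or E.shape != (3, 3):
        return None, None

    # 3. Decompose E into relative rotation R and translation direction t.
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K)
    return R, t

# Usage (assumed calibration and two 8-bit grayscale frames):
# K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])
# R, t = predict_pose_from_images(prev_gray, curr_gray, K)
# The pose increment (R, t) would then replace the constant-velocity guess
# in the EKF prediction step.
```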
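EC-PF conditions the particle-filter prediction on exemplars matched against the current image. The thesis uses shape-context matching of full-body exemplars; the sketch below substitutes a plain Euclidean descriptor distance and synthetic data, so it should be read only as an illustration of an exemplar-guided proposal, not as the thesis's algorithm. All names, dimensions and mixing weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical exemplar database: each exemplar pairs a pose vector (e.g. joint
# angles) with an image descriptor (stand-in for the thesis's shape contexts).
n_exemplars, pose_dim, desc_dim = 50, 10, 32
exemplar_poses = rng.normal(size=(n_exemplars, pose_dim))
exemplar_descs = rng.normal(size=(n_exemplars, desc_dim))

def ecpf_predict(particles, obs_desc, mix=0.5, sigma_dyn=0.05, sigma_ex=0.1):
    """Exemplar-conditioned prediction step (illustrative, not the thesis code).

    particles: (N, pose_dim) current particle set.
    obs_desc:  descriptor extracted from the current image.
    A fraction `mix` of the particles is drawn around the best-matching
    exemplar pose (the "conditional" part); the rest diffuse as usual.
    """
    n, d = particles.shape
    # 1. Match the observation against the exemplar database
    #    (the thesis uses shape-context matching; here: Euclidean distance).
    best = np.argmin(np.linalg.norm(exemplar_descs - obs_desc, axis=1))
    target = exemplar_poses[best]

    # 2. Split the particle set: exemplar-guided vs. plain random-walk dynamics,
    #    so particles stay in a reasonable region close to the true state.
    n_guided = int(mix * n)
    guided = target + sigma_ex * rng.normal(size=(n_guided, d))
    diffused = particles[n_guided:] + sigma_dyn * rng.normal(size=(n - n_guided, d))
    return np.vstack([guided, diffused])

# Usage with a synthetic observation descriptor close to exemplar 7:
particles = rng.normal(size=(200, pose_dim))
obs_desc = exemplar_descs[7] + 0.01 * rng.normal(size=desc_dim)
particles = ecpf_predict(particles, obs_desc)
print(particles.shape)   # (200, 10)
```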
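For the first corridor-navigation method, the abstract says the vanishing point is used to estimate the robot's orientation. A minimal sketch of that step follows, assuming corridor-edge line segments have already been extracted (e.g. with an edge detector and a Hough transform) and assuming a simple pinhole model with horizontal focal length `fx`; the X-shape cue and the omni-directional camera geometry of the thesis are not modelled here.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of 2D line segments (x1, y1, x2, y2).

    Corridor edges are roughly parallel in 3D, so their images meet near a
    common vanishing point; this solves n_i . p = n_i . p1_i in the
    least-squares sense.
    """
    segments = np.asarray(segments, dtype=float)
    d = segments[:, 2:] - segments[:, :2]              # direction vectors
    n = np.stack([d[:, 1], -d[:, 0]], axis=1)          # line normals
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    b = np.sum(n * segments[:, :2], axis=1)
    vp, *_ = np.linalg.lstsq(n, b, rcond=None)
    return vp

def heading_error(vp, image_width, fx):
    """Yaw offset (radians) of the robot heading from the corridor axis,
    under an assumed pinhole model with horizontal focal length fx."""
    return np.arctan2(vp[0] - image_width / 2.0, fx)

# Synthetic corridor edges in a 640x480 image, all meeting near (320, 240):
segs = [(0, 480, 320, 240), (640, 480, 320, 240),
        (0, 0, 320, 240), (640, 0, 320, 240)]
vp = vanishing_point(segs)
print(vp, heading_error(vp, image_width=640, fx=500.0))
# In practice the segments would come from an edge detector plus a Hough
# transform (e.g. cv2.Canny followed by cv2.HoughLinesP) on the corridor image.
```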

| author2 | Deepu Rajan |
|---|---|
| author_facet | Deepu Rajan; Liu, Jigang |
| format | Theses and Dissertations |
| author | Liu, Jigang |
| author_sort | Liu, Jigang |
| title | Conditional Bayesian filtering for robot navigation and human tracking |
| title_sort | conditional bayesian filtering for robot navigation and human tracking |
| publishDate | 2015 |
| url | http://hdl.handle.net/10356/65039 |
| _version_ | 1759856551616577536 |
| spelling | sg-ntu-dr.10356-65039 (2023-03-04T00:34:00Z). Liu, Jigang. Deepu Rajan. School of Computer Engineering. Centre for Computational Intelligence. Doctor of Philosophy (SCE). 2015-06-11T02:39:27Z; 2015. Thesis. Liu, J. (2015). Conditional Bayesian filtering for robot navigation and human tracking. Doctoral thesis, Nanyang Technological University, Singapore. http://hdl.handle.net/10356/65039. en. 204 p. application/pdf. |