Biologically inspired visual intelligence for unmanned ground vehicles
Main Author: | |
---|---|
Other Authors: | |
Format: | Theses and Dissertations |
Language: | English |
Published: | 2012 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/48017 |
Institution: | Nanyang Technological University |
Summary:

Human drivers navigate effectively through most outdoor unstructured environments purely on the basis of their visual inputs. The human visual system can learn and adapt robustly to its surroundings despite the complexity introduced by ground-cover variations, uncontrolled lighting, weather conditions, and shadows.
However, none of the artificial perceptual systems in current autonomous Unmanned Ground Vehicles (UGVs) has this level of intelligent perception: the ability to cope continuously with unexpected situations and to perform robustly in dynamic environments. Understanding the mechanisms underlying the intelligent abilities of a human driver can therefore be the key to designing fully autonomous UGVs.
The main philosophy underlying this thesis is to understand how humans use their vision systems (eyes) and processing units (brains) in driving scenarios, in order to develop human-like visual intelligence for UGVs that can navigate autonomously in unstructured outdoor environments. Achieving this goal requires investigating two important aspects.
The first aspect of this research addresses the case where a UGV is expected to navigate in a familiar outdoor unstructured environment. Empirical evidence shows that once human drivers become familiar with the road, they prefer to look at the far field where the road edges converge (the vanishing point) in order to anticipate the upcoming road trajectory and steer with maximal lead time. Based on these findings, the vanishing point is treated as a salient and consistent feature in most open driving tasks, regardless of the type of environment. Hence, in this work, a computational model of vanishing point estimation based on the visual gaze behavior of drivers is developed for UGVs. The proposed model uses Local Dominant Orientation (LDO) patterns, such as road edges, ruts, and tire tracks left by previous vehicles, to vote for the global convergence point of the road in the visual field.
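The abstract leaves the voting scheme at a high level. The sketch below is a minimal illustration of orientation voting of the kind described, not the thesis implementation: per-pixel orientation is estimated from plain image gradients (a crude stand-in for the LDO estimator developed in the thesis), each strongly oriented pixel casts votes along its texture direction toward the horizon, and the cell that accumulates the most votes is taken as the vanishing point. All function names and parameter values here are invented for illustration.

```python
# Minimal sketch of texture-orientation voting for the vanishing point.
# NOT the thesis implementation: orientation comes from simple image
# gradients rather than the biologically inspired LDO model.
import numpy as np

def dominant_orientation(gray):
    """Per-pixel texture orientation (perpendicular to the gradient) and its strength."""
    gy, gx = np.gradient(gray.astype(float))
    theta = np.arctan2(gy, gx) + np.pi / 2.0   # rotate gradient by 90 deg -> edge direction
    strength = np.hypot(gx, gy)
    return theta, strength

def vanishing_point(gray, strength_pct=90, ray_len=300):
    """Each strongly oriented pixel votes along its orientation toward the horizon."""
    h, w = gray.shape
    theta, strength = dominant_orientation(gray)
    votes = np.zeros((h, w))
    thresh = np.percentile(strength, strength_pct)
    for y, x in zip(*np.nonzero(strength > thresh)):
        dy, dx = np.sin(theta[y, x]), np.cos(theta[y, x])
        if dy > 0:                               # flip so the ray points up the image
            dy, dx = -dy, -dx
        for t in range(1, ray_len):
            vy, vx = int(y + t * dy), int(x + t * dx)
            if not (0 <= vy < h and 0 <= vx < w):
                break
            votes[vy, vx] += 1.0
    vy, vx = np.unravel_index(np.argmax(votes), votes.shape)
    return vx, vy                                # (column, row) of the estimated vanishing point

if __name__ == "__main__":
    # Synthetic road: two straight edges converging at roughly (x=100, y=40).
    img = np.zeros((120, 200))
    for r in range(40, 120):
        img[r, max(0, 100 - (r - 40))] = 1.0     # left road edge
        img[r, min(199, 100 + (r - 40))] = 1.0   # right road edge
    print(vanishing_point(img))                  # expected to land near (100, 40)
```

On real road images, the gradient-based orientation would be replaced by a more robust LDO estimator such as the one proposed in the thesis, and the vote map would typically be smoothed before taking the maximum.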
For robust estimation of the Local Dominant Orientation (LDO), a novel biologically inspired mechanism is proposed that exploits distributed population coding and the opponency mechanism found in the activity of ensembles of correlated neurons in the primary visual cortex of the mammalian brain. Apart from being biologically plausible, the proposed LDO model significantly outperforms state-of-the-art LDO methods in terms of both accuracy and robustness on natural and synthetic images.
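The abstract names the two ingredients, population coding and opponency, without giving equations, so the following is a hedged sketch of how such an orientation estimator is commonly assembled, not the thesis's actual model: a bank of even Gabor filters serves as the orientation-tuned population, each unit is suppressed by the unit tuned 90 degrees away (opponency), and the per-pixel dominant orientation is decoded with a population vector over doubled angles, since orientation is periodic in pi. The function names and parameter values are invented for the example.

```python
# Hedged sketch of population coding with orientation opponency; not the thesis model.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, sigma=2.0, wavelength=6.0, size=15):
    """Even (cosine-phase) Gabor responding to stripes/edges oriented along theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    along = x * np.cos(theta) + y * np.sin(theta)     # along the preferred orientation
    across = -x * np.sin(theta) + y * np.cos(theta)   # across it
    g = np.exp(-(along**2 + across**2) / (2 * sigma**2)) * np.cos(2 * np.pi * across / wavelength)
    return g - g.mean()                               # zero mean: no response to flat regions

def population_ldo(gray, n_orient=8):
    """Decode a dominant-orientation map from an opponent-coded Gabor population."""
    gray = gray.astype(float)
    thetas = np.arange(n_orient) * np.pi / n_orient
    resp = np.stack([np.abs(convolve(gray, gabor_kernel(t))) for t in thetas])
    # Opponency: each orientation channel is suppressed by the orthogonal channel.
    opp = np.clip(resp - resp[(np.arange(n_orient) + n_orient // 2) % n_orient], 0.0, None)
    # Population-vector decoding over doubled angles (orientation wraps at pi).
    z = np.sum(opp * np.exp(2j * thetas)[:, None, None], axis=0)
    return (0.5 * np.angle(z)) % np.pi, np.abs(z)     # orientation map in [0, pi), confidence

if __name__ == "__main__":
    # A grating whose stripes run at 30 degrees should decode to roughly 30 degrees.
    y, x = np.mgrid[0:64, 0:64]
    phase = -x * np.sin(np.pi / 6) + y * np.cos(np.pi / 6)   # varies across the 30-deg stripes
    ori, conf = population_ldo(np.cos(2 * np.pi * phase / 6.0))
    print(np.degrees(ori[32, 32]))
```

The doubled-angle trick is a standard way to average orientations without the wrap-around at 180 degrees; how the thesis actually combines the opponent population responses is not stated in this record.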