Autonomous navigation of mobile robots using visual servoing
| Field | Value |
|---|---|
| Main Author | |
| Other Authors | |
| Format | Final Year Project |
| Language | English |
| Published | Nanyang Technological University, 2020 |
| Subjects | |
| Online Access | https://hdl.handle.net/10356/139682 |
| Institution | Nanyang Technological University |
Summary: The technological revolution has allowed robots to play a more important role than before, owing to their immense potential to bring convenience to people's lives. This convenience is especially valuable to, for example, people who are unwell or have limited mobility; providing personal services that cater to their needs can offer patients more holistic medical care. Unfortunately, current autonomous robot navigation does not take the orientation of the object of interest into account when determining the target location. As a result, there is a high chance that the robot does not end up facing the frontal pose of the object of interest, which is inconvenient. This motivates exploring the use of visual servoing for autonomous navigation. This project aims to develop an autonomous robot that can approach two targets, an empty chair and an occupied chair, according to their desired poses. This makes the robot suitable for applications such as food or medicine delivery, in which it moves to a target person and hands over the items; even when the person is not in his or her seat, the robot can still move toward the target.

Point cloud processing and a deep-learning detector, OpenPose, are used to determine the pose of the empty chair and the occupied chair respectively. With point cloud processing, an algorithm segments the backrest and seat planes and thereby identifies the pose of the empty chair. For the occupied chair, an algorithm identifies the pose of the person from the three-dimensional coordinates of detected body parts. Finally, these data drive the path planning that moves the robot independently to the front of the object of interest.
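The report does not reproduce its algorithm here, but the idea of recovering a seated person's pose from 3-D body-part coordinates can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes OpenPose-style shoulder keypoints in metres, a right-handed frame with z up, and a hypothetical function name.

```python
import numpy as np

def facing_direction(left_shoulder, right_shoulder):
    """Estimate the unit vector on the ground plane that a person faces.

    Sketch under assumed conventions: a right-handed frame with z up,
    where the across-shoulders vector (right shoulder -> left shoulder),
    rotated -90 degrees about z, points where the torso faces.
    """
    l = np.asarray(left_shoulder, dtype=float)
    r = np.asarray(right_shoulder, dtype=float)
    ax, ay = (l - r)[:2]              # across-shoulders on the ground plane
    forward = np.array([ay, -ax])     # rotate -90 degrees about z
    norm = np.linalg.norm(forward)
    if norm < 1e-6:
        raise ValueError("shoulders coincide; facing direction undefined")
    return forward / norm
```

For the empty chair, the analogous quantity would come from the normal of the segmented backrest plane rather than from keypoints.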
Results show that the robot can determine the position and orientation of both an empty and an occupied chair and navigate autonomously to the front of each. Further improvements are also suggested, such as reducing the time the robot takes to reach its target pose and adding other sensors so that the robot can move in more complex environments.
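The navigation goal that the summary describes, stopping in front of the object of interest and facing it, can be sketched as below. The standoff distance, the 2-D ground-plane convention, and the function name are assumptions for illustration, not details taken from the report.

```python
import numpy as np

def approach_goal(object_xy, frontal_dir, standoff=1.0):
    """Compute a goal pose `standoff` metres in front of the object,
    with the robot's heading (yaw) turned back toward the object.

    object_xy   : (x, y) position of the chair or person on the ground.
    frontal_dir : vector pointing out of the object's front.
    Returns (goal_xy, yaw), yaw in radians measured from +x.
    """
    f = np.asarray(frontal_dir, dtype=float)
    f = f / np.linalg.norm(f)
    goal_xy = np.asarray(object_xy, dtype=float) + standoff * f
    yaw = np.arctan2(-f[1], -f[0])    # heading faces back at the object
    return goal_xy, yaw
```

The goal pose would then be handed to the robot's path planner as the navigation target.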