Robot programming using augmented reality

Bibliographic Details
Main Author: Chong, Jonathan Wun Shiung
Other Authors: Andrew Nee Yeh Ching
Format: Theses and Dissertations
Language: English
Published: 2014
Online Access:https://hdl.handle.net/10356/60515
Institution: Nanyang Technological University
Description
Summary: Current robot programming approaches lack the intuitiveness required for quick and simple applications. As new robotic applications are identified, there is a growing need to programme robots safely and quickly. Recently, an approach known as Programming by Demonstration (PbD) has emerged, primarily addressing intuitiveness in Human-Robot Interaction (HRI). This approach is complemented by another emerging technology, Augmented Reality (AR), in which computer-generated 3D objects are blended (registered) onto a real-world scene to enhance the user's interaction with the real world. The work in this thesis marries these two concepts by proposing a new approach for immersive robot programming known as Robot Programming using AR (RPAR), in which the user directly moves a virtual robot amongst real objects in an unknown environment.

The first part of this work is the development of an RPAR methodology aimed at applications such as pick-and-place tasks, where there are a number of possible path solutions for a given start and goal configuration pair. The user's ability to make sense of an unknown environment is exploited by enabling him/her to intuitively define the subsets of the free space that are relevant to his/her goals. Collision-free paths for user-defined start and goal configurations are then generated using a beam search strategy.

The second part of this work is the development of an RPAR methodology aimed at applications such as arc welding and laser cutting, where the robot's end-effector is constrained to follow a user-defined 3D path while maintaining a consistent orientation with respect to that path. The user first performs a number of demonstrations to obtain data points of the path/curve to be followed. These data points are processed by a number of algorithms to produce an output curve that best represents the demonstrated curve, i.e., one that learns the human intention. A Piecewise Linear Parameterization (PLP) algorithm and a curve learning method based on Bayesian neural networks and reparameterization are proposed. After the output curve is generated, the user interactively defines the free space along the curve. The orientation of the end-effector can then be planned intuitively using visual feedback from AR, and a final collision-free path obtained.

The two RPAR methodologies developed in this research have been applied to a number of case studies, and the related issues are discussed.
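
To illustrate the beam search strategy used for collision-free path generation in the first methodology, the following is a minimal sketch on a discretised 2D grid. The grid, the collision test, and the beam width are illustrative assumptions; the thesis plans within user-defined subsets of the robot's configuration space, not a fixed grid.

    # Minimal beam-search path planner on a 2D grid (illustrative only).
    import heapq

    def beam_search_path(start, goal, is_free, beam_width=5, max_steps=200):
        """Return a collision-free path from start to goal, or None."""
        def h(cell):  # heuristic: Manhattan distance to the goal
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        beam = [(h(start), [start])]          # (score, partial path)
        visited = {start}
        for _ in range(max_steps):
            candidates = []
            for _score, path in beam:
                x, y = path[-1]
                for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nbr == goal:
                        return path + [nbr]
                    if nbr not in visited and is_free(nbr):
                        visited.add(nbr)
                        candidates.append((h(nbr), path + [nbr]))
            if not candidates:
                return None                   # the beam has died out
            beam = heapq.nsmallest(beam_width, candidates)  # keep best expansions
        return None

At each step only the beam_width most promising partial paths survive, which keeps the search tractable inside the user-defined free space at the cost of completeness.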
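
The Piecewise Linear Parameterization (PLP) named above can be illustrated by chord-length parameterisation of the demonstrated data points; this is a minimal sketch, assuming 3D points sampled along a polyline (the resampling count is an arbitrary choice, and the thesis's PLP may differ in detail).

    # Chord-length parameterisation and resampling of demonstrated points.
    import numpy as np

    def plp_parameterise(points):
        """Assign each point a parameter t in [0, 1] proportional to the
        cumulative chord length along the demonstrated polyline."""
        pts = np.asarray(points, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
        t = np.concatenate(([0.0], np.cumsum(seg)))
        return t / t[-1]                                     # normalise to [0, 1]

    def resample(points, n=50):
        """Resample the polyline at n evenly spaced parameter values."""
        pts = np.asarray(points, dtype=float)
        t = plp_parameterise(pts)
        t_new = np.linspace(0.0, 1.0, n)
        return np.column_stack([np.interp(t_new, t, pts[:, k])
                                for k in range(pts.shape[1])])

Parameterising by chord length rather than by sample index makes the parameter roughly proportional to arc length, so demonstrations recorded at uneven speeds map onto a common parameter domain before curve learning.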
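
The curve learning step is described as using Bayesian neural networks with reparameterization. As a rough stand-in, an ensemble of small MLPs likewise yields a predictive mean and an uncertainty band over the curve; the sketch below fits one coordinate of the curve against the PLP parameter. The network sizes and the synthetic demonstration data are assumptions, not the thesis's setup.

    # Ensemble-of-MLPs stand-in for Bayesian curve learning (illustrative).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)                               # PLP parameter
    y = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.05, t.shape)   # noisy demonstrations

    models = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=seed).fit(t.reshape(-1, 1), y)
              for seed in range(5)]

    preds = np.stack([m.predict(t.reshape(-1, 1)) for m in models])
    mean, std = preds.mean(axis=0), preds.std(axis=0)  # curve estimate and uncertainty

The ensemble spread plays the role of the Bayesian posterior uncertainty: where the demonstrations disagree or are sparse, std grows, flagging the regions where the learned curve is least trustworthy.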