GANstronome: GAN-based gastronomic robot

Bibliographic Details
Main Author: Muhammad Rafiq Rifhan Rosman
Other Authors: Tan Yap Peng
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Online Access: https://hdl.handle.net/10356/149634
Institution: Nanyang Technological University
Description
Summary: Over the years, robots have been developed to learn and perform new tasks without the need for rigid manual programming, which limits robots to only a few tasks. Methods such as teleoperation and kinesthetic teaching have allowed robots to learn new tasks through demonstrations. In recent years, the introduction of Artificial Intelligence (AI) to the field of robotics has opened new avenues for more efficient ways for robots to learn from demonstrations. AI has also allowed humans to demonstrate intricate tasks without special equipment, which could revolutionise the way robots learn. This project is particularly interested in robot learning in the kitchen environment, where tasks are often intricate. To date, robot learning from demonstration in the gastronomic setting is rare, as such tasks are often very complex to define. Demonstrating to the robot directly with our own bodies remains the best way to define such tasks. This project therefore aims to find effective methods that allow robots to map trajectories directly from human actions without programming scripts, specialised equipment, or technical expertise. Specifically, it investigates the deployment of an AI framework called CycleGAN to translate video frames of a human arm demonstrating a task into frames of the robot performing the task itself. CycleGAN is known to work well for unpaired image-to-image translation between domains of different styles, such as horses to zebras or Monet-style artwork to Van Gogh-style artwork; however, there has been little study of translation between domains of very different shapes, and further study is needed on translating human images to robot images. Learning directly and accurately from human demonstrations alone could revolutionise how robots learn new and complex tasks.
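
The record reproduces only the abstract, so as a rough illustration of the approach it describes, the following is a minimal CycleGAN training sketch in PyTorch (an assumed framework). The tiny networks, the names (G_h2r, D_robot, train_step), and all hyperparameters are illustrative assumptions, not the project's actual implementation; only the two-generator, two-discriminator structure with adversarial and cycle-consistency losses follows the CycleGAN formulation the abstract refers to.

```python
# Minimal CycleGAN sketch (illustrative; not the project's implementation).
# Two generators map between the human-demo and robot domains; two
# discriminators judge each domain; cycle-consistency ties them together.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder (stand-in for CycleGAN's ResNet generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy PatchGAN-style discriminator: per-patch real/fake scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G_h2r = Generator()        # human-arm frame -> robot-arm frame
G_r2h = Generator()        # robot-arm frame -> human-arm frame
D_robot = Discriminator()  # judges robot-domain frames
D_human = Discriminator()  # judges human-domain frames

adv_loss = nn.MSELoss()  # least-squares GAN loss, as in the CycleGAN paper
cyc_loss = nn.L1Loss()
LAMBDA_CYC = 10.0        # cycle-consistency weight used in the paper

opt_G = torch.optim.Adam(
    list(G_h2r.parameters()) + list(G_r2h.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(
    list(D_robot.parameters()) + list(D_human.parameters()), lr=2e-4)

def train_step(human, robot):
    """One update on a batch of *unpaired* human-demo and robot frames."""
    fake_robot = G_h2r(human)
    fake_human = G_r2h(robot)

    # Generators: fool both discriminators and reconstruct each input.
    opt_G.zero_grad()
    pred_fr, pred_fh = D_robot(fake_robot), D_human(fake_human)
    loss_G = (
        adv_loss(pred_fr, torch.ones_like(pred_fr))
        + adv_loss(pred_fh, torch.ones_like(pred_fh))
        + LAMBDA_CYC * cyc_loss(G_r2h(fake_robot), human)  # human->robot->human
        + LAMBDA_CYC * cyc_loss(G_h2r(fake_human), robot)  # robot->human->robot
    )
    loss_G.backward()
    opt_G.step()

    # Discriminators: real frames score 1, translated frames score 0.
    opt_D.zero_grad()
    pr_r, pf_r = D_robot(robot), D_robot(fake_robot.detach())
    pr_h, pf_h = D_human(human), D_human(fake_human.detach())
    loss_D = 0.5 * (
        adv_loss(pr_r, torch.ones_like(pr_r))
        + adv_loss(pf_r, torch.zeros_like(pf_r))
        + adv_loss(pr_h, torch.ones_like(pr_h))
        + adv_loss(pf_h, torch.zeros_like(pf_h))
    )
    loss_D.backward()
    opt_D.step()
    return loss_G.item(), loss_D.item()

# Example: one step on random stand-in frames (64x64 RGB, scaled to [-1, 1]).
human_batch = torch.rand(2, 3, 64, 64) * 2 - 1
robot_batch = torch.rand(2, 3, 64, 64) * 2 - 1
print(train_step(human_batch, robot_batch))
```

The full CycleGAN additionally uses ResNet-based generators, 70x70 PatchGAN discriminators, an identity loss, and a buffer of past generated images for discriminator updates; the sketch above keeps only the core loss structure.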