GANstronome: GAN-based gastronomic robot
Saved in:
Main Author: | Muhammad Rafiq Rifhan Rosman |
---|---|
Other Authors: | Tan Yap Peng |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2021 |
Subjects: | Engineering::Electrical and electronic engineering; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
Online Access: | https://hdl.handle.net/10356/149634 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-149634 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-149634 2023-07-07T18:22:41Z; GANstronome: GAN-based gastronomic robot; Muhammad Rafiq Rifhan Rosman; Tan Yap Peng, School of Electrical and Electronic Engineering; Wang Tianying, Institute of High Performance Computing (IHPC); EYPTan@ntu.edu.sg, wang_tianying@ihpc.a-star.edu.sg; Engineering::Electrical and electronic engineering; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Bachelor of Engineering (Electrical and Electronic Engineering); 2021-06-06T12:36:30Z; 2021; Final Year Project (FYP); Muhammad Rafiq Rifhan Rosman (2021). GANstronome: GAN-based gastronomic robot. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/149634; en; B3258-201; application/pdf; Nanyang Technological University |
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Engineering::Electrical and electronic engineering; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
description | Over the years, robots have been developed to learn and perform new tasks without the rigid manual programming that limits a robot to only a few tasks. Methods such as teleoperation and kinesthetic teaching have allowed robots to learn new tasks through demonstrations. In recent years, the introduction of Artificial Intelligence (AI) to robotics has opened up more efficient ways for robots to learn from demonstrations. AI has also allowed humans to demonstrate intricate tasks without special equipment, which could revolutionise the way robots learn. This project focuses on robot learning in the kitchen environment, where tasks are often intricate. To date, robot learning from demonstrations in the gastronomic setting is rare, as such tasks are often very complex to define; demonstrating to the robot directly with our own bodies remains the best way to define them. This project therefore aims to find effective methods that allow robots to map trajectories directly from human actions, without programming scripts, specialised equipment or technical expertise. Specifically, it investigates the deployment of an AI framework called CycleGAN to translate video frames of a human arm demonstrating a task into frames of the robot performing the task itself. CycleGAN is known to work well for unpaired image-to-image translation between domains that differ in style, such as horses to zebras or Monet-style artwork to Van Gogh-style artwork; however, there is little study on translating between two very different domains of different shapes, and further work is needed to translate human images to robot images. Learning directly and accurately from human demonstrations alone could revolutionise how robots learn new and complex tasks. |
author2 | Tan Yap Peng |
format | Final Year Project |
author | Muhammad Rafiq Rifhan Rosman |
title | GANstronome: GAN-based gastronomic robot |
publisher | Nanyang Technological University |
publishDate | 2021 |
url | https://hdl.handle.net/10356/149634 |
_version_ | 1772828314063339520 |
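
The abstract describes translating video frames of a human arm into frames of a robot arm with CycleGAN, an unpaired image-to-image translation framework. As an illustrative sketch only (none of this code comes from the project), the snippet below shows the standard CycleGAN generator objective from Zhu et al. (2017): least-squares adversarial losses in both translation directions plus an L1 cycle-consistency term. The generators `G` and `F`, the discriminators `D_X` and `D_Y`, and the frame batches are hypothetical placeholders standing in for human-arm (domain X) and robot (domain Y) frames.

```python
# Illustrative sketch of the CycleGAN generator losses (Zhu et al., 2017),
# framed for human-arm frames (domain X) and robot frames (domain Y).
# All model and variable names are hypothetical; this is not the project's code.
import torch
import torch.nn.functional as F_nn

def cyclegan_generator_loss(G, F, D_X, D_Y, real_x, real_y, lambda_cyc=10.0):
    """G: X -> Y (human arm -> robot), F: Y -> X; D_X, D_Y: discriminators."""
    fake_y = G(real_x)  # human-arm frame rendered as a robot frame
    fake_x = F(real_y)  # robot frame rendered as a human-arm frame

    # Least-squares adversarial terms: each generator tries to make the
    # opposing discriminator score its translated frames as real (1.0).
    adv_g = F_nn.mse_loss(D_Y(fake_y), torch.ones_like(D_Y(fake_y)))
    adv_f = F_nn.mse_loss(D_X(fake_x), torch.ones_like(D_X(fake_x)))

    # Cycle consistency: translating to the other domain and back should
    # reconstruct the original frame (L1 distance), which is what allows
    # training on unpaired human and robot videos.
    cyc = F_nn.l1_loss(F(fake_y), real_x) + F_nn.l1_loss(G(fake_x), real_y)

    return adv_g + adv_f + lambda_cyc * cyc
```

In practice the two discriminators are trained with the complementary least-squares objective on real versus translated frames, and the weight lambda_cyc = 10 follows the value used in the original CycleGAN paper.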