Simulation with vision for embodied AI
This project develops an embodied AI simulator in three stages: a Unity 3D simulator, a C# web server, and a Python API/GUI. The main objective is to create an interactive platform as a testbed for cognitive experiments in an environment that simulates the real world. In the first stage, a Unity 3D simulator is developed as a multi-agent platform in which virtual agents can navigate and interact with the objects around them; the simulator embeds a physics engine and contains models that replicate real-life objects. In the next stage, a web server is designed to receive external requests that alter the simulator in real time. The web server is attached to the simulator, starts together with it, constantly monitors incoming network requests, and handles each request accordingly. To make external communication with the simulator accessible, a Python API allows the simulator to be controlled through Python commands, and a Python GUI lets users send commands by clicking buttons instead of writing code. Finally, the simulator, the web server, and the Python API are integrated. By combining the individual actions available in the simulator, two modes are created, an active learning mode and a teaching mode, to test whether the baby agent learns differently in the two modes.
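The record does not include the actual Python API, so the following is a purely illustrative sketch, with all names, routes, and the payload schema hypothetical, of how a client in this kind of architecture might encode an agent command as an HTTP request for the simulator's web server:

```python
import json
from urllib.request import Request

# Hypothetical endpoint: the real simulator's address and route are not
# given in this record.
SIMULATOR_URL = "http://localhost:8080/command"


def build_command(agent_id: str, action: str, **params) -> bytes:
    """Encode one simulator command as a JSON payload (illustrative schema)."""
    payload = {"agent": agent_id, "action": action, "params": params}
    return json.dumps(payload).encode("utf-8")


def make_request(body: bytes) -> Request:
    """Wrap the payload in an HTTP POST, as a GUI button callback might."""
    return Request(
        SIMULATOR_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: ask a virtual agent to move toward an object in the scene.
body = build_command("baby_agent", "move_to", target="red_ball", speed=0.5)
req = make_request(body)
```

A GUI button in this design would simply call `build_command` with a fixed action and pass the result to the web server, which is what lets the same command set serve both scripted experiments and interactive use.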
Main Author: Ye, Jieyi
Other Authors: Wen Bihan; Cheston Tan Yin Chet
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering::Software; Engineering::Computer science and engineering::Computer applications
Online Access: https://hdl.handle.net/10356/157364
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-157364
record_format: dspace
title: Simulation with vision for embodied AI
author: Ye, Jieyi
author2: Wen Bihan (bihan.wen@ntu.edu.sg); Cheston Tan Yin Chet
school: School of Electrical and Electronic Engineering; Institute of Materials Research and Engineering, A*STAR
topic: Engineering::Computer science and engineering::Software; Engineering::Computer science and engineering::Computer applications
degree: Bachelor of Engineering (Information Engineering and Media)
format: Final Year Project (FYP), application/pdf
publishDate: 2022 (deposited 2022-05-12)
citation: Ye, J. (2022). Simulation with vision for embodied AI. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157364
publisher: Nanyang Technological University
institution: Nanyang Technological University
building: NTU Library
country: Singapore
collection: DR-NTU
language: English
url: https://hdl.handle.net/10356/157364