Simulation with vision for embodied AI

Bibliographic Details
Main Author: Ye, Jieyi
Other Authors: Wen Bihan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157364
Institution: Nanyang Technological University
Description
Summary: This project develops an embodied AI simulator in three stages: a Unity 3D simulator, a C# webserver, and a Python API/GUI. The main objective is to create an interactive platform that serves as a testbed for cognitive experiments in an environment that simulates the real world. In the first stage, a Unity 3D simulator is proposed and developed as a simulated multi-agent platform in which virtual agents can navigate and interact with the objects around them. The simulator embeds a physics engine and contains models that replicate real-life objects. In the next stage, a webserver is designed and developed to receive external requests that alter the simulator in real time. The webserver is attached to the simulator, starts together with it, constantly monitors incoming network requests, and handles each request accordingly. To make external communication with the simulator accessible, a Python API is created that allows the simulator to be controlled through Python commands. A Python GUI is designed so that commands can be sent by clicking buttons instead of writing code explicitly. Lastly, the simulator, the webserver, and the Python API are integrated. By combining the individual actions available in the simulator, two modes are created, an active learning mode and a teaching mode, to experiment with whether the baby agent learns differently in the two modes.
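The Python-API-to-webserver control path described in the summary could be sketched as below. The abstract does not specify the endpoint URL, port, HTTP method, or command schema, so every name in this sketch (the `/command` path, the `move_agent` action, its parameters) is an illustrative assumption rather than the project's actual interface.

```python
import json
import urllib.request

# Assumed address of the C# webserver attached to the Unity simulator.
SIMULATOR_URL = "http://localhost:8080/command"


def build_command(action, **params):
    """Serialize one command for the simulator as a JSON string.

    The {"action": ..., "params": ...} schema is a hypothetical example;
    the real project may encode commands differently.
    """
    return json.dumps({"action": action, "params": params})


def send_command(action, **params):
    """POST a JSON command to the simulator's webserver and return its reply."""
    payload = build_command(action, **params).encode("utf-8")
    req = urllib.request.Request(
        SIMULATOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Usage (requires the simulator and webserver to be running):
# send_command("move_agent", agent_id=0, direction="forward", distance=1.0)
```

A GUI like the one described would wire each button's click handler to a call such as `send_command(...)`, so the same request path serves both scripted and point-and-click control.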