Towards human-level artificial intelligence agents
Main Author: Leung, Jonathan Cyril
Other Authors: Miao Chun Yan
School: School of Computer Science and Engineering
Research Centre: Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY)
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Goal modelling; Reinforcement learning; Agent development; Machine learning
Online Access: https://hdl.handle.net/10356/174532
DOI: 10.32657/10356/174532
Citation: Leung, J. C. (2024). Towards human-level artificial intelligence agents. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/174532
License: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Institution: Nanyang Technological University
Description:
Deep learning has provided a method to train large neural networks to learn a representation of data that best solves a given task, without the need for manual feature engineering. The combination of Reinforcement Learning (RL) and deep learning, often referred to as Deep Reinforcement Learning (DRL), has resulted in agents that achieve superhuman performance in some games. However, DRL can be difficult to apply in practice, as it suffers from sample inefficiency, difficulty learning in sparse-reward environments, and the challenge of correctly defining reward functions. The removal of human intervention from the agent's training process has also led to agent behaviour that is unpredictable, uninterpretable, and potentially unsafe.
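To make the sparse-reward and reward-definition issues above concrete, here is a minimal, purely illustrative sketch (not code from the thesis): a hypothetical 5x5 grid-world task with a sparse reward that fires only at the goal cell, and a hand-shaped alternative that gives a denser signal but is easy to specify incorrectly.

```python
# Illustrative sketch only: a hypothetical grid-world reward definition,
# not code from the thesis.

GOAL = (4, 4)  # hypothetical goal cell in a 5x5 grid

def sparse_reward(state):
    """Reward only at the goal; the agent receives no signal anywhere else,
    which makes exploration and credit assignment hard."""
    return 1.0 if state == GOAL else 0.0

def shaped_reward(state):
    """Hand-crafted shaping: negative Manhattan distance to the goal.
    Denser signal, but a poorly chosen shaping term can be exploited
    by the agent in unintended ways."""
    return -float(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

if __name__ == "__main__":
    for s in [(0, 0), (2, 3), (4, 4)]:
        print(s, sparse_reward(s), shaped_reward(s))
```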
In this work, we use Goal Net, a goal-oriented agent modelling methodology, as a way for agent designers to define an agent's goals and incorporate their prior knowledge about how an agent should achieve goals. As agents become more intelligent, the scenarios in which they can be used will increase, thus increasing the number of potential agent developers and designers. Goal Net uses goals as an abstraction of agent behaviour that can be understood by stakeholders who may have little knowledge about how to implement an agent. Goal Nets can be defined graphically, easing the design process for those who are unfamiliar with programming.
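As a rough illustration of the goal-oriented abstraction described above, the sketch below writes a tiny goal graph down as plain Python data. The class and method names (GoalGraph, connect, next_tasks) are hypothetical and are not the thesis's Goal Net implementation; they only show how goals and the transitions between them could be captured in a form that an agent, or a DRL policy, could consult.

```python
# Hypothetical goal-graph sketch; not the Goal Net implementation from the thesis.
# Goals are nodes; transitions carry the task used to move from one goal to the next.
from dataclasses import dataclass, field

@dataclass
class Transition:
    source: str  # goal the transition starts from
    target: str  # goal it leads to
    task: str    # task/skill that achieves the target goal

@dataclass
class GoalGraph:
    goals: set = field(default_factory=set)
    transitions: list = field(default_factory=list)

    def add_goal(self, name: str) -> None:
        self.goals.add(name)

    def connect(self, source: str, target: str, task: str) -> None:
        self.transitions.append(Transition(source, target, task))

    def next_tasks(self, current: str) -> list:
        """Tasks available from the current goal, e.g. to restrict a policy's action choices."""
        return [t.task for t in self.transitions if t.source == current]

# Example: a designer's graphical goal model for a negotiation agent,
# serialized into this structure.
g = GoalGraph()
for name in ["start", "greeted", "offer_made", "deal_reached"]:
    g.add_goal(name)
g.connect("start", "greeted", "greet_user")
g.connect("greeted", "offer_made", "propose_offer")
g.connect("offer_made", "deal_reached", "negotiate_price")
print(g.next_tasks("greeted"))  # ['propose_offer']
```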
We survey recent methods for defining and achieving goals, including methods related to goal modelling and RL, and identify how the two areas are related. This is followed by an introduction to Goal Net, in which we present a method for using Goal Nets to customize virtual assistants. We then present our method of combining Goal Net and DRL, which addresses some of the issues with DRL discussed previously. Experimental results show that our method achieves better results than other methods that incorporate the same level of human knowledge. We then adapt and apply our method to a negotiation dialogue agent. We perform both automatic and human evaluation, and include ChatGPT in the human evaluation as a powerful language generation model against which to compare. We identify problems with ChatGPT with regard to controllability and usability, and highlight how our proposed method helps mitigate these issues. Finally, we discuss potential future directions for this work and the challenges that these directions may pose.